Originalism is a huge grift

Go back in time with me to the 1860s. The Fourteenth Amendment to the U.S. Constitution has just been drafted, containing the following text:

“All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.”

This is one of the most important sentences in American history. It affords to anyone born on U.S. soil the full rights of U.S. citizenship – or does it?

Conservatives including Jeff Sessions, Ann Coulter, and Michael Anton (who wrote an odious op-ed for The Washington Post on this very topic) have long crusaded against this clause of the Fourteenth Amendment, despite its clear and obvious meaning and intent. The amendment was first meant to ensure that descendants of slaves were Americans, but its power extends far beyond that cause, as it simplifies the entire project of citizenship without requiring the complex and often controversial jus sanguinis (by blood) systems of many other countries. It makes assimilation easy.

So why does the hard right rail against its original intent? Because they think it’s a giveaway to “illegal immigrants” who can enter the country, have children, and be certain those children are Americans.

You’ll notice I put quotes around “illegal immigrants.” That’s because, in the 1860s, when the 14th Amendment was ratified, there was no such thing as “illegal immigration.” It’s a modern concept that would have been incomprehensible in 19th century America, where virtually every ancestor of anyone calling himself or herself an American arrived via a method that we would, going by latter-day legislation, in theory call “illegal” – but we don’t, because then everyone’s perceived legitimacy in the country would be at stake.

So an originalist reader of the 14th Amendment would clearly have to say that, nope, you can’t interpret it as something meant to exclude “illegal immigrants” and their families from the rights of citizenship, since no such distinction between legal/illegal migration existed at the time. You’ll be shocked to learn that conservative originalists – i.e., people in the legal community who purport to interpret the Constitution in the context of its original meaning at the time of enactment – don’t hold this position.

They’re not just hypocrites – they’re subscribers to an incoherent worldview. Originalism is often contrasted with “living constitutionalism,” the practice of reading the Constitution as a living document whose meaning changes with time and requires new readings aligned with the culture at large. The implication is often that while living constitutionalists (read: liberals) are “legislating from the bench” as “judicial activists,” conservative originalists are simply following the letter of the law. This is absurd, and not just because of the hypothetical 14th Amendment issue I raised. There are so many cases in which this comes through:

2nd Amendment

Perhaps the most infamous, the 2nd Amendment contains (indeed, starts with!) the phrase “A well-regulated militia, being necessary…” Gun ownership is framed right then and there in the context of military service, not an individual right to own as many assault rifles (which didn’t even exist in the 1700s) as possible. Yet the latter has become the bog-standard position of conservative “originalists.”

15th Amendment

A refresher: this amendment says:

“The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.”

And yet it so often is, by voter ID laws and other bullshit that is often explicitly targeted at black voters. Original intent would preclude any such barriers, yet conservative “originalists” are invariably the ones pushing policies in this realm, from statehouses to the judiciary.


The endless conservative assaults on the Voting Rights Act demonstrate the bad faith of originalists, who are happy to ignore both the intent of the Reconstruction Amendments (13th–15th) and the will of the Congress that enacted the voting-specific legislation (as explicitly authorized by the text of those amendments) to instead read arcane theories about the “sovereign dignity” of the states into a Constitution that doesn’t contain them. Shelby County v. Holder is the relevant case here.

Don’t fall for originalist jargon from the likes of Neil Gorsuch or pending Supreme Court nominee Brett Kavanaugh. They’re conservative reactionaries opposed to equality, and that’s that.


Court packing is a great idea. Here’s why.

The retirement of U.S. Supreme Court Associate Justice Anthony Kennedy under a GOP president was one of the most predictable crises of recent years, but one that nevertheless seems to have triggered, almost overnight, a sea change in how the left talks about the judiciary. Sure, there have been the occasional pleas for “moderate” nominees from President Donald Trump. Yet even milquetoast progressive legal commentators such as Ian Millhiser of ThinkProgress – whose greatest hits include calling Antonin Scalia a “great scholar” – have now called for court packing. There has been a notable shift in the Overton window around the Supreme Court.

“Court packing” has a negative connotation – it stirs up images of strongmen trying to work the system by rigging its membership in their favor. It shouldn’t, though; from almost day one of the United States, court packing has been a live concern for the other two branches of government, and for wholly practical reasons. Court packing is as American as apple pie, and it’s needed more than ever.

The Supreme Court: A cure worse than the disease

The case that created the Supreme Court as we know it was about court packing. Marbury v. Madison concerned the last-ditch effort of outgoing President John Adams to fill up the judiciary with Federalist judges before Thomas Jefferson took office. Jefferson’s inauguration marked the first time the Federalists would not have control of the executive branch after 12 years of George Washington plus John Adams, and they were scared – like any American conservative party relinquishing its control of government, they saw the unaccountable, life-appointed judiciary as the rearguard of their power.

The Supreme Court’s ruling in Marbury v. Madison established the precedent of judicial review, which permits the judiciary to take another look at executive and legislative actions. Nowhere in the U.S. Constitution is the Supreme Court afforded the power to strike down laws or even to determine their constitutionality; though that right is hinted at in some of the Federalist Papers, it was primarily engineered by Chief Justice John Marshall in Marbury v. Madison to resolve one of the many intractable problems arising out of the separation of powers fundamental to the U.S. political system: How should a new administration execute the commissions of judges already nominated and approved by members of the opposing party?

Of its many flaws, the U.S. Constitution’s lack of foresight about the rise of political parties is one of the most significant. It’s amazing in retrospect that the “geniuses” behind it didn’t think that, say, having the Senate and the presidency divided between fiercely opposed factions might grind the government to a halt. Even though the Supreme Court was almost certainly not intended to be a super-legislature, it became that in part because the uniquely inefficient design of the American government often leaves no clear resolution to partisan disputes.

Following Marbury v. Madison, the Supreme Court took on an activist role it has never since let go of. It has weighed in on every issue from the Fugitive Slave Act (“it’s good, basically,” to paraphrase the court led by Jackson appointee Roger Taney) to Japanese internment (“also good,” to summarize the Harlan Stone court’s opinion in Korematsu v. United States), almost always on the wrong side morally. Why the consistently awful opinions?

Because the Supreme Court is a fundamentally conservative institution. It’s the least accountable branch of the federal government due to lifetime tenure, plus it’s consistently staffed by the bourgeoisie, since many presidents have preferred highly credentialed lawyers or former politicians as nominees. That’s a recipe for dominance by conservative white men, who can issue whatever opinions they like with virtually no fear of consequences.

The Supreme Court is, as such, a horrible cure for the disease of divided government: Whenever party disputes grind the participatory political system to a halt, the court steps in to give the reactionary perspective of the upper classes. Its conservatism is the product not only of its insularity from democratic society, but also of its links to two anti-democratic institutions whose power it reinforces: the Senate and the Electoral College.

These institutions over-represent rural white populations and have helped sustain conservatism in the U.S. despite immense social change. For most of its history, the U.S. has lacked anything resembling a liberal political party, in part because of the constraints created by the Senate and Electoral College. Accordingly, elections often go to conservative politicians who nominate and approve conservative judges.

The first somewhat liberal party to emerge was the Republican Party, founded in the 1850s to halt the expansion of slavery. However, it quickly ran up against the obstacle of an entrenched judiciary appointed by multiple Democratic and Whig presidents. In response, Abraham Lincoln temporarily packed the court to ensure a favorable majority at a time when more than half the country was committed to the cause of abolition via victory over the Confederacy. He took a step to ensure the judiciary was better aligned with the popular will and moment.

Following the presidency of Rutherford Hayes, the country sank back into almost complete conservatism, with a neoconfederate Democratic Party and a business-friendly Republican Party. The courts of this era paved the way for Lochnerism, the anti-regulatory, almost libertarian doctrine that prevailed until the 1930s, when President Franklin Roosevelt pushed back against judicial hostility to the New Deal. The Supreme Court finally relented after FDR drew up a plan to pack it with Democrats, who by this time were becoming much more progressive.

How the Supreme Court became one of the highest-stakes battlegrounds

Most presidents have won the popular vote at least once, and the Senate used to be slightly more equitable in the days before a handful of states like California, Texas, Florida, and New York (with a combined population larger than Germany’s) came to dominate the population distribution – although this was offset by the fact that until the 1910s the Senate wasn’t even directly elected. Over time, though, changes in political coalitions and demographics have enabled small pluralities or even minorities of the electorate to elect the politicians who in turn appoint life-tenured judges who sit on what is basically an unaccountable super-legislature.

That’s a disaster for democracy. The real turning point came in 1968. The 20 consecutive years of Democratic rule under FDR and Harry Truman had stacked the courts with liberal appointees, who in turn oversaw an anomalous streak of progressive rulings such as Brown v. Board of Education and Miranda v. Arizona. The traditionally conservative tide of the judiciary had ebbed, and you better believe that America’s reactionaries noticed. When it came time for Lyndon Johnson to replace Earl Warren, Senate Republicans filibustered Abe Fortas’s nomination as Chief Justice, leaving both him and Warren on the court to eventually be replaced by Richard Nixon, who had won 43.4 percent of the vote and promptly went on to nominate four Supreme Court justices, including future Chief Justice William Rehnquist.

Since 1969, there has not been a single day when a majority of the court’s justices had been appointed by Democratic presidents. The Supreme Court has remained a vanguard of conservative power even as Democrats have come to dominate presidential elections. With the ongoing polarization of the two major parties, the conditions have long been right for a court that is even more divorced from public opinion than it normally is, since even a narrow electoral victory – like Nixon’s in 1968, Bush’s in 2000, and Trump’s in 2016 – can now lead to a complete partisan transformation of the judiciary. The stakes of every presidential election have been dramatically raised as the Supreme Court has become less accountable, more activist (since Congress now neglects many of its traditional responsibilities in areas like immigration and trade), and – ironically – more burnished with the veneer of respectability, since many across the political spectrum see judges as uniquely credible and nonpartisan, despite all evidence to the contrary. Here’s what Thomas Jefferson had to say about judges after Marbury v. Madison [emphasis mine]:

“You seem to consider the judges as the ultimate arbiters of all constitutional questions; a very dangerous doctrine indeed, and one which would place us under the despotism of an oligarchy. Our judges are as honest as other men, and not more so. They have, with others, the same passions for party, for power, and the privilege of their corps…. Their power [is] the more dangerous as they are in office for life, and not responsible, as the other functionaries are, to the elective control. The Constitution has erected no such single tribunal, knowing that to whatever hands confided, with the corruptions of time and party, its members would become despots. It has more wisely made all the departments co-equal and co-sovereign within themselves.”

The Supreme Court has also held on to a semblance of legitimacy because its median vote for the past 50 years has been a country club-style Republican, whether Sandra Day O’Connor or Anthony Kennedy – justices who, while basically conservative, were willing to vote with the liberals in enough cases to keep both sides of the political spectrum somewhat content. Court-oriented liberal activism on issues such as LGBTQ+ rights has flourished in this period, as Congress has stagnated and overall governmental gridlock has worsened.

With Kennedy’s retirement, that careful balance between liberals, conservatives, and a handful of nominal swing votes is gone. We’re now on the verge of a president who won a smaller share of the popular vote than his opponent having appointed 2 of the 9 justices, while 2 more were appointed by George W. Bush, who also lost the popular vote. Moreover, these judges are all documented ideologues from the Federalist Society, a professional group of conservative lawyers committed to something called originalism.

Originalism purports to interpret the Constitution “strictly,” which in practice means “conservatively.” Why anyone in 2018 would want to interpret literally and narrowly a document containing a clause counting slaves as three-fifths of a person is beyond me, but the right wing, as well as large chunks of the media, seems to think that this is a more legitimate approach to the law than the “living constitutionalism” of liberal judges who account for practical changes in society since the 18th century. Originalism has given us Clarence Thomas, Samuel Alito, and Neil Gorsuch, among many others.

A court filled with such individuals will be hostile to progressive legislation, often bending over backward to find tortured reasons to weaken or overturn it. For example, Chief Justice John Roberts’s opinion in Shelby County v. Holder, the case nullifying large parts of the Voting Rights Act of 1965, is basically a rewrite of the logic in the Dred Scott v. Sandford decision, which was so hated even in the 1860s that it inspired multiple constitutional amendments (the 13th–15th). The Supreme Court’s origins as a racist institution are alive and well as long as the Federalist Society is dictating terms to GOP presidents.

Why court packing is the answer

Court packing by a Democratic administration in conjunction with a Democratic Congress could expand the Supreme Court to any size, canceling out the appointments made by minority-rule presidents. It’s the only way to rein in an institution that has become increasingly removed from democracy and accountability. If it results in the decline of judicial review as a tool, even better – despite a handful of good activist decisions (like Obergefell v. Hodges), the Supreme Court should not be deciding which laws are and aren’t good enough, often in contravention of the public.

A Supreme Court whose size can be changed at any time is one that is far more answerable to Congress, without any need for the impractical solution of impeachment, the nominal check on judges in the Constitution. I don’t think the Constitution is all that great as either a political or moral document, and its failings are a major reason why we’re in this situation of having to pray that octogenarian judges don’t retire and get replaced by 40-something neoconfederates. Short of amending the Constitution, court packing is the best solution and should absolutely be on the table for the next Democratic government.

Why I can’t recommend finasteride for hair loss

In the Bernard Shaw play “Caesar and Cleopatra,” Cleopatra whips up a baldness “cure” for Caesar consisting of a wild mix of ingredients, including burnt mice and horse’s teeth. Shaw himself notes that he doesn’t understand the ingredients, and of course the cure doesn’t work, because nothing does.

In the decades that have passed since the play and the millennia since the historical Caesar and Cleopatra lived, there hasn’t been much progress in treating androgenic alopecia, the most common form of baldness, especially compared to advances in, say, antibiotics. The 1990s saw the mass release of Rogaine and Propecia as well as marked improvements in hair transplantation surgery, but a Shavian cure is apparently still far off. None of the current remedies provides a definitive solution; at best, they hold the progression of balding at bay for a few years.

I became very interested in androgenic alopecia in the early 2010s when I noticed some thinning of my own hair. During my research, the insufficiency of the associated treatments struck me: Both Rogaine and Propecia have to be taken indefinitely to maintain their benefits, which are often subtle to begin with. Transplantation is expensive and often must be supplemented with Propecia. Other treatments are by and large outlandish and unproven.

The fundamental problem, as I see it, is that baldness requires both a scientific and an artistic solution. It’s not enough for the underlying scientific process to be sound in disrupting the mechanisms of androgenic alopecia; the results of the treatment must also be dramatic and cosmetically acceptable.

This double requirement is why hair transplantation results vary so much between surgeons, some of whom are good artists and others not. It’s also why many treatments that seemingly work in vitro – like cloning one’s hair – don’t carry over to the real world, since it’s difficult to ensure that the right size, color, and direction can be achieved in vivo. Baldness, at its core, is an artistic concern.

Unsurprisingly, given its unique difficulties, baldness has inspired a truly weird set of treatments:

  • A prescription-only pill that doubles as urinary retention medication for elderly men (Propecia).
  • A blood pressure medication that grows hair for reasons that are still not fully understood (Rogaine, originally known as Loniten).
  • A form of alternative medicine (low-level laser therapy).
  • Re-injection of one’s own processed blood into the scalp (platelet rich plasma).
  • Artistic rearrangement of follicles (transplantation).

Of these treatments, by far the most discussed and the most controversial is Propecia, also known by its chemical name, finasteride. It interrupts the conversion of testosterone to DHT, a more potent compound that attacks follicles in the genetically susceptible. That’s a pretty basic process to screw with, at least in males.

Nevertheless, reactions to finasteride are wide-ranging: some takers report horrible side effects such as permanent erectile dysfunction and depression, while others praise it as the best cosmetic medication available – a “happy pill.” Personally, as someone who took it for years, I think it falls somewhere in between.

Its side effects are considerable and run the gamut from the subtle (difficulty sleeping) to the overt (erectile dysfunction). The original clinical trials for its approval reported very low rates of side effects, which have been repeatedly held up as proof that the many people complaining about its adverse effects are lying. At the same time, its benefits are slight compared to other “lifestyle” drugs such as Accutane, which while boasting an even worse side effect profile can dramatically resolve cystic acne for good; in contrast, finasteride must be taken continuously just to preserve the status quo.

The experience of taking finasteride reminded me of taking antidepressants years ago. I remember feeling lousy on both medications and attributing my feelings to them not having kicked in yet. In reality, they were the sources of my problems, including loss of sex drive and weight gain. I only learned years later that their sexual side effects in particular had been vastly underestimated; my prescribing psychiatrist refused to believe they could have these effects, but later research has drawn similarities between the long-term health issues caused by SSRIs and those caused by finasteride, both of which have complex effects on the brain.

In any case, finasteride’s side effects piled up for me over time, culminating in higher blood pressure and substantial weight gain yet again. I quit cold turkey and felt the same liberation I had back in 2006 when I ditched antidepressants and entered a much better phase in my life.

Finasteride is not an essential medication, even for its other indication, benign prostatic hyperplasia. It doesn’t save lives. Its potential for side effects, especially over the long term, and the possibly wide extent of those effects on the central nervous system and the liver give me pause. I don’t trust it anymore, and so I won’t be taking it again. I wouldn’t recommend it to anyone unless he was truly desperate, as apparently I was years ago when I started.

Stopping it has freed me from fretting so much about my hair, a concern whose hold on me I didn’t even appreciate until I finally let it go. I still do a few minor things to keep it styled and looking healthy, but if it goes, so what? I feel like hair anxiety is such a 20-something thing and, moreover, such a straight thing – so many message board posts about balding are about “oh, women won’t like me anymore once I’m bald.” This doesn’t apply to me, obviously, as a married gay man in his 30s. I don’t want to be chasing my youth instead of simply accepting aging and being grateful for an ongoing healthy life.

The Fragility of Video Games as Art

Years ago, I joined the conversation about whether video games constitute “art.” The late Roger Ebert spawned a thousand hot takes by refusing to classify them as such, arguing that their winnability set them apart from classical art forms, which cannot be won or lost, only experienced. I wrote this on the subject almost five years ago:

“Classic [Nintendo Entertainment System, hereafter “NES”] and [Super Nintendo Entertainment System, hereafter “SNES”] games are nowadays mostly playable only via emulation. Imagine if you could only watch The Thief of Bagdad or The Birth of a Nation by “emulating” (or actually using!) an early 20th century projector and screen. Of course, that isn’t the case – you can watch either one on any device that has Netflix on it. Similarly, imagine if the works of Shakespeare could only be read on 17th century folio paper and were essentially illegible in anything printed after that time. Such a reality would be absurd, but it’s basically the issue that plagues video games: their greatness, with precious few exceptions, isn’t transferable across eras.”

If you are not a frequent gamer, allow me to take a step back and walk us through what either of us would need to do in order to play, say, Excitebike, a game that launched alongside the NES in 1985. I basically have three options, which I will present in descending order of fidelity:

  1. Play the game from a physical cartridge on either an original NES or one of the systems it was ported to, such as the Game Boy Advance.
  2. Play it from the NES Classic, an official Nintendo product launched in 2016 with 30 built-in games remastered for HDTVs.
  3. Emulate it using specialized software on a PC/Mac (a hassle if you aren’t technically minded) or within a web browser, both of which are legally dubious.

None of these options is ideal if you are accustomed to the seamless on-demand experience of video/audio streaming and digital books in particular. And believe it or not, Excitebike is probably a relatively easy game to dust off, since it a) was released before the era of online gaming and downloadable content and b) is maintained by Nintendo, one of the world’s most historically conscious and nostalgic companies. Many games will not hold up as well.


As I see it, there are at least three major obstacles to the preservation of video games as art:

1. Disappearance of specialized hardware

Most games are designed to exploit the particular hardware of a given system. Super Mario 64 was constructed around the Nintendo 64’s distinctive analog stick, while GoldenEye 007 forever altered video game control schemes through its use of the trigger-like Z button on the same console. The Wii is home to countless games requiring motion controls, including its pack-in, Wii Sports, which is the best-selling console game of all time. Smartphone/tablet games are no different, with controls incorporating taps, swipes, and other gestures.

What happens when all this hardware is no longer readily available? We already know the answer, given the enormous demand that has chased the limited supply of NES Classic and SNES Classic consoles that bundle their respective titles into ready-to-play hardware. People will likely not play or experience those games anymore, unless they have a really convenient option for doing so (and DIY emulation doesn’t count).

Games that are emulated or ported to other platforms lose some of their original design, in a way that a book, painting, album, or movie does not. For example, if I play Excitebike on my computer with a keyboard and infinite save states, that’s a very different experience from playing it on an original NES. In comparison, the differences between watching Citizen Kane on my phone and in an arthouse cinema seem minor.

2. Online functionality

Online gaming took center stage beginning in the late 1990s, with consoles such as the Sega Dreamcast and Microsoft Xbox incorporating internet connectivity right out of the box (previous systems had required various aftermarket peripherals). The spread of broadband internet further fueled the rise of franchises that not only had online multiplayer functionality, but in some cases had nothing but that (the massively popular Destiny 2 is online-only, for example).

Of course, a sustainable online-only or online-mostly game requires a healthy community. Some games, such as World of Warcraft, have sustained their fanbases for years, while others have shut their doors after interest waned, leaving them impossible for posterity to experience.

Nintendo offers some prime examples of the tenuous nature of online games. Its Nintendo Wi-Fi Connection service, which powered many games on both the Wii and the DS, shut down in 2014 because it had been hosted on third-party servers that were acquired in a merger. No one can go online anymore in Advance Wars: Days of Ruin or any other title reliant on the Wi-Fi Connection platform. Similarly, the company recently shut down Miiverse, leaving the lobby of the online shooter Splatoon weirdly vacant; it had previously been populated by virtual characters who, if you approached them, presented drawings made by players and saved to Miiverse servers.

3. Software updates

This flaw is not one I considered in my 2013 post, but I now think it may be the most significant of the three. To understand why, we first have to ask: Why even bother with game consoles in the first place?

A console is basically a shortcut. Instead of having to build your own gaming PC or purchase a super high-end mobile device and keep upgrading it every few years, you can purchase a standardized piece of hardware that will be good for at least 5–7 years before a successor is released. Plus, you can rest assured that any title released for the system will work on the hardware you purchased.

Consoles were once super distinct from PCs, since they had essentially no user-facing operating system. You couldn’t dig into their data management setups, change their network connections, or do anything you take for granted on other platforms, since they didn’t have any such features.

That began to change when consoles became internet-enabled and gained media playback capabilities, with the DVD-playing PlayStation 2 and Ethernet-equipped Xbox perhaps the first real inflection points. Today’s games often require enormous patches or updates to remain playable and secure, as do the system OSes they run on.

Updates are a particular weakness for phone/tablet games. Consider the iPhone: Every single year, it receives multiple new models, with fresh software APIs, updated chips, different screen resolutions/sizes, etc. Like clockwork, the presenters at the Apple keynotes talk about how these new features will make the device “console-level.” Yet iOS and Android are still more synonymous with free-to-play gambling games, which account for an enormous share of platform revenue, than with more in-depth gameplay. Why?

I think the endless upgrade cycle is partly to blame. One iOS game developer decided to leave the App Store altogether recently, saying (emphasis mine):

“This year we spent a lot of time updating our old mobile games, to make them run properly on new OS versions, new resolutions, and whatever new things that were introduced which broke our games on iPhones and iPads around the world. We’ve put months of work into this, because, well, we care that our games live on, and we want you to be able to keep playing your games. Had we known back in 2010 that we would be updating our games seven years later, we would have shook our heads in disbelief.”

There’s simply no guarantee that a game developed for any mobile platform will run even a few years later without proactive updates to save it from obsolescence. This issue doesn’t exist as much on consoles (since they are designed to be fixed systems with long lifespans), and especially not on older consoles. I can put a cartridge in a 1998 Game Boy and, barring any electrical or technical issues, be certain it will load and play as intended. I can’t say the same about an iOS game that hasn’t been updated since 2016.

The future of gaming history

The software update issue was raised by a blogger, Lukas Mathis, in a post about the wrongness of various other tech bloggers’ predictions about Nintendo. Between approximately 2011 and 2016, it was very fashionable to proclaim that Nintendo was failing and headed the way of Sega, i.e., toward being a software developer for other people’s hardware, instead of a hardware maker in its own right (Sega exited the console business in 2001, only ten years after its sweeping success with the Sega Genesis). A few choice quotes (all emphasis mine):

John Gruber in 2013, in a post comparing Nintendo to BlackBerry: “No one is arguing that 3DS sales haven’t been OK, but they’re certainly not great…Here is what I’d like to see Nintendo do. Make two great games for iOS (iPhone-only if necessary, but universal iPhone/iPad if it works with the concept). Not ports of existing 3DS or Wii games, but two brand new games designed from the ground up with iOS’s touchscreen, accelerometer, (cameras?), and lack of D-pad/action buttons in mind. (“Mario Kart Touch” would be my suggestion; I’d buy that sight unseen.) Put the same amount of effort into these games that Nintendo does for their Wii and 3DS games. When they’re ready, promote the hell out of them. Steal Steve Jobs’s angle and position them not as in any way giving up on their own platforms but as some much-needed ice water for people in hell. Sell them for $14.99 or maybe even $19.99.”

MG Siegler that same year: “I just don’t see how Nintendo stays in the hardware business. … I just wonder how long it will take the very proud Nintendo to license out their games.”

Marco Arment, responding to Siegler: “I don’t think Nintendo has a bright future. I see them staying in the shrinking hardware business until the bitter end, and then becoming roughly like Sega today: a shell of the former company, probably acquired for relatively little by someone big, endlessly whoring out their old franchises in mostly mediocre games that will leave their old fans longing for the good old days.”

There’s endlessly more material like these pronouncements, all of it built on several (in my opinion flawed) assumptions about the future of gaming: first, that it will from now on be irreversibly dominated by buttonless pieces of glass (i.e., phone and tablet screens) and the race-to-the-bottom pricing they encourage; second, that gaming-specific hardware eventually won’t matter, since everything will be done on general-purpose computing devices; and third, that developers like Nintendo can build sustainable businesses selling high-quality games for $20 or less, despite the enormous resources required to make something as daring as Super Mario Odyssey.

If the assumptions are correct, there seems little prospect of even today’s most famous games being preserved as “art,” since they’ll have to be endlessly redeveloped and remonetized to be sustainable. But what if the assumptions aren’t correct? What if mobile no more cannibalizes consoles than PCs did in the 1990s?

The punchline to those quotes is that Nintendo ended up selling 70 million 3DSes (almost on par with the PlayStation 4 at the end of 2017) and saw the Switch have the best first-year sales of any home console in U.S. history. It accomplished all of that while keeping online functionality and software updates relatively minimal in its first-party titles and going all-in on the bizarre, distinctive hardware of the Switch.


It’s hard to describe what the Switch does if you don’t own one. It’s essentially a console that works like any other, hooked up to a TV, but that can also be picked up and taken with you without any degradation in picture or play quality. It has a touchscreen tablet that can be combined with two hardware controllers with numerous buttons and joysticks (they slot onto the sides of the tablet), or simply used on its own as a Hulu Plus media player.

My first encounter with the Switch had me going back to my phone and thinking of the latter, “this feels old.” Perhaps tapping on a phone screen isn’t the “end of history” of video gaming it has sometimes been presented as; maybe there’s a place for more sophisticated hardware after all. I hope so, since the production and preservation of such systems will be crucial if we are to ever have a real “art history” of video gaming.

The Battle of the Books

[Note: I’m going through my enormous “drafts” folder and seeing if I can salvage any of the posts without changing their titles or opening lines. This is my first try.]

Every generation has its battle between, on one hand, those who pine for the “old days” and, on the other, proponents of progress who inevitably think better things are preordained. I once probably found the former camp more irritating, due to their hollow affection for activities – like hanging out in a Wal-Mart parking lot or going after much younger romantic obsessions – they’ve outgrown; they make the past appear like baby clothes: impossible to fit back into, but not impossible to recycle on someone else or hold up in reverie. Maybe, with the immense powers of the empty brain, they can make bygones keep happening.

But the progress camp – purveyors of “optimism porn,” as someone on Twitter once quipped about Harvard professor Steven Pinker – have made a strong run of their own in the annoyance dept. For the unfamiliar, optimism porn is all about context; it thrives on Twitter in particular as a rejoinder to (very accurate) tweets bemoaning wealth inequality, racial injustice, and warmongering. “Hey, look at these charts showing there have been fewer wars since 1945!” Yes, that’s a form of progress, but it might also be a historic anomaly, sustained only by norms around nuclear missiles, as Dan Carlin noted in a gripping podcast episode about the history of weapons of mass destruction.

Years ago, I entitled this post “The Battle of the Books” in hopes of discussing Jonathan Swift’s work of the same name, which features a debate between the Ancients and Moderns, each represented by equally fussy books in the St. James Library; hence my own much clumsier attempt to juxtapose the “glory days” crowd with the techno-utopians. The piece focuses on how each camp thinks its particular era is the golden age of arts and letters. They’re allegorized by a spider (Moderns) and a bee (Ancients) who debate each other, before the actual authors of each era (everyone from Homer to Hobbes) engage in actual violent combat.

While short, this satirical piece is, in my view, among the tightest and most quotable works of prose in English. It leads with a stunning self-referential opening line [all emphasis throughout is mine] – “Satire is a sort of glass wherein beholders do generally discover everybody’s face but their own” – and never relents.

The quip “anger and fury, though they add strength to the sinews of the body, yet are found to relax those of the mind” comes to mind equally during vigorous exercise and during the frustrating, angry exchanges of email and other internet-connected tools that do nothing for the body while sending the mind into a tailspin.

This segment reminds me of Elizabethan language about daggers and spears, but in my opinion supersedes Shakespeare et al. in the nuance it conveys about how writing has both an empowering and destructive effect on its most talented executors: “[I]nk is the great missive weapon in all battles of the learned, which, conveyed through a sort of engine called a quill, infinite numbers of these are darted at the enemy by the valiant on each side, with equal skill and violence, as if it were an engagement of porcupines. This malignant liquor was compounded, by the engineer who invented it, of two ingredients, which are, gall and copperas; by its bitterness and venom to suit, in some degree, as well as to foment, the genius of the combatants.”

He then progresses to talk about the unbearable process of insisting your argument is better than anyone else’s, but notes that even the most definitive “trophies” of literary achievement ultimately become artifacts of controversy, to be potentially dissolved by later debates, like the groups I mentioned earlier who are ever looking forward: “These trophies have largely inscribed on them the merits of the cause; a full impartial account of such a Battle, and how the victory fell clearly to the party that set them up. They are known to the world under several names; as disputes, arguments, rejoinders, brief considerations, answers, replies, remarks, reflections, objections, confutations. For a very few days they are fixed up all in public places, either by themselves or their representatives, for passengers to gaze at; whence the chiefest and largest are removed to certain magazines they call libraries, there to remain in a quarter purposely assigned them, and thenceforth begin to be called books of controversy. In these books is wonderfully instilled and preserved the spirit of each warrior while he is alive; and after his death his soul transmigrates thither to inform them.”

This is exquisite commentary on the ever-living characteristics of books: “a restless spirit haunts over every book, till dust or worms have seized upon it.”

On the high ambitions but limited abilities of the Moderns; sounds like this could have been penned about proponents of perpetually underwhelming tech like virtual reality and autonomous cars: “for, being light-headed, they have, in speculation, a wonderful agility, and conceive nothing too high for them to mount, but, in reducing to practice, discover a mighty pressure about their posteriors and their heels.”

Swift also effortlessly shifts to some of the best speculative writing I’ve encountered, on par with if not better than what he pulled off in “Gulliver’s Travels.” Witness this passage about a spider and a bee: “The avenues to his castle were guarded with turnpikes and palisadoes, all after the modern way of fortification. After you had passed several courts you came to the centre, wherein you might behold the constable himself in his own lodgings, which had windows fronting to each avenue, and ports to sally out upon all occasions of prey or defence. In this mansion he had for some time dwelt in peace and plenty, without danger to his person by swallows from above, or to his palace by brooms from below; when it was the pleasure of fortune to conduct thither a wandering bee, to whose curiosity a broken pane in the glass had discovered itself, and in he went, where, expatiating a while, he at last happened to alight upon one of the outward walls of the spider’s citadel; which, yielding to the unequal weight, sunk down to the very foundation.”

A highly recognizable critique of filibustering senators and “contrarians” of all sorts who like nothing more than argument itself, undercutting the very “trophies” they were earlier described as erecting: “At this the spider, having swelled himself into the size and posture of a disputant, began his argument in the true spirit of controversy, with resolution to be heartily scurrilous and angry, to urge on his own reasons without the least regard to the answers or objections of his opposite, and fully predetermined in his mind against all conviction.”

The spider poetically describes a bee: “[B]orn to no possession of your own, but a pair of wings and a drone-pipe. Your livelihood is a universal plunder upon nature; a freebooter over fields and gardens; and, for the sake of stealing, will rob a nettle as easily as a violet.”

More on the transience of literary achievement and fame, of trophies that can easily fade: “Erect your schemes with as much method and skill as you please; yet, if the materials be nothing but dirt, spun out of your own entrails (the guts of modern brains), the edifice will conclude at last in a cobweb; the duration of which, like that of other spiders’ webs, may be imputed to their being forgotten, or neglected, or hid in a corner.”

On what the Ancients see in the itinerant art of the bee, which behaves like a poet searching for magical inspiration but knowing that legwork (literally, in this case) is necessary: “As for us, the Ancients, we are content with the bee, to pretend to nothing of our own beyond our wings and our voice: that is to say, our flights and our language. For the rest, whatever we have got has been by infinite labour and search, and ranging through every corner of nature; the difference is, that, instead of dirt and poison, we have rather chosen to fill our hives with honey and wax; thus furnishing mankind with the two noblest of things, which are sweetness and light.”

Setting the table with cosmic implications: “Jove, in great concern, convokes a council in the Milky Way. The senate assembled, he declares the occasion of convening them; a bloody battle just impendent between two mighty armies of ancient and modern creatures, called books, wherein the celestial interest was but too deeply concerned.”

A fantastical personification of criticism as a vicious and ill-informed goddess: “Meanwhile Momus, fearing the worst, and calling to mind an ancient prophecy which bore no very good face to his children the Moderns, bent his flight to the region of a malignant deity called Criticism. She dwelt on the top of a snowy mountain in Nova Zembla; there Momus found her extended in her den, upon the spoils of numberless volumes, half devoured. At her right hand sat Ignorance, her father and husband, blind with age; at her left, Pride, her mother, dressing her up in the scraps of paper herself had torn. There was Opinion, her sister, light of foot, hoodwinked, and head-strong, yet giddy and perpetually turning. About her played her children, Noise and Impudence, Dulness and Vanity, Positiveness, Pedantry, and Ill-manners. The goddess herself had claws like a cat; her head, and ears, and voice resembled those of an ass; her teeth fallen out before, her eyes turned inward, as if she looked only upon herself; her diet was the overflowing of her own gall; her spleen was so large as to stand prominent, like a dug of the first rate; nor wanted excrescences in form of teats, at which a crew of ugly monsters were greedily sucking; and, what is wonderful to conceive, the bulk of spleen increased faster than the sucking could diminish it.”

The best critique of “grammar hounds” and anyone else more obsessed with technical features than with clear meaning: “[B]y me beaux become politicians, and schoolboys judges of philosophy; by me sophisters debate and conclude upon the depths of knowledge; and coffee-house wits, instinct by me, can correct an author’s style, and display his minutest errors, without understanding a syllable of his matter or his language; by me striplings spend their judgment, as they do their estate, before it comes into their hands. It is I who have deposed wit and knowledge from their empire over poetry, and advanced myself in their stead. And shall a few upstart Ancients dare to oppose me?”

A thrilling description of Criticism influencing the discourse, with an especially striking line about “now desert” bookshelves: “The goddess and her train, having mounted the chariot, which was drawn by tame geese, flew over infinite regions, shedding her influence in due places, till at length she arrived at her beloved island of Britain; but in hovering over its metropolis, what blessings did she not let fall upon her seminaries of Gresham and Covent-garden! And now she reached the fatal plain of St. James’s library, at what time the two armies were upon the point to engage; where, entering with all her caravan unseen, and landing upon a case of shelves, now desert, but once inhabited by a colony of virtuosos, she stayed awhile to observe the posture of both armies.”

Even amid the verbal pyrotechnics, Swift finds time to be unforgettably funny: “Then Aristotle, observing Bacon advance with a furious mien, drew his bow to the head, and let fly his arrow, which missed the valiant Modern and went whizzing over his head; but Descartes it hit; the steel point quickly found a defect in his head-piece; it pierced the leather and the pasteboard, and went in at his right eye. The torture of the pain whirled the valiant bow-man round till death, like a star of superior influence, drew him into his own vortex.”

Even better, about Virgil struggling with an ill-fitting helmet and appealing to Dryden for help: “The brave Ancient suddenly started, as one possessed with surprise and disappointment together; for the helmet was nine times too large for the head, which appeared situate far in the hinder part, even like the lady in a lobster, or like a mouse under a canopy of state, or like a shrivelled beau from within the penthouse of a modern periwig; and the voice was suited to the visage, sounding weak and remote.”


A memorable closing line to pair with the opening: “Farewell, beloved, loving pair; few equals have you left behind…”