In 2000, Brent DiCrescenzo of Pitchfork wrote one of the most infamous album reviews that still has a live URL on the internet. Assessing Radiohead’s “Kid A,” he straddled a line between the faux-literary (“The butterscotch lamps along the walls of the tight city square bled upward into the cobalt sky, which seemed as strikingly artificial and perfect as a wizard’s cap.”) and the musically incoherent (“Comparing this to other albums is like comparing an aquarium to blue construction paper.”), while tossing in some vague ethnic stereotypes (“The Italians surrounding me held their breath in communion (save for the drunken few shouting “Criep!”)”) and useless similes (“The primal, brooding guitar attack of “Optimistic” stomps like mating Tyrannosaurs.”), too. It’s a textbook example of the limits of popular music criticism.
Why is it so limited? For starters, it is heavily reliant on adjectives. Narrating what actually happens on any song – e.g., “The song opens in G Major and uses the following chords and key changes: [lists them all]” – would lose a lot of readers and likely stretch the review to a length that, along with the technical subject matter, would tank the page view stats of a site like Pitchfork (or Resident Advisor, or Tiny Mix Tapes, or any other music-centric website). So instead we get lots of adjectives: guitar solos are described as “fluid,” electronic instrumentation as “soundscapes,” and songs themselves as “airy” or “breathless” or “chugging.” These words usually make sense to me in context – like, I can see how a guitar solo might progress such that it seemed “fluid” – but they are somewhat removed from what’s actually going on. Modern music criticism would confound expectations if it began talking about what musicians were doing on their records – playing X, singing Y – and so it often resorts to elaborate descriptions, as well as the protracted narrative frames and cross-references to other pop culture that DiCrescenzo couldn’t avoid.
In contrast, book criticism cannot usually afford such ornate digressions. Any review of a book will naturally grapple with plot details and the author’s particular style, making it oddly both bread-and-butter and academic in comparison, without any of the criticism-as-art-itself that many reviews turn into (indeed, it’s hard now to read DiCrescenzo’s review outside the context of Pitchfork’s larger culture of “artsy” music reviews that were only minimally concerned with the records in question, and instead focused on building Pitchfork’s distinctive brand during the early days of the web, when other music criticism sites were extremely barebones and newspaper-like).
Somewhere in between the extremes of music criticism and book criticism is the muddled middle of film criticism, which I’ll define as criticism of both movies and TV. Film critics inevitably must recite what happens on screen, similar to how a book critic can’t escape divulging some plot details; but they also frequently fall into the same rabbit hole that troubled DiCrescenzo, leaning on nebulous adjectives such as “languid” or “swoony” to describe a film’s appearance, or resorting to cliches about self-evident choices, such as the plot being “fast-paced.”
My theory is that the easier a medium is to consume, the more given it is to adjective-centric criticism:
- Books cannot be multitasked and can take days, months, or even years to finish.
- A movie can be watched in a single sitting, but will usually take at least an hour to finish; a TV show requires even less exertion, and is often a second screen to the viewer’s phone/laptop.
- An album can be listened to in under an hour, plus it can be consumed “out of order” in a way that a book or film cannot; it is almost meant for multitasking, as the soundtrack for nearly anything.
Book critics have to focus on the plot because they cannot assume that anyone has read the book. Meanwhile, music critics can be flashy since they are often speaking to people who have already listened to what they’re reviewing (and thus know the “plot,” as it were, of the album or song). The music critic’s task becomes not so much to provide guidance on whether the album or song should be consumed at all (as is the case with book criticism) but instead to tell the reader what cultural pigeonhole it fits into and whether it is OK to like it at all.
In this respect, music criticism is highly identitarian. DiCrescenzo’s review was a forerunner of the endless paeans to Beyoncé that barely engage with the songs at hand but instead try to situate the subject as something beyond the possibility of different viewpoints: Liking it is Right, disliking it is Wrong. His “Kid A” tract labeled all other albums in music history as blue construction paper. Similarly, The Guardian’s review of “Lemonade” incoherently described the songs (“The songs, though, are not just prurient catnip, but actual dynamite”) and railed against an illusory set of doubters or would-be competitors (“Cynics will cry foul, that Beyoncé remains an entitled superstar, raging at a paper tiger. Those cynics will be ignoring one of this year’s finest albums.”), recalling DiCrescenzo’s weird aquarium quip.
Meanwhile, film critics act like they are dealing with a medium as elitist and as private as the book, but in reality they are critiquing works that are more akin to music in terms of their publicity and ease of consumption. At the same time, they have to work within significant accumulated institutional cruft – the Oscars, “prestige TV,” the “golden era of TV,” the Cannes Film Festival (and its many derivatives), HBO (and especially “Game of Thrones”), Netflix originals, the insane desire for critical validation of once-scorned superhero movies – that is really like some of the worst vestiges of the book criticism realm, for example the notion of a definable “Western canon” that must be defended by critics like Harold Bloom.
But film is not like print. Here’s what I mean: an obscure film is more approachable than a well-known book; for example, anyone could see even a marginal piece of queer cinema with less effort than it would take to plow through either the widely known Infinite Jest or The Decline and Fall of the Roman Empire. To really feel the relative difficulty of consuming any book, consider the case of The Satanic Verses by Salman Rushdie.
It is likely the most controversial book of the last century, earning a death threat for its author from the leader of Iran, visibly straining relations between Iran and the United Kingdom, and resulting in the murder of its Japanese translator and attacks on others. But how many people have ever actually discussed the content of the book? The fact that it is written in a dense, Joycean style that makes even the first pages hard to get through? How its controversial material occurs in a dream sequence?
The original New York Times review of it is instructive, for both its clear descriptions of plot and its acknowledgement of the divide between the book’s vast reputation and its meager readership:
“The book moves with Gibreel and Chamcha from their past lives in Bombay to London, and back to Bombay again. For Gibreel, there is many an imaginary journey on the way – most notably to a city of sand called Jahilia (for ignorance), where a very decent, embattled businessman-turned-prophet by the name of Mahound is rising to prominence…
[M]uch of the outrage has been fueled by hearsay. Some of the noisiest objections have been raised by people who have never read the book and have no intention of ever reading it…
It is Mr. Rushdie’s wide-ranging power of assimilation and imaginative boldness that make his work so different from that of other well-known Indian novelists, such as R. K. Narayan, and the exuberance of his comic gift that distinguishes his writing from that of V. S. Naipaul.”
The Satanic Verses is virtually “hot take”-proof, since even the effort required to blow through it and write a quick blog post about “Here’s What Salman Rushdie Doesn’t Get About Islam” or “Why Bernie Bros Have Been Praising ‘The Satanic Verses’ This Week” is too much for most writers. But if “The Satanic Verses” were a film, everyone would have seen it, given its reputation, and the takes would be endless.
To get a sense of how limited the scope of book criticism is within pop culture, consider the common Twitter joke of responding to anyone comparing anything to “Harry Potter” by simply saying “read another book.” There is no work of fiction that has such a tight hold on the imagination, but there are numerous films – “Star Wars,” “Jurassic Park,” “The Godfather,” etc. – that serve similar roles for understanding events.
It is precisely this ubiquity of major films that makes film criticism much more reactionary than either book or music criticism. A scathing book review is only mildly rewarding for its writer because the audience for any book is so relatively small; plus, the intensely private experience of reading – setting your own pace, especially – means that each person’s opinion of a book is better insulated against contrarianism and reassessment than a similar opinion of a movie or TV show. Music reviews are after-the-fact and must contend with the strong identitarianism of music taste (e.g., “am I still in good standing with [x] community if I like [y artist’s music]?”).
But film is often consumed in public (at a theater) or socially (in a living room), and so there is more incentive to signal to others that they have the Right or Wrong opinions about it. The massive coverage of the Oscars (and the myriad issues about the backgrounds of who got nominated) and the enormous budgets of film studios and streaming services also mean that film critics have unique incentives to engage intensely with the conventional wisdom on any work. Inevitably, a lot of this engagement ends up reading like an angrier version of DiCrescenzo’s “Kid A” novella.
Take this Slate piece on “Lion,” which is only intermittently about the movie, but mixes in lots of personal backstory as well as a milder aquarium/blue construction paper contrast. It goes from an odd concept of how gentrification actually happens (“If I had a nickel for every time someone asked me where my real parents were or if I intended to go back home, I could gentrify the Chinese province I was born in.”) to a riff on the “putting unrealistic words in a kid’s mouth” trope (“Even at that age, people would ask me if I knew my “real” family, and, if not, when I planned on meeting up with them at Starbucks.”), to undergraduate term paper-ese (“Our collective and shared understanding of identity continues to grow more and more complex, nuanced, and perhaps less grounded in traditional notions of what our “self” is. 2016 feels like one of the most crucial years for art in the context of artists from marginalized backgrounds asserting their voices—not asking to be “understood” per se but to be respected for the nuances of and intricacies of their identities.”).
The same author also posted to his Twitter feed an interaction with someone else (to whom I’ll refer in the transcription below as “B” to his “A”) excoriating the same movie:
B: “I guess the last decent film I saw was…Lion? The one with Dev Patel. I thought it was good…could have been a little more emotionally powerful but they did a good job.”
A: “I hate that movie. I think it’s garbage and deeply reductive and offensive. A maudlin little tale for white tears. Lion is like deeply terrible.”
B: “It was sort of based off a true story though, which was cool, but yeah they could’ve done a better job.”
A: “Something based on a true story doesn’t change the manipulative techniques the story uses. It perpetuates a really annoying, very white narrative that the families of adopted children don’t count as real, that the core identity of adoptees is based on a biological imperative. It is across the board garbage.”
B: “Right, stories are pure manipulation though. The dude made sure his mother knew that she was his mom, the one that took care of him all those years. Wouldn’t one want to understand where he or she biologically came from? Whether that be an adopted child or the child of immigrants that met in America. There’s so much underlying psychology that comes with your blood. It would be advantageous to know your nature as well as how you were nurtured.”
A: “That’s such a drearily lazy argument. Anyone can say that, anyone can make a straw man argument and deflect actual engagement with a cultural text. Of course art is manipulative, if your base understanding of manipulation (in art) is to make the audience do anything. But art can engender and invoke feelings in an audience and exist in complexity. ‘Make you cry’ is not necessarily emotional complexity, and while not all films may necessarily call for that, Lion specifically moors itself in low-key racist tropes and has a fundamental disinterest in the nuances of adopted identity. It reduces the identity of an adopted person, and what constitutes family, to a one-dimensional thing, without bothering to explore the political and personal implications of trans-racial/cross-cultural adoption. It offensively relies on versions of adoptee and racialized identity that are superficial, that are without depth. Patel isn’t a character so much as he is a MacGuffin, moving the plot along from point A to point B, unconcerned with the ambiguities. Lion says that in order to be, as an adopted person, a person, you need to find your ‘real’ family, that only your biological family counts as who you are, completely ignoring the way that environment and upbringing and socialization within whiteness has/has not shaped him as a person.”
B: “Right, I get what you’re saying. But I’m saying that at its base, ‘Lion’ is about a boy who got lost, accidentally got adopted, and eventually tried to find his way back home. With epic cinematography along the way.”
To me, the telling part of this exchange is how “A” (the critic), after opening weakly with declarations about “Lion” being “garbage,” completely loses his footing after “B” says “There’s so much underlying psychology that comes with your blood,” capturing the nature/nurture divide in what amounts to a latter-day hippie-like aphorism. Everything in “A”’s response from “That’s such a drearily lazy…” to “tropes” is word salad, although he regains his composure a bit with his critique of identity.
His scorn for Patel being a plot device (if you’re unfamiliar with the term, a MacGuffin is a goal or object that moves a plot along with or without any accompanying narrative exposition) and his mocking (in one of the actual tweets; not in the above transcript) of the “epic cinematography” remark are also revealing. The mechanics of the movie – how its plot works, what it looks like – are entirely subordinated to riffing about its identity politics, which is somewhat incoherent, since the critic wants race to be non-determinative for adoptees, but not for “white” people (the word “white” is doing a vast amount of unexplained work in that exchange; it is not so much a word as a MacGuffin, moving the screed from point A to point B).
The entire rant reminded me of a seemingly endless stretch back in the mid-2000s when I was in college, when a friend would go on each Saturday morning to our brunch group about “Little Miss Sunshine,” bemoaning its prestige at the Oscars. There is really no equivalent to this behavior among book or music critics, since both fields are so atomized compared to film, which continues to have a much more centralized academy of critics, producers, directors, etc. What book would an angry book reviewer rail against in casual conversation (other than “Harry Potter,” which has almost exhausted the possibilities on this front, especially with the backlash to J.K. Rowling’s politics providing a delicious new reading of the series)? What album could attract such intense diatribes in a public forum?
Film critics, from Roger Ebert to Pauline Kael to our writer above, are reactionaries because the specter of the “wrong” type of art gaining prestige and adoration is so much more prominent than it is in the book or music spheres. A movie that a critic dislikes getting feted at the Oscars, or receiving an ovation at the end of a screening (à la “Star Wars”), must engender a feeling similar to what a Republican voter gets seeing “Hollywood celebs” on TV or thinking about a “liberal” enjoying same-sex relations or marijuana: derision, motivating a desire to correct the record. This tendency even seeps into the work of coherent writers like NYT film critic Wesley Morris, who used to complain about ill-defined “elitists” (a central term of conservative discourse) who didn’t appreciate popular film. The unbearableness of so much film criticism is why I agree with Noah Smith that cinema is a dying art with diminishing public relevance, in part because its critical institutions are such a mess.
The famous William Faulkner quote about “the past not even being past” has staying power not only because it contests the idea that time is a one-dimensional line that moves “forward,” but also because it reveals how ancient decisions shape our lives even to the current second. Most people alive today weren’t even born when Margaret Thatcher became Prime Minister of the United Kingdom in the 1979 general election. She died years ago. But her ideas are very much alive.
Thatcher ended decades of postwar consensus that had seen the rise of social welfare systems across Europe and North America. Her zeal for high defense spending, low taxes, and less regulation kept Labour out of power for a generation while providing a blueprint for the ascendance of Ronald Reagan – who would take power less than two years later – across the Atlantic.
Many of us have no recollection of a time when it wasn’t assumed that everything had to be run like a business, in a “competitive” environment in which everyone is on her own and “the government” is some dark entity that must be reduced, instead of the people and institutions that make life bearable. Sure, these ideas had long gestated among the economists of the morally bankrupt Chicago School (mainly Milton Friedman), but Thatcher turned their academic papers into reality, crushing the miners’ unions and setting off a prolonged run of privatization and deregulation.
Even the distinctive brand of military adventurism that has fascinated Western governments and cable news channels since the Gulf War is derived from Thatcher’s decision to fight with Argentina over the Falklands. Almost all military campaigns since then – from Grenada to the Iraq War – have followed the same lead of confronting a clearly outmatched foe, to achieve morally and/or strategically dubious aims.
Although both the U.K. and the U.S. have had small-‘l’ liberal governments post-1979 – Blair and Brown in Britain, Clinton and Obama in America – the truth is that the Thatcher consensus has gone largely unchallenged. The centrism of Blair and the rebranding of Clinton’s party as “New Democrats” were signals of how they operated as much within the Thatcher/Reagan mold as Eisenhower had within the constraints of the then-dominant New Deal regime. Blair’s affinity for military adventures in the Balkans and Iraq and Clinton’s willingness to pursue “welfare” “reform” were both ripped straight from the small-‘c’ conservative playbook. It’s no accident that Thatcher herself identified New Labour – with its scrubbed mentions of national ownership of industry in its party constitution – as her greatest achievement.
The two countries have followed similar paths for the last 40 years. Both Thatcher and Reagan decisively won all their general elections and then handed the reins to their competent but less charismatic successors, John Major and George H.W. Bush, respectively. Those two continued in a similar but slightly more moderate vein, only to lose in landslides in the 1990s to candidates (Blair and Clinton, respectively) from revamped center-left parties (i.e., Labour and the Democrats), institutions that would have been unrecognizable to party rank-and-file in the 1970s.
While both Blair and Clinton were electoral juggernauts, they both had much less success than either Thatcher or Reagan in laying the groundwork for their successors. Blair resigned and helped the unpopular Gordon Brown become prime minister; Brown lasted less than three years, losing power to a Tory-LibDem coalition in 2010. Clinton’s efforts could not get his VP Al Gore or his wife Hillary Clinton over the finish line. Like Brown, they both lacked the political acumen or popularity to stop the reactionaries who narrowly defeated them (Cameron in the U.K., Bush 43 and Trump in the U.S.). The only point at which the two histories diverge is with Obama, but he largely governed within the standard Reagan model, with sprinkles of Clintonism, including many of Clinton’s own personnel.
At a glance, Thatcherism and its numerous derivatives seem to be in strong health. Both the Conservatives in the U.K. and the GOP in the U.S. control the government. Both continue to pursue the same right-wing policies of the 1980s, arguably with even more aggression than their predecessors – just look at Theresa May’s fixation on a “hard Brexit” (that is, with maximal breaks from EU immigration rules and economic integration) and Trump’s almost comically plutocratic commitment to taking away people’s health insurance to finance tax cuts for billionaires.
“Comical” – there is something weirdly humorous about what the right-wing parties of the West have become, though, isn’t there?
The Conservatives campaigned on a platform of “Strong and Stable” leadership, but their last two PMs – Cameron and May – have taken monumental gambles (the Brexit referendum and the 2017 snap election) that spectacularly backfired. Having lost their parliamentary majority to a Labour surge led by one of the furthest-left MPs in Britain – Jeremy Corbyn, whom they labeled a “terrorist sympathizer” – they must now form a coalition with the Democratic Unionist Party of Northern Ireland, a hard-right creationist party with deep ties to loyalist paramilitaries (a fancy term for white terrorists who kill Catholics).
Meanwhile, the GOP, the home of the heirs to Jerry Falwell’s Moral Majority and nominally the party of strong national defense, is led by a former reality TV host who once went bankrupt running a casino (“you mean people just come in and give you their money in exchange for nothing? Sorry, I’m going to need a better business model” – no one ever, except possibly Donald Trump) and has been caught on tape confessing to routine sexual assault. Plus, party membership from top to bottom is deeply enmeshed with Russian spies and businesses.
And both parties have lost control of the issue of “terrorism,” once a reliable strength of right-wing leaders like George W. Bush, to the point that May is literally negotiating with loyalist militias and the GOP has cheered an ISIS attack against Iranian civilians.
Whether these flaws matter to core right-wing partisans is debatable, but it is clear that the ubiquity of conservative policies and their demonstrable failure – visible not only in May being forced to align with terrorists and Trump with Russian autocrats, but also in the collapse of the global banking system after decades of Thatcherite deregulation – have energized the left in a way that had nearly passed out of living memory.
Bernie Sanders is the most popular politician in America and ended up a few hundred delegates short of snagging the Democratic nomination and likely becoming president. Corbyn went even further and humiliated May, turning predictions of a massive Tory majority heading into Brexit negotiations into a hung parliament. Given the tenuous Tory-DUP coalition, it is probable and perhaps inevitable that Corbyn will eventually be PM.
Both of these men are senior citizens who for most of their careers were dismissed as “unserious” leftists who would never enter the mainstream. Instead, they have a golden opportunity in their twilight years to finally eradicate Thatcherism root and branch by unseating two truly awful politicians. May and Trump are almost like the store-brand versions of Thatcher and Reagan, far less compelling despite their surface resemblance (May as another “Iron Lady,” Trump as another celebrity GOP president and “Great Communicator”; in reality, they are the Tin Lady and the Great Covfefer).
The levels of media anger and disbelief at both Corbyn and Sanders deserve their own entry, but they are ultimately indicative of how much of both British and American society remains captured by Thatcherism and “centrism,” neither of which has been seriously challenged until now. New Labour, New Democrats: it’s all eroding and exposing the decrepit foundations of Thatcherism and Reaganism. The fact that the “terrorist” attacks in Britain did not hurt Labour but instead exposed the incompetence of Tory security policy (as Home Secretary, May literally cleared one of the London Bridge attackers to go fight in Libya!) was a turning point in how I viewed the staying power of Western conservatism. It seems weak, its unpopular ideas barely propped up by cynical appeals and a dying electoral coalition. After almost 40 years, we’re finally seeing what Winston Churchill – that old Conservative – might be comfortable labeling “the beginning of the end” of Thatcherism.
“Predictions are hard, especially about the future.” – Yogi Berra, but possibly apocryphal
Imagine living in Europe circa 1900. Someone asks you to predict the state of the world in 1950. Are you going to be able to tell them confidently that the continent at that time will be divided into two spheres of influence: One dominated by the United States of America and the other by a successor state to Tsarist Russia modeled on a militarized version of Karl Marx’s philosophy, all of this having taken shape after the second of two catastrophic wars, the most recent one having ended with the U.S.A. dropping a pair of radioactive bombs on Japan that killed hundreds of thousands of civilians?
If your prediction was way off in 1900, you would have been in good company. Conventional wisdom at the time maintained that the economies of Europe were too integrated to ever lead to war, much less a conflict that would first be deemed The Great War and then be renamed once its successor proved even worse. But there was one realm in which the catastrophe of World War I was foreseen with startling clarity: literature. H.G. Wells’ serialized 1907 novel “The War in the Air” contemplated the immense resources being poured into then-unprecedented war machines (emphasis added; note the prophecy of a decaying Russia and a militant Germany at the end, and the hints of the eventual end of the British Empire throughout):
“It is impossible now to estimate how much of the intellectual and physical energy of the world was wasted in military preparation and equipment, but it was an enormous proportion. Great Britain spent upon army and navy money and capacity, that directed into the channels of physical culture and education would have made the British the aristocracy of the world. Her rulers could have kept the whole population learning and exercising up to the age of eighteen and made a broad-chested and intelligent man of every Bert Smallways in the islands, had they given the resources they spent in war material to the making of men. Instead of which they waggled flags at him until he was fourteen, incited him to cheer, and then turned him out of school to begin that career of private enterprise we have compactly recorded. France achieved similar imbecilities; Germany was, if possible worse; Russia under the waste and stresses of militarism festered towards bankruptcy and decay. All Europe was producing big guns and countless swarms of little Smallways.”
Why did Wells predict the carnage of World War I so accurately – and in a work of fiction, no less – while his peers were distracted by what they wrongly deemed a dawning golden era of global cooperation?
The question brings me back to an old saw of mine: Google’s obsession with science fiction, a genre Wells was instrumental in modernizing. The company’s ambitious “moonshots” division once required that new projects have some sort of basis in or resemblance to sci-fi. Efforts such as flying cars, robots, you name it: all of it was a computer science exercise in catching up to the fantasies of pulp writers from decades ago. Hell, the dummy-piloted taxi cab from “Total Recall” (a movie released in 1990) is still far out ahead of the billions upon billions of dollars being spent on self-driving cars today by Google and its peers.
Google is not alone; the tech industry often comes off as highly certain of what the future will look like. Predictions about the dominance of automated vehicles, “the rise of the robots,” and so much more are collectively the fuel upon which a thousand “influencer” conferences run. Such events and the companies that participate in them are at the same time highly dismissive of the value of humanistic education, instead prizing “technical” knowledge above all else. Yet the irony of them fervently chasing ideas from storybooks persists.
At some level, we all seem to trust in the power of fiction to tell us what the future is, whether we trust the explicitly “futurist” visions of sci-fi, or the eschatology of books such as the Bible and the Koran. In regard to tech in particular, I was startled a few months ago to read Rana Dasgupta’s “Tokyo Cancelled,” a 2005 novel that sort of retells the Arabian Nights – as well as various fairy tales, such as Bluebeard – for the 21st century.
In one of its discrete stories, a man accepts a new job as an editor of people’s memories. He curates thoughts that they have (which have been captured via surveillance) and puts together a retrospective to present to them on individualized CDs. However, he has to be careful to edit out bad memories:
“We have short-listed around a hundred thousand memories that you can work from. They’ve been selected on the basis of a number of parameters – facial grimacing, high decibel levels, obscene language – that are likely to be correlated with traumatic memories….Apply the logic of common sense: would someone want to remember this? Think of yourself like a film censor; if the family can’t sit together and watch it, it’s out.”
Now here’s a Facebook employee, in 2015, announcing the introduction of filters into its On This Day service, which sends you a notification each day linking you to your photos and status updates from past years:
“We know that people share a range of meaningful moments on Facebook. As a result, everyone has various kinds of memories that can be surfaced — good, bad, and everything in between. So for the millions of people who use ‘On This Day,’ we’ve added these filters to give them more control over the memories they see.”
So while Dasgupta was essentially predicting an advanced Facebook service at a time when Facebook barely existed (“Tokyo Cancelled” was written well before its 2005 publication, and Facebook launched in 2004), what were the leading lights of tech predicting? Um…
- Steve Jobs in 2003: music streaming services are terrible and will never work
- Reality: in 2016, streaming drove an 8.1 percent increase in music industry revenue, and virtually everyone has heard of or used Spotify and Apple Music
The gulf between Dasgupta’s futurism and these now-laughable predictions brings me back to the vitality of the often-maligned cultural studies fields. I am reminded again and again of how we have to think about culture as a whole – not just scientific advances, which are undoubtedly important to human improvement, but also the flow of literature, social mores, art, etc. – to sense where we are and where we are going. For example: Max Weber once positioned the Protestant work ethic – a totally incidental characteristic associated with adherence to a specific religion – as a central cog in the growing success of capitalism, which was reshaping Europe in his time. Yes, the Industrial Revolution and the creation of the steam engine, electricity, coal-fired ships, etc. were all vital to the creation of global capitalism, but would it have coalesced into a coherent social system without the cultural glue of Protestantism?
Just as Weber saw religion as an essential way to make sense of and corral new modes of industrial production, Dasgupta saw, by writing speculatively about it, the struggle to deal with information at vast scale (imagine all the CDs needed to contain the memories of the characters in “Tokyo Cancelled”) as a defining issue of the busy yet personally isolating environment of the modern international airport, in which the book takes place. When we give up on studying the humanities (and all “the channels of physical culture” whose underinvestment Wells bemoaned in the passage above), we create huge blind spots for ourselves and miss futures like these that should have been apparent to us all along, whether they sprouted from an Edwardian sci-fi novel or a 21st century fairy tale.
I haven’t published all year. That’s going to change: I have a few topics I’ll be looking at in the coming weeks to get back into things:
-How fiction is often the best predictor of the future, with a focus on Rana Dasgupta’s 2005 novel “Tokyo Cancelled”
-A new translation of the Aristophanes play “Wealth,” which I am producing with my former Greek language instructor. My focus on Aristophanes will also be a good chance to revisit one of my older posts about his play “The Frogs” and its treatment of literary criticism.
It seems absurd to think about, doesn’t it? After all, John McCain lost in a landslide to Barack Obama in 2008, winning a mere 46% of the vote while losing nearly the entire Midwest and Eastern Seaboard, with the exceptions of South Carolina and Georgia, where he held on by single digits. Obama even won electoral votes in three states – Virginia, Indiana, and Nebraska – in which Democratic presidential candidates had been shut out since LBJ wiped out Goldwater in 1964.
The Obama victory in 2008 had two important causes: 1) the incompetence of the Bush 43 administration, which culminated in the late 2000s financial crash and 2) the charisma and focus of Obama’s messaging. Obama knew how to work specific issues, such as opposition to Big Ag in Iowa and NAFTA in Ohio, better than any Democratic candidate since LBJ.
With these two drivers in mind, it’s actually not hard to imagine a situation in which McCain could have prevailed. I see three changes that could have enabled a McCain victory:
- The Democratic superdelegates, much like they did in 2016 with the Hillary Clinton and Bernie Sanders race, decide to heavily rig the primaries by suppressing media coverage of Obama’s insurgent candidacy, arranging odd debate schedules, and disproportionately pre-aligning themselves with one candidate. Clinton wins the primary, but fails to capture the “Hope and Change” spirit of 2008 and instead trots out something of similarly dubious value to “America is Already Great.”
- Meanwhile, McCain stays on message and distances himself from the Bush administration, reminding everyone of his 2000 primary challenge to Bush and his disdain for conservative institutions such as the Christian right. He picks a relatively low-profile swing state GOPer like John Kasich as VP instead of Sarah Palin, who alienated millions. Aligned against both Bushism and Clintonism, he manages to become the “outsider” despite being a member of the incumbent presidential party.
- The collapse of Lehman Brothers, which really propelled Obama’s candidacy over the top, doesn’t happen until December 2008, by which time the election is already settled. This is the hardest of all the changes to imagine, but bear with me.
So McCain defeats Clinton and enters office in January 2009. What next?
Many policies such as the stimulus bill would still have gone through on his watch, with the help of a moderate Democratic majority in both houses of Congress. Healthcare reform probably would not have happened, though.
The biggest mystery, though, is what would have become of the mortgage crisis he inherited from Bush. The wide-reaching economic despair that the financial meltdown wrought on the entire country would likely have continued for years as it did under Obama, assuming an even quasi-typical GOP response of tax cuts and bailouts for banks. It would have become, in other words, fertile ground for various dissent movements.
Indeed, this situation could have profoundly reshaped the 2010 midterms, which in reality turned out to be a landslide for the newly formed Tea Party. Would the Tea Party have even emerged without the monolithic target of the Obama administration and the Democratic Congress of 2009-2010, both of them overseeing the reeling economy? Would a Tea Party of the Left have sprouted up instead, perhaps spearheaded by Bernie Sanders (who toyed with the idea of running for president in 2012)? Would a 76-year-old McCain have been able to win re-election with a rickety economy and a potentially gridlocked Congress in 2012?
Considering the political situation in the U.S. after 2016, it’s tempting to imagine that the fallout from a McCain administration – with the GOP owning the tumultuous early 2010s – might actually have forestalled the party’s descent into madness and left the country on sounder institutional footing. But that outcome would have come at the expense of many people’s lives and rights, especially those of vulnerable populations such as the poor and the LGBTQ community, who might not have seen the particular advances of the Obama years.
I plan to map out a few of these counterfactual scenarios about politics in future posts. This one, about 2008, is the one nearest to my heart, though, since it was the first time I was ever excited about a presidential race, and it all happened at a pivotal moment in my life, when I was moving to Chicago for the first time. I voted absentee for Obama in Kentucky. However, his first term coincided with the hardest years of my life, when I struggled to find work. I don’t think my life would have been easier under a McCain presidency, but sometimes I wonder about the implications.