Sgt. Pepper vs … ?

This year was the 50th anniversary of “Sgt. Pepper’s Lonely Hearts Club Band.” Coincidentally, it was also the 20th anniversary of Radiohead’s “OK Computer.” These two albums were compared in a gushing retrospective review of the latter in Uproxx, by Steven Hyden. Here’s the key passage:

[Screenshot of the key passage from Hyden’s review]

I don’t agree with this assessment, but first let me say: Notice how in the intro I specified that “OK Computer” was by Radiohead, but didn’t specify the artist of “Sgt. Pepper.” I think that that difference demonstrates the ongoing gap between the two albums and their respective places in the cultural lexicon: Virtually everyone knows what “Sgt. Pepper” is, but it’s possible that you’ve never even heard of “OK Computer.”

“OK Computer” was released in the summer of 1997 and quickly became one of the most acclaimed albums of the year, the decade, and eventually, of all time, or at least as far as pop music criticism extends – to roughly the mid-1960s.

That’s a significant date. Most of the “greatest albums ever” lists have few if any entries before 1964. A recent list of the best albums by female artists used 1964 as its cutoff date. Why did “albums” become major events in the 1960s?

“Sgt. Pepper,” released in 1967, is a major reason why. Granted, it wasn’t the first album to be created by artists who were conscious of sequencing and flow, in such a way that they thought of their release as a coherent work rather than a collection of singles:

  • In 1966, The Mothers of Invention had released “Freak Out!” which ended with a multipart suite that featured aural collages.
  • At that time, Bob Dylan had also settled into the habit of closing his albums – “Blonde on Blonde” and “Highway 61 Revisited” are the best examples – with much longer songs than appeared on the rest of those records (a phenomenon I’ll call The Big Finish; it’s been widely imitated).
  • The Beach Boys also released the thematic “Pet Sounds” in 1966, with a loose concept of teenage angst paired with a more sonically adventurous direction than they had previously explored.

However, “Sgt. Pepper” greatly accelerated these trends:

  • The whole record was essentially a suite, with seamless transitions between songs (a ubiquitous feature in pop and especially rap albums ever since, but at that point something found mostly in jazz records such as John Coltrane’s “Meditations”).
  • It had a theme song (the title track) that was reprised and which segued directly into a Big Finish (“A Day In The Life”). Its concept featured a fictional band performing a stylistically eclectic set of songs.
  • It contained the Beatles’ most far-out instrumentation to date, with sitars, harpsichords, orchestras, clarinets, tape effects, sampled noises, and aggressive electric guitar (at a time when that was only starting to emerge with Jimi Hendrix).

There is no argument for “OK Computer” having anywhere near the same influence on how “albums” were thought of. In fact, its first two songs – “Airbag” and “Paranoid Android” – blend into each other, in the vein of the title track and “With A Little Help From My Friends” on “Sgt. Pepper.” It also has a Big Finish with “The Tourist,” although the song is of comparable length to “Paranoid Android.” It is an album solidly in the “Sgt. Pepper” mold.

At this point, it’s possible to object and say something like: “Well, sure, “Sgt. Pepper” was a fancy hippie concept album about ‘meter maids and circus workers’ (as Hyden writes), but its music is overrated. It doesn’t have the feels and the depth of something like ‘OK Computer’ or [My Bloody Valentine’s] ‘Loveless.'”

If anything was as influential as the album’s concept, it was its music:

  • Sgt. Pepper refined all the experiments from the previous two Beatles albums – “Rubber Soul” and “Revolver” – into consistently musical results. There’s hard guitar pop (title track, “Good Morning Good Morning”), Indian classical music that presaged trip-hop and electronica (“Within You Without You”), lush psychedelia that influenced Radiohead, Pink Floyd and many others (“Lucy In The Sky With Diamonds,” “A Day In The Life”), and melodic ballads (“When I’m Sixty-Four,” “She’s Leaving Home”).
  • Songs are packed with modulations, hooks, and creative arrangements. The approach is different from one song to the next.
  • Having already reinvented string arrangements in pop music with “Eleanor Rigby” the year before, they took them in a different direction with “A Day In The Life,” somehow ending the most famous album of all time with sustained orchestral noise, followed by a thunderous piano E chord and chopped-up voice samples.

Up against the conceptual and musical influence of “Sgt. Pepper,” what does “OK Computer” bring to the table?

It starts well with the “Airbag”/”Paranoid Android” duo, incorporating some unusual influences such as the rapidly shifting hip-hop of DJ Shadow, which spill over into the multiple phases of “Paranoid Android.” The third song, “Subterranean Homesick Alien,” was apparently influenced by the electric pianos on Miles Davis’ “Bitches Brew,” but it’s much more listenable than that album, even if I usually forget its melody if I don’t listen to it for a while.

The album gets weaker after that. “Exit Music (For A Film)” is long and tedious, with an endlessly repeated “Let you choke” that should prompt questions about what “OK Computer” is actually even about (at least we can tell that “Sgt. Pepper” is about a fake band). “Let Down” is a nice recovery with some Beatles/Byrds-esque chiming guitars.

But then there’s “Karma Police,” which rips its chords from The Beatles song “Sexy Sadie” and drags on into a noisy finish. “Fitter Happier” is two minutes of nonsense read through a Mac computer voice, concluding with “A pig in a cage on antibiotics,” a sentiment very similar to “Despite all my rage I’m still just a rat in a cage” from The Smashing Pumpkins song “Bullet With Butterfly Wings,” released two years earlier. “Electioneering” is like something off Radiohead’s usually ignored debut album, “Pablo Honey,” with loud guitars and incoherent lyrics (one line simply states: “cattle prods and the IMF”).

“Climbing Up The Walls” is better. It is heavily indebted to The Beatles in general and to “Sgt. Pepper” in particular, with its Lennon-esque vocal effects, harsh strings, and psychedelic atmosphere. “No Surprises” is a pleasant lullaby with lyrics that don’t make a lot of sense (“I’ll take a quiet life/A handshake of carbon monoxide.”) “Lucky” is a guitar-based song that the band had worked on a few years earlier, with a thrilling vocal and finish. “The Tourist” aims for a Big Finish but is a nondescript waltz.

I think “OK Computer” is a merely OK album with an inexplicable reputation for being a milestone. The New York Times observed that it seemed to be Radiohead’s attempt to engage with the legacy of The Beatles, and it is definitely indebted to them. I’m not sure how it outdoes “Sgt. Pepper” in any way, since it cannot replicate the circumstances that allowed “Sgt. Pepper” to define both the album as a form and pop rock as a genre. That’s just a fact for an album released in 1997, well after the heyday of rock.

I’m not sure why Hyden is confident that “OK Computer” will eclipse “Sgt. Pepper” as a conversation starter about “great albums.” Maybe he thinks that as the 1960s recede into memory and the Baby Boomers who came of age during the Summer of Love in 1967 grow older, “Sgt. Pepper” will diminish in stature. Maybe, but there are two major objections to consider:

  • “OK Computer” is roughly to Generation X what “Sgt. Pepper” was to the Baby Boomers (although even then, it is nowhere near the representative statement that so completely encapsulates its era); however, Generation X is much, much smaller and less culturally influential than the Baby Boomer set. Quintessential Gen X milestones like the novels of Douglas Coupland – like Radiohead, obsessed with various vaguely corporate and technological demons – and Nirvana’s “Nevermind” have become obscure and less influential over time, respectively.
  • It’s hard to compare different types of art. But predicting that “Sgt. Pepper” will give way to “OK Computer” sounds to me like saying the works of William Shakespeare will be replaced by the works of George Bernard Shaw or another playwright as the central reference point for English-language drama. It didn’t happen, even after 300+ years had passed. The fundamental idea of writing a play in English – the forms used and the gravity/importance intended – is inextricable from Shakespeare, just as the album form is from “Sgt. Pepper.” Shakespeare is essential to the DNA of English drama, just as “Sgt. Pepper” is to the entire notion of “the album” as a statement.

Of course, one could prefer “OK Computer” to “Sgt. Pepper,” but that’s not really the question at hand. The question is which one is the touchstone for debates about the album, and I think “Sgt. Pepper” has to prevail since its story is the story of the album, and “OK Computer” owes its entire format and ambition to the mold of “Sgt. Pepper.”

One last point I wanted to talk about: “technology.” A long time ago, I wrote an entry about tech writing and about “technology” as a category, saying:

“When I see “technology” in a sentence, I move pretty quickly past it and don’t think much about it. If I do, though, it’s like I rounded a corner and saw a forked road leading into three turnabouts – the generality is crushing. Are we talking strictly about the actions of hardware, software, and networks? Are these actions autonomous? What if we just assigned all of these machinations to the category of “machinery and artisanal crafts” and spoke of the great, world-changing, liberating power of “powerful industrial machinery”? It doesn’t have the same ring to it, does it?”

I bring it up now because criticism of “OK Computer” is often intertwined with commentary about technology (see Hyden’s remarks about “the prevalence of technology in our daily lives”), in that unique way that only music critics can do when they get bored talking about what’s actually happening in songs. But what is “OK Computer” even about?

Here are some lines from “Paranoid Android,” perhaps the album’s pièce de résistance:

“Please could you stop the noise?
I’m trying to get some rest
From all the unborn chicken
Voices in my head

Rain down, rain down
Come on rain down on me
From a great height
From a great height
Height.”

How is this about “technology” or really about anything that’s unique to the 1990s or to the internet era or the hyperconnected smartphone future (that is, in the future for people in 1997)? Ditto for the opening lines of “Climbing Up The Walls”: “I am the key to the lock in your house/That keeps the toy in your basement.”

If there’s any coherent concept to “OK Computer,” it’s anxiety about transportation. The first song is entitled “Airbag,” and “Fitter Happier” and “Lucky” refer to worries about automobile and airplane transport, respectively. Is this theme about “technology”? If it is, then “Sgt. Pepper” is also an album about “technology,” with a very similar and similarly central fretting about transportation, as captured in the line in “A Day In The Life” about not noticing that the traffic lights had changed. Go figure.

Influences

The War On Drugs (TWOD) is an evocative name for a band. For anyone currently 30 or older, it likely dredges up memories of Nancy Reagan, D.A.R.E, and McGruff The Crime Dog. By extension, it’s also a powerful conveyor of the white rock musical ambience of the 1980s: Springsteen, Dire Straits, Tom Petty, and any band that liked booming drums and liberally sprinkled synths.

For music critics, TWOD is almost always assessed relative to their seemingly obvious influences. But despite the clear debts they owe to the commercial FM rock of 30+ years ago, TWOD is critically acclaimed; 2014’s “Lost In The Dream” was the most widely awarded album of that year, and its followup – this year’s “A Deeper Understanding” – is off to a good start, according to Metacritic.

Music critics weren’t always so sanguine about such acts. NYC rockers Interpol were so frequently compared to Joy Division in the early 2000s that John Darnielle of the Mountain Goats made a list of 101 things to compare Interpol to instead of Joy Division. The rock music of the era was definitely characterized by references to the 1960s and 1970s, with The Strokes sounding a lot like The Velvet Underground (more so than Interpol sounded like Joy Division, which they really didn’t, despite the endless comparisons) and The Killers sounding like a melange of New Wave bands.

Re: The Killers, someone at the LA Times even wrote this about them after David Bowie died last year: “Good-looking guys doing disco-fied rock about outer space? Bowie basically invented that.” Moreover, The Killers were listed as one of 5 bands that “wouldn’t exist” without Bowie. I don’t know; I struggle with such claims. What if, instead, they would not have existed if they had actually given in to their influences? That’s basically Oscar Wilde’s stance, in “Dorian Gray”:

“There is no such thing as a good influence. Because to influence a person is to give him one’s own soul. He does not think his natural thoughts, or burn with his natural passions. His virtues are not real to him. His sins, if there are such things as sins, are borrowed. He becomes an echo of someone else’s music, an actor of a part that has not been written for him.”

I always enjoyed TWOD more than Dire Straits, Interpol more than Joy Division, and Side A of The Killers’ “Hot Fuss” album more than Bowie’s work in the 1970s and later; “Mr. Brightside” seems to hold up better than any song from Bowie’s endless catalogue (and Bowie invented neither space rock – a 1960s phenomenon originating with Pink Floyd and The Rolling Stones, among others – nor disco, a highly American phenomenon that emerged while Bowie was in Berlin twiddling with “atmospheric” knobs with Brian Eno).

Were any of these latter-day acts “influenced” by their predecessors, at least in the Wildean sense? TWOD was much more obsessive about production, Interpol had a superior command of lead-rhythm guitar interplay, and The Killers had a better ear for melody and harmony.

The difficult part of Wilde’s quote is the “natural passions” bit. What are these feelings? Or are they feelings at all, or something even more primal, like the cries of a baby or the freedoms conferred by athletic ability or physical appearance? Or maybe they come through in what distinguishes any artist from earlier ones. Otherwise, we would truly be in a “no new thing under the sun” situation.

Ironically, Wilde’s sentiment seems to lessen the importance placed on originality. So many writers, musicians, and painters are acclaimed for the fact that they were first and influenced many others – i.e., original.  But what if their influence wasn’t actually positive, or if it led to subsequent artists actually outdoing – and in a sense, becoming independent of – the ones they were imitating?

I mean, I’ve always found the cult of praise around James Joyce unbearable since it feels like writers such as Salman Rushdie and William Faulkner took Joyce’s innovations in directions that were more readable (and re-readable) than Joyce’s endless references and word salads. Who cares that Joyce was first?

On the other hand, there are some artists, such as Jimi Hendrix or William Shakespeare, whose pioneering works have proven remarkably resistant to any exact imitation, perhaps due to historical circumstances that cannot be reproduced. No one writes 5-act dramas in perfect blank verse for mass audiences anymore; likewise, no one can pick up an electric guitar today and have the same opportunity to “reinvent” it the way Hendrix did in 1966 and 1967.

Of course, Shakespeare himself has obvious influences, and even lifted entire plots from previous works. Obviously, he’s not remembered today as a copycat. I don’t have any lightbulb epiphany to end on here; the influence question seems hard to answer. Maybe we realize that a lot of what influences us is subconscious – unintentional, really – and the product of strange confluences of history, taste, and environment.

Music vs book vs film criticism

In 2000, Brent DiCrescenzo of Pitchfork wrote one of the most infamous album reviews that still has a live URL on the internet. Assessing Radiohead’s “Kid A,” he straddled a line between the faux-literary (“The butterscotch lamps along the walls of the tight city square bled upward into the cobalt sky, which seemed as strikingly artificial and perfect as a wizard’s cap.”) and the musically incoherent (“Comparing this to other albums is like comparing an aquarium to blue construction paper.”), while tossing in some vague ethnic stereotypes (“The Italians surrounding me held their breath in communion (save for the drunken few shouting “Criep!”)”) and useless similes (“The primal, brooding guitar attack of “Optimistic” stomps like mating Tyrannosaurs.”), too. It’s a textbook example of the limits of popular music criticism.

Why is it so limited? For starters, it is heavily reliant on adjectives. Narrating what actually happens on any song – e.g., “The song opens in G Major and uses the following chords and key changes: [lists them all]” – would lose a lot of readers and likely stretch the review to a length that, along with the technical subject matter, would tank the page view stats of a site like Pitchfork (or Resident Advisor, or Tiny Mix Tapes, or any other music-centric website). So instead we get lots of adjectives; guitar solos are described as “fluid,” electronic instrumentation as “soundscapes,” and songs themselves as “airy” or “breathless” or “chugging.” These words usually make sense to me in context – like, I can see how a guitar solo might progress such that it seemed “fluid” – but they are somewhat removed from what’s actually going on. Modern music criticism would confound expectations if it began talking about what musicians were doing on their records – playing X, singing Y – and so it often resorts to elaborate descriptions, as well as the protracted narrative frames and cross-references to other pop culture that DiCrescenzo couldn’t avoid.

In contrast, book criticism cannot usually afford such ornate digressions. Any review of a book will naturally grapple with plot details and the author’s particular style, making it oddly both bread-and-butter and academic in comparison, without any of the criticism-as-art-itself that many reviews turn into (indeed, it’s hard now to read DiCrescenzo’s review outside the context of Pitchfork’s larger culture of “artsy” music reviews that were only minimally concerned with the records in question, and instead focused on building Pitchfork’s distinctive brand during the early days of the web, when other music criticism sites were extremely barebones and newspaper-like).

Somewhere in between the extremes of music criticism and book criticism is the muddled middle of film criticism, which I’ll define as criticism of both movies and TV. Film critics inevitably must recite what happens on screen, similar to how a book critic can’t escape divulging some plot details; but they also frequently fall into the same rabbit hole that troubled DiCrescenzo, leaning on nebulous adjectives such as “languid” or “swoony” to describe a film’s appearance, or resorting to cliches about self-evident choices, such as the plot being “fast-paced.”

My theory is that the easier a medium is to consume, the more given it is to adjective-centric criticism:

  • Books cannot be multitasked and can take days, months, or even years to finish reading.
  • A movie can be watched in a single sitting, but will usually take at least an hour to finish; a TV show requires even less exertion, and is often a second screen to the viewer’s phone/laptop.
  • An album can be listened to in under an hour, plus it can be consumed “out of order” in a way that a book or film cannot; it is almost meant for multitasking, as the soundtrack for nearly anything.

Book critics have to focus on the plot because they cannot assume that anyone has read it. Meanwhile, music critics can be flashy since they are often speaking to people who have already listened to what they’re reviewing (and thus know the “plot,” as it were, of the album or song). The music critic’s task becomes not so much to provide guidance on whether the album or song should be consumed at all (as is the case with book criticism) but instead to tell the reader what cultural pigeonhole it fits into and if it is OK to like it at all.

In this respect, music criticism is highly identitarian. DiCrescenzo’s review was a forerunner of the endless paeans to Beyonce that barely engage with the songs at hand but instead try to situate the subject as something beyond the possibility of different viewpoints: Liking it is Right, disliking it is Wrong. His “Kid A” tract labeled all other albums in music history as blue construction paper. Similarly, The Guardian’s review of “Lemonade” incoherently described the songs (“The songs, though, are not just prurient catnip, but actual dynamite”) and similarly railed against an illusory set of doubters or would-be competitors (“Cynics will cry foul, that Beyoncé remains an entitled superstar, raging at a paper tiger. Those cynics will be ignoring one of this year’s finest albums.”), recalling DiCrescenzo’s weird aquarium quip.

Meanwhile, film critics act like they are dealing with a medium as elitist and as private as the book, but in reality they are critiquing works that are more akin to music in terms of publicity and ease of consumption. At the same time, they have to work within the significant accumulated institutional cruft – the Oscars, “prestige TV,” the “golden era of TV,” the Cannes Film Festival (and its many derivatives), HBO (and especially “Game of Thrones”), Netflix originals, the insane desire for critical validation of once-scorned superhero movies – that is really like some of the worst vestiges of the book criticism realm, for example the notion of a definable “Western canon” that must be defended by critics like Harold Bloom.

But film is not like print. Here’s what I mean: an obscure film is more approachable than a well-known book; for example anyone could see even a marginal piece of queer cinema with less effort than it would take to plow through either the widely known Infinite Jest or The Decline and Fall of the Roman Empire. To really feel the relative difficulty of consuming any book, consider the case of The Satanic Verses by Salman Rushdie.

It is likely the most controversial book of the last century, earning a death threat for its author from the leader of Iran, visibly straining relations between Iran and the United Kingdom, and resulting in attacks on several of its translators and publishers, one of them fatal. But how many people have ever actually discussed the content of the book? The fact that it is written in a dense, Joycean style that makes even the first pages hard to get through? How its most controversial material occurs in a dream sequence?

The original New York Times review of it is instructive, for both its clear descriptions of plot and its acknowledgement of the divide between the book’s vast reputation and its meager readership:

“The book moves with Gibreel and Chamcha from their past lives in Bombay to London, and back to Bombay again. For Gibreel, there is many an imaginary journey on the way – most notably to a city of sand called Jahilia (for ignorance), where a very decent, embattled businessman-turned-prophet by the name of Mahound is rising to prominence…

[M]uch of the outrage has been fueled by hearsay. Some of the noisiest objections have been raised by people who have never read the book and have no intention of ever reading it…

It is Mr. Rushdie’s wide-ranging power of assimilation and imaginative boldness that make his work so different from that of other well-known Indian novelists, such as R. K. Narayan, and the exuberance of his comic gift that distinguishes his writing from that of V. S. Naipaul.”

The Satanic Verses is virtually “hot take”-proof, since even the effort required to blow through it and write a quick blog post about “Here’s What Salman Rushdie Doesn’t Get About Islam” or “Why Bernie Bros Have Been Praising ‘The Satanic Verses’ This Week” is too much for most writers. But if “The Satanic Verses” were a film, everyone would have seen it, given its reputation, and the takes would be endless.

To get a sense of how limited the scope of book criticism is within pop culture, consider the common Twitter joke of responding to anyone comparing anything to “Harry Potter” by simply saying “read another book.” There is no work of fiction that has such a tight hold on the imagination, but there are numerous films – “Star Wars,” “Jurassic Park,” “The Godfather,” etc. – that serve similar roles for understanding events.

It is precisely this ubiquity of major films that makes film criticism much more reactionary than either book or music criticism. A scathing book review is only mildly rewarding for its writer because the audience for any book is so relatively small, plus the intensely private experience of reading – setting your own pace, especially – means that each person’s opinion of a book is better insulated against contrarianism and reassessment than a similar opinion of a movie or TV show. Music reviews are after-the-fact and must contend with the strong identitarianism of music taste (e.g., “am I still in good standing with [x] community if I like [y artist’s music]?”).

But film is often consumed in public (at a theater) or socially (in a living room), and so there is more incentive to signal to others that they have the Right or Wrong opinions about it. The massive coverage of the Oscars (and the myriad issues about the backgrounds of who got nominated) and the enormous budgets of film studios and streaming services also mean that film critics have unique incentives to engage intensely with the conventional wisdom on any work. Inevitably, a lot of this engagement ends up reading like an angrier version of DiCrescenzo’s “Kid A” novella.

Take this Slate piece on “Lion,” which is only intermittently about the movie, but mixes in lots of personal backstory as well as a milder aquarium/blue construction paper contrast. It goes from an odd concept of how gentrification actually happens (“If I had a nickel for every time someone asked me where my real parents were or if I intended to go back home, I could gentrify the Chinese province I was born in.”) to a riff on the “putting unrealistic words in a kid’s mouth” trope (“Even at that age [6], people would ask me if I knew my “real” family, and, if not, when I planned on meeting up with them at Starbucks.”), to undergraduate term paper-ese (“Our collective and shared understanding of identity continues to grow more and more complex, nuanced, and perhaps less grounded in traditional notions of what our “self” is. 2016 feels like one of the most crucial years for art in the context of artists from marginalized backgrounds asserting their voices—not asking to be “understood” per se but to be respected for the nuances of and intricacies of their identities.”).

The same author also posted an interaction with someone else (whom I’ll refer to in the transcription below as “B” to his “A”) to his Twitter feed excoriating the same movie:

B: “I guess the last decent film I saw was…Lion? The one with Dev Patel. I thought it was good…could have been a little more emotionally powerful but they did a good job.”

A: “I hate that movie. I think it’s garbage and deeply reductive and offensive. A maudlin, little tale for white tears. Lion is like deeply terrible.”

B: “It was sort of based off a true story though which was cool, but yeah they could’ve done a better job.”

A: “Something based on a true story doesn’t change the manipulative techniques the story uses. It perpetuates a really annoying, very white narrative that the families of adopted children don’t count as real, that the core identity of adoptees is based on a biological imperative. It is across the board garbage.”

B: “Right, stories are pure manipulation though. The dude made sure his mother knew that she was his mom, the one that took care of him all those years. Wouldn’t one want to understand where he or she biologically came from? Whether that be an adopted child or the child of immigrants that met in America. There’s so much underlying psychology that comes with your blood. It would be advantageous to know your nature as well as how you were nurtured.”

A: “That’s such a drearily lazy argument. Anyone can say that, anyone can make a straw man argument and deflect actual engagement with a cultural text. Of course art is manipulative, if your base understanding of manipulation (in art) is make the audience do anything. But art can engender and invoke feelings in an audience and exist in complexity. ‘Make you cry’ is not necessarily emotional complexity, and while not all films may necessarily call for that, Lion specifically moors itself in low-key racist tropes and has a fundamental disinterest in the nuances of adopted identity. It reduces the identity of an adopted person, and what constitutes family, as a one-dimensional thing, without bothering to explore the political and personal implications of trans-racial/cross-cultural adoption. It offensively relies on adoptee and racialized identity that are superficial, that are without depth. Patel isn’t a character so much as he is a MacGuffin, moving the plot along from point A to point B, unconcerned with the ambiguities. Lion says that in order to be, as an adopted person, a person, you need to find your ‘real’ family, that only your biological family counts as who you are, completely ignoring the way that environment and upbringing and socialization within whiteness has/has not shaped him as a person.”

B: “Right, I get what you’re saying. But I’m saying that at its base, ‘Lion’ is about a boy who got lost, accidentally got adopted, and eventually tried to find his way back home. With epic cinematography along the way.”

To me, the telling part of this exchange is how “A” (the critic), after opening weakly with declarations about “Lion” being “garbage”, completely loses his footing after “B” says “There’s so much underlying psychology that comes with your blood,” capturing the nature/nurture divide in what amounts to a latter-day hippie-like aphorism. Everything in “A”’s response from “That’s such a drearily lazy…” to “tropes” is word salad, although he regains his composure a bit with his critique of identity.

His scorn for Patel being a plot device (if you’re unfamiliar with the term, a MacGuffin is a goal or object that moves a plot along with or without any accompanying narrative exposition) and his mocking (in one of the actual tweets; not in the above transcript) of the “epic cinematography” remark are also revealing. The mechanics of the movie – how its plot works, what it looks like – are entirely subordinated to riffing about its identity politics, which is somewhat incoherent, since the critic wants race to be non-determinative for adoptees, but not for “white” people (the word “white” is doing a vast amount of unexplained work in that exchange; it is not so much a word as a MacGuffin, moving the screed from point A to point B).

The entire rant reminded me of a seemingly endless stretch back in the mid 2000s when I was in college, when a friend would go on to our brunch group each Saturday morning about “Little Miss Sunshine,” bemoaning its prestige at the Oscars. There is really no equivalent to this behavior among book or music critics, since both fields are so atomized compared to film, which continues to have a much more centralized academy of critics, producers, directors, etc. What book would an angry book reviewer rail against in casual conversation (other than “Harry Potter,” which has almost exhausted the possibilities on this front, especially with the backlash to J.K. Rowling’s politics providing a delicious new reading of the series)? What album could attract such intense diatribes in a public forum?

Film critics, from Roger Ebert to Pauline Kael to our writer above, are reactionaries because the specter of the “wrong” type of art gaining prestige and adoration is so much more prominent than it is in the book or music spheres. A movie that a critic dislikes getting feted at the Oscars, or receiving an ovation at the end of a screening (à la “Star Wars”), must engender a feeling similar to a Republican voter seeing “Hollywood celebs” on TV or thinking about a “liberal” enjoying same-sex relations or marijuana: derision, motivating a desire to correct the record. This tendency even seeps into the work of coherent writers like NYT film critic Wesley Morris, who used to complain about ill-defined “elitists” (a central term of conservative discourse) who didn’t appreciate popular film. The unbearableness of so much film criticism is why I agree with Noah Smith that cinema is a dying art with diminishing public relevance, in part because its critical institutions are such a mess.

The slow death of Thatcherism

The famous William Faulkner quote about “the past not even being past” has staying power not only because it contests the idea that time is a one-dimensional line that moves “forward,” but also because it reveals how ancient decisions shape our lives even to the current second. Most people alive today weren’t even born when Margaret Thatcher became Prime Minister of the United Kingdom in the 1979 general election. She died years ago. But her ideas are very much alive.

Thatcher ended decades of postwar consensus that had seen the rise of social welfare systems across Europe and North America. Her zeal for high defense spending, low taxes, and less regulation kept Labour out of power for a generation while providing a blueprint for the ascendance of Ronald Reagan – who would take power less than two years later – across the Atlantic. 

Many of us have no recollection of a time when it wasn’t assumed that everything had to be run like a business, in a “competitive” environment in which everyone is on her own and “the government” is some dark entity that must be reduced, instead of the people and institutions that make life bearable. Sure, these ideas had long gestated among the economists of the morally bankrupt Chicago School (mainly Milton Friedman), but Thatcher turned their academic papers into reality, crushing the miners’ unions and setting off a prolonged run of privatization and deregulation.

Even the distinctive brand of military adventurism that has fascinated Western governments and cable news channels since the Gulf War is derived from Thatcher’s decision to fight with Argentina over the Falklands. Almost all military campaigns since then – from Grenada to the Iraq War – have followed the same lead of confronting a clearly outmatched foe, to achieve morally and/or strategically dubious aims.

Although both the U.K. and the U.S. have had small-‘l’ liberal governments post-1979 – Blair and Brown in Britain, Clinton and Obama in America – the truth is that the Thatcher consensus has gone largely unchallenged. The centrism of Blair and the rebranding of Clinton’s party as “New Democrats” were signals of how they operated as much within the Thatcher/Reagan mold as Eisenhower had within the constraints of the then-dominant New Deal regime. Blair’s affinity for military adventures in the Balkans and Iraq and Clinton’s willingness to pursue “welfare” “reform” were both ripped straight from the small-‘c’ conservative playbook. It’s no accident that Thatcher herself identified “New Labour” – with its scrubbed mentions of national ownership of industry in its party constitution – as her greatest achievement.

The two countries have followed similar paths for the last 40 years. Both Thatcher and Reagan decisively won all their general elections and then handed the reins to their competent but less charismatic successors, John Major and George H.W. Bush, respectively. Those two continued in a similar but slightly more moderate vein, only to lose in landslides in the 1990s to candidates (Blair and Clinton, respectively) from revamped center-left parties (i.e., Labour and the Democrats), institutions that would have been unrecognizable to party rank-and-file in the 1970s. 

While both Blair and Clinton were electoral juggernauts, they both had much less success than either Thatcher or Reagan in laying the groundwork for their successors. Blair resigned and helped the unpopular Gordon Brown become prime minister; he lasted not even 3 years, losing power to a Tory-LibDem coalition in 2010. Clinton’s efforts could not get his VP Al Gore or his wife Hillary Clinton over the finish line. Like Brown, they both lacked the political acumen or popularity to stop the reactionaries who narrowly defeated them (Cameron in the U.K., Bush 43 and Trump in the U.S.). The only point at which the two histories diverge is with Obama, but he largely governed within the standard Reagan model, with sprinkles of Clintonism, including many of Clinton’s own personnel. 

At a glance, Thatcherism and its numerous derivatives seem to be in strong health. Both the Conservatives in the U.K. and the GOP in the U.S. control the government. Both continue to pursue the same right-wing policies of the 1980s, arguably with even more aggression than their predecessors – just look at Theresa May’s fixation on a “hard Brexit” (that is, with maximal breaks from EU immigration rules and economic integration) and Trump’s almost comically plutocratic commitment to taking away people’s health insurance to finance tax cuts for billionaires.

“Comical” – there is something weirdly humorous about what the right-wing parties of the West have become, though, isn’t there? 

The Conservatives campaigned on a platform of “Strong and Stable” leadership, but their last two PMs – Cameron and May – have taken monumental gambles (the Brexit referendum and the 2017 snap elections) that spectacularly backfired. Having lost their parliamentary majority to a Labour surge led by one of the furthest left MPs in Britain – Jeremy Corbyn, whom they labeled a “terrorist sympathizer” – they must now form a coalition with the Democratic Unionist Party of Northern Ireland, a hard-right creationist party with deep ties to loyalist paramilitaries (a fancy term for white terrorists who kill Catholics).

Meanwhile, the GOP, the home of the heirs to Jerry Falwell’s Moral Majority and nominally the party of strong national defense, is led by a former reality tv host who once went bankrupt running a casino (“you mean people just come in and give you their money in exchange for nothing? Sorry, I’m going to need a better business model” – no one ever, except possibly Donald Trump) and has been caught on tape confessing to routine sexual assault. Plus, party membership from top to bottom is deeply enmeshed with Russian spies and businesses.

And both parties have lost control of the issue of “terrorism,” once easily controlled by right-wing leaders like George W. Bush, to the point that May is literally negotiating with loyalist militias and the GOP has cheered an ISIS attack against Iranian civilians.

Whether these flaws matter to core right-wing partisans is debatable, but it is clear that the ubiquity of conservative policies and their demonstrable failure – visible not only in May being forced to align with terrorists and Trump with Russian autocrats, but also in the collapse of the global banking system after decades of Thatcherist deregulation – has energized the left in a way that had nearly passed out of living memory. 

Bernie Sanders is the most popular politician in America and ended up a few hundred delegates short of snagging the Democratic nomination and likely becoming president. Corbyn went even further and humiliated May, turning predictions of a massive Tory majority heading into Brexit negotiations into a hung parliament. Given the tenuous Tory-DUP coalition, it is probable and perhaps inevitable that Corbyn will eventually be PM. 

Both of these men are senior citizens who for most of their careers were dismissed as “unserious” leftists who would never enter the mainstream. Instead, they have a golden opportunity in their twilight years to finally eradicate Thatcherism root and branch by unseating two truly awful politicians. May and Trump are almost like the store brand versions of Thatcher and Reagan, far less compelling despite their surface resemblance (May as another “Iron Lady,” Trump as another celebrity GOP president and “Great Communicator”; in reality, they are the Tin Lady and the Great Covfefer).

The levels of media anger and disbelief at both Corbyn and Sanders deserve their own entry, but they are ultimately indicative of how much of both British and American society remains captured by Thatcherism and “centrism,” neither of which has been seriously challenged until now. New Labour, New Democrats, it’s all eroding and exposing the decrepit foundations of Thatcherism and Reaganism. The fact that the “terrorist” attacks in Britain did not hurt Labour but instead exposed the incompetence of Tory security policy (as Home Secretary, May literally cleared one of the London Bridge attackers to go fight in Libya!) was a turning point in how I viewed the staying power of Western conservatism. It seems weak, its unpopular ideas barely propped up by cynical appeals and a dying electoral coalition. After almost 40 years, we’re finally seeing what Winston Churchill – that old Conservative – might be comfortable labeling “the beginning of the end” of Thatcherism.

Tokyo Cancelled and predictions

“Predictions are hard, especially about the future.” – Yogi Berra, but possibly apocryphal

Imagine living in Europe circa 1900. Someone asks you to predict the state of the world in 1950. Are you going to be able to tell them confidently that the continent at that time will be divided into two spheres of influence: One dominated by the United States of America and the other by a successor state to Tsarist Russia modeled on a militarized version of Karl Marx’s philosophy, all of this having taken shape after the second of two catastrophic wars, the most recent one having ended with the U.S.A. dropping a pair of radioactive bombs on Japan that killed hundreds of thousands of civilians?

If your prediction was way off in 1900, you would have been in good company. Conventional wisdom at the time maintained that the economies of Europe were too integrated to ever lead to war, much less a conflict that would first be deemed The Great War and then renamed after its successor was even worse. But there was one realm in which the catastrophe of World War I was foreseen with startling clarity: literature. H.G. Wells’ serialized 1907 novel “The War in the Air” contemplated the immense resources being poured into then-unprecedented war machines (emphasis added; note the prophecy of a decaying Russia and a militant Germany at the end, and the hints of the eventual end of the British Empire throughout):

“It is impossible now to estimate how much of the intellectual and physical energy of the world was wasted in military preparation and equipment, but it was an enormous proportion. Great Britain spent upon army and navy money and capacity, that directed into the channels of physical culture and education would have made the British the aristocracy of the world. Her rulers could have kept the whole population learning and exercising up to the age of eighteen and made a broad-chested and intelligent man of every Bert Smallways in the islands, had they given the resources they spent in war material to the making of men. Instead of which they waggled flags at him until he was fourteen, incited him to cheer, and then turned him out of school to begin that career of private enterprise we have compactly recorded. France achieved similar imbecilities; Germany was, if possible worse; Russia under the waste and stresses of militarism festered towards bankruptcy and decay. All Europe was producing big guns and countless swarms of little Smallways.”

Why did Wells predict the carnage of World War I so accurately – and in a work of fiction, no less – while his peers were distracted by what they wrongly deemed a dawning golden era of global cooperation?

The question brings me back to an old saw of mine: Google’s obsession with science fiction, a genre Wells was instrumental in modernizing. The company’s ambitious “moonshots” division once required that new projects have some sort of basis in or resemblance to sci-fi. Efforts such as flying cars, robots, you name it: all of it was a computer science exercise in catching up to the fantasies of pulp writers from decades ago. Hell, the dummy-piloted taxi cab from “Blade Runner” (a movie released in 1982) is still far out ahead of the billions upon billions of dollars being spent on self-driving cars today by Google and its peers.

Google is not alone; the tech industry often comes off as highly certain of what the future will look like. Predictions about the dominance of automated vehicles, “the rise of the robots,” and so much more are collectively the fuel upon which a thousand “influencer” conferences run. Such events and the companies that participate in them are at the same time highly dismissive of the value of humanistic education, instead prizing “technical” knowledge above all else. Yet the irony of them fervently chasing ideas from storybooks persists.

At some level, we all seem to trust in the power of fiction to tell us what the future is, whether we trust the explicitly “futurist” visions of sci-fi, or the eschatology of books such as the Bible and the Koran. In regard to tech in particular, I was startled a few months ago to read Rana Dasgupta’s “Tokyo Cancelled,” a 2005 novel that sort of retells the Arabian Nights – as well as various fairy tales, such as Bluebeard – for the 21st century.

In one of its discrete stories, a man accepts a new job as an editor of people’s memories. He curates thoughts that they have (which have been captured via surveillance) and puts together a retrospective to present to them on individualized CDs. However, he has to be careful to edit out bad memories:

“We have short-listed around a hundred thousand memories that you can work from. They’ve been selected on the basis of a number of parameters – facial grimacing, high decibel levels, obscene language – that are likely to be correlated with traumatic memories….Apply the logic of common sense: would someone want to remember this? Think of yourself like a film censor; if the family can’t sit together and watch it, it’s out.”

2005, ok?

Now here’s a Facebook employee, in 2015, announcing the introduction of filters into its On This Day service, which sends you a notification each day linking you to your photos and status updates from past years:

“We know that people share a range of meaningful moments on Facebook. As a result, everyone has various kinds of memories that can be surfaced — good, bad, and everything in between. So for the millions of people who use ‘On This Day,’ we’ve added these filters to give them more control over the memories they see.”

Wow.
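
Strip away the fictional framing and the corporate announcement and the two passages describe essentially the same thing: a rule-based filter that decides which captured memories are fit to resurface. Here is a minimal sketch of that shared logic in Python – purely illustrative, with made-up names and thresholds, not Dasgupta’s system or any real Facebook API:

# Purely illustrative: a rule-based "memory censor" of the kind described in
# "Tokyo Cancelled" and echoed by Facebook's On This Day filters.
# All class names, parameters, and thresholds here are hypothetical.
from dataclasses import dataclass, field
from typing import List, Sequence

@dataclass
class Memory:
    description: str
    grimace_score: float       # facial grimacing, 0.0 to 1.0
    decibel_level: float       # peak loudness of the captured moment
    contains_obscenity: bool   # flagged language
    people: List[str] = field(default_factory=list)

def is_family_viewable(memory: Memory) -> bool:
    """Dasgupta's 'film censor' test: could the family sit together and watch it?"""
    return (memory.grimace_score < 0.6
            and memory.decibel_level < 85.0
            and not memory.contains_obscenity)

def build_retrospective(memories: Sequence[Memory],
                        filtered_people: Sequence[str] = ()) -> List[Memory]:
    """Keep memories that pass the censor test and, like the On This Day
    filters, drop any involving people the user has chosen to filter out."""
    return [m for m in memories
            if is_family_viewable(m)
            and not any(p in m.people for p in filtered_people)]

The specifics are invented, but the shape is the point: parameters stand in for trauma, and a censor decides what the user gets to remember, which is exactly the job Dasgupta imagined in 2005.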

So while Dasgupta was essentially predicting an advanced Facebook service at a time when Facebook itself didn’t even exist yet (“Tokyo Cancelled” was written well before 2005, and Facebook was launched in 2004), what were the leading lights of tech predicting? Um…

  • Steve Jobs in 2003: music streaming services are terrible and will never work. Reality: in 2016, streaming drove an 8.1 percent increase in music industry revenue, and virtually everyone has heard of or used Spotify and Apple Music.
  • Bill Gates in 2004: email spam will be over by 2006. Reality: spam was still 86 percent of all email as of January 2016.

The gulf between Dasgupta’s futurism and these now-laughable predictions brings me back to the vitality of the often-maligned cultural studies fields. I am reminded again and again of how we have to think about culture as a whole – not just scientific advances, which are undoubtedly important to human improvement, but also the flow of literatures, social mores, art, etc. – to sense where we are and where we are going. For example: Max Weber once positioned the Protestant work ethic – a totally incidental characteristic associated with adherence to a specific religion – as a central cog in the growing success of capitalism, which was reshaping Europe in his time. Yes, the Industrial Revolution and the creation of the steam engine, electricity, coal-fired ships, etc. were all vital to the creation of global capitalism, but would it have coalesced into a coherent social system without the cultural glue of Protestantism?

Just as Weber saw religion as an essential way to make sense of and corral new modes of industrial production, Dasgupta saw, by writing speculatively about it, the struggle to deal with information at vast scale (imagine all the CDs needed to contain the memories of the characters in “Tokyo Cancelled”) as a defining issue of the busy yet personally isolating environment of the modern international airport, in which the book takes place. When we give up on studying the humanities (and all “the channels of physical culture” whose underinvestment Wells bemoaned in the passage above), we create huge blind spots for ourselves and miss futures like these that should have been apparent to us all along, whether they sprouted from an Edwardian sci-fi novel or a 21st century fairy tale.