In his seminal essay, “Politics and the English Language,” George Orwell talked about how the overuse of Latinate words in English had become like “soft snow” falling upon the facts, “blurring the outline and covering up all the details.” The result of such snowiness was that “political speech and writing are largely the defence of the indefensible.” It was true in 1946, and it’s true in 2014. We have awful, long-winded “you’re all fired” letters, so stuffed with bromides that they open with “hello there” to blunt what their authors must realize is widespread pain infliction. We also have the word “adjunct.”
This word is almost always succeeded by “professor” in English. In a terrible twist of irony, it comes from the Latin adjunctus, which means “closely connected.” But there is no brotherhood between the “adjunct” and the institution she serves. An “adjunct” (I will continue to put this word in quotes because I don’t want it to be normalized) is a reverse mercenary; she joins in because she’s forced to, and there’s nothing to gain. She teaches for a pittance – I worked for $1,700 a semester – at whatever institution (I also prefer this word to “college” or “university” since it has fewer august trappings) has done enough cost-cutting to justify her hire. The job is likely one of many similar gigs.
I “adjuncted” for much of 2010. As an “adjunct,” I spent the equivalent of a part-time work week, every week that summer, preparing syllabi, lectures, and assignments, and none of that time or effort was paid. I prepared everything from my studio apartment because I was not given an office until the school year began in August (and even then, only once a week for a short pre-class window, for office hours). I was not asked to participate in any departmental meetings and was not offered any insurance.
Accordingly, I was upset at the picture of the “whining adjunct” painted by one Catherine Stukel in a recent letter to the Chronicle of Higher Education. Though she didn’t extend her critique to “adjuncts” as a group, her decision to go after an extreme case makes me worry that she would not hesitate to put down the thousands of “adjuncts” who are in bad situations due to injustices beyond their control. The “whining” individual in question was Margaret Mary Vojtko, an “adjunct” French instructor who passed away at 83 after years of virtually pro bono service to Duquesne University. She had been working well into her 80s.
Stukel’s argument, such as it is, seems to be:
- Society is full of “entitled young adults” who are unjustified in their complaints about full-time work prospects.
- Vojtko, as an 80-something professor struggling to hold down work, was a poor model for these same “entitled” children, and may have perpetuated such ungratefulness.
- Vojtko’s lack of tenure or even full-time work was likely due to inter-office politics or, worse, a lack of passion (“Maybe she was unhappy?”), not ruthless corporatization of the post-secondary education system in the U.S. over the past 30 years (a figure that Stukel coincidentally drops in her paean to her own history of lifelong gainful employment).
- Life is about compromises – in this case, settling for the middle ground of “adjuncting,” after not attaining a dream job but having the wherewithal to avoid literal unemployment.
Let’s go through these points.
The myth of entitled youth
I covered this point elsewhere, but to recap: Calling the current young generation “entitled” is blaming the victim, and it is the most clichéd move of all time (everyone going back to the Homeric epics has derided children for having laxer standards than their parents). Self-sufficiency for students and instructors alike is an enchanting myth that leaves out how institutions have become corporatized factories that A) discipline their students through non-dischargeable debt, private sector business models, and segmenting of populations into groups that are assigned varying levels of respect; and B) use adjuncts to do it. The cage is so large that the students and teachers in it can’t even see the bars anymore.
“That’s part of the business model,” wrote Noam Chomsky. “It’s the same as hiring temps in industry or what they call ‘associates’ at Walmart, employees that aren’t owed benefits. It’s a part of a corporate business model designed to reduce labor costs and to increase labor servility. When universities become corporatized, as has been happening quite systematically over the last generation as part of the general neoliberal assault on the population, their business model means that what matters is the bottom line. The effective owners are the trustees (or the legislature, in the case of state universities), and they want to keep costs down and make sure that labor is docile and obedient. The way to do that is, essentially, temps. Just as the hiring of temps has gone way up in the neoliberal period, you’re getting the same phenomenon in the universities.”
Professors aren’t and shouldn’t be role models
Stukel has a hard time imagining that a lifelong “adjunct” like Vojtko could be a suitable example for the “young.” This argument is strange; college students, though perhaps “young” depending on one’s own age, are adults. Many of mine were older than I was when I began “adjuncting” at age 24. They are past the stage of needing role models.
There’s immense irony in Stukel’s lazy arguments about “entitled” kids and how “adjuncts” enable their worst tendencies:
No, everyone does not get a trophy. “Adjuncts” certainly don’t, unless I missed my pick-up of the No One Cared Memorial Vase. Plus, to the extent that “adjuncts” dole out inflated grades – maybe those pass for trophies, I don’t know – they do so because low grades could cost them their jobs.
What does a “self-sufficient” (Stukel’s word) professor look like? He makes more money than an “adjunct,” but full-time professors have positions that their students cannot realistically aspire to (since we’re looking at Stukel’s career-centrism) and which most people could never stomach their way through. The political tit-for-tats alone are so far beyond the quaint office scenarios that Stukel imagines as standard fare (to be fair, she is “a career- and technical-education professor,” rather than a traditional academic) that it’s naive for anyone to expect such machinations to produce anything resembling justice or for the involved professors to come out looking like anything other than competitors in Hobbes’ state of nature.
“Adjuncts” are past the political stage
The politicking situations – the back-and-forth inter-office banter, the spectacle of a committee meeting – that Stukel takes for granted are at odds with the lonely, nomadic experience of many adjuncts, who, whether by choice or necessity, do not linger at their institutions beyond class time. Why should they?
To the institution, the “adjunct” is a non-union, fireable-for-any-reason employee, one who could be replaced by someone from the legions of desperate, overqualified humanities majors out there. Plus, it’s common for adjuncts to make enormous commutes just to get enough classes to scrape by. Imagine spending $200 a month on gas and 8 hours a week in a car going back and forth between institutions.
My situation wasn’t that extreme, but I did endure a bus-train transfer each time over, often spending 30 minutes per day standing on the Red Line platforms waiting for trains going north and south. I woke up before 6 most days to wear the tie, dress shirt, and slacks I bought specifically for the job and to give myself enough time in case of CTA bus or train delays. My commute, while relatively mild, was often longer than my time in the classroom.
Yet, Stukel is concerned with “meetings” and “events.” There’s no time for such niceties for many adjuncts, and even when there is, the context is more likely “we’re letting you go/a student complained/we added a course” than “tell me what you did last weekend.”
Moreover, “adjuncts” in the classroom, the makeshift office, or the department building are not participating in a political contest in which the stakes include long-term employment. Most “adjuncts” go in knowing that the position is in no way on a track toward a six-figure salary, paid time for research, and general job security. Which brings me to…
“Adjuncting” is a destination, not a journey
“Adjuncting” is often thankless work that may benefit a few students, but rarely their instructors. Pay is negligible, the workload is high, and “adjuncts” have to live with the constant knowledge that they are replaceable despite their hard-earned degrees and often sophisticated teaching techniques. An “adjunct” with a master’s degree has worse career prospects than a Teach for America alum like Michelle Rhee, who once taped her students’ mouths shut. How does that make any sense?
Vojtko’s age also speaks for itself. If a senior citizen can’t overcome the vile postsecondary system after decades of excellence and experience, who can? “Adjunct” is such a terrible word for the entire experience that instructors have to put up with. May I suggest another Latin derivative: intern. Ideally intern professors will take the fight to the institutions like unpaid interns already have.
Have you ever gone to gmail.com…on your iPhone? Ever felt the urge to load the Instagram website from your Nexus 5?
Web protocols – hello, HTTP – may be central to how we experience mobile software, but mobile browsers have lost the race to hybrid and native apps. Whereas one could conceivably spend an entire workday on a Mac or PC inside Safari or Chrome, the same workflow would be unbearable on an iOS or Android device.
Mobile browsers seem like library stacks – you probably wouldn’t wander into one unless you had to retrieve something, whether a book or a URL. There’s a reason for all those annoying “download our app” banners when you stumble upon some TV station’s mobile website. Apps are better.
If a service offers both a mobile website/Web app and an app that can be downloaded from an app store, I just about always go for the latter. Its hybrid/native app – i.e., the one from the App Store and Google Play that has its own springboard icon and takes up the full screen when launched – is almost certainly faster and definitely more immersive.
Everything from Yelp to various Reddit clients is much better than the corresponding mobile Web experience. On Android, the incentive to “go native” is even greater since some links can be opened in apps rather than Chrome (e.g., click a URL that goes to Google+ and opt to open up the Google+ Android app rather than the website).
But there seems to be at least one big blue exception to the normal Web versus hybrid/native app rule.
Facebook’s mobile apps, especially for Android, have always been a mess. The early versions were just Facebook.com repackaged with multiple hamburger buttons. While the visual style has gotten flatter with the years and the app has been rebuilt with native code, it’s still got a lot of flaws:
- It’s a big battery drain. Deleting Facebook alone is probably the best general advice for saving battery on Android, other than just keeping the screen off all the time.
- It’s inefficient. Photos can take forever to load on even modest network connections, with performance that lags behind even bandwidth-intensive apps such as Netflix.
- It’s inconvenient. The recent separation of Facebook Messenger means that Facebook is not one but increasingly many apps taking up space and battery on your phone.
Facebook’s Android app is what happens when the immovable object of a massively popular PC-era app meets the unstoppable force of mobile app usage. It manages to hold onto all the legacy features – post filtering, sponsored posts, tons of menus – while taking chic 2010s aesthetic cues. The result is something that is at best mercifully unobtrusive and at worst crashy and unusable.
There’s plenty of irony about Facebook’s operations, including its fundamental devotion of massive computer science expertise to the art of ad-clicking. With app design in particular, it’s amazing that the lowest-risk way to use Facebook on Android in 2014 is via Chrome.
The Facebook Web app, especially on a big screen Android phone, has several advantages over its cousin:
- First, it leaves you alone. There are no push notifications, though you can still check your notifications and your messages through the familiar row of icons at the top. Once you leave the site, you’re basically out of Facebook’s world.
- Facebook Messenger? It’s right there in that same row and it actively updates with new messages without requiring a page refresh.
- On middling Internet connections, it seems to load media better than its Android counterpart. I’ve seen big rows of gray boxes in the place of photos on the Android app, whereas the website on the same network is fine.
Granted, going to Facebook.com on an LTE-equipped phone feels like bringing modern hardware back to 2006. But in every way that matters, it feels like the healthiest choice for the phone and for my own sanity.
Entitling an article “The Internet’s Original Sin” is pretentious, but I’m guessing that it is an Atlantic editor’s attempt at sounding weighty while driving traffic on behalf of the publication’s ads. The irony of reading Ethan Zuckerman’s post about the consequences of Web ads aside, the author makes a compelling case that the reliance of websites and social media on advertising has had unsavory side effects. The most notable is heightened surveillance as Facebook, Google et al try to discover more about who uses their services so that they can better target their ads.
Web advertising has been a vital revenue stream for big businesses and small-time website owners alike for roughly 20 years. Yahoo, Google, and Facebook were all built atop ad-supported monetization that is frequently annoying and irrelevant. Even sites like this one run ads that readers likely have little use for. Ads, in addition to the money they bring in, are good reminders that, for all the incessant talk of “innovation,” many of the Web’s biggest players have a business model not all that different from 1950s broadcast television. People have sat through commercials for everything from Kool-Aid to Budweiser while watching TV, and now they endure sponsored content (i.e., highbrow infomercials) and sidebar ads for AT&T and Groupon.
Zuckerman proposes fees that would support Web properties while removing the baggage that comes with ads. There are plenty of examples of such an approach, including Pinboard (a fee that increases fractionally for each new user), Zoho’s various services (including its ad-free webmail), and Pocket (annual subscription). Of course, paying for things upfront is a very “analog” thing to do, seen as out-of-step with the freemium economics of “digital” media. Hearing at least one prominent voice speak out for the return of Paying For Things and be applauded as forward-looking for doing so speaks volumes about the highly political, neoliberal construct commonly referred to as “the Internet.”
When many individuals talk about “the Internet,” they aren’t talking about basic IP connectivity, nor are they talking about a medium in the same sense that one speaks of “television” or “radio,” both of which are treated basically as dumb conduits for content and programming. No, the Internet is a whole suite of ideas about Whig history and neoliberal economics, one that is almost always referred to positively as a non-human champion for progress. Even its flaws – surveillance, ads – are seen as the morally wrong actions of individuals trying to ruin an objectively good thing. It’s absurd to think of talking about any other communications medium this way – no one is going to write about the original sin of TV or how radio is disrupting X or Y. Those media aren’t regarded as singular forces.
I have long wondered why this was the case. Was “the Internet” really unique? It’s essentially an extension of technologies dating back to the telegraph, and its impact on human welfare is less than that of humble inventions such as the washing machine. But I was overlooking the obvious answer: “the Internet” is an enormous revenue opportunity for the private sector, particularly Silicon Valley. This sentence from Zuckerman’s piece resonated:
“Most investors know your company won’t grow to have a billion users, as Facebook does. So you’ve got to prove that your ads will be worth more than Facebook’s.”
Nothing wrong with this sentence. It’s a great breakdown of the weird pressures currently shaping monetization on the Web. But did you notice something odd about this sentence, and about most of the article? It’s exclusively about private services stewarded by for-profit corporations. It’s almost as if the only organizations that exist are startups, and that issues with “the Internet” are moral rather than political.
It seems taboo to talk about the possibility of, say, a public and free equivalent of Facebook, Reddit, or Google. It’s cliché to refer to “the Internet” as the largest library ever, but it’s really not, at least not in its heavily politicized state in 2014. Libraries are generally run for the public good, or for the benefit of a smaller group of people (university students and professors) who have subsidized it in other settings and can utilize it as a space for thinking, without seeing ads everywhere or trading data for personalized book recommendations. In contrast, “the Internet” is a cash machine for the private sector. Likewise, “the Internet” isn’t akin to an essential utility like electricity or water for similar reasons, plus it’s used mostly for leisure (another indication of the level of value it contributes to society).
It seems short-sighted to propose an end to free/privatized services so that we can have paid/privatized services, as if these two business models were all there were to the Web. Since the Web is so often used to look up information and is even instituted as a human right (absurdly, I think, but that’s another conversation), why not treat it like water or electricity or any of the other essentials that it is compared to when speaking of “the Internet”? Why not make it a public library? Right, because there’s too much money at stake, and so much political power rides upon treating “the Internet” as an all-powerful force best left to the private sector. In the West, we’ve been knee-deep in neoliberalism for so long that it’s hard to realize that inquiry really could extend beyond how we pay for things and instead take up the questions of who benefits, and should they.
I thought about calling this entry “the 10 most overrated albums ever,” but that would be stupid, because, as Kip would say, “like anyone could even know that.” Understanding why someone else likes something and liking it yourself are two totally different things, meaning that an album, book, movie, whatever could be “great” to critics (who are people, too, remember) while being dead to any given person. I first felt the disconnect with books, when I felt nothing but indifference to Infinite Jest (I liked some of Wallace’s other works), William T. Vollmann, or Amy Tan. With film, my feelings were less strong, so no lists of “most overrated/underrated/best ever” will be forthcoming.
Music has such a low bar to entry for criticism, though, that it’s as easy to slaughter sacred cows as it is to erect them in the first place as monuments to one’s own demographic, historical, and stylistic biases. Publications like Rolling Stone and Pitchfork have each single-handedly created pantheons around middling albums from Slanted & Enchanted to Neon Bible, without discussing much other than the cultural contexts in which these works were created. I came up with a list of the albums with critical reputations out of line with their music, at least to my ears. As always, a reminder that “overrated” doesn’t necessarily mean “bad.”
Nirvana – Nevermind
Smoothing over 1980s punk and indie with 1970s production and commercialism was the most clichéd move possible at Nirvana’s time, but they did it anyway, following the example of the Pixies, Soul Asylum, Goo Goo Dolls, and many others. “Smells Like Teen Spirit” is not only a copy of Boston’s “More Than a Feeling,” but also the basis for “Drain You” and “On a Plain,” making the latter two copies of a copy. The band is musically limited and its song structures conventional; it’s hard to tell why they were picked out of the sea of bands that had all the same tricks.
Radiohead – Kid A
Reviews and retrospectives of this album are given to ridiculous hyperbole about its Importance as well as pearl-clutching about the decline of the album, 9/11…everything. The music itself? Sloppily produced folktronica, the obvious result of a rock band listening to the Warp catalog a few times and feeling like it had loops, textures, and sampling down pat. Much of it – the title track, “Treefingers,” “Morning Bell” – sounds like 1970s acts such as Mike Oldfield or Klaus Schulze, except produced with a harsh commercial sheen 30 years after the fact – what’s so great about that? Every Radiohead album has been worse than its predecessor.
My Bloody Valentine – Loveless
Whale sounds, gossamer, whatever – reading writing about this album is like reading a wine reviewer’s notes. MBV was heavily reliant on volume and production, which means that the group’s sporadic output and epic hiatus aren’t hard to understand – there’s not much in the songwriting well. This album sounds like a noisy take on the Cocteau Twins more than it does any of the outré sounds ascribed to it.
Sex Pistols – Never Mind the Bollocks, Here’s the Sex Pistols
Does anyone listen to this anymore? It always felt like an obligatory influence or touch point to cite, but listening to it start to finish was an afterthought. It’s not even that edgy – The Clash were more political, American acts like Blondie and Television were more forward-looking and ultimately more influential, and even Stones records like Sticky Fingers and Let It Bleed were full of references to drugs and violence years before. Anyone going to argue that “God Save the Queen” has aged well, or “EMI” for that matter?
Sufjan Stevens – Illinoise
In 2005, I was baffled by the almost universal acclaim that this record received, figuring that it might just be admiration of its cute artwork and super-long song titles (which indicate quirkiness AND importance). It’s way too long and keeps an even tone almost the whole way, with busy but uninteresting arrangements, tiresome lyrics, and flat production that between them add up to something that can maybe be listened to once or twice before moving on. As an Illinoisan, this hurts.
The Strokes – Is This It
Indeed, was that it? As a 15-year-old, I remember disliking this album for its grating, filtered vocals. I gave it a second shot recently and was surprised that my reaction hadn’t really changed in a decade-plus. They’re not cosmopolitan enough to sound quite like The Velvet Underground, and the results instead are repetitive guitar lines mastered and done much better years before by The Cars and Blondie. Like Nevermind, it is a record heavily dependent on its production. Not bad, really, but nowhere near the masterpiece it was hailed as.
Tom Waits – Rain Dogs
I first heard Tom Waits as an 18-year-old, and I didn’t get him at the time. I later got into Mule Variations, which was full of varied yet coherent and tuneful songs given gorgeous production and lyrical wit, but I never warmed to this record. It repeated the innovations of Swordfishtrombones, albeit across an exhausting 19 songs of screaming, yelling, and other vocal interpretations that are just takes on the Howlin’ Wolf blues tradition filtered through Captain Beefheart – it’s only “weird” to sheltered boomers or lily-white indie critics. The one song I kept listening to again – “Hang Down Your Head” – is just the old standard “Tom Dooley” reconfigured.
Arcade Fire – Neon Bible
Arcade Fire do very little with a whole lot. They use tons of instruments to cover up for straightforward chord progressions and plodding tempos. Win Butler’s voice is screechy and the lyrics strive hard for Seriousness but in the musical context end up wearing me out. I remember hearing “Keep the Car Running” while in a car in 2007 and thinking of how it seemed endless despite its 3:28 running time, probably because of the repetitive musicianship.
Led Zeppelin – IV
I never got into Led Zeppelin, so take that into account. They seemed too masculine, too self-regarding, too blues-masters-y to really connect with me. However, thanks to my sister’s Zep phase, I listened to this album many times on car trips. It’s strong in the first half, with “Rock and Roll” and “Black Dog” and “Stairway to Heaven.” But Side 2 is like a joke – the pointless hippie paean “Going to California” and especially the turgid blues rip-off “When the Levee Breaks.” Zep always seemed more like a consolidation of the past (Cream, Hendrix, Robert Johnson) than something really new.
Wilco – Yankee Hotel Foxtrot
This record’s backstory and accidental 9/11 relevance totally overshadowed its content. There are some nice, catchy songs here – “Kamera,” “Jesus Etc.,” “Heavy Metal Drummer” – but there are also heavy-handed touches like “I Am Trying to Break Your Heart” and “Poor Places.” It’s a record that feels like it badly wants to be experimental, but can’t make the leap. Still, “I Am the Man Who Loves You” is better than anything Pavement ever did.
I’ve written about the “comfort zone” a few times, once as the subject of its own entry and another time in the context of my decision to leave a programming boot camp. But I don’t think my arguments against it were that well-formed. It’s a cliche, yes, and one ought to be skeptical of all such constructions since they imply political viewpoints that have been systematized and turned into placeholders for original thought. Beyond that, I struggled to articulate my reasons for disliking this mantra in particular.
After thinking about it, I feel like I’ve arrived at a clearer reason for my distaste:
Habits aren’t necessarily the results of comfort
Think about a daily routine such as getting up (despite feeling groggy), dreading work (not all people “love what they do”), cooking breakfast, setting out on a long commute, putting in a few hours, and then going home. Consider also the effects of this regimen, from feeling tired to wanting to avoid conflict and distraction that would add to the stress. Do any of these behaviors sound like they were caused by “comfort?” What if they were instead the products of pressures that the individual couldn’t control, forces that had made unhappiness instead of comfort the main form of stasis?
In popular imagination, stepping out of the “comfort zone” is often a low-stakes, recreational endeavor – trying out for a sports team or doing an improv routine – or one that is decidedly upper-middle class (asking for a raise). Other actions such as entering an immersive language environment might also qualify, but could just as easily be framed as necessities driven by motives other than the voluntary maneuver of moving past the artificial confines of the “comfort zone.” For example, was learning the language necessary to get a visa or reunite with a family member? These motives would potentially extend beyond her own aspirations and encompass restrictions, rules, and regulations that she could not shape to her own ends. In contrast, being nudged to step outside the “comfort zone” assumes that the subject has some level of luxury, i.e., that her problems, such as they are, consist mostly in habits that are formed of her own volition and easily changed. That’s not realistic for a lot of people.
Like so many cliches – “think outside the box” also comes to mind – the “comfort zone” is propaganda that promises freedom while papering over the confines that are being ever-extended around the subject by mechanisms including its own dishonest words. “I was in a zone? I was in a box? No matter, with some solutionizing language I can escape!” Such cliches create a weak, artificial barrier that can be broken with a superficial solution (there’s that word again).