In the Bernard Shaw play “Caesar and Cleopatra,” Cleopatra whips up a baldness “cure” for Caesar consisting of a wild mix of ingredients including burnt mice and horse’s teeth. Shaw himself notes that he doesn’t understand the ingredients and of course it doesn’t work because nothing does.
In the decades that have passed since the play and the millennia since the historical Caesar and Cleopatra lived, there hasn’t been much progress in treating androgenic alopecia, the most common form of baldness, especially compared to advances in, say, antibiotics. The 1990s saw the mass release of Rogaine and Propecia as well as marked improvements in hair transplantation surgery, but a Shavian cure is apparently still far off. None of the current remedies provide a definitive solution; at best, they can hold the progression of balding at bay for a few years.
I became very interested in androgenic alopecia in the early 2010s when I noticed some thinning of my own hair. During my research, the insufficiency of the associated treatments struck me: Both Rogaine and Propecia have to be taken indefinitely to maintain their benefits, which are often subtle to begin with. Transplantation is expensive and often must be supplemented with Propecia. Other treatments are by and large outlandish and unproven.
The fundamental problem, as I see it, is that baldness requires both a scientific and an artistic solution. It’s not enough for the underlying scientific process to be sound in disrupting the mechanisms of androgenic alopecia; the results of the treatment must also be dramatic and cosmetically acceptable.
This double requirement is why hair transplantation results vary so much between surgeons, some of whom are good artists and others not. It’s also why many treatments that seemingly work in vitro – like cloning one’s hair – don’t carry over to the real world, since it’s difficult to ensure that the right size, color, and direction can be achieved in vivo. Baldness, at its core, is an artistic concern.
Unsurprisingly, given its unique difficulties, baldness has inspired a truly weird set of treatments:
- A prescription-only pill that doubles as a medication for urinary retention in elderly men (Propecia).
- A blood pressure medication that grows hair for reasons that are still not fully understood (Rogaine, whose active ingredient, minoxidil, was originally marketed as Loniten).
- A form of alternative medicine (low-level laser therapy).
- Re-injection of one’s own processed blood into the scalp (platelet rich plasma).
- Artistic rearrangement of follicles (transplantation).
Of these treatments, by far the most discussed and the most controversial is Propecia, also known by its chemical name, finasteride. It interrupts the conversion of testosterone to DHT, a more potent compound that attacks follicles in the genetically susceptible. That’s a pretty basic process to screw with, at least in males.
Nevertheless, reactions to finasteride are wide-ranging, with some takers reporting horrible side effects such as permanent erectile dysfunction and depression, while others praise it as the best cosmetic medication available – a “happy pill.” Personally, as someone who took it for years, I think it falls somewhere in between.
Its side effects are considerable and run the gamut from the subtle (difficulty sleeping) to the overt (erectile dysfunction). The original clinical trials for its approval reported very low rates of side effects, which have been repeatedly held up as proof that the many people complaining about its adverse effects are lying. At the same time, its benefits are slight compared to other “lifestyle” drugs such as Accutane, which while boasting an even worse side effect profile can dramatically resolve cystic acne for good; in contrast, finasteride must be taken continuously just to preserve the status quo.
The experience of taking finasteride reminded me of taking antidepressants years ago. I remember feeling lousy on both medications and attributing my feelings to them not having kicked in yet. In reality, they were the sources of my problems, including loss of sex drive and weight gain. I only learned years later that rates of their sexual side effects in particular were vastly underestimated; my prescribing psychiatrist refused to believe they could have these effects, but later research has drawn similarities between the long-term health issues caused by SSRIs and finasteride, both of which have complex effects on the brain.
In any case, finasteride’s side effects piled up for me over time, culminating in higher blood pressure and substantial weight gain yet again. I quit cold turkey and felt the same liberation I had back in 2006 when I ditched antidepressants and entered a much better phase in my life.
Finasteride is not an essential medication, even for its other indication for benign prostatic hyperplasia. It doesn’t save lives. Its potential for side effects, especially over the long term, and the possibly wide extent of these effects in the central nervous system and the liver give me pause. I don’t trust it anymore and so I won’t be taking it again. I wouldn’t recommend it to anyone unless he was truly desperate, as apparently I was years ago when I started.
Stopping it has freed me from fretting so much about my hair, a concern whose hold on me I didn’t even appreciate until I finally let it go. I still do a few minor things to keep it styled and looking healthy, but if it goes, so what? I feel like hair anxiety is such a 20-something thing and, moreover, such a straight thing – so many message board posts about balding are about “oh women won’t like me anymore once I’m bald.” This doesn’t apply to me, obviously, as a married gay man in his 30s. I don’t want to be chasing my youth instead of simply accepting aging and being grateful for an ongoing healthy life.
Years ago, I joined the conversation about whether video games constitute “art.” The late Roger Ebert spawned a thousand hot takes by refusing to classify them as such, arguing that their winnability set them aside from classical art forms that cannot be won or lost, only experienced. I wrote this on the subject almost five years ago:
“Classic [Nintendo Entertainment System, hereafter “NES”] and [Super Nintendo Entertainment System, hereafter “SNES”] games are nowadays mostly playable only via emulation. Imagine if you could only watch The Thief of Baghdad or The Birth of a Nation by “emulating” (or actually using!) an early 20th century era projector and screen. Of course, that isn’t the case – you can watch either one on any device that has Netflix on it. Similarly, imagine if the works of Shakespeare could only be read on 17th century folio paper and were essentially illegible on anything printed after that time. Such a reality would be absurd, but it’s basically the issue that plagues video games: their greatness, with precious few exceptions, isn’t transferable across eras.”
If you are not a frequent gamer, allow me to take a step back and walk us through what either of us would need to do in order to play, say, Excitebike, a game that launched alongside the NES in 1985. I basically have three options, which I will present in descending order of fidelity:
- Play the game from a physical cartridge on either an original NES or one of the systems it was ported to, such as the Game Boy Advance.
- Play it from the NES Classic, an official Nintendo product launched in 2016 with 30 built-in games remastered for HDTVs.
- Emulate it using specialized software on a PC/Mac (a hassle if you aren’t technically minded) or within a web browser, both of which are legally dubious.
None of these options is ideal if you are accustomed to the seamless on-demand experience of video/audio streaming and digital books in particular. And believe it or not, Excitebike is probably a relatively easy game to dust off, since it: a) was released before the era of online gaming and downloadable content and b) is maintained by Nintendo, one of the world’s most historically conscious and nostalgic companies. Many games will not hold up as well.
As I see it, there are at least three major obstacles to the preservation of video games as art:
1. Disappearance of specialized hardware
Most games are designed to exploit the particular hardware of a given system. Super Mario 64 was constructed around the Nintendo 64’s distinctive analog stick, while GoldenEye 007 forever altered video game control schemes through its use of the trigger-like Z button on the same console. The Wii is home to countless games requiring motion controls, including its pack-in, Wii Sports, which is the best-selling console game of all time. Smartphone/tablet games are no different, with controls incorporating taps, swipes, and other gestures.
What happens when all this hardware is no longer readily available? We already know the answer, given the enormous demand that has chased the limited supply of NES Classic and SNES Classic consoles that bundle their respective titles into ready-to-play hardware. People will likely not play or experience those games anymore, unless they have a really convenient option for doing so (and DIY emulation doesn’t count).
Games that are emulated or ported to other platforms lose some of their original design, in a way that a book, painting, album, or movie cannot. For example, if I play Excitebike on my computer with a keyboard and infinite save states, that’s a very different experience than playing it on an original NES. In comparison, the differences between watching Citizen Kane on my phone and in an arthouse cinema seem minor.
2. Online functionality
Online gaming took center stage beginning in the late 1990s, with consoles such as the Sega Dreamcast and Microsoft Xbox incorporating internet connectivity infrastructure right out of the box (previous systems had required various aftermarket peripherals). The spread of broadband internet further fueled the rise of franchises that not only had online multiplayer functionality, but in some cases had nothing but that (the massively popular Destiny 2 is online-only, for example).
Of course, a sustainable online-only or online-mostly game requires a healthy community. Some games, such as World of Warcraft, have sustained their fanbases for years, while others have shut their doors after interest waned, rendering them impossible for posterity to experience.
Nintendo offers some prime examples of the tenuous nature of online games. Its Nintendo Wi-Fi Connection service, which powered many games on both the Wii and the DS, shut down in 2014 because it had been hosted on third-party servers that were acquired in a merger. No one can go online anymore in Advance Wars: Days of Ruin or any other title reliant on the Wi-Fi Connection platform. Similarly, the company shut down Miiverse recently, leaving the lobby of the online shooter Splatoon weirdly vacant; it had previously been populated by virtual characters who, if you approached them, presented drawings made by players and saved to Miiverse servers.
3. Software updates
This flaw is not one I considered in my 2013 post, but I now think it may be the most significant of the three. To understand why, we first have to ask: Why bother with game consoles at all?
A console is basically a shortcut. Instead of having to build your own gaming PC or purchase a super high-end mobile device and keep updating it every few years, you can purchase a standardized piece of hardware that will be good for at least 5-7 years before a successor is released. Plus, you can rest assured that any title released for the system will work on the hardware you purchased.
Consoles were once super distinct from PCs, since they had essentially no user-facing operating system. You couldn’t dig into their data management setups, change their network connections, or do anything you take for granted on other platforms, since they didn’t have any such features.
That began to change when consoles became internet-enabled and gained media playback capabilities, with the DVD-playing PlayStation 2 and Ethernet-equipped Xbox perhaps the first real inflection points. Today’s games often require enormous patches or updates to remain playable and secure, as do the system OSes they run on.
Updates are a particular weakness for phone/tablet games. Consider the iPhone: Every single year, it receives multiple new models, with fresh software APIs, updated chips, different screen resolutions/sizes, etc. Like clockwork, the presenters at the Apple keynotes talk about how these new features will make the device “console-level.” Yet iOS and Android are still more synonymous with free-to-play gambling games, which account for an enormous share of platform revenue, than with more in-depth gameplay. Why?
I think the endless upgrade cycle is partly to blame. One iOS game developer decided to leave the App Store altogether recently, saying (emphasis mine):
“This year we spent a lot of time updating our old mobile games, to make them run properly on new OS versions, new resolutions, and whatever new things that were introduced which broke our games on iPhones and iPads around the world. We’ve put months of work into this, because, well, we care that our games live on, and we want you to be able to keep playing your games. Had we known back in 2010 that we would be updating our games seven years later, we would have shook our heads in disbelief.”
There’s simply no guarantee that a game developed for any mobile platform will run even a few years later without proactive updates to save it from obsolescence. This issue doesn’t exist as much on consoles (since they are designed to be fixed systems with long lifespans), and especially not on older consoles. I can put a cartridge in a 1998 Game Boy and, barring any electrical or technical issues, be certain it will load and play as intended. I can’t say the same about an iOS game that hasn’t been updated since 2016.
The future of gaming history
The software update issue was raised by a blogger, Lukas Mathis, in a post about the wrongness of various other tech bloggers’ predictions about Nintendo. Between approximately 2011 and 2016, it was very fashionable to proclaim that Nintendo was failing and headed the way of Sega, i.e., toward being a software developer for other people’s hardware, instead of a hardware maker in its own right (Sega exited the console business in 2001, only ten years after its sweeping success with the Sega Genesis). A few choice quotes (all emphasis mine):
John Gruber in 2013, in a post comparing Nintendo to BlackBerry: “No one is arguing that 3DS sales haven’t been OK, but they’re certainly not great…Here is what I’d like to see Nintendo do. Make two great games for iOS (iPhone-only if necessary, but universal iPhone/iPad if it works with the concept). Not ports of existing 3DS or Wii games, but two brand new games designed from the ground up with iOS’s touchscreen, accelerometer, (cameras?), and lack of D-pad/action buttons in mind. (“Mario Kart Touch” would be my suggestion; I’d buy that sight unseen.) Put the same amount of effort into these games that Nintendo does for their Wii and 3DS games. When they’re ready, promote the hell out of them. Steal Steve Jobs’s angle and position them not as in any way giving up on their own platforms but as some much-needed ice water for people in hell. Sell them for $14.99 or maybe even $19.99.”
MG Siegler that same year: “I just don’t see how Nintendo stays in the hardware business. … I just wonder how long it will take the very proud Nintendo to license out their games.”
Marco Arment, responding to Siegler: “I don’t think Nintendo has a bright future. I see them staying in the shrinking hardware business until the bitter end, and then becoming roughly like Sega today: a shell of the former company, probably acquired for relatively little by someone big, endlessly whoring out their old franchises in mostly mediocre games that will leave their old fans longing for the good old days.”
There’s endlessly more material like these pronouncements, all of it built on several (in my opinion flawed) assumptions about the future of gaming: First, that it will from now on be irreversibly dominated by buttonless pieces of glass (i.e., phone and tablet screens) and the race-to-the-bottom pricing they encourage; second, that gaming-specific hardware eventually won’t matter, since everything will be done on general-purpose computing devices; and third, that developers like Nintendo can build sustainable businesses selling high-quality games for $20 or less, despite the enormous resources required to make something as daring as Super Mario Odyssey.
If the assumptions are correct, there seems little prospect of even today’s most famous games being preserved as “art,” since they’ll have to be endlessly redeveloped and remonetized to be sustainable. But what if the assumptions aren’t correct? What if mobile no more cannibalizes consoles than PCs did in the 1990s?
The punchline to those quotes is that Nintendo ended up selling 70 million 3DSes (almost on par with the PlayStation 4 at the end of 2017) and saw the Switch have the best first-year sales of any home console in U.S. history. It accomplished all of that while keeping online functionality and software updates relatively minimal in its first-party titles and going all-in on the bizarre, distinctive hardware of the Switch.
It’s hard to describe what the Switch does if you don’t own one. It’s essentially a console that works like any other, hooked up to a TV, but that can also be picked up and taken with you without any degradation in picture or play quality. It has a touchscreen tablet that can be combined with two hardware controllers with numerous buttons and joysticks (they slot onto the sides of the tablet), or simply used on its own as a Hulu Plus media player.
My first encounter with the Switch had me going back to my phone and thinking, “this feels old.” Perhaps tapping on a phone screen isn’t the “end of history” of video gaming it has sometimes been presented as; maybe there’s a place for more sophisticated hardware after all. I hope so, since the production and preservation of such systems will be crucial if we are to ever have a real “art history” of video gaming.
[Note: I’m going through my enormous “drafts” folder and seeing if I can salvage any of the posts without changing their titles or opening lines. This is my first try]
Every generation has its battle between, on one hand, those who pine for the “old days” and, on the other, proponents of progress who inevitably think better things are preordained. I once probably found the former camp more irritating, due to their hollow affection for activities – like hanging out in a Wal-Mart parking lot or going after much younger romantic obsessions – they’ve outgrown; they make the past appear like baby clothes: impossible to fit back into, but not impossible to recycle on someone else or hold up in reverie. Maybe even with the immense powers of the empty brain, they can make bygones keep happening.
But the progress camp – purveyors of “optimism porn,” as someone on Twitter once quipped about Harvard professor Steven Pinker – have made a strong run of their own in the annoyance department. For the unfamiliar, optimism porn is all about context; it thrives on Twitter in particular as a rejoinder to (very accurate) tweets bemoaning wealth inequality, racial injustice, and warmongering. “Hey, look at these charts showing there have been fewer wars since 1945!” Yes, that’s a form of progress, but it might also be a historical anomaly, sustained only by norms around nuclear missiles, as Dan Carlin noted in a gripping podcast episode about the history of weapons of mass destruction.
Years ago, I entitled this post “The Battle of the Books” in hopes of discussing Jonathan Swift’s work of the same name, which features a debate between the Ancients and Moderns, each represented by equally fussy books in the St. James Library; hence my own much clumsier attempt to juxtapose the “glory days” crowd with the techno-utopians. The piece focuses on how each camp thinks its particular era is the golden age of arts and letters. They’re allegorized by a spider (Moderns) and a bee (Ancients) who debate each other, prior to the actual authors of each era (everyone from Homer to Hobbes) engaging in actual violent combat.
While short, this satirical piece is, in my view, among the tightest and most quotable works of prose in English. It leads with a stunning self-referential opening line [all emphasis throughout is mine] – “Satire is a sort of glass wherein beholders do generally discover everybody’s face but their own” – and never relents.
The quip “anger and fury, though they add strength to the sinews of the body, yet are found to relax those of the mind” comes to mind equally during vigorous exercise or the frustrating angry exchanges of email and other internet-connected tools that do nothing for the body while sending the mind into a tailspin.
This segment reminds me of Elizabethan language about daggers and spears, but in my opinion supersedes Shakespeare et al. in the nuance it conveys about how writing has both an empowering and destructive effect on its most talented executors: “[I]nk is the great missive weapon in all battles of the learned, which, conveyed through a sort of engine called a quill, infinite numbers of these are darted at the enemy by the valiant on each side, with equal skill and violence, as if it were an engagement of porcupines. This malignant liquor was compounded, by the engineer who invented it, of two ingredients, which are, gall and copperas; by its bitterness and venom to suit, in some degree, as well as to foment, the genius of the combatants.”
He then progresses to talk about the unbearable process of insisting your argument is better than anyone else’s, but notes that even the most definitive “trophies” of literary achievement ultimately become artifacts of controversy, to be potentially dissolved by later debates, like the groups I mentioned earlier who are ever looking forward: “These trophies have largely inscribed on them the merits of the cause; a full impartial account of such a Battle, and how the victory fell clearly to the party that set them up. They are known to the world under several names; as disputes, arguments, rejoinders, brief considerations, answers, replies, remarks, reflections, objections, confutations. For a very few days they are fixed up all in public places, either by themselves or their representatives, for passengers to gaze at; whence the chiefest and largest are removed to certain magazines they call libraries, there to remain in a quarter purposely assigned them, and thenceforth begin to be called books of controversy. In these books is wonderfully instilled and preserved the spirit of each warrior while he is alive; and after his death his soul transmigrates thither to inform them.”
This is exquisite commentary on the ever-living characteristics of books: “a restless spirit haunts over every book, till dust or worms have seized upon it.”
On the high ambitions but limited abilities of the Moderns; sounds like this could have been penned about proponents of perpetually underwhelming tech like virtual reality and autonomous cars: “for, being light-headed, they have, in speculation, a wonderful agility, and conceive nothing too high for them to mount, but, in reducing to practice, discover a mighty pressure about their posteriors and their heels.”
Swift also effortlessly shifts to some of the best speculative writing I’ve encountered, on par with if not better than what he pulled off in “Gulliver’s Travels.” Witness this passage about a spider and a bee: “The avenues to his castle were guarded with turnpikes and palisadoes, all after the modern way of fortification. After you had passed several courts you came to the centre, wherein you might behold the constable himself in his own lodgings, which had windows fronting to each avenue, and ports to sally out upon all occasions of prey or defence. In this mansion he had for some time dwelt in peace and plenty, without danger to his person by swallows from above, or to his palace by brooms from below; when it was the pleasure of fortune to conduct thither a wandering bee, to whose curiosity a broken pane in the glass had discovered itself, and in he went, where, expatiating a while, he at last happened to alight upon one of the outward walls of the spider’s citadel; which, yielding to the unequal weight, sunk down to the very foundation.”
A highly recognizable critique of filibustering senators and “contrarians” of all sorts who like nothing more than argument itself, undercutting the very “trophies” cited earlier: “At this the spider, having swelled himself into the size and posture of a disputant, began his argument in the true spirit of controversy, with resolution to be heartily scurrilous and angry, to urge on his own reasons without the least regard to the answers or objections of his opposite, and fully predetermined in his mind against all conviction.”
The spider poetically describes a bee: “[B]orn to no possession of your own, but a pair of wings and a drone-pipe. Your livelihood is a universal plunder upon nature; a freebooter over fields and gardens; and, for the sake of stealing, will rob a nettle as easily as a violet.”
More on the transience of literary achievement and fame, of trophies that can easily fade: “Erect your schemes with as much method and skill as you please; yet, if the materials be nothing but dirt, spun out of your own entrails (the guts of modern brains), the edifice will conclude at last in a cobweb; the duration of which, like that of other spiders’ webs, may be imputed to their being forgotten, or neglected, or hid in a corner.”
On what the Ancients see in the itinerant art of the bee, which behaves like a poet searching for magical inspiration but knowing that legwork (literally, in this case) is necessary: “As for us, the Ancients, we are content with the bee, to pretend to nothing of our own beyond our wings and our voice: that is to say, our flights and our language. For the rest, whatever we have got has been by infinite labour and search, and ranging through every corner of nature; the difference is, that, instead of dirt and poison, we have rather chosen to fill our hives with honey and wax; thus furnishing mankind with the two noblest of things, which are sweetness and light.”
Setting the table with cosmic implications: “Jove, in great concern, convokes a council in the Milky Way. The senate assembled, he declares the occasion of convening them; a bloody battle just impendent between two mighty armies of ancient and modern creatures, called books, wherein the celestial interest was but too deeply concerned.”
A fantastical personification of criticism as a vicious and ill-informed goddess: “Meanwhile Momus, fearing the worst, and calling to mind an ancient prophecy which bore no very good face to his children the Moderns, bent his flight to the region of a malignant deity called Criticism. She dwelt on the top of a snowy mountain in Nova Zembla; there Momus found her extended in her den, upon the spoils of numberless volumes, half devoured. At her right hand sat Ignorance, her father and husband, blind with age; at her left, Pride, her mother, dressing her up in the scraps of paper herself had torn. There was Opinion, her sister, light of foot, hoodwinked, and head-strong, yet giddy and perpetually turning. About her played her children, Noise and Impudence, Dulness and Vanity, Positiveness, Pedantry, and Ill-manners. The goddess herself had claws like a cat; her head, and ears, and voice resembled those of an ass; her teeth fallen out before, her eyes turned inward, as if she looked only upon herself; her diet was the overflowing of her own gall; her spleen was so large as to stand prominent, like a dug of the first rate; nor wanted excrescences in form of teats, at which a crew of ugly monsters were greedily sucking; and, what is wonderful to conceive, the bulk of spleen increased faster than the sucking could diminish it.”
The best critique of “grammar hounds” and anyone else more obsessed with technical features than with clear meaning: “[B]y me beaux become politicians, and schoolboys judges of philosophy; by me sophisters debate and conclude upon the depths of knowledge; and coffee-house wits, instinct by me, can correct an author’s style, and display his minutest errors, without understanding a syllable of his matter or his language; by me striplings spend their judgment, as they do their estate, before it comes into their hands. It is I who have deposed wit and knowledge from their empire over poetry, and advanced myself in their stead. And shall a few upstart Ancients dare to oppose me?”
A thrilling description of Criticism influencing the discourse, with an especially striking line about “now desert” bookshelves: “The goddess and her train, having mounted the chariot, which was drawn by tame geese, flew over infinite regions, shedding her influence in due places, till at length she arrived at her beloved island of Britain; but in hovering over its metropolis, what blessings did she not let fall upon her seminaries of Gresham and Covent-garden! And now she reached the fatal plain of St. James’s library, at what time the two armies were upon the point to engage; where, entering with all her caravan unseen, and landing upon a case of shelves, now desert, but once inhabited by a colony of virtuosos, she stayed awhile to observe the posture of both armies.”
Even amid the verbal pyrotechnics, Swift finds time to be unforgettably funny: “Then Aristotle, observing Bacon advance with a furious mien, drew his bow to the head, and let fly his arrow, which missed the valiant Modern and went whizzing over his head; but Descartes it hit; the steel point quickly found a defect in his head-piece; it pierced the leather and the pasteboard, and went in at his right eye. The torture of the pain whirled the valiant bow-man round till death, like a star of superior influence, drew him into his own vortex.”
Even better, about Virgil struggling with an ill-fitting helmet and appealing to Dryden for help: “The brave Ancient suddenly started, as one possessed with surprise and disappointment together; for the helmet was nine times too large for the head, which appeared situate far in the hinder part, even like the lady in a lobster, or like a mouse under a canopy of state, or like a shrivelled beau from within the penthouse of a modern periwig; and the voice was suited to the visage, sounding weak and remote.”
A memorable closing line to pair with the opening: “Farewell, beloved, loving pair; few equals have you left behind.”
The Democratic Party has little power in the U.S. right now. Its rival, the GOP, controls all three branches of the federal government as well as most of the statehouses. This situation is not unprecedented, and in fact it might be exactly what one would expect after 8 years of a Democratic president; opposition parties typically gain seats, like Democrats themselves did during the latter half of the George W. Bush presidency.
While we can probably count on the normal ebbs and flows of American political cycles to deliver a Democratic majority in one or both houses of Congress by next year, we can’t take for granted that Democrats will pursue an optimal agenda once in control. Let me present a few ideas that I think should be front and center in 2019 and beyond:
- Expansion of the Affordable Care Act, with a public option, risk corridors, and federalized Medicaid services, along with Medicare buy-in for Americans 55-64.
- Federal legalization of marijuana for all purposes.
- Statehood for Puerto Rico and D.C., even if it means abolishing the legislative filibuster to get it through.
- Higher taxes on corporations and the super-rich (basically anyone making $400k or more a year). In addition to higher marginal rates, loopholes such as the one for carried interest (which is a windfall for Wall Street) need to be closed.
- More legal immigration to offset a stagnant birth rate and ensure the sustainability of programs for the retired and elderly; fewer arbitrary rules for deportation, such as making trivial errors on paperwork.
- The abolition of Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP). These agencies were created during the post-9/11 hysteria of the Bush administration and they have become ethnic cleansing police organizations committed to a nativist agenda that materially benefits no one.
- The impeachment of Supreme Court Justices Clarence Thomas and Neil Gorsuch. Thomas is a known harasser who never should have been seated; Gorsuch was appointed by a president who handily lost the popular vote, in place of the nominee of a previous president who never even received a Senate hearing.
- Hell, maybe even the rollback of judicial review itself, which Thomas Jefferson famously opposed after Marbury v. Madison.
- National legislation for maternity leave.
- Prohibition of right-to-work laws.
- Prohibition of voter ID laws, as required by both the 15th Amendment and the Voting Rights Act.
- Federal holiday status for Election Day and the Monday preceding it.
- Expansion of Social Security benefits.
- Trillions of dollars in new infrastructure spending, without much reliance on public-private partnerships.
- Aggressive environmental policies, including carbon taxes on big polluters and the preservation of national monuments and parks from drilling/exploration.
This is just a start. Democrats are facing an ascendant right wing that is more powerful and extreme than at any time since the 1920s. They can’t afford to shy away from decisive actions and big visions.
Income inequality is an inescapable topic in American political discourse in 2017. It’s probably more accurate to talk about “wealth inequality,” though, since the most influential elites and corporations derive the bulk of their monies from the passive appreciation of assets (like stocks and bonds), rather than from paychecks. That quibble aside, why is inequality an issue worth talking about? Let’s look back a century – to the First World War.
On the eve of World War I, the top 1% of British residents controlled a staggering 70% of the nation’s wealth. Similar gaps prevailed in France and Germany. These nations were the pivotal actors of the conflict, with Russia, the U.S., Austria-Hungary, and Italy its secondary players. Inequality was an essential feature of all the pre-WWI societies in Europe and North America that had just emerged from the Gilded Age.
At the same time, many of these countries were in fact empires, overseeing vast territorial holdings spanning the globe. The U.K. and France were the preeminent colonial powers, but almost every industrialized country at the time, from the U.S. to Japan, had gotten in on the game starting in the late 1800s (indeed, the Anglo-Russian struggle for control of Central Asia was called “the Great Game”).
Inequality and imperialism were interrelated. With so much of the western world’s wealth controlled by so few, there was an oversupply of money seeking out too few investment opportunities. The surplus of investible assets was driven by poor domestic aggregate demand stemming from inequality; hence the need to continually look abroad for speculative openings offering high returns.
More specifically, colonial empires and massive militaries were the direct consequences of the disproportionate influence of a tiny, wealthy set of elites driving major policy decisions. Incidents such as the First Moroccan Crisis illustrated the high stakes of holding onto remote territory. Meanwhile, expansionism into Africa and Asia was reinforced by the growing power of corporate monopolies and cartels seeking to broaden their market penetration to global scale.
We all know how World War I was resolved, with Germany in ruins, Russia converted to the USSR, and the U.S. with a newly assertive role in global politics, at least temporarily. But we don’t know how the next such crisis of inequality and imperialism – namely the one occurring right now – will end.
Since the Asian financial panic of the late ’90s, the global economy has been dominated by speculative bubbles that were products of too much capital chasing too few opportunities. After Asia, there was the dotcom bust in 2001, the housing meltdown in 2008, and the current absurdities in cryptocurrency (e.g., Bitcoin) and Silicon Valley (raw water, anyone?).
Along the way, there has also been considerable consolidation in virtually every industry in the U.S. Megamergers of hospitals, telecoms, retailers, etc. have concentrated growing amounts of power in fewer hands. Gigantic corporations including Microsoft, Comcast, AT&T, and Amazon, far from being forces for progress and inclusion as their modern PR-tailored images might suggest, have now aligned themselves with the far right of the Republican Party to ensure low corporate tax rates. This is why you can’t separate the business aspects of the GOP from its racism; business support provides the resources the party needs to exploit disadvantaged groups on other fronts.
Big business was central to the chaos that preceded WWI, primarily through its stake in colonial empires and military spending. Decades later, German companies were pivotal in convincing President Paul von Hindenburg to appoint Hitler as chancellor, despite the latter’s defeat in the 1932 presidential election and his party’s lack of a governing majority in the Reichstag. It was big business, not the people led by “populism,” that enabled the most destructive warfare of all time.
I’m not saying we’re heading for another 1914-1945 cataclysm. We should be wary, though, of how inequality is surging at a time when corporations are consolidating and supporting politicians who also favor enormous military spending and possible adventurism in theaters such as Iran and North Korea.