

What’s wrong with tech writing?

Tech filler
“Technology” is a strange word. Its Greek root, techné, means “art” or “craft,” and its usage in English was scarce until at least the 20th century. Its rise in popular discourse during the second Industrial Revolution, the movement that produced inventions such as the phonograph, makes sense. However, what’s usually glossed over is that “technology,” as a word, is filler, distracting us from the reshaping of society from above.

What does it even mean to say that “technology changed everything” or to assign so much agency to vague, well, technological concepts such as “big data” or “the Internet of Things”? The vast discourse on technology is the best possible example of what Georg Lukács called “reification,” the act of instilling human activities with the characteristics of things, creating what Lukács himself called “a ‘phantom-objectivity,’ an autonomy that seems so strictly rational and all-embracing as to conceal every trace of its fundamental nature: the relationship between people.”

When I see “technology” in a sentence, I move pretty quickly past it and don’t think much about it. If I do, though, it’s like I rounded a corner and saw a forked road leading into three turnabouts – the generality is crushing. Are we talking strictly about the actions of hardware, software, and networks? Are these actions autonomous? What if we just assigned all of these machinations to the category of “machinery and artisanal crafts” and spoke of the great, world-changing, liberating power of “powerful industrial machinery”? It doesn’t have the same ring to it, does it?

Words and classes
The history of words to talk about all of the basic concepts that undergird “tech writing” – the category that would seemingly include everyone from TechCrunch to PC World to Daring Fireball to this blog – is the history of taking words that belonged to the blue-collar working classes and reassigning them to the white-collar management classes. Take “software,” for instance. It derives from “hardware,” which once referred primarily to small metal goods. As early as the 18th century, one could talk about a “hardware store” as a place to buy such goods.

Something similar, on a much broader scale, has gone on with the term “Internet.” As I explained in my entry on “Space Quest 6: The Spinal Frontier,” the entire discourse about “the Internet” is a retroactive reorganization of many separate traditions, spanning hardware, software, and networking, that once went by disparate names. Even the act of using “the Internet” was once similarly variable: it could be called “going into cyberspace” or “using virtual reality” well through the 1990s. Grouping everything under the banner of the “Internet” has had the desired effect of making changes affecting fields as diverse as education (via online learning) and transportation (via services like Lyft and Uber) seem inevitable.

It is reification writ large, a tight origin story compiled after the fact to create that very “phantom-objectivity” that Lukács talked about. Likewise, “technology” itself, as a word, is a mini history of how mundane physical activities – building computers, setting up assembly lines – were reimagined to be on par with the high arts of antiquity. Leo Marx wrote, in his paper “Technology: The Emergence of a Hazardous Concept”:

“Whereas the term mechanic (or industrial, or practical) arts calls to mind men with soiled hands tinkering at workbenches, technology conjures clean, well-educated, white male technicians in control booths watching dials, instrument panels, or computer monitors. Whereas the mechanic arts belong to the mundane world of work, physicality, and practicality – of humdrum handicrafts and artisanal skills – technology belongs on the higher social and intellectual plane of book learning, scientific research, and the university. This dispassionate word, with its synthetic patina, its lack of a physical or sensory referent, its aura of sanitized, bloodless – indeed, disembodied – cerebration and precision, has eased the induction of what had been the mechanic arts – now practiced by engineers – into the precincts of the finer arts and higher learning.”

Making it, writing it
I love this passage since it captures so much of how the rise of technology firms has been about word games and the institution of engineers and venture capitalists as, crucially, creators, and heirs to the traditions of straight male-dominated industry. Debbie Chachra did a great job of outlining the real shape of the Maker movement in a piece for “The Atlantic,” arguing that “artifacts” – anything physical that could be sold for gain or accrue some sort of monetary value, seemingly on its own – were more important than people in today’s economic systems, especially people who performed traditionally female tasks like educating or caregiving.

Tech writing, vague as it is, exists in this uncomfortable context in which anything not associated with coding or anything “technical” is deemed less important – to businesses, to shareholders, to whomever is important for now but may be forgotten tomorrow – than what is more easily viewed (I mean this literally) as work that came from a predictable process (software from coding is the best example). Writers in this field have to continually prop up a huge concept – technology – that carries the baggage of decades of trying to be elevated to the status of fine arts like… good writing.

Talking about the agency of concepts is common, and tech writers – or anyone dabbling in writing about technology – have to play so many ridiculous games to cater to readers who long ago became lost in the reification of “technology” as an unstoppable force. Take this passage, which I recently found via Justin Singer’s Tumblr:

“Big Dating unbundles monogamy and sex. It offers to maximize episodes of intimacy while minimizing the risk of rejection or FOMO [fear of missing out].”

Bleh. This passage is easy to make fun of, but its structure is so indicative of tech writing at large. There’s the capitalized concept (“Big Dating”) that is acting, via a buzzwordy verb (“unbundles” – what was the “bundle” in the first place? – though “disrupt” is still the all-time champion in this vein), on The World As A Whole. Then there’s the shareholder language (“maximize”/“minimize”/“risk”) that speaks to the neoliberal economic ideas – most of them questionable – that have been the intellectual lifeblood of the tech industry as well as the governments that feebly regulate it (the weakening of political will is one reason Marx saw technology as a “hazardous” concept).

Aristotle and wrap-up
When I dipped my toes into Aristotle’s “On Interpretation” earlier, I talked about how he defined nouns as “sounds.” I then wondered if so much bad writing was the result of trying to write things that would sound absurd in speech (i.e., as sounds).

Tech writing in particular has this sort of not-real quality to it that makes it sound so silly when read aloud. It’s always trying to reify and create vast, unstoppable forces that aren’t even physically perceptible. Writing about “the Internet of Things” or “Big Dating” is basically to dress up everyday, unremarkable concepts like networked devices and dating services in dramatic language.

You may as well have someone try to describe a sandstorm or flood to you as if it were the result of a phantom-objective, all-powerful godlike force. Wait, that’s, like, 99 percent of religion right there. Well, when writing about “technology,” you’re always writing someone else’s scriptures, with all the opacity and word-gaming that that entails – who wants to read most of that?


Nouns, sounds, and the Internet of Things

Nouns and Greek texts
Looking back at elementary school, the earliest thing I remember learning was what a noun was. “A person, place, or thing” – that seems to cover all the bases. It’s the type of knowledge that quickly becomes second nature, only coming to mind in cases like interpreting a sentence that contains a gerund, an English noun that looks like a verb (e.g., “the happening is up ahead”).

Sixteen years after I learned what a noun was, I started reading Aristotle in Greek. Although Aristotle exerts tremendous influence on all of Western civilization – in every field from biology (which he started with his examinations of specimens brought to him by Alexander the Great) to theater criticism – I have never loved his ideas or stylistic flourishes as much as those of his teacher, Plato.

Some of his Greek texts seemed rough to me, requiring the insertion of a lot of English words in translation, whereas Plato’s writing was full of plays on words and syntactical arrangements that made it enjoyable in ways that English couldn’t reproduce. When translating, I felt like English was sometimes an upgrade for Aristotle, while it never was for Plato.

Nouns and sounds: Nounds?
I began reading Aristotle’s “On Interpretation” today, my first real brush with his work since 2007, when I was working with the “Nicomachean Ethics.” It won’t take me too long to finish, which is exciting, since I’ve recently read almost nothing but long philosophical tracts and novels.

Early on, Aristotle, like an elementary school teacher, sets the ground rules by defining what he means by a noun. He says:

“By a noun we mean a sound significant by convention, which has no reference to time, and of which no part is significant apart from the rest.”

I don’t have the Greek text with me (I’ll try to find an image of it later) but isn’t it strange that a noun is defined as a sound? Obviously, nouns are also written, soundlessly, on paper and word processors, but, as Aristotle notes, “written words are the symbols of spoken words.” It all comes back to speech.

Sounds and good and bad writing
This makes sense when you start to think about bad writing, more so than good writing. So much bad writing and so many bad ideas emerge because they have no predecessors in speech and would sound close to nonsense if spoken aloud. I’m thinking of all that business writing about “full-service solutions providers.” Jason Fried tore into it several years ago for Inc.:

“One of my favorite phrases in the business world is full-service solutions provider. A quick search on Google finds at least 47,000 companies using that one. That’s full-service generic. There’s more. Cost effective end-to-end solutions brings you about 95,000 results. Provider of value-added services nets you more than 600,000 matches. Exactly which services are sold as not adding value?”

All of these phrases sound horrible in conversation – even the people who write them wouldn’t utter them aloud in relaxed company. It’s like there’s nothing there; encountering the word “solutions” in text makes me instantly skip like 2 or 3 lines ahead to see if things get better. There may as well be no nouns on the page.

Inarticulate
Aristotle is helpful here, too, in a strange way:

“[N]othing is by nature a noun or name – it is only so when it becomes a symbol; inarticulate sounds, such as those which brutes produce, are significant, yet none of these constitutes a noun.”

It’s a weird image that comes to mind for me here, as I equate brutes raving inarticulately with business writers ranting about best-of-breed management structures in ghostwritten columns or ‘touching base’ in their emails. What counts as “inarticulate,” though? A liberal interpretation, I suspect, could capture so much that is bad and nebulous about writing, particularly writing about technology.

Some terms, like “the Internet,” have become so vast as to be meaningless without first trying to figure out what they’re not – what is the Internet not, when it comes to technology? As I noted a few posts ago, the term has come to bind together software, hardware, networks, and many other disparate technologies under one homogeneous label.

If it’s not everything, then it’s trying to become so by incorporating every device possible, through the “Internet of Things.” Sensors, “analytics,” and, yep, value-added services all pile into conversations about this term. All I know is that trying to write about “the Internet of Things” makes me sound like an inarticulate brute.

1000x

What is the difference between being good and being great? The question is a black hole. For starters, the word “good” is a powerful vehicle for imposing different kinds of moral philosophy, with a long history of being situated opposite both “bad” and “evil” (not the same concept, and born of two different cultures), as we looked at yesterday with Nietzsche’s The Genealogy of Morals.

“Great” isn’t as good as “good,” at least as words go. It has always seemed like a distraction; having established what “good” is – a vast undertaking beyond the strength of any individual, and one that, even accomplished, only yields an institution that can be challenged, like the Roman one that the Judaeo-Christian world fought against – a culture uses “great” as a layer on top of “good,” to draw attention away from the question of “what is good?” and instead transfix everyone with the notion of things that seem to be outside of the good-bad or good-evil dichotomy – i.e., things that are “great.”

But really, “great” does work on behalf of “good.” It is used to soften up our interpretative minds by directing us to things that, the bestower argues, are worthy of something akin to worship, which is a passive activity since the worshipper is not capable of constantly scrutinizing the worshipped and instead settles for assumptions – e.g., Jesus is the son of God; OK Computer is the best album of the 1990s – that invisibly take on the evaluative work once done by comparing “good” to its opposite du jour. It makes sense that a famous hymn is entitled “How Great Thou Art.”

Numbers, averages, and ease
Even if one believes that using “great” rather than “good” is just a matter of degree, there’s still the issue of drawing a line between the two. The difficulty of doing so – as opposed to the knee-jerk ease of calling something “bad” or “evil” and seeing that it is, in the mind, discrete from “good” – speaks to the odd place of “great” in the cultural lexicon. So naturally, numbers are often invoked to make it appear like there’s a clearer distinction.

A recent essay from Paul Graham attempted to show the difference in the context of computer programmers. He wrote:

“it’s easy to imagine cases where a great programmer might invent things worth 100x or even 1000x an average programmer’s salary.”

Now, for context, we’re talking about, possibly, someone who is “good” and merely keeps a codebase functional and someone who is “great” and invents something of uncertain cultural value like Snapchat. More generally, it seems to be a difference between workaday existence and the freedom conferred by turning code into a vast commercial enterprise run by a “great” programmer who, the mythology insists, had the wherewithal to create value on his own.

Accordingly, we can also see the bones of the old nobility-vs-everyone else morality grafted onto “great-vs-good,” or rather in this case “average,” a word with an interesting etymology. “Average” comes from the French “avarie,” meaning “damage to a ship.” To be average is literally to suffer financial loss, so “great” makes sense as a synonym, in this particular context only, for financial gain – a nice happy accident, if words are your game.

He also uses “ordinary,” which has roots meaning “orderly” or “customary.” This usage subtly, but only in passing, flips the script by making “great” non-conformist, despite it being invested with the most clichéd value system possible – that the best idea is the one that makes the most money.

It’s notable that “bad” is missing in action here. The main foil is “average,” speaking to growing economic inequality and the current fixation on assessment, discussed in the book Average Is Over: Powering America Beyond the Age of the Great Stagnation, with a title implying that a “great” solution is needed to overcome a “great” problem. But the irony is that Graham’s claim of 1000x the value is arbitrary and unverified, and it seems hard to imagine a company hiring one “great” engineer over 1,000 “average” ones.

Roger Ebert had his own ideas on “great,” in this case situated against “almost-great” rather than “good”:

“[T]rue geniuses rarely take their own work seriously, because it comes so easily for them. Great writers (Nabokov, Dickens, Wodehouse) make it look like play. Almost-great writers (Mann, Galsworthy, Wolfe) make it look like Herculean triumph. It is as true in every field; compare Shakespeare to Shaw, Jordan to Barkley, Picasso to Rothko, Kennedy to Nixon. Salieri could strain and moan and bring forth tinkling jingles; Mozart could compose so joyously that he seemed, Salieri complained, to be ‘taking dictation from God.'”

Again, the foil isn’t “good” per se, though Ebert’s “almost-great” is a lot different from Graham’s “average.” The line here isn’t financial value, but ease of creation, which is even harder to parse (unless, I guess, you’re truly “great” and know nothing but ease in your chosen endeavor). For me, painstaking work is sometimes terrible and sometimes good, while writing that comes easily also has wide variance in quality. I have noticed that some of my creative writing that came easily was better than what came hard, but it also felt like the former owed a lot to the latter.

Next time, I’ll look at some passages to probe this dichotomy of “great” and “almost-great.” I doubt it will yield a 1000-to-1 difference as with Graham’s great-vs-average, or even anything concrete or generalizable, but it will give me a chance to talk about why anyone thinks public artistic and aesthetic critiques matter in the first place. Ebert’s point about taking work “seriously” may also have a role in this explanation.

Ignurnt: A wonderfully expressive word

During a phone call with my mom the other day, I used a word that I used to hear all the time in middle and high school but very rarely since. That word is “ignurnt.”

No, it’s not “ignorant,” though etymologically – in a loose sense – that’s where it comes from. Technically, it’s just a Kentucky-accented version of “ignorant,” but the meaning is different.

“We haveta take off our jackets when we come into school? That’s ignurnt.”

“This homework assignment is ignurnt.”

Ignurnt almost means the opposite of ignorant. It is an epithet for work or other actions that are probably well thought-out and for everyone’s benefit but are perceived by the speaker as an exorbitant burden. At the same time, the word is synonymous with “stupid,” yet conveys so much more disdain – the target is construed as both idiotic and often from a different class or world entirely. Republican hatred of Obamacare or the faux outrage of “#gamergate” are well served by liberal usage of “ignurnt.”

The word ignorant is not usually applied to actions – instead it is thrown around when talking about people or ideas. “Ignurnt” is more earthy and immediate. It isn’t abstract; it is concrete, a densely packed retort to something that touches a nerve out of nowhere. It is unguarded, off the cuff, and brutal, yet full of complex power and expressiveness. I’m looking forward to building a character’s voice around the logic of the word and the types of folks who use it. Maybe it can be an English downhome counterpart to Sehnsucht.