Monthly Archives: September, 2013

Google+ and Internet comments

Prologue
“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” – Mark Twain

The Internet versus intelligence
The Internet is a black hole, sucking in anyone and everyone with the slightest curiosity about anything – but a lot of the gold at the end of the rainbow is not gold at all. No, it’s not coal, or brass, or poisonous lead; it’s something worse: a pile of YouTube/Hacker News/TechCrunch comments.

YouTube comments in particular are a cesspool of humanity, full of gems like:

  • Can I get likes for no reason
  • check out my channel!
  • Seems legit
  • I see what you did there
  • You just went full retard. Never go full retard
  • Faith in humanity lost
  • No fucks where given that day
  • Still a better love story than twilight
  • Go home you’re drunk
  • Do you even lift?
  • Getting real tired of your shit
  • Dafuq did I just see
  • ‘Murica
  • Then suddenly a wild pokemon appears
  • Watch out bitches! coming through
  • A wild chess game appears!
  • NO.
  • Doesn’t matter, had sex
  • 10/10 would bang
  • That’s enough internet for today
  • You had ONE job
  • Jokes on you, still masturbated
  • You sir won the internetz
  • Comment with most likes is a *
  • Fuking grammer Nazi

(hat tip Verge forum user Micr0b3)

The Internet has facilitated such sentiment on an unprecedented scale. The opportunity for anyone to spew bottomless rage against Miley Cyrus, cast “doubt” on the president’s birthplace, or derail a conversation by discussing the finer points of home-brew console development…well, I’ll grant that that’s “unprecedented,” a word often applied to the Internet (damn, I used it earlier and didn’t realize it until now!).

Comments sections may be the best case against “openness” online, a vaguely defined term that nevertheless puts on the airs of “anyone can write anything with no consequences while darting between YouTube, Netflix and Reddit on a bandwidth-neutral Net.” Every commenter is an expert, or at the very least a potential conversation hijacker whose hastily gathered yet half-coherent sentiments can trigger thousand-word outbursts from her faceless peers.

Popular Science and the damage to knowledge
Online commenters are not simply wailing in a vacuum – they’re frequently causing real damage to the whole of human knowledge from behind their often anonymous guises. The paradox is that the Internet’s promise of anonymity and even impersonality has resulted in the creation of countless communities that are defined almost completely by edgy personality. Evolutionary cues like strength and appearance are worthless when anyone can feign virility from behind a screen name, and as such, anger has become the quintessential online emotion.

It would be sad enough if the Internet were just an enabler for millions of angry, sad people. It’s worse, though, since comments sections have become news unto themselves, their poisonous din distracting from actual events and eroding others’ achievements as individuals try to feel better about their own narrow outlooks. Today, Popular Science (finally!) announced that it was shutting down its comments sections on news stories:

“[B]ecause comments sections tend to be a grotesque reflection of the media culture surrounding them, the cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science.”

The issue with comments is probably evolutionary. As if caving to some outdated instinct to follow the tribe lest they be eaten by wild animals, people easily surrender in the face of massive upvotes, agreement, and likes. Unfortunately, comments-section conventional wisdom isn’t good at much besides estimating the weight of a bull. I mean, did you ever try to assess music albums on the old Rolling Stone forums? Anonymity made it nigh impossible to get anywhere without slogging through some contrarian bile or an irrelevant points-earning sideshow.

Google+ to the rescue?
In a happy coincidence (in many fora, someone would mistakenly call this “ironic” and receive a stupidly stern, pointless lecture from a language bully, which contributes no value to civilization and probably destroys some by making someone feel bad), Google also announced today that it would begin tying YouTube comments to Google+ accounts.

Google+ is less a social network than an identity service. I have mixed feelings about its increasingly comprehensive tracking of every online twitch or murmur, but its commitment to real names (and who is really going to expend the effort to create many G+ personae?) means that YouTube’s comments sections will finally have accountability, which is what comments have always needed. If G+ can get YouTube under control and also remain a valuable photo backup service, it’ll have contributed more societal value than Facebook ever has or will.

Epilogue
Last!


Gen Y Doesn’t Feel “Entitled.”

“Depression is rage spread thin.” – George Santayana

Anger takes an odd form when widely dispersed. Witness Adam Weinstein’s startling rebuttal to Wait But Why’s admittedly annoying stick-figure column and its titular character Lucy (possible defense: “I’m not entitled, I’m just drawn that way”). His incredible rage at the author’s (literal) caricature of Gen Y as just a bunch of stuck-up, “entitled” hippies looking for Instagratification floats uneasily atop a sea of sadness and resignation: at the state of the American economy, at inequality, at student loans, at everything. This is real depression: being angry to the point that that visceral energy is no longer the distinguishing mark of one’s behavior, but something that can be hurled only weakly at any number of targets, making no impact and reinforcing its own sadness with the very knowledge that it is, in fact, weak.

So maybe this complex psychological trap isn’t actually something that can be neatly assigned to one “generation” or another, which shatters the generation-gap narrative that underpins so much discussion of economics and culture. Really, “generations” are a strange subject. Seemingly every historian/anthropologist/resident Gawker snarkist-in-chief since Herodotus has been obsessed with how behavior and habits change from one “generation” to the next, without having a working definition of what separates one from the other. Herodotus’ math puts a generation on the high side of 40 years – a generous range that would mean, for instance, that a Baby Boomer could be anyone born between 1946 and 1986, clearly an untenable construct for stick-figure artists eager to lecture latter-day yuppies on the virtues of hard work.

Look: 40 years is no time at all. Historical periods that modern philistines lump together as “all sort of in the same time” – e.g., “Ancient Greece” – occurred over hundreds if not thousands (in the case of “Ancient Egypt”) of years (hell, Homer was ancient even to Aristotle). Compared even to the vast stretches of time that we are happy to lump-categorize, the calendar-year difference between a diligent Boomer and an “I think I’m so special” Gen Y’er is insignificant. And yet the portrait of the America run by the “Greatest Generation” and their children may as well be that statue of Ozymandias, boasting of its mighty works to hapless underemployed college graduates as it inexorably fades from view, inspiring bewilderment and undeserved awe. That’s how far in the past, how ossified, idealized, and idolized, the generation-gap narrativists (like Wait But Why) want to make the Boomers seem.

Really, the makeup of human ingenuity – something often reduced to lazily thrown-around epithets like “lazy” or “hard-working” – hasn’t changed in the span of time over which Ronald Reagan went from being a B-lister to an annoying presidential blister. Being born in 1946 does not entitle one to talk down to “entitled” youngsters about hard work. Does Gen Y feel more “entitled” than the “generations” (Gen X and the Boomers) that preceded it? Who cares – even if it did, that would not represent a radical “hippie” mindset (I use the word broadly, not literally, to refer to the vast seas of allegedly ungrounded, head-in-the-clouds idealists that the dominant political powers often employ as bogeymen) but rather an affirmation of the most conservative of all American political narratives: that America is a place in which each generation outperforms its predecessors thanks to apparently overwhelming opportunity.

The Wait But Why piece is so caught up in details about animated unicorns, self-esteem, and its own incredibly reductive definition of happiness (sorry, but happiness is not “reality – expectations” – happiness is constructed and willed; it does not just derive from pseudo-mathematical principles) that it misses how Gen Y is, if anything, clinging to, rather than breaking with, the American prosperity mythos. It wants good-paying jobs, stability, dignity. The fact that it hasn’t obtained them – one could go on endlessly with examples; my favorite is the long, tortured blog of Esq. Never, about futilely seeking work as an overqualified lawyer – is not an indictment of misguided effort, complacency, or “feeling special.” It’s instead a lesson in powerlessness.

I won’t stump about inequality or rage against the corporatized education system that has contributed to interwoven mass underemployment and indebtedness; to resort to cliché is to write in a de facto dead language, one whose expressive possibilities have been exhausted. But onlookers like Wait But Why have likely perceived “entitlement” in desperate Gen Y’ers seeking work (of any kind) because of the latter’s depression, which comes across as a sobering mix of anger tinctured with the inevitable passivity that results from having too many things to be angry about (consult the opening quote). They’re not “proactive” enough, to use one of the establishment’s favorite put-down words. They care about “fluffy” things like self-esteem, which are not held in high esteem by “the world” (itself the “fluffiest” construct of all), goes the clichéd narrative.

Analyzing the power shifts that brought Gen Y to this impasse is beyond the scope of this post. To take a rough stab at it, though, it feels like Gen Y is trapped in the American Dream apparatus of its forebears. That is, the basic motivations of 20/30somethings are not that different from those of 50/60somethings, but the former do not have a framework (in terms of educational avenues, professional opportunities, and social support) suited to the times, since the old one created so much wealth over the latter half of the 20th century that sheer inertia has carried it well past its prime.

So maybe I did come around to some kind of cliché about attacking “the system,” blah blah (basically “damn the man, save the empire!”, my favorite distillation of how rebelliousness is at some level more about seeking a new establishment that better suits one’s needs than about pure anarchy). So yeah, Wait But Why should think about profitably stick-figuring the Western neoliberal economic tradition rather than taking it all out on poor Lucy. Good work if you can get it (and if you don’t trip over prancing unicorns en route).

“Post-PC” has little to do with video game console sales

Lukas Mathis nails it down:

“If the «post-PC era» truly had such a devastating effect on the console market that the Wii’s sales just deflated after 2008, it’s unlikely that the same effect would not also be seen in the PS3’s and Xbox 360’s sales. But Asymco’s huge 2008 peak mainly exists because the Wii peaked in 2008, and because back then, it outsold its competitors by a large margin.

In other words, many consoles show Wii-like sales curves – but not the Wii’s direct competitors, the PS3 and the Xbox 360. If the Wii’s sales peak in 2008 was indeed mainly caused by the «post-PC era», you’d expect the Wii’s direct competitors to be similarly affected. They’re not.”

I remember thinking in 1998, playing Age of Empires, that I would abandon consoles for immersive PC-only gaming, but here I am clinging to a 3DS and a Wii U. My PC became a distraction conduit for email, ICQ, AIM…well, let’s just stop there.

But I think my anecdote shows why dedicated consoles can still work, at least for the tasteful gamer. For the dedicated gaming market to truly feel the pain, smartphones would need much better battery life than they currently have. Their very strengths – being Swiss Army knives stuffed with radios – also mean that they cannot muster the battery life (or rather, the singularity of vision and focus) to support the longer-form gaming experiences unique to consoles, especially the DS line.

Zero Escape and the 3DS

Great post up by Lukas Mathis, responding to John Gruber, about the 3DS and the temptation to pigeonhole it as a mobile device:

“I don’t think most people buy portable gaming systems with the intention of regularly carrying them in their pockets. I don’t think they ever did. I don’t remember knowing even a single person who routinely carried a portable gaming device in his or her pocket.”

I got my first Game Boy in August 1998. It was a Game Boy Pocket – apparently, Nintendo was giving all of its Tetris-playing, Link’s Awakening-loving gamers the green light to start carrying their Game Boys everywhere. That was feasible, as long as the gamer also wanted to pack some batteries, game cases, and maybe a Game Boy Printer, too. The Game Boy ecosystem was huge, occupied by peripherals and palm-sized cartridges; it did not lend itself to mobility as well as even pre-iPod CD players and their disc-carrying books, and in retrospect very little about it foreshadowed what the breathless press now calls mobility, i.e., carrying a consolidated networked device.

The “Pocket” moniker was no declaration of revolutionary mobility – it simply showed that the gargantuan first-gen Game Boy had been succeeded by something smaller but no less capable. Nintendo is not a company given to consolidation for its own sake, or even for the sake of forcing new technologies on its users (unlike Apple) – the slim Pocket and its upgraded partner, the Game Boy Color, gave way to the stockier widescreen Game Boy Advance, whose backward compatibility meant that there was even more to carry around. The DS similarly introduced an easily losable stylus and backward compatibility with the Advance. These devices weren’t even trying to be smaller or more amenable to “on-the-go” players with limited attention spans, much less to IT execs who think mobility will solve world poverty.

The 3DS, released in 2011, is often compared to smartphones and tablets. The narrative goes: do-everything touchscreen devices have obviated the need for dedicated devices, so the 3DS (and presumably the PS Vita) is doomed. This line of reasoning betrays ignorance of the dedicated handheld market, a unique space that only Nintendo has ever really dominated. To see how the (3)DS is different from a smartphone or tablet, it’s necessary to look at one of its quintessential offerings: the Zero Escape series of adventure games/visual novels.

Zero Escape

[Image: Zero, from the Zero Escape series.]

Desktop adventure gaming declined long ago, but the genre has gotten new life in the last decade thanks to studios like Quantic Dream and to third-party DS developers. Games like Last Window have demonstrated the DS’s unique ability to create an immersive, almost book-like experience – that game in particular required you to hold the DS upright, featured lots of text to read, and included one of the system’s most stunning puzzles, solvable only by closing the DS’s clamshell. However, Cing (the studio behind Last Window and its prequel, Hotel Dusk) closed its doors several years ago. In contrast, Aksys Games, which publishes the Zero Escape series in North America, scored one of the original DS’s most unlikely hits with Zero Escape: Nine Hours, Nine Persons, Nine Doors, and had similar success with the beautiful 3DS/Vita sequel, Zero Escape: Virtue’s Last Reward. Buoyed by strong sales, a third game is apparently in the works.

Zero Escape is like little else in mainstream gaming, on desktop or mobile. Most of your time will be spent reading; every now and then, you might solve an escape-the-room puzzle. Despite having little action and appearing on a traditionally family-friendly platform, it is also incredibly violent and nihilistic. Without spoken dialogue (at least in much of the first game), it’s like a creepy, interactive silent movie. Or, as I alluded to earlier, a book – and here we may see exactly where the (3)DS resides in the device landscape.

The Zero Escape games, like the best titles on the platform, are most easily played at home, where players aren’t killing a set amount of time before reaching a certain subway stop or finally getting called in at the doctor’s office. Those scenarios are perfect for smartphone/tablet games that can be suspended and resumed at any time, but the 3DS usually works better at home, or wherever there’s time to spare.

As Mathis points out, the home is an environment in which consumers typically favor dedicated devices, rather than the convenience of consolidation. If they didn’t, then PCs would have long ago cannibalized TVs, music players, game consoles, streaming boxes and much more. Non-consolidation also means that devices like the Kindle Paperwhite, which in theory should be under tremendous pressure from hi-res tablets, remain favorites even of Nintendo pessimists like MG Siegler.

With its sophisticated reading capabilities and false front as a “mobile” device, the Paperwhite, rather than Android and iOS hardware, may be the best comparison for the (3)DS. I’ve been skeptical about how long Amazon would continue selling reading-first/reading-mostly devices, but like the DS, they appear to serve a sizable, loyal audience that likes dedicated functionality. That’s easy to overlook when one’s perspective is limited to the rapid iteration and refinement of phones and tablets, which follow different lines of logic and occupy a largely separate market, at least for now.

Nintendo is not Apple

^ That’s a compliment, not an insult. The similarities between Nintendo and Apple seem overwhelming at first blush:

  • They both develop tightly integrated hardware/software experiences. Apple’s minimalist, Rams-inspired aesthetic is as unmistakable as Nintendo’s dorky neoclassicism.
  • They share conservative attitudes toward specs. The iPhone didn’t have LTE until late 2012, and it still has considerably less RAM than its Android rivals; the best-selling Wii was standard-def.
  • They’ve both had to compete with Microsoft, with varying levels of success. Apple has basically defeated Microsoft in mobile; Nintendo won a surprising victory over the Xbox 360 in the seventh generation, but the Wii U’s prospects don’t look so good against the upcoming Xbox One.

Because of these superficial similarities, Nintendo attracts a lot of attention (most of it negative) from Apple-centric bloggers who are eager to suggest remedies for Nintendo’s current struggles (many of these individuals are also of an age that made them the prime audience for Nintendo’s golden and silver ages, the NES and SNES eras, respectively). Perhaps they also see Nintendo’s predicament as similar to Apple’s dismal 1997, when it needed Office and a cash injection from its main rival just to stay afloat.

But there are a number of differences that make the Apple/Nintendo comparison faulty:

  • Making one’s own hardware is a given for the dedicated gaming industry’s major players, and it alone does not make Nintendo special or different from its rivals. Starting with Atari, and continuing on through Sega, Nintendo, and Sony, if you made a gaming platform, you made your own hardware. Even Microsoft – a software company, at least during its heyday – had to delve into hardware as an entry fee to the console business. In this respect, the gaming world is a lot different from the consumer/enterprise software realm, in which software-first or software-mostly companies like Microsoft, Google and Facebook can wield great influence without dabbling in hardware (though that is certainly changing).
  • Accordingly, Nintendo is not a hardware company. It’s a software company that makes hardware to make its software better. Look at the N64 controller: examining its analog stick and trigger button, you just know that Nintendo’s hardware team was future-proofing it for Ocarina of Time and Super Mario 64. In this respect, Nintendo is the opposite of Apple, which is a hardware company that makes software to enhance its hardware – iOS is a virtuosic exercise in preserving battery and maximizing touch technology.
  • The suggested remedy for Nintendo – that it make iOS games – is appropriately the reverse of the remedy that Apple needed and got back in 1997, i.e., the porting of Microsoft Office to Mac OS. Since Nintendo is a software company at heart, it would seem to make sense for it, if desperate, to take those assets to other platforms; by contrast, Apple is a hardware company, so dire straits fittingly translated into attracting more software to its own platform.
  • If it’s not clear by now, you should realize that Nintendo is uninterested in making a platform. It makes toys and the workshop/play space in which those toys are used. That’s the total opposite of what Apple has done, especially with iOS.

John Gruber and John Siracusa recently had a great debate over Nintendo’s future. Gruber argued that the lucrative DS line could be jeopardized by its basic requirement that users carry a dedicated handheld in addition to their phones – I can definitely see this happening. But Siracusa hit upon some subtle advantages that Nintendo may still have, especially in terms of gaming experience.

Discussions of Apple vs. Nintendo (or Nintendo vs. Nokia/RIM) often lead with anecdotes like “my kid doesn’t know what Nintendo is,” which I think are unhelpful. The tech literati are not really Nintendo’s audience, and their children are probably a small subset of all Nintendo fans. The recently announced 2DS is not a device to be analyzed with the same eye as a new iPhone or Nexus device. Still, I’ll contribute my own anecdote: I’m already fatigued by Android/iOS gaming. The limited input mechanism (touch) means that games cannot do as much with on-screen information or elements, since fingers get in the way, and the freemium pricing of so many mobile games means that they often do not offer immersive experiences but rather play-by-ear, arcade-like ones.

Sure, there was a time when people defended BlackBerry’s hardware keyboard as a non-negotiable feature for plowing through “serious work” and email. But as Siracusa pointed out, hardware keyboards were superseded because software keyboards imitate their every last function while adding exclusive features like predictive typing. Touch screens cannot do that with gaming controls, if only because there’s no QWERTY-like standard for controls: every controller may have buttons, but their arrangements and numbers differ radically from one system to the next. That Nintendo has realized this has been a historic source of strength – it’s hard to appreciate now, for example, how groundbreaking the N64 controller was in introducing analog sticks to the console world.

The variety of controller layouts is matched by the variety of software that they power. Games are, on the whole, a much more fragmented sector, in terms of design and input, than mobile apps. What are mobile devices used for? Staples like email and Web browsing, mostly similar social media clients with a standard set of gestures, and passive content consumption. They don’t need varied controls or inputs because their specialty tasks don’t require them.

Now, imagine Nintendo trying to bring its quirky, unique brand of sophisticated, hardware-specific software to iOS, a platform which takes for granted that no third-party app is more special than any other. Even with an iOS controller peripheral, I don’t think it would work – not only would it undercut customers’ incentive to purchase Nintendo’s own hardware, but it would create a bad experience, topped off with the inevitable long string of 1-star App Store reviews from users who didn’t realize they needed a separately sold accessory to play the $14.99 app they had just purchased.

Whether Nintendo can make its traditional approach work going forward is a separate question from whether porting software to iOS would be a good idea. For now, the company appears to be in sound financial shape, and even a minor rebound in Wii U sales would help buoy its already robust DS business. And mobile device sophistication need not be synonymous with consolidation – a breakthrough gaming device, like the original Wii was in 2006, could fit alongside the growing fleet of smart wristbands, heads-up displays, and smart watches that co-exist peacefully with phones and tablets.