Software and hardware vendors are held to a ridiculously high standard – many times, the press will be breathing down their necks about a “lack of innovation” or some similarly meaningless term when their current product line is still doing remarkably well and changing – even if subtly – the markets that they occupy. A good example is the absolutely exhausting “Apple can’t innovate without Steve Jobs” trope that has been beaten like a drum by unimaginative writers since 2011, even as Apple has unveiled, well, innovations like AirDrop for iOS, 64-bit mobile processors, and high-resolution MacBooks. Likewise, Nintendo is the subject of endless ire for the struggling Wii U (which has sold 4 million units – nothing to set the world on fire, but hardly a $900 million writedown), even as the original Wii crossed 100 million in lifetime sales, the 3DS line became the best-selling dedicated gaming device, and the company’s StreetPass/Miiverse system proved that it could use “the Internet” in ways that no one else could.
What’s weird about the “innovation” obsession is how unevenly it is applied. For example: when was the last time that someone really scolded Google for a “lack of innovation”?
Part of the reason that Google has been spared the knife is that it has too many products. Nice also-rans like Google Keep, throwaways like Google Currents, and core products like Search and Gmail – combined, this constantly shifting portfolio serves as a shield against anyone who could swoop in and say “Google isn’t innovating” (it also helps that the company’s founders are still involved – though it would be fun to start a “Google can’t innovate without Eric Schmidt” argument).
But having a lot of products doesn’t mean Google is innovating. It just means that it can deflect press attention from struggling initiatives, unlike Apple or Nintendo, both of which support only a few core products at a time (and as such, if one does well or fails, it gets an inordinate amount of attention). What has Google done since, say, 2005, when it unveiled Google Maps?
- Google Chrome – a WebKit browser, beaten to the punch on the desktop by Safari by five years. It performs better than Safari in many instances, but it’s a catch-up tool. This can be seen in how Chrome didn’t even come to mobile until 2012 and wasn’t the default browser on stock Android until the first Nexus 7 was released, while Safari shipped with the iPhone from day one.
- Android – this looks impressive on the surface (and I enjoy using it – it was the impetus for starting this blog), but it was an acquisition that succeeded because of its free, open source roots and how it was updated in response to the first two iPhones. Google’s creation of a proprietary Google+/Hangouts portal could take it in an odd direction.
- Google Fiber – a niche Internet service project in the U.S. that would be prohibitively expensive to build nationwide and is already being outflanked by competitors like AT&T.
- Google Drive/Docs – the definition of an also-ran, in that it imitated Dropbox and Office without adding much new.
- Google+ – a confusing response to Facebook that is super-useful in some workflows (photo backup) and utterly annoying (no functionality if not signed in, terrible connection to real life) in others. It’s essentially a mildly interesting blog platform that hasn’t caught the public’s interest, since users have much better alternatives like Tumblr.
- Google Play – a belated response to the App Store. Play Music All Access is Spotify (est. 2008) in a different wrapper.
- YouTube – an acquisition that has been turned into a spam machine via its poor comments system, its poorly conceived channel setup, and the prospect of becoming yet another me-too music subscription service.
- Google Glass – this is Segway 2.0 – a perfect match for insular geeks who pay for flying cars in bitcoin, but unlikely to become a mass-market success. The amount of attention Glass has gotten is a testament to the press’s fascination with “innovation” at the expense of the subtle iteration that often constitutes real change.
Maybe put it this way: what Web products do most people use from Google? Search, Gmail, and Maps. And all of those are ancient. They’ve been tweaked, but not always for the better – Gmail is increasingly a mess of separated inboxes and questionable compatibility with IMAP. Maps is primed for more advertising. These changes make me think that Google is spinning its wheels, a bit afraid of just blowing up something old and letting something new cannibalize it.
I meant to write this entry a few weeks ago, but became sidetracked. My day job requires me to churn out literally thousands of words of custom content per day, which often leaves little in the tank by the time I sit down in front of my cable-less TV each evening.
Pocket informed me today that I was in the top five percent of all users this year, in terms of the number of words read. Apparently, I read enough words to have read The Great Gatsby seven and a half times. I’ve read it twice, but that wasn’t what concerned me – the word count made me realize that I’ve written about seven and a half (at least) Great Gatsbies since July alone, including my daily work content, this blog and my Tumblr.
So, a retrospective? This blog is more than one year old and has roughly 100,000 views. On its best day ever, it received 800 views. I’ve raked in $56.49, putting me well on the road to becoming the next John Gruber.
I don’t know much about this blog’s audience. Most of the interactions I’ve had related to my content have been: 1) family telling me that they like my entry about Google+/YouTube comments; 2) angry anonymous Demand Media addicts scolding me for my since-removed rebuttal of “David Wong”. The latter post has been shared to Facebook/Twitter more than every other post on here combined, showing that I’ve failed at my attempt to be a blogger about Android, Ruby, and Nintendo.
My early entries were much more techy and, I think, pretentious. I wanted to be some respected voice, but I realized that consciously seeking approval and entry into the gated community of tech pundits wasn’t for me. The David Wong entry, and my more recent forays into rebutting doom-and-gloom prognoses about Nintendo, were liberating. In this respect, it’s comforting not having a real audience – there’s no one to throw off or puzzle, since many of my readers only stop by to see what the top Android clock widgets are, and then move on to eHow.
But this blog has given me a lot more than just viewership. My explorations of Android after getting my Nexus 4 (the event that really shifted the focus of this blog) put me on a different track, not just with this blog, but in life at large. I left that startup this May and had a tumultuous June in which I got my current job. Without the time I spent on here thinking about app design and coding, I would not have been in a position to become a better writer – thinking aloud in entries that very few people read ended up being the type of mental exercise I needed before committing to a new path.
My updates have gotten less frequent, perhaps since there’s only a finite number of words one can spew each day. And spew I do, about enterprise cloud computing and mobile security, before coming home to write poems and filter the TextWrangler screenshots through Pixlr Express, and then maybe write something in here about tips and tricks for Google Play Newsstand. Or make a list – my top 5 Android apps for now – no. If you read regularly, I’m truly appreciative. If not, then I still feel honored that I was granted some of your time. Here’s to another year.
Reading articles about the “demise” of Nintendo is a good way to stumble over some terrible reasoning and misinformation. MG Siegler et al are all too willing to liken Nintendo to BlackBerry, despite the company’s excellent financial position (especially in light of its small workforce – Nintendo is not a gigantic operation like would-be competitor Microsoft, or like BlackBerry is/was) and its exceptional success with the Wii and DS over the past decade.
The angle of comparing Nintendo devices to “mobile” (an increasingly meaningless word applied to gigantic phones and laptop-grade tablets) is overplayed – certainly, there is some competition between devices for casual gamers who are now into Candy Crush but might have been into Nintendogs in a past era. But Nintendo isn’t really making “mobile” devices in any sense: the tablet controller of the Wii U is slightly awkward as a standalone device, and even the (3)DS is mostly a device for gamers at home, not on the go. It isn’t trying to make a play for the “mobile” audience – maybe that’s a bad move, maybe it isn’t. Twenty years ago, it looked like a mistake for Nintendo not to make full-fledged PC games, but it’s still around.
If Nintendo has competitors (I’m not sure it does – like Apple, they don’t give a shit about any other companies), they’re the home consoles – the Xbox and PlayStation lines. And it’s competing against them with not only the Wii U, but the (3)DS, too (more on this later). Sure, the console makers may be losing their asymmetric battle with “mobile,” but if they are, it’s hard to tell, in light of record-breaking opening day sales for both the Xbox One and PS4. Maybe there’s enough attention out there to sustain both consoles and “mobile.”
But then we come to the Wii U, the poster child for both Nintendo’s assumed doom and the decline of the console business due to “mobile” and disruption and blah blah blah. The Wii U may finally cross the 4 million units sold threshold at the end of this year, making it a huge disappointment compared to any of Nintendo’s previous offerings, save the Virtual Boy. Sales could turn around; just look at the 3DS, which IGN hilariously declared doomed in 2011 but is now the most popular console in the world. However, I think it’s more useful to understand why it has struggled than to prognosticate on its future. With that in mind, here are my three theories for why it has had such a rocky start.
Theory #1: It isn’t powerful enough
The argument: The Wii U has a PowerPC processor – that sort of says it all, what with almost every PC in the world now running x86 of some sort and everything else – mostly “mobile” – running ARM. Its technology is from a different era. It can output HD content, but not with the extra-fancy shading and high frame rates of the Xbox One or PS4. Developers won’t make anything for it because it doesn’t have the AMD chips found in its competitors and lacks the muscle to power yet another dystopian shooter.
My take: Developers are fickle. Many ended up scaling down their games for the standard-definition Wii last generation, then abandoned it near the end of its lifecycle (which didn’t matter – the console still received plenty of first-rate games, mostly from Nintendo itself). Maybe a system-selling title like Super Mario 3D World could cause a change of heart.
But all of that is secondary. Specs are mostly irrelevant – sure, there are Internet goons who only care about graphics and the newest batch of corporatized FPS crap, but if one looks back at the history of consoles – or even consumer electronics as a whole – being on the bleeding edge hasn’t always translated into “winning” the battle. The Wii outsold both the Xbox 360 and the PS3. The iPhone has outsold every single 1080p quad-core Android device.
The question is, can Nintendo and others put the Wii U’s particular strengths to good use, like they did with the Wii’s motion-sensor technology or the N64’s thumbstick? I think that the potential is there – just look at ZombiU or The Wonderful 101 – but more needs to be done to exploit the GamePad. On the HD side, Nintendo has already shown how even something as mundane as 1080p resolution can be reimagined with its subtle use of shadow and translucency in Super Mario 3D World. There needs to be more of this.
Theory #2: It hasn’t been marketed well
The argument (by way of anecdote): I was at a Target in downtown Chicago on Black Friday. A woman was trying to buy Just Dance 2014 for the Wii, but noticed that there was a Wii U version, too. She asked the sales associate what this “symbol next to the Wii” was – that symbol being the “U.” The associate had to explain to her that the Wii U was a totally different console. Customers don’t get the distinction.
The Wii U’s name is stupid. It should have been called the Super Wii for clearer differentiation. Similarly, the Wii U has all the capabilities of the Wii – the motion-sensing, Wii Remote compatibility, and the ability to play SD Wii games – but most people probably wouldn’t even know this, despite the name similarity. It doesn’t ship with a Wii Remote, despite some of the bundled titles (like Nintendoland) requiring one for certain sequences. It wants to be a brand new console but also compatible with everything from the Wii, yet marketing has succeeded only in making it seem like neither.
My take: Sure, maybe the name was an uninspired choice. But similar problems don’t seem to have affected names as bad as “Xbox One,” which is not the first Xbox, or the previous “Xbox 360” (compared to “Xbox”). Calling it the Super Wii and bundling a Wii Remote and something like Wii Party U could help, but it’s not the panacea (dumb ZombiU reference).
Theory #3: It’s being cannibalized (by the 3DS)
The argument: Before you start worrying about parallels to Robinson Crusoe, think about this: Nintendo’s console business is unique. Since 1989, it has been supporting at least two consoles at one time, a portable one and a TV one. These two lines ran parallel for decades, with little overlap except for the IPs that made their way onto both platforms. The Game Boy was very different from, say, the SNES, and getting one system did not give one the same experience as the other. Ditto for the DS and the Wii.
But this has changed in recent years, largely because (I think) the 3DS was so underestimated during its first few years of existence. Even Nintendo seemed to struggle to wrap its head around what the 3DS should be early on – it wasn’t until Super Mario 3D Land and the much-needed 3DS XL redesign that the console began finding its footing. Prior to those two events, it was mostly a DS lookalike with some cool 3D graphics. But the run of first-rate software titles and a larger screen (the importance of the latter can’t be overstated) showed that the 3DS was something very different, something that realized the promise of the DSi and integrated console-level amenities like high-quality soundtracks and animations. I’ve argued that Nintendo is essentially a software company that dabbles in hardware, and the 3DS bears me out – it took good software to start moving hardware.
The perhaps unintended consequence of the 3DS’ maturity, however, is that it is cannibalizing Wii U sales. Cannibalization occurs when one of a company’s lower-priced products drives down sales for its more expensive ones, since they are targeting the same audience. The 3DS XL is nearly a Wii U GamePad on its own (and in Japan at least, the 3DS XL is way more popular than the standard-sized 3DS) and its software provides what could be called a “Nintendo fix” – Mario, Pokemon, Zelda, the full lot. Users get their fix from the 3DS and don’t need to get it from the Wii U, which doesn’t have a Pokemon or Zelda title and only recently got a truly new Mario title.
My take: Companies like Apple have long been conscious of cannibalization – in Apple’s case, of Mac sales by the iPad, or high-end iPhone sales by low-end iPhone sales. It’s a difficult issue to sort out, but in a way, it can be a good problem to have – at least something is selling, albeit perhaps not at the price point/profit one would hope.
The idea that Nintendo is cannibalizing its Wii U sales with 3DS sales probably doesn’t occur to many observers, since none of Nintendo’s competitors have a similar console business. Microsoft doesn’t do dedicated portable gaming machines, and Sony’s Vita is a failure compared even to the Wii U. Maybe Nintendo really isn’t competing with Sony or Microsoft – it’s not in a specs race, or a race for the best Assassin’s Creed 4 graphics, but simply trying to sell as many Nintendo devices – 3DS, Wii U, or otherwise – as it can. Given Nintendo’s history and resilience, that makes much more sense than arguments about the Wii U’s power or marketing.
If you have a solid group of friends or relatives, a degree in an in-vogue “good” subject (which will just as quickly become a “bad” subject once public taste changes – just look how far Greek and Latin have fallen in 150 years), or a stable employer that doesn’t dabble in “improving operational efficiency” by slashing jobs, reshuffling roles for no reason, and driving away good employees, then count your blessings. You’re not one of the millions of struggling “overqualified” (read: too skilled/expensive for penny-pinching HR departments) job seekers who comb through the cesspools of online job postings.
But if you’re in the latter group, you’ve probably grown familiar with the endless bullet lists of qualifications that finish off many job ads like the unappetizing icing on an expired pie. You know, the paeans to thinking outside the box (a metaphor with no literal grounding) or having 10+ years of experience for an “introductory” role. Maybe you’ve been asked to move to Iowa, or cold-called about relocating to Spain. Either way, the worst job postings essentially combine two seemingly contradictory things:
- Things that seem/are below the candidate – this can come in the form of implied low salary/no discussion of salary at all, overly stringent workplace requirements for a job that can be done with just an Internet-connected device, or something else
- Things that seem/are above the candidate – basically, a mother lode of responsibilities that would be unrealistic even for the most “overqualified” person, unless said candidate is willing to forgo her health and sanity. This aspect is made worse by criterion #1, and the combination ultimately either drives the candidate away or – perhaps worse, depending on perspective – sucks them into a position that they feel they have no choice but to keep.
The environment created by these postings is as good a reason as any to support basic income. Ideally, Switzerland will get the ball rolling on this initiative, but I’m not holding my breath for its worldwide roll-out.
On a more immediate level, some tech job postings have gotten out of control. An irritating IT job ad from Penny Arcade has become the object of scorn for sites as varied as Valleywag and Marco.org (seriously, Valleywag can’t stand Marco Arment – it’s amazing and telling that they agree on anything), which provides a good indication of how deep-seated and widespread the resentment has become at employers who demand a founder’s work for an intern’s pay. The endless drive toward efficiency has resulted in corporations that have to ask customers to subsidize underpaid employees, as happened recently with Wal-Mart.
What does Penny Arcade want? The details are enough to make one’s eyes glaze over while raising one’s blood pressure. Most of it isn’t enough to bowl anyone over – they’re looking for sysadmin experience, competency with object-oriented languages (though the ad’s apparent ignorance of PHP being an object-oriented language is a nice touch), and IT miscellany.
BUT – they’re forthright about not having work-life balance and running “lean.” Now, the way that they present the work-life balance issue is telling – to Penny Arcade, it’s not that they’re actively forcing employees to work and have no lives, but that they “suck” at achieving that balance. This is insane if you think about it – the long, thankless weeks aren’t the product of a proactive “Destroy Work-Life Balance” initiative, but rather a failing, as if they were powerless to stop it. That stance becomes a convenient way to dodge responsibility for degrading an employee’s workplace experience.
The word “lean” is legitimately used for labeling meat and describing some very specific types of startups. For everyone else, depending on context, it’s a euphemism for putting too much work on the average employee in a company or making fun of overweight people (the latter isn’t related to this article, but I just bring it up as a coincidental semantic note – it’s particularly egregious in the shallower parts of the gay community). It’s not transparent.
Granted, Penny Arcade likely received tons of submissions for this job, if only because the labor market continues to be depressed even 5+ years after the 2008 meltdown. What most observers don’t realize, however, is that it is these types of positions – underpaid, with long hours – that sap economic demand and perpetuate the cycle of weak hiring. There needs to be a return to sanity, but it won’t come until we realize that we’re all in this (economy) together.