This week’s news of the Sony Gaikai deal (Sony bought Gaikai for $380m, in case you didn’t get the memo) reignited the debate about the impact of cloud gaming on the future of game consoles. But what does it mean for the future of input devices?
Let’s pretend that 10 or 15 years from now everyone will be able to turn on a television, grab a controller and play games streamed from a remote server in pristine full high definition (heck, even in stereo 3D, and with 7.1 sound), with no lag whatsoever. Basically, let’s assume cloud gaming will work so flawlessly that dedicated consoles won’t be needed anymore.
But what about the controller? What sort of device will we be using at that point?
I have no idea, of course, but looking at the history of videogames it seems obvious to me that the evolution of control interfaces has played a rather important role in the evolution of gaming as a whole. It’s hard to imagine playing Killzone 3 with an Atari 2600 joystick or Diablo 3 without a mouse, isn’t it?
Now, in the console space new controllers are usually introduced alongside new gaming hardware, so what’s going to happen when no new hardware is introduced anymore? Are we going to be stuck forever with whatever controller happens to be around when (and if) cloud gaming becomes the norm?
I don’t think so. I actually think the opposite will be true. And here is why.
Traditionally, the cost of researching, developing and manufacturing human interface devices (HIDs) has had to take into account the cost of researching, developing and manufacturing the actual consoles they are intended to be bundled with.
The ambition to provide “cutting-edge graphics” with every console generation while keeping the selling price as low as possible has consistently (and understandably) kept hardware manufacturers from selling expensive peripherals along with their new systems. The only exception is Nintendo, with their decision to exit the “cutting-edge graphics” race altogether in order to focus on the HID.
I personally admire this attitude, but I also like “cutting-edge graphics” (and physics, AI, and all that stuff that, you know, matters the most in an interactive environment), which is why the Sony Gaikai deal is interesting to me.
Assuming the business model that Sony comes up with for recouping the costs of maintaining and upgrading the server farms powering their cloud gaming service is effective enough, I can definitely see them gradually moving resources towards the development of new HIDs and selling those as the “consoles” of the cloud gaming era.
As a matter of fact, I believe consoles won’t necessarily die. They might just mutate into something else. More specifically, they might turn into sophisticated I/O “decoders” tasked exclusively with processing the “raw data” coming from complex HIDs (comprised of depth cameras, motion sensors, touch screens, whatever) into “input data” ready to be transmitted to the cloud.
In theory, you’d connect your cameras, motion controllers or even VR HMDs to them much like you’d do with a traditional console. The only difference being that there is no traditional console to begin with.
With all the processing power required to render “cutting-edge graphics” sitting “in the cloud”, hardware manufacturers such as Sony would be able to engineer new HIDs free from the constraints currently imposed by the need to keep the overall price of a high-end console as low as possible. And without fear of jeopardizing a whole console business if a device fails. Because there isn’t one. (Imagine if cloud gaming had already existed when Nintendo introduced the Virtual Boy. They would have likely shrugged their collective shoulders at its failure and moved on to something else while gamers kept enjoying, and paying for, their cloud service.)
From the end user’s perspective, purchasing such devices would be akin to purchasing actual consoles (which just happen to have their content streamed from a server rather than generated locally), so pricing HIDs above cost wouldn’t be totally outrageous. Imagine purchasing a Move set and getting a PS3 for free. That kind of deal.
Of course, the evolution of control interfaces would occur regardless of cloud gaming, but cloud gaming has the potential to push the envelope faster, and with a less risk-averse mentality.
Take the recent developments on the front of head-mounted displays, for example, from Sony’s HMZ-T1 Personal Viewer to John Carmack’s research into VR setups with the “Oculus Rift” HMD (to be supported in Doom 3 BFG Edition). As much as it would be nice to have that kind of stuff bundled with the PlayStation 4, the reality of console manufacturing suggests it will hardly happen (unless everyone gets 5 jobs). Maybe with the PlayStation 5, if there is one, but even then you’d still get a device that’s a few steps behind what the technology of that time could provide.
Now, what if the PlayStation 5 turns out to be just a PlayStation 4 repurposed to act as an I/O decoder for cloud gaming? By simply purchasing (for a reasonable price) whatever the PS5 controller turns out to be, you’d still be getting PS5-level “cutting-edge graphics” through the PS4, all the while enjoying the new breed of gaming experiences made possible by the new controller.
Perhaps we will come to a point where every new “console” announcement will actually be the announcement of a new HID. That wouldn’t be much different from what Nintendo has been doing since the Wii, after all, but you’d still be getting those “cutting-edge graphics”.
Come to think of it, a closed, Sony-driven cloud gaming environment could bring to the console space some of the flexibility the PC space enjoys with regard to human interface devices, but without the anarchy typical of a market driven by third-party productions.
That’s my hope at least. What’s yours?