The deep-sea octopus is a fascinating creature. It quite literally lives only to reproduce. While the male often dies after a successful mating, the female lays the eggs and broods them. But when it’s time for the eggs to hatch, the female dies too, leaving a bunch of little octopuses who’ve never known their parents.
The smartphone is like the deep-sea octopus. It is laying the eggs of what’ll become the next generation of computing, and in doing so, it has set its own demise in motion. When it’s time for these eggs to hatch, the smartphone will quietly die, giving way to a whole new world.
The death of the smartphone begins in its eye: the camera. Today, it is the single most important feature of the smartphone, thanks to the maturation and convergence of technologies like optics and artificial intelligence. It is through these eyes that we get a blurry glimpse of the new era.
It’s the reason a sixty-member team works on Facebook’s AI Camera initiative, the iPhone X can see and recognize your face, and Google shows off the intelligence of its ‘Lens’ in detecting objects. When the Pixel 2 launches today (October 4), we can be sure that the camera (and the applications it enables) will be the main draw. With the arrival of ARKit and ARCore, the genesis of augmented reality has begun.
Over the coming months, we’ll face an assault of augmented reality applications. A few of them may even manage to create the kind of engagement that Pokemon Go created. Yet a vast majority of them will remain niche applications, or the lazy product of slapping some digital information onto a view of the real world. After all, we’re in the stone age of augmented reality’s evolution. More importantly, the smartphone isn’t a natural platform for AR.
A vast majority of AR applications will require us to use the smartphone in uncomfortable ways, like holding it up for extended periods of time. We’ll need to be told when to pull out our phones to experience an augmentation, which makes it difficult to build contextual information mapped onto the real world, like seeing reviews of restaurants as you walk past them in real life. Besides, the field of view through the phone camera is still narrower than that of your eye, making AR a washed-out novelty in a vast majority of cases.
Then why is everyone pushing AR onto smartphones? AR’s tentative steps are easy to understand if we see them for what they are: a setup for the most radical computing paradigm yet to emerge.
Augmented reality alters computing paradigms much the same way smartphones changed personal computing. We did not get smartphones on day one. Through the personal computing revolution, we had years of increasingly sophisticated GUIs to get our tasks done. We’d spent more than a decade using computing devices to connect to the internet, play games, type out emails and design. We’d started carrying around mobile digital cameras to take pictures.
When the iPhone launched in 2007, Steve Jobs pieced together a device (in a masterclass in introducing a new product) from use cases we could all understand: an iPod, an internet communicator and a phone, all rolled into one. The technological building blocks had been built over decades. So had the behavioural building blocks.
Augmented reality, which seeks to alter how we behave and interact with digital information even more radically, has to undergo the same evolution. The building blocks of technology, design, software and even computing power need to evolve. After decades of working with digital information in the two-dimensional world of the GUI, we now need to construct digital augmentations on our three-dimensional real world. This requires your computer to see and understand the real world in great detail, and then project completely new digital interfaces onto it.
If companies launched glasses today, they’d be clunky and ineffective, and they wouldn’t get the runway to evolve into the sophisticated final versions that can offer contextual, lifelike interactions with digital projections.
Equally important, augmented reality has to build its behavioural building blocks: training us on the new interfaces, getting us to wear funky devices and, altogether, preparing us to interact with computers in new forms (like gestures, voice and maybe even blinks).
The smartphone, by virtue of being the most popular computing device on the planet, will be the testing platform for many of these building blocks. In doing so, it is perfecting its own demise.
What would mass adoption of augmented reality look like? We’d all be sporting standalone AR glasses that look fashionable (or innocuous, as need be), are always connected to the cloud and can render realistic digital projections on the world in real time. We’d have new, natural ways to communicate, including voice, visual cues and gestures. An ecosystem of applications (just like the one we have for smartphones) would let us exploit the potential of the platform in near real time.
Most of these are building blocks living in our phones today.
Phone cameras are getting increasingly sophisticated. Dual cameras, motion sensors and depth sensors are adding capabilities like world tracking, depth perception and now even face tracking (as in the iPhone X and Qualcomm’s Spectra depth-sensing camera system). In fact, the iPhone X is a testament to the fact that the smartphone is now just a stepping stone to a future world of AR. By putting its face-tracking capabilities out in the real world, Apple can perfect the technology for a future in which AR glasses come with facial recognition and the enormous social applications it enables. The powerful A11 Bionic chip is a nod to the fact that computing will get far more intensive, with millions of data points processed in real time, when we switch to glasses.
More importantly, the software platforms bringing the first wave of augmented reality applications to smartphones, like Apple’s ARKit, Google’s ARCore and Facebook’s AI camera, will generate valuable real-world data from which to improve and evolve. They also graduate designers and developers to building for a completely new form while still sticking to the smartphone platform.
Building for AR needs an evolution of thinking as much as of technology. A read-through of Apple’s human interface guidelines for ARKit is fascinating in that it reveals the new learning curve that lies in wait for designers and developers. The scope of designing apps isn’t just the four walls of the screen anymore but the world, in all its unpredictable and dynamic glory, as seen through the camera. New ways of providing guidance and information to users will evolve, as will ways of letting users interact with these new virtual artefacts.
Over the coming months, we’ll evolve from apps that are simple ports of our current experiences to truly engaging AR experiences. Specialized AR apps, like real-time language translation, or 3D-geometry-based applications like sampling furniture in your room or viewing 3D models of the solar system, would work well on phones. Engaging location-based activities like Pokemon Go could create a new type of scarcity-based engagement.
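To make the furniture-sampling pattern concrete: on today’s phones it boils down to three steps — detect a horizontal plane, hit-test the user’s tap against it, and anchor a virtual object at the resulting position. The Swift view controller below is a minimal, illustrative sketch of that flow using ARKit’s iOS 11 APIs (the class name and the placeholder box standing in for a furniture model are my own, not from any shipping app):

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch: detect a horizontal plane, then anchor a
// placeholder box wherever the user taps on that plane.
class FurniturePreviewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // Ask ARKit to track the world and look for horizontal surfaces.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        sceneView.session.run(config)

        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Project the 2D tap into the 3D scene, against detected planes.
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first
        else { return }

        // Stand-in for a furniture model: a 10 cm box placed on the plane.
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.simdTransform = hit.worldTransform
        sceneView.scene.rootNode.addChildNode(box)
    }
}
```

ARCore exposes the same three steps through its own vocabulary (sessions, planes and hit results), which is what lets designers carry this interaction pattern across both platforms.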
All of this would be the first key stage in the evolution of AR. In the next stage, our smartphones will likely become increasingly powerful. At that point, our phones will be packed with specialised hardware (something along the lines of Google Tango) that can map the world, recognize objects and perhaps even begin rendering some ultra-realistic holographic projections (see RED’s phone with a holographic display).
Yet the smartphone is limiting, and we’ll eventually move into the next stage of AR’s evolution. The building blocks of these later stages have already arrived, feeding on the smartphone like parasites until they grow powerful enough to make it irrelevant.
Convergence was computing’s most pervasive diktat over the last few decades. The world of AR will likely reverse that trend. Computing interfaces are beginning to disaggregate and become an inconspicuous part of our everyday lives, from wearables to smart homes to intelligent transportation systems. And this is the world AR will evolve into in its final stages.
In this new world, we’ll interact with systems in more natural ways, like speech, which is quickly becoming a reliable medium of interaction. Voice and speech APIs with connectivity to the cloud are rapidly becoming easy plugins that infuse our everyday objects with intelligence and an ability to listen and talk.
Another key building block of a world without smartphones will be motion-sensing wearables. The Apple Watch Series 3 is the first sign of things to come. With its own LTE connectivity, the watch is setting itself up to be an independent device that can eventually become the hub for all the little computers we’ll be wearing on us (including glasses).
Glasses will be part of this evolution of wearables. All the big technology companies are working on some version of them. Some, like Microsoft and Magic Leap, are working on a completely independent platform built around the headgear. While it’s inevitable that we’ll have some form of visual augmentation (glasses or perhaps even contact lenses), it remains to be seen in what form they’ll arrive and what capabilities they’ll have.
New gesture-based computing that simulates some of the touch, select and even type actions we perform on our phones will evolve. Perhaps the iPhone X training us on gestures is meant not for the future of the smartphone but for a future without one. I’d bet that some variant of Google’s Pixel will also sport a gesture-based interface. Interfaces like Fujitsu’s motion-tracking ring, which enables “air writing”, may also become necessary additions to this universe.
All this means we’re moving out of the era of designing independent products like the smartphone, and into having to think about and build complete ecosystems whose parts work with each other. How will a consumer wearing a smartwatch, glasses and perhaps other wearable devices take advantage of this immense power while interacting with the world in a natural way? It is a sociological, technological and design challenge. The design of our real-world objects will be affected. So will the design of our homes and cities, and how we move through and interact with them.
It’s a complex challenge, and one we’ll increasingly solve in the coming years. But at some point in this process, the smartphone will become an unnecessary appendage. Having birthed this new world and trained us in ever-present connectivity, it will fade away into history, only to resurface as nostalgia demands.