"Transported to another place": VISUALISE developed the Thomas Cook Google Cardboard app © VISUALISE

Virtual reality technology is already changing our lives

A vision of the future
March 24, 2016

The principle of Virtual Reality is easy enough to explain: you put on a headset and you are, metaphorically, transported to another place. You’re completely immersed. You look around a three-dimensional landscape just as you would in real life, by turning your head. You’re there, wherever “there” is: perhaps a recorded or live-streamed view of a different part of the world; perhaps a simulated environment created from scratch with computer graphics; perhaps a combination of the two.

What’s difficult to explain is what this actually feels like. Above all, it’s a startlingly emotional experience. When I visited VISUALISE, a London-based Virtual Reality (VR) studio and purveyor of some of the world’s most advanced immersive film experiences, Henry Stuart, the company’s CEO and co-founder, explained their mission to me in terms of power and responsibility. Unlike conventional cinematography, capturing a VR experience does not involve any cuts, zooms or fades. Instead, you place the viewer at the centre of events and allow him or her to take control as the story unfolds. “When your mind is tricked into thinking you are somewhere else,” he told me, “the fact that you think you are there and the connection that you have with people in the scene makes it feel real, and that is really powerful. You can also make a bad experience, which is extremely intense for someone. It has got to be done very well, and this is one of the problems with VR to date, as everybody is rushing to do stuff...”

The rush around VR is close to a stampede. The last two years have seen $3.5bn poured into the market by venture capitalists. This doesn’t include the most significant single investment of all, a cool $2bn coughed up by Facebook for Oculus: the pioneering VR company founded in 2012 whose prototype headsets currently serve a community of over 200,000 developers. In January, online pre-orders for the first public version of the headset opened at $599 apiece—with the site promptly overwhelmed by demand. Goldman Sachs conservatively estimates the overall headset hardware and software market will grow to $80bn within a decade.

Why now—and what’s the big deal? VR technology has come of age thanks to a combination of familiar tech factors: speed, size, affordability and integration. Back in the 1990s, attempts at creating VR headsets over-promised and under-delivered: clunky, underpowered devices that induced migraines rather than immersion. Today, the staggering power needed to smoothly simulate reality is becoming available on the scale and price of mobile phone handsets, complete with gyroscopes, accelerometers and high-resolution screens—all essential for creating the illusion of presence. Digital giants are piling in: Samsung, HTC, Sony, LG, Google and Facebook are all in the market, with Apple rumoured to be close to making its own announcements. VR is something genuinely new—and the battle for supremacy is likely to be ferocious, not only in creating experiences but also in owning the dominant ecosystems of hardware and software.

"VR technology has come of age thanks to a combination of familiar tech factors: speed, size, affordability and integration"
Also in the picture is Augmented Reality (AR). Where VR offers immersion, AR offers something equally significant for the future of technology: the transformation of real environments into machine-enhanced experiences. Microsoft is set to start shipping its first HoloLens headsets to developers any time now, promising the “holographic” projection of a computer interface on to the world around you. Forget Google Glass and its early offer of apps-in-the-corners-of-vision. Don a pair of HoloLens glasses and they will overlay computer graphics across the walls and contents of any room you’re in, continuously mapping the images to the room’s contours as you move around through an integrated depth camera. Demonstrations range from three-dimensional designs that engineers can handle like real objects, to anatomy lessons in which life-sized simulations of the body are explored and manipulated layer by layer. And this barely scratches the surface. Like its VR cousin, AR not only has to be seen to be believed—once believed, the sheer scale of its possibilities takes some imagining.

VR and AR exist within a common spectrum: the creation of human-machine interactions based not on flat screens, keyboards and mice, but on objects and environments that are experienced in the same way that we experience the real world. The difference is that the computer-generated content they offer has the infinite malleability of simulation, and few of the limitations that bodily presence imposes. It’s an enticing and disconcerting prospect. Unsurprisingly, sex is up there among the most-searched potential applications (as the Daily Mirror delightfully raved in a February headline, “Makers of ‘mindblowing’ sex robot with virtual vagina swamped with orders”). More salubriously, the training and therapeutic opportunities are vast.

Consider a study published in February in the British Journal of Psychiatry Open, in which a team of researchers from University College London and ICREA-University of Barcelona devised a test where participants entered a virtual environment by donning VR glasses and body sensors. In front of them sat a (virtual) child. The participant was told to comfort the child, using compassionate phrases provided by the researchers. The child responded to this kindness. In the next phase, the participant’s perspective was shifted so that they were now looking out through the eyes of the child at the adult avatar they had just embodied. They listened to the kind words they had just spoken, played back in their own voice. For many, it was a remarkable and intense experience.

The study was designed to help patients with depression exhibit greater compassion towards themselves, and showed significant improvements in mood and self-perception in some participants over the course of a month. The small sample size (15) makes it impossible to estimate precisely the role of the virtual environment in this—but the larger point stands. As those treating post-traumatic stress disorder, phobias and other psychological issues have long known, enacting scenarios in simulated environments represents a form of learning and re-training that addresses the whole business of perception and feeling. Allowing people to literally see through others’ eyes promises to profoundly impact everything from virtual architectural, museum and real estate tours to group therapy, military training and medical practice.

In the grand scheme of things, though, even the sudden switch to a child’s perspective hints at only a fraction of what is possible. In one recent demonstration, I enjoyed a helicopter ride above New York while “embodying” a camera dangling far below the aircraft: nothing but empty air beneath my non-existent feet and a thin wire above. Anywhere a 360-degree camera can go, a virtual environment can be captured. Given the steadily increasing power and diminishing size of components, it won’t be too long before we can be taken on immersive tours of both outer and inner space: orbiting the earth, or descending into the deep ocean. And we don’t have to be alone when we’re there. Shared VR environments are already a reality, both as passive and interactive experiences. For those who wished to experience standing on stage with the candidates, CNN broadcast last October’s Democratic presidential primary debate live in virtual reality, courtesy of the California company NextVR.

If all this still sounds distant from your own tech experiences and budget, there’s at least one headset out there that most mobile phone users can access for £10 (and a little careful folding): Google Cardboard. As the name suggests, it consists of little more than a cardboard sheet with a few tabs, slots, Velcro pads and holes for peering through. You assemble your kit, download an app or two, slip your smartphone into its cardboard slot and you’re off—enjoying a stereoscopic home VR experience that’s astonishingly convincing considering its resemblance to a folded cereal packet held against the eyes with elastic.
"Virtual Reality headsets already provide doctors with training in surgical procedures"
One of my current favourite Cardboard VR adventures is a demonstration app (not currently available to the public) created by INVIVO, a Canadian interactive agency, which specialises in creating digital experiences for the pharmaceutical and medical device industries. Called Bloodstream VR, it does exactly as the name suggests and dumps you into the middle of a virtual circulatory system: red blood cells whizzing past, artery walls pulsing above and below, labels hovering in white text over key anatomical features.

As in most current VR apps, your interaction with this environment is based on the “gaze” principle: you stare fixedly for a while at an object or icon and your motionless gaze will select it. This slightly awkward interaction, James Hackett, INVIVO’s Creative Director, told me, barely scratches the surface of what is coming. Existing VR and AR devices can already make use of video-game-style controllers; but what INVIVO and others are starting to explore are ways to interact based on touch and physical feedback, a field known as haptics.
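To make the gaze mechanic concrete, here is a minimal sketch of dwell-based selection, the pattern described above: an object counts as “clicked” once the user’s gaze has rested on it for a set time. It is an illustration only, not code from INVIVO or any particular VR toolkit; the names (GazeSelector, GAZE_DWELL_SECONDS and so on) are invented for the example.

```python
import time

# Illustrative dwell time: how long the gaze must rest on an object before it
# counts as a selection. Real apps tune this to balance speed against
# accidental picks.
GAZE_DWELL_SECONDS = 1.5


class GazeSelector:
    """Selects whichever object the user keeps looking at for long enough."""

    def __init__(self, dwell_seconds=GAZE_DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.gaze_started_at = None

    def update(self, gazed_object, now=None):
        """Call once per frame with whatever sits at the centre of the user's
        view (or None). Returns the selected object once the dwell elapses."""
        now = time.monotonic() if now is None else now

        if gazed_object is not self.current_target:
            # Gaze has moved to a new object (or to empty space): restart the timer.
            self.current_target = gazed_object
            self.gaze_started_at = now
            return None

        if gazed_object is not None and now - self.gaze_started_at >= self.dwell_seconds:
            selected = gazed_object
            # Reset so the selection fires once per dwell period, not every frame.
            self.current_target = None
            self.gaze_started_at = None
            return selected

        return None
```

In a real engine the gazed object would come from a ray cast along the headset’s forward direction each frame, and most gaze-driven apps draw a reticle that fills or shrinks as the timer runs, so the user can see the selection coming.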

“Haptic controls are the big missing link right now,” Hackett explained, “especially in our industry. How can we have an experience where we are simulating surgical procedures that require a certain level of physical feedback and dexterity? We are looking into having a physical surface that our system recognises and that will provide real feedback, where the virtual space can look and feel like a portion of the anatomy.” In the meantime, physicians are already able to train and take refresher courses using VR and gain virtual experience of surgical procedures, including administering drugs through a catheter or placing stents (small mesh tubes) in blood vessels to open them up.

Once again, sensory immersion is the key, together with the emotional and intuitive engagement it brings. There remain constraints to be overcome—but it’s remarkable how close the prospect of verisimilitude is. The screens currently used by companies like Oculus, HTC and Samsung have about 2,000 horizontal pixels. However, because they sit so close to the eyes, around 8,000 would be needed before the image approaches what the eye can resolve. Screens may never get this good, but technology such as retinal projection—where a display is drawn directly on to the retina of the eye, allowing the user to see what appears to be a conventional display floating in front of them—is already promising something close.
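To see roughly where a figure like 8,000 comes from, here is a back-of-the-envelope estimate. It assumes a headset field of view of around 100 degrees and an eye that resolves detail down to about one arcminute; both numbers are approximations introduced for the illustration, not taken from the article:

$$100^{\circ} \times 60\ \tfrac{\text{arcminutes}}{\text{degree}} \approx 6{,}000 \text{ pixels across the field of view}$$

On those assumptions a panel needs several thousand pixels horizontally before individual pixels become indistinguishable, which is broadly in line with the figure quoted above; today’s roughly 2,000-pixel screens fall several times short.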

In 2014, Google and other investors put over half a billion dollars into a firm called Magic Leap that promised to create three-dimensional images by deflecting light into the eye via a lens. As of February this year, Magic Leap is valued at $4.5bn and has begun demonstrating footage of robots and miniature star systems hanging in the air within a user’s field of view, looking and behaving precisely like any other part of reality. Add to this the emerging technology of light-field cameras—able to record the entire volume and behaviour of light in a space, allowing viewers to shift their perspective within a recording—and within a decade we will be both capturing and simulating reality to an uncanny degree.

What does this mean—and how much does it matter? At root, these are questions about how technology lives in the world: its use, adaptation and cascading consequences. Already, terms like Virtual and Augmented Reality are beginning to blur. As VISUALISE’s Stuart explained, the term “mixed reality” best describes what the next generation of headsets will offer. “In the future, headsets are going to be very subtle: they will probably be part of glasses, offering a scale of mixed reality. At the far end is pure computer graphics. But by flipping between the two you can be in both the real world and the virtual world. One of the crucial things that VR is going to enable us to do, as well, is to have more social interaction—but it will be in the virtual world, and that is a huge step-change from how we are used to interacting with people.”

Already, HTC’s forthcoming Vive headset not only tracks your movement using its “lighthouse” positional tracking system—a pair of small base stations placed around the room—but also knows where stationary objects are in the room, showing a shadowy version of them behind the virtual scene to stop you bumping into things. This overlaying of real and computer-generated experiences will steadily grow in power and utility—and just how far it will disrupt existing technology and habits is one of the most intriguing debates in current tech.

If you can turn any flat surface into a high-definition television and watch it with your family in any room—or in a shared virtual space when you’re apart—will you still need a real one? If you can watch a gig while standing on the stage next to each instrumentalist, something VISUALISE have already created for rock band Kasabian, will a music video still matter? Living in mixed reality may become as ordinary as having a phone in your pocket; and being cut off from it every bit as frustrating and isolating.

As Roy Amara, the engineer and forecaster, put it: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” In this case, even the short term promises to be quite something.