Bridging the gap

Comparing a film to a videogame is usually a form of abuse. Yet, argues Tom Chatfield, the boundaries between the two are breaking down
November 16, 2011
The new Tintin videogame was developed in conjunction with the Steven Spielberg film

Today, when you sit in a cinema, it can be hard to tell whether a trailer is for a film or a videogame. The Adventures of Tintin: The Game uses many of the same computer-generated sets, characters, sounds and scenes as Spielberg’s film, which came out recently to much fanfare. On 20th December, the gaming event of the year—Star Wars: The Old Republic—seems likely to be a far more impressive addition to its franchise than the last three films combined, with a rumoured budget of over $135m.

Yet the creative history of the crossovers between games and films is a largely wretched one. From the joyless trainwreck of the Super Mario Bros. movie in 1993—described by its star Bob Hoskins as his single biggest regret—to the more recent inanities Tomb Raider (2001) and Prince of Persia (2010), game adaptations have given cinema little more than noise. Nor have many videogames based on films impressed—most remain generic cash-ins.

The winter blockbuster season, then, raises a nagging question. If the relationship between our two great kinetic visual media is creatively so inert, would both film and videogames be well advised to steer clear of each other?

When it comes to films' influence on games, the answer is a resounding “no.” We owe to cinema, after all, the development of almost the entire modern aesthetic of moving images. From panning shots to close-ups, zooms, fades, reverse angles and slow motion, we see and describe the world today through minds trained by the conventions of cinematography—and modern videogames remain more indebted to these techniques than perhaps any other art form in our culture.

Indeed, when games skilfully ape cinematic expertise, we praise their artistry. Yet the reverse is far from true. Describing a film as “like a videogame” is critical shorthand for senseless, emotionless frenzy: the province of directors like Michael Bay, described by the New Yorker as “stunningly, almost viciously, untalented” for his work on the Transformers movies.

Yet this should not suggest that an interactive medium has nothing to offer cinema. Rather, critics and filmmakers are largely misreading what games have to offer. And moving on from this problem means understanding what is, and isn't, unique about an interactive art form in the first place.

Watching someone play a videogame can be an ineffably dull experience. Shoot, duck, jump, grab, rinse, repeat: even the finest titles are repetitive. Interactivity is only engaging if you are doing it yourself. And if you try to replicate its appearance in a non-interactive medium, you end up with something like Transformers: Revenge of the Fallen: a cinematic experience slightly less emotionally involving than watching someone play the first Mario Bros. game.

What happens, though, if rather than simply echoing a game's visuals, you try to imbue a film with something of the actual emotion and aesthetics of a gaming experience—the heightened freedom of exploration and action that flows from encountering a well-designed virtual world?

The best answer to this begins in 1999, with the first film in the Wachowski brothers’ Matrix trilogy. The film is largely set within its eponymous virtual world, and its most iconic scene sees a besuited baddie known as “an Agent” firing his gun at the hero Neo, played by Keanu Reeves.

As the shots speed towards Neo, the air seems to thicken and we enter “bullet time” (a trademark of Warner Brothers), watching the bullets and the miniature sonic booms around them creep along at a snail’s pace while Neo flexes, limbo-style, beneath their path and the camera sweeps through a full 360 degrees around him.

The effect was achieved through the use of virtual “cameras.” Even the world’s most sophisticated real cameras cannot capture a bullet’s flight in the way the filmmakers wanted, so instead they effectively built a videogame version of their set. Slow-motion gunfights were nothing new, but this was a seamless deployment of special effects with something radical in the mix: an aesthetic that offered the viewer a game-like sense of malleable reality.

The deranged, cybernetic glee that the Wachowski brothers brought to their creation suggested a new way of translating the conventions of interactive media into film. At the heart of their achievement was a transformed attitude towards the camera. No longer treated as either an extension of the human eye or an all-seeing observer, the cameras of the Matrix trilogy are incorporeal digital presences: free from the laws of time and space.

Interviewed in 2008 on the relationship between games and films, Steven Spielberg made a similar point, highlighting the 2007 Matt Damon film The Bourne Ultimatum for the “videogame savvy” of its “quick cuts and the audacity of camera angle.” Yet the ferocious immediacy of the Bourne movies was nothing compared to the ultimate example of virtual camerawork to date in cinema: James Cameron’s Avatar.

For Avatar, Cameron physically integrated a computer screen and a videogame graphics engine into his cameras, rendering the virtual world of Pandora around the actors in real time as they filmed. Even Avatar’s title was taken from videogames, where it denotes the virtual flesh in which a player embodies themselves—and perhaps the film’s greatest achievement was its immersion of the audience within its 3D images and detailed virtual environment.

These aesthetics of world-building are becoming a central part of our cultural life. Perhaps the most profound impact of interactive media is our growing acceptance of a blurred line between real and artificial environments via the screens of ever-more-powerful digital devices.

One film that has already tapped into these aesthetic possibilities is Christopher Nolan’s 2010 hit Inception, which built dream landscapes within dream landscapes, each one a game-like challenge of obstacles and physics-defying architecture. Inception’s seamless mixing of multiple unrealities, and audiences’ largely rapturous reaction to it, suggest that it is not just the technology of visual effects that is growing more sophisticated. Audiences too are evolving—trained by the ever-more-intimate relationships with ever-more-powerful technologies that 21st-century life entails.

At its worst, digital world-building can breed a sterile art in which everything is shown and nothing felt: fantastical messes such as Zack Snyder’s 2011 film Sucker Punch, in which five women retreat into a stylised world of mobsters, orcs and Nazi zombies. Around the edges of this approach, however, more intriguing changes are emerging: challenges not only to the style of moving images, but to the conventions of perception and presentation that underpin them.

The traditional camera is no longer our only model for the wandering gaze of kinetic art. Instead, a generation accustomed to being in control is rising: one that sweeps past virtual vistas at will, and augments daily experience with constant feedback, online interaction and the possibility of departure into unreal realms.

As we once did through cinema, we are learning to see with altered eyes—and to tell new kinds of stories. Whether the glories of cinema’s first century will continue to enthral these eyes remains to be seen; but change has already crept upon us. Blockbusters like the new Mission: Impossible and Sherlock Holmes films may be hard to reconcile with this diagnosis, but even in them interactivity’s impact has begun to be felt.

Look closely at the camera gliding impossibly backwards within the blast of an explosion, flipping through 90 degrees to peer down from the peak of the world’s tallest building—or inside a car as its glass implodes in slow motion.

This is the visual language an audience accustomed to interactive media demands. It is evidence of the new ways in which we are building worlds ever more finely attuned not to the limits of actuality, but to our own hunger for experience.