How scientists watch your brain as you watch TV

What happens when we watch a performance or experience a piece of art? The AFTRS is using modern science to provide answers which make sense to creators.

Image: The Bridge is great old-fashioned storytelling, but its creators use scientific data to make it the best in the world.

Successful actors and directors play their audiences like a violin. They can make hundreds of people in a black room laugh and gasp and cry in unison. It is a completely intuitive groove and it has been carved over centuries.

But what is actually going on? Is this just a dark art? Does science have any answers?

The Australian Film Television and Radio School has created a relationship with a Danish company called iMotions, which describes itself as a biometric research enterprise created by neuroscientists and software programmers. According to Martin Brown, officially the Director of Award Courses but actually a roving curiosity machine, the company is the dominant player in the space, so AFTRS bought the software and is using its expertise and training to create research opportunities.

The company has developed a simple kit comprising laptops with cameras, facial recognition software, eye tracking which maps where viewers look, and skin-conductance sensors on the palms of the hands, while more elaborate versions add electrodes to measure brainwaves and heart rate. The kit brings an enormous body of science and previous data to bear on what happens when we watch moving images on a screen.

All the pieces have been around for a long time, but the latest generation of neurologically driven research into sensory perception has crossed an extraordinary barrier: it has become a piece of software which pulls together the different kinds of data and integrates them with the subjective information provided by participants. It is user-friendly, relatively cheap, and available for routine use across the arts and screen media. Now we know much more about what the audience is experiencing.
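
For the technically curious, the unglamorous core of this is time alignment: the camera, the eye tracker and the skin sensors all tick at different rates, and the software has to land them on one shared timeline. A minimal sketch in Python, assuming invented stream names, rates and column labels rather than anything from the actual iMotions product:

```python
# A minimal sketch, not the iMotions pipeline: stream names, sampling
# rates and column labels below are illustrative assumptions.
import pandas as pd

def align_streams(streams, freq="100ms"):
    """Resample timestamped biometric streams onto one shared timeline,
    so that a single row describes a single moment of viewing."""
    frames = []
    for name, df in streams.items():
        df = df.set_index("timestamp").add_prefix(f"{name}_")
        frames.append(df.resample(freq).mean())  # sensors tick at different rates
    return pd.concat(frames, axis=1).interpolate()

# Hypothetical recordings from one viewer over one second of footage.
def ms(ts):
    return pd.to_datetime(ts, unit="ms")

streams = {
    "face": pd.DataFrame({"timestamp": ms([0, 330, 660, 990]),
                          "joy": [0.1, 0.2, 0.6, 0.8]}),
    "gsr":  pd.DataFrame({"timestamp": ms([0, 500, 1000]),
                          "conductance": [2.1, 2.2, 2.6]}),
    "gaze": pd.DataFrame({"timestamp": ms([0, 250, 500, 750]),
                          "x": [640, 655, 700, 710]}),
}
print(align_streams(streams))
```

Once everything sits on one grid, the subjective answers from participants can be joined on as just another column, which is the integration described above.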

Thomas Romanoff and Tue Hvass Petersen demonstrated the basics of the system at an AFTRS seminar on 23 March with Romanoff as the subject. He was wired up and shown short clips of a prank about women golfers, a calm shot of a lake and an alleged gag in which a cat has its head torn off. 

We could see the changing levels of joy, anger and fear, along with interest and excitation, while his eyes darted from object to object in the scene. 

Penelope Thomas, the Project Manager of Applied Industry Research at AFTRS, played clips from two student films, Object and A Boy Named Su, along with the collected evidence of audience response. These are the school’s first tentative forays into the partnership.

It is easy to see how this helps students, who are learning how to relate to their audiences. The bumps and oddities in these clips will show up, along with the sudden changes in intensity. The directors can ask whether their strategies worked. Did that subtle choice about clothing pay off? What about the cut to the close-up? Or the retreat to a wide shot? Do the editing and music manage the energy of the story?
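
One way to make those questions concrete is to compare the average response just before and just after an edit point. A hedged sketch, with the engagement scores, the cut position and the window size all invented for illustration:

```python
# Illustrative only: a before/after comparison around a single cut.
import statistics

def engagement_lift(signal, cut_index, window=5):
    """Mean engagement in the `window` samples after an edit point,
    minus the mean in the window before it."""
    before = signal[max(0, cut_index - window):cut_index]
    after = signal[cut_index:cut_index + window]
    return statistics.mean(after) - statistics.mean(before)

# Hypothetical per-second engagement scores, with a cut at t=10.
engagement = [0.42, 0.40, 0.44, 0.41, 0.43, 0.45, 0.44, 0.46, 0.43, 0.45,
              0.61, 0.66, 0.63, 0.67, 0.70]
print(f"lift after the cut: {engagement_lift(engagement, 10):+.2f}")
```

A positive lift does not prove the close-up worked, but it gives the director and the editor a shared number to argue about.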

Only two of the company’s major clients are in the screen sector. DR, the Danish broadcaster which made The Killing and The Bridge, is using it routinely. Martin Brown saw the system in action when he advanced the deal. The producers stop the post-production process for a week just before fine cut, and show the project to individuals, each working at a carrel with a single computer. The data is assembled and shown to the editor, director and producers.

The commissioning editors, according to Brown, do not give advice or issue instructions on the basis of this data. Instead it serves as a common frame of reference, provides a different way of looking at the cut, and enables the makers to really see the scenes which have been worn out by constant repetition and recutting.

In the United States, iMotions is working with NBCUniversal, which has two viewing spaces, each seating several hundred people, wired up so the scientists can compare how they are all responding. The company sees it as a way of increasing the strike rate of its new shows and sustaining interest in the older ones. You can see its value in testing pilots, and in measuring the way audiences respond to climaxes and payoffs. It is a handy way of evaluating the performance of actors and the value of their characters.
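
Comparing how a few hundred people respond together can be reduced to a single synchrony score: the average correlation between every pair of viewers' response traces. A sketch under assumed data shapes, not a description of NBCUniversal's actual analysis:

```python
# A sketch of audience synchrony: mean pairwise correlation of traces.
import numpy as np

def audience_synchrony(traces):
    """traces: (viewers, timepoints) array of any per-second response
    signal. Returns the mean pairwise correlation: near 1.0 means the
    room rises and falls together, near 0 means viewers are scattered."""
    corr = np.corrcoef(traces)                      # viewer-by-viewer matrix
    upper = corr[np.triu_indices_from(corr, k=1)]   # each pair counted once
    return upper.mean()

rng = np.random.default_rng(0)
beat = np.sin(np.linspace(0, 6, 300))                      # a shared story beat
coherent = beat + 0.3 * rng.standard_normal((200, 300))    # everyone tracks it
scattered = rng.standard_normal((200, 300))                # nobody does
print(f"coherent audience:  {audience_synchrony(coherent):.2f}")
print(f"scattered audience: {audience_synchrony(scattered):.2f}")
```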

In all of these cases, the scientific data is only part of the study. It is always combined with qualitative evidence from focus groups, conversations and feedback forms. In other words, the final decisions are made by a combination of fairly simple data and the complex self-descriptions from viewers.

The data is very useful in understanding what the respondents actually mean. They are relying on memory, which is a selective filter, so the researchers can work out the real bases for their conclusions. They know, for instance, that the data on attention correlates with memory. The things to which viewers pay the most attention are the elements they talk about to other people. The producers can read a show for its social media and word-of-mouth potential.
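
That attention-to-memory link is simple to express: correlate a per-scene attention score with how often viewers later mention each scene. A toy example, with the scenes and the figures invented:

```python
# Invented numbers: per-scene attention vs later word-of-mouth mentions.
import numpy as np

attention = {"opening chase": 0.91, "kitchen argument": 0.78,
             "exposition scene": 0.42, "final reveal": 0.95}
mentions = {"opening chase": 34, "kitchen argument": 21,
            "exposition scene": 4, "final reveal": 41}

scenes = list(attention)
r = np.corrcoef([attention[s] for s in scenes],
                [mentions[s] for s in scenes])[0, 1]
print(f"attention vs mentions: r = {r:.2f}")
```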

Broadly speaking, the data provides a valuable factor in working out the likelihood of success. Superficially attractive films with a good cast, a name director and an enticing location, all of which create lots of peaks in graphs and eyes moving in the same direction, are useless if the rest is muddled. In a sense, coherence works and scatter fails.
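
Even "eyes moving in the same direction" can be put on a scale: measure, frame by frame, how far each viewer's gaze sits from the group's average gaze point. A purely illustrative sketch with simulated gaze data:

```python
# Simulated data: tight gaze spread suggests the shot directs every eye,
# wide spread suggests the scatter described above.
import numpy as np

def gaze_scatter(gaze):
    """gaze: (viewers, frames, 2) array of on-screen x, y positions.
    Returns the mean distance from the group centroid, per frame."""
    centroid = gaze.mean(axis=0)                        # (frames, 2)
    return np.linalg.norm(gaze - centroid, axis=2).mean(axis=0)

rng = np.random.default_rng(1)
focused = rng.normal([960, 540], 30, size=(50, 24, 2))         # all on one face
muddled = rng.uniform([0, 0], [1920, 1080], size=(50, 24, 2))  # eyes everywhere
print(f"focused shot, mean scatter: {gaze_scatter(focused).mean():.0f} px")
print(f"muddled shot, mean scatter: {gaze_scatter(muddled).mean():.0f} px")
```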

Besides the learning process, this is of enormous interest to people making commercials, where every element is tested and options are offered. Music, for instance, is changed, remixed and shifted about. Endings and cutaways can be abandoned or emphasised. 

For the longform feature and television sectors, the real benefit is in the creation of trailers. And we know from audience research that they are the single most important element in the audience’s decision to see a film or commit to a TV show. There are magic trailer makers, mostly in Los Angeles, who can take an ordinary local version and make it pop.

Martin Brown is professionally cool about the school’s involvement. They are interested in the technology, he said, and have done the basic tests to develop their expertise. Now they are looking for external companies who want to try the system on their own productions. 

The system is based on two fascinating ideas about human behaviour, both of which are deeply embedded in the crafts of performance and acted storytelling. Romanoff and Petersen insist that the real drivers of life are spontaneous, emotional and intuitive. We feel and act, and then think about it and rationalise what we have done. We can make strategic decisions after the fact, but they are harder and slower, and they rely on our memory of experiences. Here we are talking about the underlying choices which assemble the memory.

They also believe that everything which happens in our minds has a physical correlate. We do things which reflect our inner activity. Breathing, pulse, tensing, glancing, focusing, laughing, gasping, wriggling – all these things give us away. What is more, we now know that the process of identification makes our brains active in the same areas as the people we watch. We watch someone jump and our brain function for leg control is activated. Inside the theatre of our anatomical minds, we are doing it too.

As audiences we do all this together, and performers read the living, heaving mass of us in a room sharing the experience. That is what they use to play that violin, and this science gathers that data and pins it to the moment when it happens. It is possible to quantify and discuss how an audience responds, as a group, to a story played out in front of them.

At the moment this approach is pretty crude. We don’t get data about envy, or sexual fascination, or grief, or shame and guilt and confusion – although the data provides indirect evidence through certain shapes of response. Around half of the iMotions research is with hard-core academics working on theories of mind and the biological brain.

We do by and large know what our audiences are doing, because live performers get a huge amount of feedback, and screen creators spend months in editing rooms constructing realities which are highly designed for specific effect. 

In the larger commercial and political world, the research is much more important. Most of the public work occurs with companies who are designing products, creating packaging and building brands. The Danes are busy in the retail environment watching how shoppers trailing head cameras move from display to display, logo to logo, around the gondolas and high traffic areas and video displays. Huge amounts of money are at stake and this quantitative/qualitative approach is fundamental to marketing and product decisions. 

It is also extremely useful in the military world, where designers build interfaces to fit our primitive neural systems, and soldiers learn to make sense of terrifying environments. Games developers are asking the same questions.

There is an advancing area of mental health research, where we can explore just how people with neurally atypical experiences actually make sense of their environment. The gear is dynamic, just like the world which is being navigated with such pain.

This research is in its infancy, and it is not going to go away. Meanwhile, the screen sector is moving beyond the first disempowering and stupefying approaches to audience testing, to create tools which genuinely work to build sophisticated experiences for audiences. This is probably only the beginning.

David Tiley was the Editor of Screenhub from 2005 until he became Content Lead for Film in 2021 with a special interest in policy. He is a writer in screen media with a long career in educational programs, documentary, and government funding, with a side order in script editing. He values curiosity, humour and objectivity in support of Australian visions and the art of storytelling.