Words and pictures: how we read comics

We are pleased to announce the start of a new project in the AVL, beginning on June 1st, looking at how people understand information conveyed jointly through word and image.

Words and images are used together to convey a wide range of information to us, for example via safety signs, instruction manuals, adverts and comics. Often, to understand the message, we need to combine the information from the words and the images. Surprisingly little is known about how we view and understand this type of stimulus.

In this project, we will use the medium of comics and the method of eye-tracking to study how people view and understand information conveyed jointly through words and images. By using comics we will also be able to explore established comics theories that describe how comics are created and the effects that artists and writers can have on the reader’s experience.

This project is funded by the ESRC and is run jointly by Ben Tatler, Chris Murray (School of Humanities, University of Dundee) and Phillip Vaughan (DJCAD, University of Dundee) with Clare Kirtley as the postdoc.

For more information see our project page.

Learning space from multiplex movies

This week sees the start of a new project in the AVL looking at how we learn about 3D space from multiple 2D dynamic scenes displayed on a multiplex. We spend up to a quarter of our life viewing environments indirectly through video, and we are increasingly exposed to multiplexes of videos displayed simultaneously.

Nowhere is the problem of translating multiple 2D dynamic views into a 3D understanding of an environment more apparent than in a CCTV control room or TV production studio. Expert operators are able to anticipate events as they happen and can track individuals and events across multiple views, anticipating where an individual will reappear as they move between camera views. This ability requires considerable understanding of the relationship between the camera views and the external environment. How this spatial understanding is learnt is currently unknown. Characterising this learning process has the potential not only to reveal new insights into how the brain overcomes the representational challenges of the built environment, but also to inform training procedures for practitioners in surveillance and TV production.

This project is funded by the Leverhulme Trust and is run jointly by Ben Tatler (Dundee) and Ken Scott-Brown (Abertay), with Matt Stainer as the postdoc. For more information see our research pages.