Learning space from multiplex movies

This week sees the start of a new project in the AVL looking at how we learn about 3D space from multiple 2D dynamic scenes displayed on a multiplex. We spend up to a quarter of our life viewing environments indirectly through video, and we are increasingly exposed to multiplexes of videos displayed simultaneously.

Nowhere is the problem of translating multiple 2D dynamic views into a 3D understanding of an environment more apparent than in a CCTV control room or TV production studio. Expert operators can track individuals and events across multiple views, anticipating where a person will reappear as they move between camera views. This ability requires considerable understanding of the relationship between the camera views and the external environment. How this spatial understanding is learnt is currently unknown. Characterising this learning process has the potential not only to reveal new insights into how the brain overcomes the representational challenges of the built environment, but also to inform training procedures for practitioners in surveillance and TV production.

This project is funded by the Leverhulme Trust and is run jointly by Ben Tatler (Dundee) and Ken Scott-Brown (Abertay), with Matt Stainer as the postdoc. For more information see our research pages.
