After nearly 11 years at the University of Dundee, we have now moved to the School of Psychology at the University of Aberdeen. It has been an eventful decade or so for us, and we are very grateful for all the support we received in Dundee. We are really looking forward to the future in Aberdeen. I am very pleased that three postdocs have made the move north with the lab: Clare Kirtley, Matt Stainer and Sharon Scrafton. It will be a busy summer setting up the new lab spaces and getting our ESRC-funded project on words and images up and running.
We are pleased to announce the start of a new project in the AVL, beginning on June 1st, looking at how people understand information conveyed jointly through word and image.
Words and images are used together to convey a wide range of information to us, for example via safety signs, instruction manuals, adverts and comics. Often, to understand the message, we need to combine the information from the words and the images. Surprisingly little is known about how we view and understand this type of stimulus.
In this project, we will use the medium of comics and the method of eye-tracking to study how people view and understand information conveyed jointly through words and images. By using comics we will also be able to explore established comics theories that describe how comics are created and the effects that artists and writers can have on the reader’s experience.
This project is funded by the ESRC and is run jointly by Ben Tatler, Chris Murray (School of Humanities, University of Dundee) and Phillip Vaughan (DJCAD, University of Dundee) with Clare Kirtley as the postdoc.
For more information see our project page.
This week sees the start of a new project in the AVL looking at how we learn about 3D space from multiple 2D dynamic scenes displayed on a multiplex. We spend up to a quarter of our life viewing environments indirectly through video, and we are increasingly exposed to multiplexes of videos displayed simultaneously.
Nowhere is the problem of translating multiple 2D dynamic views into a 3D understanding of an environment more apparent than in a CCTV control room or TV production studio. Expert operators are able to anticipate events as they unfold and can track individuals and events across multiple views, predicting where an individual will appear as they move between camera views. This ability requires considerable understanding of the relationship between the camera views and the external environment. How this spatial understanding is learnt is currently unknown. Characterising this learning process has the potential not only to reveal new insights into how the brain overcomes the representational challenges of the built environment, but also to inform training procedures for practitioners in surveillance and TV production.
This project is funded by the Leverhulme Trust and is run jointly by Ben Tatler (Dundee) and Ken Scott-Brown (Abertay), with Matt Stainer as the postdoc. For more information see our research pages.
As we publish work, we will be making data and stimuli available wherever possible on this website. We are happy to share resources as openly as possible. Data and stimuli will be uploaded to the Research pages of this site as separate sub-pages for each project. If you would like access to data that are not presently uploaded, please contact us and we will see what we can do. Likewise, if the formatting of the data is not what you are looking for, please let us know.
If you have visited the AVL website over the last two or three years you will have noticed that it has fallen into disrepair. Sorry about this. This new website is my first attempt with WordPress so please forgive teething troubles with the various pages. I will continue to update and add to this over the next little while and if you have any suggestions please let me know. Thanks.