The timecourse of utilising high-level information in scene perception

This project was funded by the ESRC (Grant reference: RES-000-22-4098)


Eye movements are guided by high- and low-level factors. Two key high-level factors are knowledge of where targets are likely to appear and knowledge of target appearance. Our research investigated how these two sources of information guide search. In Experiment 1, we collected eye movement data as participants searched scenes in which the objects appeared in expected or unexpected locations. We also varied the type of information participants were provided with before each search: either a picture of their search target or a word naming their search target. We found that both object appearance and likely target location guide search. Following a picture of the target, search is not strongly influenced by where the target is located. Following a word naming the target, we often search the scene region in which we expect to find the object even when it is not there. Experiment 2 was aimed at isolating the effects of spatial inconsistency in scene search, teasing apart any effect of attentional prioritisation due to inconsistency (context/local information conflict) from the disrupting impact of unreliable expectations. We found that inconsistent placement results in less efficient search, but found no strong evidence for attentional capture by inconsistency alone. The project findings reveal the visual system’s strategy in utilising high-level information to guide search: where we expect to find an object has a key influence, but this influence is greatly diminished if we have detailed knowledge about the precise appearance of the target object.

Publications arising

Spotorno, S., Malcolm, G. L., & Tatler, B. W. (2014). How context information and target information guide the eyes from the first epoch of search in real-world scenes. Journal of Vision, 14(2):7.



If you are interested in using the stimuli from the first of the two experiments that were conducted for this project, you can download them here. These images have been evaluated by 10 participants, who provided Likert scale ratings for: (1) the degree of matching between the verbal label and the picture of the object, (2) the quality of object insertion (i.e., how much it seemed to belong in the scene in terms of visual features, independent of the plausibility of its location), (3) the plausibility of the object’s position in the scene, (4) the object’s perceptual salience (in terms of brightness, colour, size, etc.), (5) the object’s semantic relevance for the global meaning (i.e., the gist) of the scene, and (6) the complexity of the whole image, defined with regard to the number of objects, their organisation, and image textures.

Download stimuli

Download evaluation study ratings
(coming soon)

Eye movement data
At present these are the ASCII files created directly from the EyeLink .edf files. If you need any help with the content of these then please email Ben Tatler directly. These files contain only eye events (fixations, saccades and blinks) and not the millisecond-by-millisecond gaze samples. If you require the sample-by-sample data then we will be happy to supply these on request.
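As a rough guide to working with these files, the sketch below extracts completed fixation events from an EyeLink ASCII file. It assumes the standard EyeLink event format, in which each completed fixation is reported on an "EFIX" line (eye, start time, end time, duration, average x, average y, average pupil size); the sample lines and function name here are illustrative, not taken from the project data, so please check them against the actual files.

```python
# Minimal sketch: pulling fixation events out of EyeLink ASCII event data.
# Assumes the standard EFIX line format:
#   EFIX <eye> <start_ms> <end_ms> <duration_ms> <avg_x> <avg_y> <avg_pupil>

def parse_fixations(lines):
    """Return a list of (start_ms, end_ms, duration_ms, x, y) tuples."""
    fixations = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "EFIX":
            # parts: ['EFIX', eye, start, end, duration, x, y, pupil]
            start, end, dur = (int(p) for p in parts[2:5])
            x, y = float(parts[5]), float(parts[6])
            fixations.append((start, end, dur, x, y))
    return fixations

# Fabricated example lines in the standard event format:
sample = [
    "SFIX R 1000",
    "EFIX R 1000 1250 250 512.3 384.1 1023",
    "EFIX R 1400 1600 200 620.0 300.5 1011",
]
print(parse_fixations(sample))
```

Saccade events follow the same pattern on "ESACC" lines, so the loop above can be extended in the same way if you also need saccade start and landing positions.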

Eye movement data from

If you use any of these resources in any published work, please cite the following article for experiment 1 stimuli and data:
Spotorno, S., Malcolm, G. L., & Tatler, B. W. (2014). How context information and target information guide the eyes from the first epoch of search in real-world scenes. Journal of Vision, 14(2):7.
