Modelling visual attention | Putting a saliency model of eye guidance to a test

Last year my fellow students and I investigated the performance of a visual attention model based on low-level features. The results are rather striking. If you're interested in visual attention or eye tracking in general, have fun reading our publication.

Authors:

Hendrik Koesling (1), Rafael Friesen (2), Sebastian Hammerl (2), Florian Lier (2), Tim Preuss (2)

Affiliation: (1) CRC673 “Alignment in Communication”, Bielefeld University, Germany, (2) Faculty of Technology, Bielefeld University, Germany

Title: Modelling visual attention: Putting a saliency model of eye guidance to a test

Abstract: We studied the performance of attention models based on the saliency of low-level image features as proposed by, for example, Itti, Koch and Niebur (1998, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254-1259). Using a change blindness paradigm, 13 subjects viewed image pairs depicting abstract object arrangements (coloured squares, stars, etc.) and complex naturalistic scenes from different categories (landscape, road traffic, desktop). Eye movements were analysed during initial ambient scanning and subsequent focussed viewing of the first image (e.g., Pannasch et al., 2008, Journal of Eye Movement Research, 2, 1-19). Cluster analysis of gaze points determined empirical foci of attention. The comparison between these foci and model-generated saliency centres based on stimulus colour, intensity and orientation produced significant location differences. Even though model foci were significantly nearer to empirical foci than random positions, (geometric) stimulus centres more accurately predicted attention foci than the model. Results pose further doubts about the adequacy of attention models purely based on visual saliency. As suggested by Underwood et al. (2006, European Journal of Cognitive Psychology, 18, 321-342) and others, taking into account high-level information such as expected object locations could lead to more adequate modelling of eye guidance and attention processes.
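The analysis pipeline in the abstract has two steps: cluster raw gaze points to obtain empirical foci of attention, then score each candidate predictor (model saliency centres, geometric stimulus centres, random positions) by how close its predicted foci lie to the empirical ones. Here is a minimal sketch of that idea in Python. The paper does not specify its exact clustering procedure or distance metric, so the plain k-means and the mean nearest-focus Euclidean distance below are assumptions for illustration only:

```python
import math
import random

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def centroid(points):
    """Mean position of a non-empty list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def empirical_foci(gaze_points, k=3, iterations=25, seed=0):
    """Cluster gaze points with a plain k-means; the k cluster centres
    stand in for the empirical foci of attention. (Assumed method --
    the publication's actual cluster analysis may differ.)"""
    rng = random.Random(seed)
    centres = rng.sample(gaze_points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in gaze_points:
            nearest = min(range(k), key=lambda j: dist(p, centres[j]))
            clusters[nearest].append(p)
        # Keep the old centre if a cluster happens to be empty.
        centres = [centroid(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres

def mean_min_distance(empirical, predicted):
    """Mean distance from each empirical focus to its nearest predicted
    focus -- the lower the score, the better the predictor."""
    return sum(min(dist(e, p) for p in predicted)
               for e in empirical) / len(empirical)
```

With this scoring, one would compute `mean_min_distance` once per predictor (saliency model, stimulus centre, random baseline) over the same empirical foci and compare the scores; the abstract's finding corresponds to the stimulus-centre predictor scoring lower than the saliency model, which in turn scores lower than random.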

You can have a quick look at the image above or just download the attached PDF file below.

Attachment: ecvp2009.pdf (1.84 MB)
