Divided multimodal attention: Sensory trace and context coding strategies in spatially congruent auditory and visual presentation

Tómas Kristjánsson, Tómas Páll Thorvaldsson, Árni Kristjánsson

Research output: Contribution to journal › Article › peer-review

Abstract

Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process, whereas identifying the direction of a change (up or down) is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, observers use one of two strategies, a sensory trace or a context coding strategy, and if one is blocked, they automatically switch to the other. A drawback of most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments investigating divided multimodal attention without the spatial confound. The results challenge the trace/context model. Our critical experiment combined a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

Original language: English
Pages (from-to): 91-110
Number of pages: 20
Journal: Multisensory Research
Volume: 27
Issue number: 2
DOIs
Publication status: Published - 2014

Other keywords

  • Auditory attention
  • Multimodal
  • Spatial confound
  • Visual attention
