The present studies attempt to characterize the representations that support long-term, cross-modality priming, with the main hypothesis being that auditory-to-visual priming for words is supported by phonological representations. The purpose of Experiments 1 and 2 was to develop and refine an experimental paradigm for manipulating phonological processing. Results indicate that, regardless of study condition (semantic or phonological-articulatory), using homophonic nonwords as foils in a lexical decision task can eliminate phonological priming in both the speed and accuracy of responding.
The purpose of Experiment 3 was to apply this paradigm to cross-modality priming. To the extent that it is mediated by phonological representations, cross-modality priming should be reduced in the context of homophonic foils. The findings support the hypothesis: Robust same-modality priming was obtained in both the reaction-time (RT) and error data, whereas cross-modality priming was not significant. There was, however, a small amount of cross-modality priming in the RT data, prompting further exploration of this issue.
Experiment 4 included type of nonword foil (nonhomophonic or homophonic) as a between-participants variable, and the study task was changed from tangibility judgments to naming aloud. The pattern of results was similar to that of Experiment 3: robust cross-modality priming when the foils were nonhomophonic nonwords, and greatly attenuated cross-modality priming when the foils sounded like words.
In Experiment 5, visual-to-visual and auditory-to-visual priming for words were compared in the lexical decision task when the foils were either nonhomophonic or homophonic nonwords. A recognition memory condition was also included. Cross-modality priming for words was about half as large as repetition priming when nonhomophonic nonwords served as the foils, but was virtually eliminated in the context of homophonic nonwords. Same-modality priming was unaffected by the foil manipulation, and there were no modality differences in the recognition condition.
Taken together, this pattern of findings suggests that visual-to-visual word priming is mediated largely by orthographic representations, and auditory-to-visual cross-modality word priming is mediated largely by phonological representations.