Link Between Visual and Auditory Aspects of Language

When We Read, We Recognize Words as Pictures and Hear Them Spoken Aloud

This Scientific American article sheds light on an issue of great interest to neuroscientists and philosophers of language. Neuroscientific methods have their limits, but the researchers discussed in the article found a strong correlation between activity in particular neural structures and word recognition.

The article refers to both the Visual Word Form Area and the Fusiform Face Area, which occupy corresponding locations in opposite hemispheres. In humans who have not acquired written language, both areas are claimed to be devoted to face recognition; once reading is learned, the Visual Word Form Area instead recognizes words by their shapes and evokes the corresponding auditory representations.

This bears directly on the idea of ‘inner speech’ while reading, and harks back to David Marr’s computational theories of vision. But could these findings also undermine the hypothesis that our sensory systems are informationally “encapsulated”?

If this process converts visual data into auditory representations (we apparently “hear” written words in our heads), it fits very well with Peter Carruthers’s idea of a conceptual module (which is further explained here). This would suggest that an intermediate “module” handles all linguistic data, collating it into a single format before its information is passed to “central” or executive mental faculties.
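To make that architectural claim concrete, here is a minimal toy sketch (all names hypothetical, illustrating the idea rather than any actual neural mechanism): a single language module accepts words in either a visual or an auditory format and collates them into one shared representation, and the “central” system only ever operates on that format.

```python
# Toy sketch (hypothetical names throughout) of the modular picture described
# above: one "language module" accepts linguistic input in different sensory
# formats and collates it into a single representation before anything reaches
# the "central" faculties. Purely illustrative, not a model of the brain.

from dataclasses import dataclass


@dataclass(frozen=True)
class Concept:
    """The single format the central system consumes (hypothetical)."""
    lemma: str       # the word identified
    phonology: str   # the "inner speech" rendering we seem to hear


class LanguageModule:
    """Encapsulated: the central system never sees raw visual or auditory data."""

    # A tiny stand-in lexicon; on the real claim this mapping is learned.
    _lexicon = {
        "cat": "/kaet/",
        "read": "/ri:d/",
    }

    def from_visual(self, written_word: str) -> Concept:
        # Word recognized by its shape (the VWFA's role), then a phonological
        # code is attached, which is why we "hear" the word while reading.
        word = written_word.lower()
        return Concept(lemma=word, phonology=self._lexicon.get(word, "/?/"))

    def from_auditory(self, heard_word: str) -> Concept:
        # Spoken input arrives already in phonological form.
        word = heard_word.lower()
        return Concept(lemma=word, phonology=self._lexicon.get(word, "/?/"))


def central_system(c: Concept) -> str:
    # Executive faculties reason over the collated format only.
    return f"Thinking about '{c.lemma}' (inner speech: {c.phonology})"


if __name__ == "__main__":
    module = LanguageModule()
    print(central_system(module.from_visual("cat")))    # same downstream format
    print(central_system(module.from_auditory("cat")))  # whether seen or heard
```

The point of the sketch is simply that the central system’s interface is the shared Concept format: whether the word arrives through the eyes or the ears makes no difference downstream, which is the sense in which the module “collates” linguistic data before handing it on.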
