If you’re baffled by the title, don’t fret: “House Wheat This Hound!” means nothing in its printed form. When said out loud, however, a listener can find meaning: How sweet the sound!
Here’s what’s interesting: the listener must be a different person, someone who is not looking at the text. Perhaps that’s why we find “mad gabs” like this one so compelling: Even when we know what we’re supposed to hear, our eyes continue to interfere with our ears, and we are fascinated.
For teachers of English, recognizing our bias toward the visual is a first step toward understanding why the teaching of spoken English tends to play second fiddle to language in print.
Take the topic of grammar, for example: Though grammar is an underlying feature of all language skills, we tend to teach it primarily as an aspect of writing, largely ignoring it as an aspect of speech. It’s as if the relative permanence of the written word captures our minds while holding our ears hostage.
Why do we do this? Dr. Robin Barr, linguist in residence at American University in Washington, DC, explains:
When we hear a sound, our brain first determines if it’s language or not. If it’s language, it gets sent over to Broca’s area for processing. Next, Broca’s area, combined with mirror neurons for the vocal tract, determines what the speaker probably did to make that sound. Your brain uses information from the language you already know, plus any other information that helps it make the decision. Sometimes, the visual information leads you astray.
Here’s a mind-blowing example with consonants: the McGurk Effect. The mouth positions you see help your brain decide what sound you hear.
If the McGurk Effect seems a bit far out, consider this video of “Obama’s Elf,” in which the clever author uses subtitles and animation to completely change how you hear this song, perhaps for all time.
You can observe the same effect in so-called “backmasking,” where people claim to hear secret, usually subversive, messages in music or speech played backwards. In fact, these messages are complete gibberish — unless you’re reading the subtitles at the same time. Then you suddenly hear the “hidden” message too, all because your eyes are bullying your ears into hearing what you’re looking at.
The Stroop Effect is another way of understanding the Obama’s Elf effect: The words you see overshadow the colors you see, as shown in this sample online test.
Now that we know our eyes can influence what we hear, what are the implications for us as language teachers?
We need to recognize that we are psycholinguistically biased toward text, and in our “teaching” mode perhaps even more so. Consider how automatic it is to say a key word and write it on the board at the same time. So—put the chalk down and apply these three principles:
1. Present language in its spoken form first so that learners are free to grapple with it acoustically and without the interference of letters and words. Don’t forget to give learners sufficient “wait time” (silence) after modeling a phrase or asking a question.
2. Don’t read along when your learners read aloud. If you follow the text while they’re speaking, you’ll ascribe greater comprehensibility to their spoken English than they’re actually achieving. Instead, look away and truly listen. In this way, you’ll be able to assess the reader’s pronunciation more objectively.
3. Provide learners with ample opportunity to explore the intersection of written and spoken English. Grammar dictation (aka dictogloss, aka collaborative reconstruction) is an especially powerful classroom activity that will provide you with a window on your learners’ own text-influenced perceptions.
Do you know of an activity that helps teachers and learners explore language and the eye-to-ear connection? If so, please share in the comments below. Meanwhile, enjoy the McGurk Effect and Obama’s Elf; thanks to your brain, they never get old!