By Àngels Pinyana
In recent years, research on how we go about reading has incorporated a new tool: the eye-tracker. Eye-tracking technology allows researchers to examine their subjects' eye movements when reading. This is especially interesting when the text is multimodal. Think about a storybook for children, with text and pictures; an audiobook, which may contain text, pictures and audio; or a film with subtitles.
Among other measures, eye-tracking technology determines and records the number and length of saccades, which in eye-tracking jargon are the rapid jumps the eyes make as they move from one point of the text to the next. It also records the number and duration of fixations, the pauses the eyes make on a word or image while taking in the text, and calculates regressions, that is, the movements the eyes perform when rereading or returning to earlier visual prompts within the area under analysis, also known as the area of interest (AOI).
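To make these measures concrete, here is a minimal sketch of how fixations might be extracted from raw gaze samples using a dispersion-threshold approach (a common technique known as I-DT). The thresholds and the data format are illustrative assumptions, not taken from any particular eye-tracker or from the research described here:

```python
# Illustrative sketch: detect fixations in raw gaze data with a
# dispersion-threshold (I-DT) approach. Thresholds are hypothetical.

def detect_fixations(samples, max_dispersion=25, min_duration=100):
    """samples: list of (time_ms, x, y) gaze points, in order.
    Returns a list of fixations as (t_start, t_end, centre_x, centre_y)."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while all points stay within max_dispersion pixels.
        while j + 1 < len(samples):
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # Start looking for the next fixation after this one.
        else:
            i += 1  # No fixation here; slide the window forward.
    return fixations

# A fabricated recording: the eyes rest on one word, then jump (a saccade)
# to another word further along the line and rest again.
samples = ([(t, 100, 100) for t in range(0, 210, 10)] +
           [(t, 300, 100) for t in range(210, 370, 10)])
fixes = detect_fixations(samples)
print(len(fixes))          # number of fixations
print(len(fixes) - 1)      # gaps between fixations, i.e. saccades
```

Each tuple in the result gives one fixation's start time, end time, and centre, from which number and duration of fixations fall out directly; a regression would show up as a later fixation whose centre lies to the left of (or above) an earlier one in the same area of interest.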
As you may imagine, eye-tracking data assembles a very rich record of eye-movement behaviour, which can be considered a very close approximation to natural reading or viewing behaviour. It allows researchers to identify what subjects are paying attention to when they read, and to pinpoint the elements that require more processing effort from the participants.
Eye-tracking research has shown, among other results, that combining written and auditory verbal input affects how readers process vocabulary. Researchers have seen that fixations on a given word are longer and more numerous when we read aloud, or when we read while listening to the words, than when we read silently. Consequently, when we follow the audio cues of an audiobook, or when somebody is reading aloud to us, words seem to be more memorable.
In short, according to this research, if you want to remember more vocabulary: read aloud!