Our eyes wander as we read text, and not just in the figurative sense — between a series of rapid motions called saccades, eyes remain still for just 200-300 milliseconds on average. Those movements are rich with subtext — they’re driven by cognitive processes involving vision, attention, language, and motor control — and according to new research from the University of Potsdam, Weizenbaum Institute for the Networked Society, and Leibniz Institute for Agricultural Engineering and Bioeconomy, they’re enough to identify a person pretty accurately.
A paper published on the preprint server arXiv.org (“A Discriminative Model for Identifying Readers and Assessing Text Comprehension from Eye Movements”) describes a system that learns to associate eye movement behavior — including scanpaths, or gaze patterns — with individuals.
“Identification based on eye movements during reading may offer several advantages in many application areas,” the researchers wrote. “Users can be identified unobtrusively while having access to a document they would read anyway, which saves time and attention.”
First, the team identified scanpaths that could be observed with an eye-tracking system, which they correlated with “lexical features” of the text (like word frequency, word length, number of syllables, and part of speech). The resulting generative model inferred the likelihood of a given scanpath by taking into account not just the amplitude and duration of each saccade, but the subtle differences across five saccade types:
- Refixate the current word at a character position before the current position
- Refixate the current word at a position after the current position
- Fixate the next word in the text
- Move the fixation to a word after the next word
- Regress to fixate a word occurring earlier in the text
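The five categories above amount to a small classification rule over consecutive fixation positions. The function below is a hypothetical illustration, assuming each fixation is recorded as a word index plus a character position within that word; the names and return labels are not from the paper.

```python
# Hypothetical sketch: bucketing a saccade into the five types listed above.
# Assumes fixations are recorded as (word index, character position) pairs.

def saccade_type(cur_word, cur_char, next_word, next_char):
    """Classify the saccade from one fixation to the next."""
    if next_word == cur_word:
        # Refixation within the current word, before or after the current position
        return "refix_before" if next_char < cur_char else "refix_after"
    if next_word == cur_word + 1:
        return "next_word"        # fixate the next word in the text
    if next_word > cur_word + 1:
        return "forward_skip"     # move to a word after the next word
    return "regression"           # regress to a word occurring earlier
```

A real system would apply this rule to every consecutive pair of fixations in a scanpath to build the sequence the generative model scores.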
The team used the model to derive a Fisher kernel — a function that measures the similarity of two objects — that could compare scanpaths.
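The Fisher kernel idea can be illustrated with a deliberately tiny stand-in for the paper’s generative model: score each scanpath by the gradient of a log-likelihood with respect to the model’s parameters, then compare two scanpaths via the inner product of those gradients. The single-Gaussian model of saccade amplitudes, the identity approximation of the Fisher information matrix, and all names below are illustrative assumptions, not the authors’ implementation.

```python
# Toy Fisher kernel over scanpaths, using a Gaussian model of saccade
# amplitudes as a stand-in for the paper's far richer generative model.
import numpy as np

def fisher_score(amplitudes, mu, sigma):
    """Gradient of the Gaussian log-likelihood w.r.t. (mu, sigma),
    summed over all saccade amplitudes in one scanpath."""
    d_mu = np.sum((amplitudes - mu) / sigma**2)
    d_sigma = np.sum((amplitudes - mu)**2 / sigma**3 - 1.0 / sigma)
    return np.array([d_mu, d_sigma])

def fisher_kernel(x, y, mu, sigma):
    """Similarity of two scanpaths: inner product of their Fisher scores
    (approximating the Fisher information matrix by the identity)."""
    return fisher_score(x, mu, sigma) @ fisher_score(y, mu, sigma)
```

The kernel is symmetric by construction, and a scanpath always has non-negative similarity with itself, which is what lets it serve as a similarity measure between readers.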
To test the system’s accuracy, the researchers next recruited volunteers to read 11 texts presented in a randomized order, each fitting onto a single screen. Their eye movements were recorded with an SR Research Eyelink 1000 eye tracker.
So how’d the AI perform? In a test set of 62 readers, the Fisher kernel with lexical features (the team tested at least one model without them) achieved identification accuracy of up to 91.53 percent. That’s not quite as high as fingerprints’ 99.8 percent, but the team claims it’s state of the art.
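One common way to turn a kernel like this into an identification decision is nearest-neighbor search in the kernel-induced distance. The helper below is a hypothetical sketch of that step under that assumption; the paper does not specify that this is the classifier the authors used.

```python
# Hypothetical sketch: identify a reader by finding the enrolled scanpath
# closest to the query in the distance induced by a similarity kernel.

def identify(query, enrolled, kernel):
    """Return the label of the enrolled scanpath nearest to `query`.

    `enrolled` maps reader labels to reference scanpaths; `kernel` is any
    symmetric similarity function over scanpaths (e.g. a Fisher kernel).
    """
    def dist2(a, b):
        # Squared distance in the feature space implied by the kernel
        return kernel(a, a) - 2 * kernel(a, b) + kernel(b, b)
    return min(enrolled, key=lambda label: dist2(query, enrolled[label]))
```

In an evaluation like the one described, each test reader’s scanpath would be matched against the enrolled references, and accuracy is the fraction of queries assigned the correct label.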
“We conclude that this model significantly outperforms the semiparametric model of [Abdelwahab, Kliegl, and Landwehr] in some cases, which, to the best of our knowledge, is the best published biometric model that is based on eye movements,” the researchers wrote.
It’s not the first time we’ve seen AI use eye movements to derive insights. In a study conducted by the University of South Australia, the University of Stuttgart, Flinders University, and the Max Planck Institute for Informatics in Germany, researchers described a machine learning model that can predict traits like sociability, curiosity, and conscientiousness from a person’s eye movements alone.