The first emoji was created in Japan in the late 1990s to cut down on the time and data it takes to spell out a word. Since then, these little 'picture words' have become a staple of modern discourse, used to share inside jokes, express emotion, or add flavor to a conversation.
But while it might be easier for a computer to send and receive emojis rather than words, it requires a little more effort from us mere mortals.
New research suggests most people can easily understand an emoji when it replaces a word directly – like an icon for a car instead of the word 'car' – yet it takes us about 50 percent longer to comprehend the icon.
The slight delay probably exists because our minds interpret these images as pictures, not as words, the authors argue, which requires an extra step of processing.
First, our brains must recognize the image before our eyes, and then, we must match that image to a word. If we simply read a word, we get there sooner.
That might sound obvious, but surprisingly little research has been done on whether we interpret emojis as pictures or as words, especially when used as direct substitutes for more formalized language.
So to figure out how our brains read sentences with emojis, researchers in Germany set up a self-paced online reading study among 53 native German speakers.
In the experiment, participants were given a sentence one word at a time on the screen. When they had finished reading a word, they pressed a key on the keyboard to trigger the next word. Instead of including only words, some of the sentences replaced a word with an emoji.
After reading each sentence, the participants were asked questions to make sure they had understood correctly. By measuring the reading time for each word, the researchers found most people could accurately comprehend sentences with an emoji replacing a word, although it took them about 350 milliseconds longer than when the sentence contained only words: 456 milliseconds for the word versus 804 milliseconds for the emoji.
That's still quite quick, all things considered. Even more impressive, when a word is replaced with an emoji whose name merely sounds the same – like the image of a rodent mouse standing in for a computer mouse, or a palm tree representing the palm of a hand – it only takes our minds about 900 milliseconds longer to figure out what it means.
Within a second, our brains seem capable of pulling up a whole lexicon of words that sound similar and might match the picture we are seeing, before selecting the best-fitting homophone.
"In the first step, a visual conceptual activation takes place," the authors explain.
"If this step is not enough for the generation of a meaningful utterance, phonological information from the lexical entity is retrieved in order to access additional meanings, and the original activated concept must be suppressed."
Based on these findings, the authors of the paper argue emojis are interpreted in a context-dependent way.
For instance, if an emoji directly substitutes for a word, our brains don't bother to pull up a complete lexical representation of the original word, including how it sounds.
But if the emoji only sounds like the word it replaces, then our brains do.
People with high emoji literacy didn't perform any better on the homophone emoji task, which suggests their brains aren't especially practiced at pulling up a word's full lexical representation; they likely don't do that very often when reading emojis in a text.
"In this case, the fact that someone is used to emoji is no longer helpful," explains linguist Tatjana Scheffler from the German Studies Institute at Ruhr-Universität Bochum.
"Participants who use emoji more often read the homophone emoji just as slowly as the others. This is also supported by the fact that those test participants who self-assessed as using emoji often, read matching emoji more quickly."
It will be interesting to see whether future research can replicate these findings among larger cohorts and different languages. Scheffler plans to conduct a similar study with people living with schizophrenia since many of them have difficulty identifying non-literal meanings.
The study was published in Computers in Human Behavior.