HADDINGTON, Scotland — In August 2017, Ben Nimmo was declared dead by 13,000 Russian bots on Twitter.
“Our beloved friend and colleague Ben Nimmo passed away this morning,” read the epitaph, which was manipulated to look as if it were from a co-worker’s Twitter account. “Ben, we will never forget you.”
The message was immediately shared thousands of times by the network of automated accounts. Notes began pouring in from worried friends and colleagues — even though Mr. Nimmo was very much alive.
It didn’t take long for Mr. Nimmo, who helped pioneer investigations into online disinformation, to figure out what was going on: He had been targeted by a shadowy group after reporting, along with others, that American far-right groups had adopted pro-Kremlin messages on social media about Ukraine. His fake death notice was a sinister attempt at disinformation, which is the spreading of falsehoods with the deliberate intent to mislead.
“That made it personal,” said Mr. Nimmo, 47, whose home address in a town near Edinburgh and other personal data, like bank details, have also been posted online.
For the last five years, Mr. Nimmo, a founder of the Atlantic Council’s Digital Forensic Research Lab, has been a leader of a small but growing community of online sleuths. These researchers serve as an informal internet police force that combats malicious attempts to use false information to sway public opinion, sow political discord and foment distrust in traditional institutions like the news media and the government.
Mr. Nimmo’s work came to the fore after the 2016 American presidential election, when intelligence agencies concluded that Russia had used Facebook and other internet platforms to influence voters. His research has since caused Facebook and other companies to ban thousands of disinformation-related accounts; he has also been tapped as an expert by governments studying foreign interference.
Now his skills are needed more than ever, as the 2020 presidential election approaches and the tactics of internet trickery have been adopted by governments, activist groups and clickbait farms in at least 70 countries. In tandem, a disinformation-for-hire industry has emerged. And domestic disinformation efforts in the United States are also on the rise.
“It doesn’t matter how much money you throw at the problem, or how many technological advances you have,” said Jenni Sargent, managing director of First Draft, a London group that tracks disinformation and trains journalists. “Without the human layer of someone like Ben dissecting the way that people use the internet, then we wouldn’t be as far ahead as we are in terms of understanding the problem and the scale.”
Mr. Nimmo’s goal is to spot disinformation early — essentially, to stamp out the fire before it spreads.
His techniques have changed as his adversaries have become more cunning. Because Facebook, Twitter and YouTube are now policing their platforms more aggressively, he is less able to rely on obvious clues like masses of automated Twitter posts and fake Facebook accounts.
So Mr. Nimmo has started looking for clues in obscure areas of the internet, like German news sites that accept unverified user-generated content and Iranian video-sharing services. Websites like Reddit, Medium and Quora are becoming popular places to create fake accounts and plant disinformation and leaks.
“Every time we catch a threat actor, you can bet that the other ones will change their tactics to try and keep ahead,” he said.
More interference is coming in the 2020 campaigns, Mr. Nimmo said. He said he was particularly worried about a “hack-and-leak” operation like the one in 2016 when Russian operatives took information from the Democratic National Committee’s servers and got it published online. Loaded with juicy and accurate information, such leaks go viral on social media and can be irresistible to the news media.
Mr. Nimmo’s path to disinformation research was not an obvious one. An Englishman who studied literature at Cambridge University, he worked as a scuba diving instructor in Egypt and as a travel writer and journalist in Europe. In 2007, while reporting on violent demonstrations in Estonia for Deutsche Presse-Agentur, he was head-butted by a protester, breaking his nose and leaving it off center to this day.
In 2011, he began working at the North Atlantic Treaty Organization as a press officer. While there in 2014, he saw how Russia had worked to muddy perceptions of its invasion of Crimea that year, including misrepresenting Russian soldiers as “local self-defense forces.”
“There was this constant drumbeat of Russian disinformation,” he said.
Inspired to dig deeper, he became an independent researcher that same year. He moved to Scotland to be closer to family and began doing contract work on Russia for pro-democracy think tanks like the Institute for Statecraft.
During the 2016 American election campaign, Mr. Nimmo helped found the Atlantic Council’s Digital Forensic Research Lab, a Washington-based group that studies online disinformation. Facebook made him and the lab among the first outsiders allowed to study disinformation networks on its site before the company shut the networks down.
Last year, Mr. Nimmo became the head of investigations for the social-media monitoring company Graphika.
“He was there well before this was a trendy thing to do,” said Alex Stamos, who is conducting similar disinformation research work at Stanford University and was previously Facebook’s chief security officer. Both Graphika and the Digital Forensic Research Lab have received funding from Facebook.
Mr. Nimmo works from his home atop a hill and next to a grain farm in the small Scottish town of Haddington. To ferret out disinformation networks, he relies on open-source digital tools: the Wayback Machine to find internet pages that have been deleted; Amnesty International’s Citizen Evidence Lab, which provides information about YouTube videos; and Sysomos for spotting social media trends.
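The Internet Archive’s Wayback Machine, the first of those tools, exposes a public “availability” API for looking up archived copies of a page. A minimal sketch of how a researcher might check whether a deleted page was captured (the helper names here are my own, not from any actual tooling described in the article):

```python
# Illustrative sketch: query the Wayback Machine "availability" API
# to find the most recent archived capture of a (possibly deleted) page.
import json
import urllib.parse
import urllib.request


def parse_snapshot(data):
    """Extract the closest available capture URL from an API response dict."""
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None


def latest_snapshot(url):
    """Return the URL of the most recent Wayback Machine capture, or None."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as resp:
        return parse_snapshot(json.load(resp))
```

Splitting the response parsing from the network call keeps the logic testable offline; the same pattern applies to any of the lookup services researchers lean on.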
What is hard, he said, is determining whether material comes from regular people expressing a point of view or from a coordinated system linked to a government. One giveaway is when the same material is posted at the same time, or when it can be traced to an original post — “patient zero,” he said — known to be a website or social media account used by a government.
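The first giveaway, identical material posted near-simultaneously by many accounts, can be sketched as a simple heuristic (an illustrative toy, not Mr. Nimmo’s actual methods; the function name and thresholds are invented for this example):

```python
# Illustrative heuristic: flag message text that many distinct accounts
# post within the same short time window, one crude signal of coordinated
# rather than organic activity.
from collections import defaultdict


def flag_coordinated(posts, window=60, min_accounts=5):
    """posts: iterable of (account, text, unix_timestamp) tuples.
    Returns the set of texts posted by at least `min_accounts` distinct
    accounts inside a single `window`-second bucket."""
    buckets = defaultdict(set)
    for account, text, ts in posts:
        # Group identical text by coarse time bucket, collecting accounts.
        buckets[(text, ts // window)].add(account)
    return {text for (text, _bucket), accounts in buckets.items()
            if len(accounts) >= min_accounts}
```

A real investigation would go much further — tracing the earliest “patient zero” post and checking account provenance — but even this crude grouping separates a burst of synchronized copies from scattered organic posts.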
“The magic of the internet is there is always another clue to find,” he said.
Mr. Nimmo speaks fluent Russian, French, German and Latvian — and is conversant in several other languages — and teaches himself by buying copies of the “Lord of the Rings” trilogy in languages he wants to learn. That fluency makes it easier for him to spot clues in disinformation posts, like the mistakes a native Russian speaker makes when writing in English.
The amount of disinformation has increased recently. In October, Mr. Nimmo’s team at Graphika explained how pro-China propaganda accounts targeted Hong Kong demonstrators. In November, he helped expose an operation that used fringe platforms to leak a sensitive British trade document before Britain’s general election. And in December, he analyzed Facebook’s first big takedown of fake accounts with profile pictures generated by artificial intelligence.
Most recently, he has investigated Iranian disinformation after the United States killed the head of Iran’s security machinery, Maj. Gen. Qassim Suleimani, last month. Mr. Nimmo is also tracing Russia-linked campaigns, including an effort to blame the United States for the downing of Ukraine International Airlines Flight 752, which Iran said it mistakenly shot down last month, killing 176 people.
This past week, after technical problems delayed the reporting of results from the Iowa caucuses, Mr. Nimmo was on alert for disinformation. There was little, he said, and he mainly found gleeful trolling from Republican supporters and right-wing groups.
Mr. Nimmo has sometimes made mistakes in identifying culprits. In 2018, he pinpointed a number of Twitter accounts as “Russian trolls,” when one of them was a British citizen sympathetic to Russia.
One recent evening, he started work at 7, chasing leads on Iranian disinformation related to the killing of General Suleimani. One suspicious Twitter account provided clues that led to various YouTube videos. From there, Mr. Nimmo found links to Facebook and Instagram pages. After a few hours, he had traced how memes from a suspicious pro-government Iranian website had traveled elsewhere on the web.
By the time Mr. Nimmo went to bed after 2 a.m., he had more than 50 tabs open on his browser, but no definitive evidence of an Iranian government campaign.
“He’s very careful,” said Camille François, the chief innovation officer at Graphika, who hired Mr. Nimmo. “It’s important to detect them, and to study them, but it’s also important not to overreact to the threat.”
That’s especially true now that foundations, universities and companies have poured money into efforts to examine disinformation, luring new researchers eager to spot such activity. Mr. Nimmo said he was concerned that investigators could have an incentive to sensationalize material that cannot be accurately attributed and argued that new standards were needed.
“When we look back on 2020, I hope we’ll see it as the year when disinformation research passed the tipping point and really started becoming a mainstream discipline,” he said. “We need to make that happen, because the threat actors aren’t going away.”