https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/

A news site used AI to write articles. It was a journalistic disaster.

By Paul Farhi
January 17, 2023

When internet sleuths discovered last week that CNET had quietly published dozens of feature articles generated entirely by artificial intelligence, the popular tech site acknowledged that it was true — but described the move as a mere experiment.

Now, though, in a scenario familiar to any sci-fi fan, the experiment seems to have run amok: The bots have betrayed the humans.

Specifically, it turns out the bots are no better at journalism — and perhaps a bit worse — than their would-be human masters.

On Tuesday, CNET began appending lengthy correction notices to some of its AI-generated articles after Futurism, another tech site, called out the stories for containing some “very dumb errors.”

An automated article about compound interest, for example, incorrectly said a $10,000 deposit bearing 3 percent interest would earn $10,300 after the first year. Nope. Such a deposit would actually earn just $300.
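The arithmetic is easy to check. Here is a minimal sketch in Python, using the figures from the example above (annual compounding is assumed, since the story doesn't specify a schedule):

```python
# Compound interest on the example from the story: $10,000 at 3 percent.
# Assumes annual compounding; the original article did not specify.
principal = 10_000
rate = 0.03

balance = principal * (1 + rate)          # balance after one year
interest_earned = balance - principal     # what the deposit actually earns

print(f"Balance after year one: ${balance:,.2f}")         # $10,300.00
print(f"Interest earned:        ${interest_earned:,.2f}")  # $300.00, not $10,300
```

The bot's mistake, in other words, was conflating the ending balance with the interest earned.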

More broadly, CNET and sister publication Bankrate, which has also published bot-written stories, have now disclosed qualms about the accuracy of the dozens of automated articles they’ve published since November.

New notices appended to several other pieces of AI-generated work state that “we are currently reviewing this story for accuracy,” and that “if we find errors, we will update and issue corrections.”

Artificial intelligence has been deployed to handle facial recognition, recommend movies, and auto-complete your typing. The news that CNET had been using it to generate entire stories, however, sent a ripple of anxiety through the news media for its seeming threat to journalists. The robot-brained yet conversational ChatGPT can produce copy without lunch or bathroom breaks and never goes on strike.

Until last week, CNET had coyly attributed its machine-written stories to “CNET Money Staff.” Only by clicking on the byline would a reader learn that the article was produced by “automation technology” — itself a euphemism for AI.

The company came clean after a sharp-eyed marketing executive named Gael Breton called attention to the labels on Twitter. CNET subsequently changed the bylines to “CNET Money,” added some clarification (“this article was assisted by an AI engine”) and further stipulated that the stories were “thoroughly edited and fact-checked by an editor on our editorial staff.”

If that’s true, “then this is primarily an editorial failure,” said Hany Farid, a professor of electrical engineering and computer science at the University of California at Berkeley and an expert in deepfake technologies.

“I wonder if the seemingly authoritative AI voice led to the editors lowering their guard,” he added, “and [were] less careful than they may have been with a human journalist’s writing.”

CNET’s robot-written copy is generally indistinguishable from the human-produced kind, although it’s not exactly snappy or scintillating. It’s, well, robotic: serviceable but plodding, pocked by cliches, lacking humor or sass or anything resembling emotions or idiosyncrasies.

“The choice between a bank and credit union is not one-size-fits-all,” reads one AI-written story published by CNET in December. “You’ll have to weigh the pros and cons with your goals to determine your best fit.”

Advises another bot-written story: “The longer you leave your investment in a savings account or money-market account, the more time you have to leverage the power of compounding.”

The deployment of the technology comes amid growing concern about the uses and potential abuses of sophisticated AI engines. The technology’s astonishing capabilities have led some school districts to consider banning it lest students use it to cut corners on class and homework assignments.

In a statement published last week, CNET’s editor, Connie Guglielmo, called her site’s use of AI “an experiment” aimed not at replacing reporters but at assisting their work. “The goal is to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective,” she wrote. Guglielmo didn’t respond to a request for comment.

Bankrate and CNET said in a statement on Tuesday that the publications are “actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process, as humans make mistakes, too. We will continue to issue any necessary corrections.”

Even before CNET’s grand experiment, other news organizations had used automation in more limited ways to augment and analyze their work. The Associated Press began using AI in 2014 to produce corporate earnings stories. It also has used the technology for sports recaps.

But AP’s system is relatively crude — it essentially inserts new information into pre-formatted stories, like a game of Mad Libs — compared with CNET’s machine creation of feature-length articles.
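That template-filling approach is simple to picture. Here is a minimal sketch; the template, field names and sample data are invented for illustration, since AP's actual system is proprietary:

```python
# A sketch of template-based story generation, in the spirit of the
# "Mad Libs" approach described above. Everything here is illustrative;
# it is not AP's implementation.
TEMPLATE = (
    "{company} on {day} reported quarterly earnings of ${eps:.2f} per share, "
    "{direction} analysts' expectations of ${expected_eps:.2f}."
)

def earnings_story(data: dict) -> str:
    data = dict(data)  # copy so the caller's dict isn't modified
    data["direction"] = "beating" if data["eps"] > data["expected_eps"] else "missing"
    return TEMPLATE.format(**data)

print(earnings_story({
    "company": "Acme Corp.",
    "day": "Tuesday",
    "eps": 1.42,
    "expected_eps": 1.35,
}))
```

The system never composes sentences; it only slots fresh numbers into prose a human wrote in advance, which is why its error surface is so much smaller than a free-writing language model's.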

Others have created internal tools to assess human work — such as a Financial Times bot that checks to see if their stories quote too many men. The International Consortium of Investigative Journalists has set AI loose on millions of pages of leaked financial and legal documents to identify details that deserve a closer look from its reporters.
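A tool of the kind the FT built could be as simple as a pronoun tally. The following is a rough, invented sketch of that idea, not the FT's actual code, which has not been published in detail:

```python
# A crude heuristic for gauging gender balance in a draft: count gendered
# pronouns as a proxy for who is quoted or mentioned. A real tool would use
# name databases and smarter parsing; this is illustration only.
import re

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def pronoun_balance(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "male": sum(w in MALE for w in words),
        "female": sum(w in FEMALE for w in words),
    }

draft = ("She said the deal was likely to close. He disagreed, "
         "and his spokesman echoed his doubts.")
counts = pronoun_balance(draft)
if counts["male"] > 2 * counts["female"]:
    print("Warning: this draft may quote or mention far more men than women.")
```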

Beyond flawed reporting, AI-written stories raise a few practical and ethical questions that journalists are only beginning to ponder.

One is plagiarism: Writer Alex Kantrowitz found last week that a Substack post written by a mysterious author named Petra contained phrases and sentences lifted from a column Kantrowitz had published two days earlier. He later discovered that Petra had used AI programs to “remix” content from other sources.

After all, given that AI programs assemble articles by churning through mountains of publicly available information, even the best automated stories are essentially clip jobs, devoid of new findings or original reporting.

“These tools can’t go out and report or ask questions,” said Matt MacVey, who heads an AI and local news project at the NYC Media Lab at New York University. So their stories will never break new ground or deliver a scoop.

The larger fear about AI among journalists, however, is whether it represents an existential threat. Employment in the news media has been shrinking for decades, and machines may only accelerate the problem.

“This is, perhaps, the classic story of automation reducing the need for human labor and/or changing the nature of human labor,” said Farid. “The difference now is that the automation is not disrupting manual work, but is instead disrupting highly creative work that was thought to be outside the reach of automation.”

Social-media trolls have long taunted newly laid-off reporters with the epithet “Learn to code.” Despite its evident flaws, the rise of AI reporting suggests that code may someday be the very thing driving journalists from their newsrooms.