Journalism and the limits of AI in the age of ChatGPT

March 16, 2023
by Jacob

A good editor wears multiple hats. There’s the hard hat for the factory floor, where the editor is in charge of quality control. They need to know their audience and pick story pitches that fit the publication. The story is like unprocessed ore: refined or rejected, tweaked on the assembly line, or ripped apart for spare parts and reworked if it can indeed be salvaged at a later date. Then the editor needs to put on the Hemingway “oysterman hat,” which facilitates the removal of superfluous lines and extraneous material that doesn’t contribute to the story. Finally, there’s the Sherlock Holmes deerstalker, accompanied by the mandatory tobacco pipe, which the editor wears in the final phase of editing, when facts are checked and the story gets the final tick of approval before the publish button is pressed.

ChatGPT fails abysmally at wearing the deerstalker hat. According to the Press Gazette, feedback from an editor at the news agency that “hired” ChatGPT raised enough alarm bells that the agency subsequently “fired” it and scrapped its experiment with AI publishing. ChatGPT’s first assignment was to write up a Spanish-language article on the mugging of an elderly woman at a cash machine in Buenos Aires, with this prompt: “Create news story using this source material, add names, place, when it happened: https://tn.com.ar/policiales/2023/02/27/video-un-ladron-le-pego-una-trompada-a-una-jubilada-cuando-estaba-sacando-plata-de-un-cajero-automatico/.”

The editor’s feedback listed everything ChatGPT got wrong:

  1. Got the name of the victim wrong
  2. Got the age of the victim wrong
  3. Got the location of the crime wrong
  4. Gave a description of the video that didn’t match the footage
  5. Said the perpetrator was unidentified when the source gave his name and age
  6. Said the perpetrator was at large when he had been arrested
  7. Appeared to have fabricated a quotation from the mayor of Buenos Aires
  8. Got the name of the mayor of Buenos Aires wrong
  9. Appeared to have fabricated a quotation from the victim’s children.

Having worked for years in Latin America, I suspect this may be because ChatGPT doesn’t speak Spanish very well. This has not, however, been an isolated incident. Newsrooms that are cutting costs and firing competent editors should beware of hiring a robot that is not ready to fill the void.

The next big issue on the horizon is whether publishers, already facing catastrophic losses to their coffers after sustained assaults from Silicon Valley, “have a right in law to charge ChatGPT for analysing and exploiting their content.” As the same report notes, that “is a moot point. News Corp certainly thinks they do but legal opinion is far from set on this. Unless new legislation specifically addresses the issue, it may take a test case to decide.”


Jacob

has postgrads in Cyber Law from Deakin Law School; Cyber Crime from the Griffith School of Criminology and Criminal Justice; and Cloud Computing and Virtualization from Charles Sturt. After spending the last several years consulting on tech and cybersecurity for newsrooms from México's noticiascancun.mx to South Africa's health-e.org.za, he still finds time to write in the age of ChatGPT to keep his pencil sharpened.
