Wed. Feb 21st, 2024

Will AI News Reporters Replace Humans? Will anyone care… or will they?

At first I was “meh”—then “hmm.” My editor sent me a link to an article under the headline “Google tests AI tool capable of writing news articles”. The disturbing part was that she followed it up with “lol”. My pride as a human made me think a little deeper: not about whether generative AI can write news articles (it already can), but about what this means for news readers and news/information consumption.

Could it be true? A decade from now, will I be alone and penniless, with a set of VR goggles for company, sitting in a city flat reading LLM-generated news articles about how Bitcoin and the BSV blockchain will drive the digital economy of the future? I might even be able to use the goggles to have an interactive live debate with some artificially generated news anchor, giving me all the unbiased updates and analysis I need to stay optimistic.

News writing seems like a perfect fit for LLMs and machine learning. It’s very formulaic: the first paragraph contains the “hook” and main points of interest. The second, or “nut graph,” explains why the article exists and why it matters. The rest of the article consists of supporting details and quotes, plus a conclusion in the final graph (which I never read) to wrap it all up. Even as a man who writes news, it often feels like muscle memory does more of the work than actual brainpower or creativity (am I giving too much away here?).
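That formula is concrete enough to write down. Here is a hypothetical prompt template, a sketch of my own rather than anything a newsroom or AI vendor actually uses, that captures the structure just described:

```python
# A hypothetical prompt template encoding the news "formula" described
# above. The structural labels are standard journalism terms; the
# template text and variable names are illustrative inventions.
NEWS_TEMPLATE = """\
Write a news article with the following structure:
1. Lede: the hook and the main points of interest, in one paragraph.
2. Nut graph: why this story exists and why it matters now.
3. Body: supporting details and quotes.
4. Final graph: a short conclusion wrapping it all up.

Topic: {topic}
Style: {style}
Target length: about {words} words.
"""

prompt = NEWS_TEMPLATE.format(
    topic="AI and LLMs writing news articles",
    style="Bloomberg News",
    words=600,
)
print(prompt)
```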

The first thing I did was test it by asking ChatGPT: “Can you write a 600-word news article in Bloomberg News style about how artificial intelligence and LLMs may soon be writing news articles?”
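For anyone who wants to repeat the experiment outside the chat window, the same request can be scripted. The sketch below assumes the official OpenAI Python client (pip install openai) and an OPENAI_API_KEY environment variable; the model name is illustrative, since the ChatGPT web interface doesn’t expose one this way.

```python
# A minimal sketch of the same experiment via the API, assuming the
# official OpenAI Python client (>= 1.0) and an OPENAI_API_KEY
# environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Write a 600-word news article in Bloomberg News style "
                "about how artificial intelligence and LLMs may soon "
                "write news articles."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```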

I have to say the result wasn’t bad, if a little lackluster. ChatGPT took less than 20 seconds to write it. The grammar was impeccable and it laid out the facts. The only laugh I got was its repeated references to “Language Model Models (LLMs)”, and honestly, I didn’t expect it to get that one wrong. Ha, my job is safe!

Generative AI isn’t that great—yet

Adding to my false comfort are reports that ChatGPT may deteriorate with age. Testers noted that GPT-4 showed a dramatic drop in accuracy when given math problems, visual reasoning tests, and exams. One theory as to why, discussed on social networks, is that programmers at OpenAI, the creator of ChatGPT, may have inadvertently (or intentionally) stunted its growth by introducing limitations designed in the interests of “AI safety” and avoiding offensive answers.

My own experience with GPT-3.5 has been eye-opening in its own way: it tends to generate more lines of text apologizing and explaining why certain tasks cannot be done than it does useful (or desirable) material.

Of course, there is no point in reveling in the mistakes AIs and LLMs make at their current stage of development. Doing so reminds me of the people who said in the late 1990s that the Web would never become a mass medium because it was too slow and streaming video was impossible. You may recall the first DARPA Grand Challenge for autonomous vehicles in 2004: none of the entrants traveled more than 12 kilometers of the 240-kilometer course through the Mojave Desert. Just a year later, five vehicles completed the course, and only one fell short of the 2004 record of 12 km.

Just because some technology doesn’t work well now, we shouldn’t assume it never will. This seems like common sense, but many people keep making that mistake. OpenAI will fix any issues with GPT if it decides they are hindering the project, and it may provide different clients with different versions depending on the type of content they need to produce. In any case, setting aside the debate about whether machine learning and LLMs are forms of “intelligence” at all, judging their future performance on current examples is unwise. Assume that someday, sooner or later, generative AI will be able to produce persuasive news content. Journalists working today will have to deal with that.

One thing I noticed about ChatGPT’s article was this: if I hadn’t requested it myself and known in advance that it was automatically generated, I might not have noticed (despite the weird terminology errors). Looking at the daily news stories in the mainstream and independent media, it is often impossible to tell whether they were written by humans or by machines under human editorial supervision. Go to articles in the “General/Technical Information” or “Life/Health Advice” categories and it’s even harder to tell.

Machines that write news for machines to read

With all this in mind, we can extrapolate and predict that most of the content we read and watch in the future will be produced by generative AI. Probably much of it already is. There are many examples of online written content that is automatically generated, or at least written by indifferent and unimaginative humans, and Twitter threads full of responses that seem to come from LLMs rather than real people.

How will people react to this? Will they continue to consume (and trust) news as they do now? In recent years, there have been several media reports on polls investigating the level of public trust in the news media. The existence and increasing regularity of these polls suggests that a panic is brewing. The results show a steady decline in trust in news media, as well as deep gaps in trust levels between people of different political leanings. For example, CNN and MSNBC have less than 30% trust among Republicans, but over 50% among Democrats. Some surveys show trust levels below 50% across the board; The Weather Channel is the most trusted news source in the USA, and the only network whose average trust rating sits above 50%.

Introducing generative AI into the mix probably won’t affect trust levels, that is, if the public can even tell. Viewers/readers will assume that automatically generated content has the same biases as its human creators and will treat it the same way. We’ve all seen collage videos of human newsreaders and commentators at different stations who all seem to read from the same script. We follow accounts with human faces and names on Twitter, but suspect in the back of our minds that they aren’t “real” people with real opinions and the time to write them — at “worst” they might be AI bots, or fake accounts run by PR companies, their “personal” content astroturfed with factory-level efficiency.

A dirty secret of the news media industry in recent decades is that the content it produces isn’t really meant to inform or entertain people in the present; it’s meant to be indexed so it can be cited as a reference in future writing. Material that has been around for a long time has more value for a few reasons.

Whether it is biased, fair, accurate, proven or corrected, it will still appear in search results. The same is true in book publishing, especially academic non-fiction: the information contained in those books can be cited and referenced by others years into the future, whether or not anyone actually reads the book.

Once generative AI produces all the content and is widely assumed to do so, the real battlefield for news will move to the back end. The content itself will be window dressing, while those trying to shape public opinion and “manufacture consent” will try to influence the data generative AIs are trained on, hoping to push the needle on their cause. In the past, activists would try to flood news and social networks with content favorable to their own interests; these days, they work equally hard to remove any content that displeases them. So machines will produce content whose main purpose is to be “read” by future generative AI systems, which will produce more content… for a machine audience.

The bottom line to all of this is that, yes, news reporters will eventually be replaced by generative AI—and so will a large percentage of news consumers. Real-life humans may lose interest in the news media, treat it as a sideshow, or use it only to confirm existing biases. Is this already happening, even without the mass intervention of AI? Maybe. Can we trust news written by machines? About as much as we can trust news written by humans. The most important question for news reporters, which I have deliberately left until the end, is: will those operating news sites continue to use humans to produce content? The answer is probably yes, albeit more in an opinion-editorial or analytical commentary role. That’s why this piece is an op-ed, not a dry report on the latest developments.

Watch CoinGeek Roundtable with Joshua Hensley: AI, ChatGPT & Blockchain

New to Blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about Blockchain technology.
