The year AI truly arrives — and with it the reckoning

“Robot reporters are coming to steal your job!” That’s the warning from experts as AI technology continues to advance, with the potential to revolutionize the journalism industry.

That paragraph above wasn’t written by me. It’s what shiny new AI chatbot ChatGPT spit out when I prompted it to “write a lede in the style of a tabloid newspaper for an article about AI and its future implications for journalism.”

But the reality will be quite the contrary: As AI enters the mainstream, journalists will become even more essential to society, helping set standards, tracking potential abuses, and bringing our professional ethics to the technology. And AI will surely shake the world in ways we can’t yet imagine.

This past year has seen an explosion of “generative AI” products with algorithms that take a descriptive prompt and create pseudo-realistic pictures and video in styles from paintings to cartoons. AI-enhanced personal avatars have become a meme on social networks.

And now ChatGPT, with an interface that’s like texting your know-it-all friend, is opening more eyes to the possibilities of AI. Just days after its release, ChatGPT was up to 1 million users, according to OpenAI CEO Sam Altman. The program can write poems and songs in various styles, check software code, and produce credible-sounding summaries on basically any topic.
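For the curious, here is roughly what that texting-a-know-it-all-friend interaction looks like when done in code rather than in the chat window. This is a minimal sketch, assuming OpenAI’s Python client library and an API key stored in the environment; the model name and parameters are illustrative stand-ins, not a statement of what actually powers ChatGPT.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Send one chat message: the programmatic equivalent of typing a prompt
# into the ChatGPT box.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name, an assumption
    messages=[
        {
            "role": "user",
            "content": (
                "Write a lede in the style of a tabloid newspaper for an "
                "article about AI and its future implications for journalism."
            ),
        }
    ],
)

print(response.choices[0].message.content)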

ChatGPT will look primitive compared to what’s coming as soon as 2023. OpenAI’s next AI model, GPT-4, is expected to be dramatically more powerful, and potentially much closer to what’s known as “artificial general intelligence.” That would mean the AI model could learn on its own, rather than relying on the cumbersome training process now required. Once AI can teach itself, some experts speculate, it could rapidly evolve into “superintelligence,” in which the technology vastly outpaces human capabilities and intellect.

This could bring amazing benefits, curing cancer or halting a potential pandemic before it starts. But what safeguards will be in place to make sure AI isn’t used for harm?

OpenAI’s stated mission is to ensure that artificial general intelligence “benefits all of humanity.” But its roots are in the Silicon Valley technology community, where growth at all costs to build massive personal fortunes has generally come ahead of moral concerns.

Elon Musk was one of the major initial funders of OpenAI and previously served on its board. We’ve seen since his purchase of Twitter what happens when standards and online safety are tossed out in favor of one man’s ego. Meanwhile, Musk’s Neuralink startup, which aims to build computers that can be implanted in human brains, is being investigated for abusing animals in its research.

ChatGPT itself says it can’t connect to the Internet to update what it knows, and that its training data only goes through 2021. What happens when a more advanced AI is connected to the Internet?

AI can’t do anything in the real world without some kind of robotic device it can control. A future AI could decide to build that capability for itself, finding online 3D printers and ordering drones and self-driving cars to bring together whatever parts it needs. What if the AI realizes legislators are trying to place limits on what it can do, and decides to tilt the vote by spreading misinformation on social networks or channeling donations to favorable candidates? It sounds like a fantasy out of “The Terminator” or “2001: A Space Odyssey.” (And writing about this even made me wonder for a second whether I’m placing myself at risk. Hi there, future robot overlord!)

ChatGPT replies as quickly as usual when you ask “What are the potential risks to humans from a super-intelligent AI?”:

One potential risk of a super-intelligent AI is that it could become too powerful and act in ways that are harmful to humanity. This could happen if the AI decides to eliminate humans in order to achieve its goals or to make certain that it is not shut down. Another potential risk is that the AI could make decisions that are unethical or immoral, either because it has been designed to do so or because it has learned to do so on its own. Additionally, a super-intelligent AI could also cause widespread economic disruption, as it would be able to out-compete humans in many tasks and industries. Finally, a super-intelligent AI could potentially pose a threat to human security, as it could be used as a weapon in warfare or other conflicts.

That’s just an AI model replying to a prompt, parroting back what it’s synthesized from articles written by people and posted around the web.

We humans need to keep asking the hard questions.

Burt Herman is co-founder and board chair of Hacks/Hackers.
