This month, OpenAI announced that its new chatbot, ChatGPT, reached one million users in five days, inspiring feverish social media chatter as users asked it to respond to a wide range of amusing and/or unsettling prompts. The tool is indeed impressive, with implications for everything from education and class assignments to journalism and marketing. But while its output is entertaining, sometimes funny, and fluent — convincingly so — it is also unreliable, generating responses that can be spectacularly wrong. In an information environment in which trust is extremely low and mis- and disinformation are rampant, ChatGPT’s parlor trick of human mimicry pours gas on an already flaming dumpster fire.
We know from years of research that people will always use technologies in ways their creators did not intend. In other sectors and industries, governments and governance bodies create rules, laws, and regulations to constrain malicious or dangerous uses of potentially harmful products. But advances in artificial intelligence and algorithmic, data-centric technologies have slipped the leash, operating largely outside such assessments and controls. With the United States finally beginning to take steps toward putting regulations in place (as other jurisdictions, like the EU, have already moved to do), it’s time to accelerate that work.
With that in mind, I have three predictions. The first is that we will see ChatGPT and tools like it used in adversarial ways intended to undermine trust in information environments, pushing people away from public discourse and toward increasingly homogenous communities. Second, I predict that we’ll see a range of fascinating on-the-ground experiments and research into how we as a society adapt to image and text generation tools like ChatGPT and DALL-E — how to use these incredible advances in ways that truly benefit society while limiting harms, particularly to the most vulnerable. Finally, I predict — and hope — that we will see growing attention at the federal level to building meaningful guardrails around the development and deployment of these and other AI systems: guardrails that account for their costs to society and put the protection of fundamental rights and freedoms over pure technical innovation.