Synthetic media forces us to understand how media gets made
So you’ve been skeptical about deepfakes ever since you read a hyperbolic headline in 2018. And you’ve been right — sort of. Faceswap deepfakes haven’t rocked U.S. or European politics or permeated every social media ecosystem, and mis-contextualized shallowfake videos outnumber them thousands to one in misinformation and disinformation. But globally, false claims of deepfakery increasingly confound publics and journalists, and the foundational problem of non-consensual sexual images targeting women and LGBTQI+ people festers without solutions.
2023 will be the year in which we take seriously the measures to prepare for, but not panic over, synthetic media and its big sibling, the broader phenomenon of “generative AI.” The rapid pace and public visibility of developments in this space — including the accessibility of Stable Diffusion, the image-generating DALL-E, forward-looking text-to-video research like Imagen and Phenaki, and the recent popularization of consumer tools like Lensa — reflects an underlying swell of technological advances, as well as potential profits taking the driver’s seat over ethics. These tools are rife with potential for distributed creativity and journalistic storytelling. But making it easier to fake realistic scenes of real people doing things they never did, or sexualized images of women, or nonsensical floods of fake war crimes images, is no laughing matter.
What form is better preparation likely to take? Witness’s own global consultations in our Prepare, Don’t Panic initiative on synthetic media have surfaced a number of priorities: equity in access to detection tools and capacities for journalists globally and in smaller organizations, efforts to counter the insidious power of deepfake claims around real footage, strong platform policies, and legislative options. But here I’ll focus on authenticity and provenance infrastructure, which shows the work of how media was made, where it came from, how it was edited, and how it was distributed.
Early efforts like the Coalition for Content Provenance and Authenticity (C2PA), whose technical standards launched in 2022, and the Content Authenticity Initiative show that this space will be ripe for innovation — as long as we don’t default to assuming it’s only about tamper-proof immutability of origin images, rather than understanding the nuance of how media is made. Authenticity and provenance efforts focus on layers of context about media integrity and origins, available to everyone from a viewer who really wants to understand how a creative image was made to a professional investigator or journalist. They are a proactive step to engage with a manipulated and synthetic media world. Within the C2PA coalition, Witness focused on the global, human rights ramifications of these types of standards, and how they can be done right: user-centric, privacy-respecting, attentive to global journalistic contexts, and resistant to legislative weaponization.
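To make the idea concrete, here is a minimal, purely illustrative sketch of what a provenance manifest looks like in principle: a content hash binding the manifest to the asset, a declared edit history, and a signature so neither can be altered unnoticed. This is not the C2PA specification or API — real systems use certificate-based signatures and richer assertion schemas; the key, field names, and action labels below are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the demo; real provenance systems sign
# manifests with PKI certificates, not a symmetric secret.
SECRET_KEY = b"demo-signing-key"

def make_manifest(asset_bytes, actions):
    """Build a toy provenance manifest: a content hash plus an edit history."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "actions": actions,  # e.g. [{"action": "ai_generated"}, {"action": "cropped"}]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes, manifest):
    """Check that the asset is unmodified and the manifest itself is untampered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was altered after signing
    return claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

image = b"...raw image bytes..."
m = make_manifest(image, [{"action": "ai_generated", "tool": "example-model"}])
print(verify_manifest(image, m))         # True: asset and history intact
print(verify_manifest(image + b"x", m))  # False: asset no longer matches
```

The point of the sketch is the layered design the article describes: provenance is a record of how media was made and edited, not a truth verdict about its content — the manifest only tells you what was disclosed and whether that disclosure has been tampered with.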
In 2023, we’ll see these expanding authenticity and provenance technology efforts intersect with the evolving TikTokification of media production, focused on remix, playful editing, and integrated AI effects. A labeling and disclosure mindset for creators and journalists alike will intermingle with the creative potential of showing how media is created and revealing the production process. We’ll start to extract ourselves from the current idea that disclosure and labeling are about singling out or discerning misinformation or malice. Want a glimpse of the baby steps? Look at your For You page on TikTok, which shows the audio a creator used or the effect they incorporated.
When it comes to generative AI systems, we’re likely to see efforts (and pushback) to bake disclosure of how media is made into these models’ outputs, as well as into the combinations of real and synthetic media that will become more commonplace. It’s not just soft norms pushing this way, like efforts toward a Synthetic Media Code of Conduct, but also recent moves in Europe to mandate disclosure within the draft EU AI Act.
These efforts will not sufficiently address non-consensual sexual images. Those threats expand in scope with open image-generating systems that permit depictions of real individuals and sexual imagery alike. The problem with these images is not one of “knowing it’s a deepfake”; even more acutely than in other scenarios, it’s the weaponization of lifelike images, irrespective of their perceived “reality.”
In 2023, as we start to separate the hype from the (un)reality of deepfakes, authenticity and provenance technologies will be one place we can look to help fortify the truth and pull back the curtain on delightful creativity, by creating clear signals about how a piece of media has been created, generated, manipulated, and edited.
Sam Gregory is director of programs, strategy, and innovation at Witness, the global human rights and civic journalism network.