Happy Wednesday! Today I’m excited to announce that my colleague Will Oremus is joining The Technology 202 as a co-host, writing one edition a week, including today’s. If you’re not already familiar with his work, Will joined The Post in 2021 as a tech news analysis writer and has helped lead our coverage of antitrust, Elon Musk and artificial intelligence.
Will’s addition to the team means you’ll now typically receive the newsletter three times a week, Tuesday through Thursday. We’ll also be experimenting with the tipsheet’s format in the coming weeks, so don’t be surprised if you spot a few changes. With that, take it away, Will!
I’ve been a Tech 202 reader since it launched in 2018, so I’m thrilled to be part of it. I’ll be writing each week about the ideas driving the tech industry, its critics and policymakers. Send feedback, newsletter ideas and tips to will.oremus@washpost.com.
Meta’s new AI labels won’t solve the “Taylor Swift problem”
In a world where it’s getting harder to tell the difference between the work of humans and that of machines, experts are welcoming Meta’s new plan to label AI-generated content on its platforms. But it’s worth being clear about which AI problems it could help to solve — and which ones it won’t.
The world’s largest social media company announced Tuesday that it will begin putting labels on realistic-seeming images that users post on Facebook, Instagram and Threads when it can tell they were generated with AI. The goal is to make sure users don’t get fooled into thinking an AI fake — say, the pope in a puffer coat — is the genuine article.
The move aligns with the Biden administration’s executive order on AI last fall, which urged “watermarking” — invisible signals built into images that identify them as AI-generated — as a policy priority. Meta already puts both watermarks and visible “Imagined with AI” labels on images created with its own AI tools. But now it will work with other companies on industry-standard signals that they can all use to recognize AI images wherever they crop up. Meta said it will also ask users to label AI-generated images they upload, though it was not immediately clear how it will enforce that.
It’s a worthy step in the battle to contain an explosion of AI fakery, experts told the Tech 202. But it won’t do much to contain some of the most harmful categories of AI-generated content.
For one thing, Meta’s labeling system will only work on images that have already been watermarked or are identified as AI-generated in their metadata. Companies such as Google, Microsoft, Adobe and Midjourney either already do that or have committed to work on doing it. Together, their AI tools amount to a big share of the mainstream market.
But other AI image tools don’t, and they could be used to circumvent the content guardrails of mainstream platforms. If watermarking catches on, those will likely be the tools that trolls, hoaxers and propagandists flock to.
Meta has acknowledged that watermarking isn’t a full solution and said it’s working on technologies to identify AI-generated images and videos even when they aren’t watermarked. But so far, those systems remain far from reliable, missing lots of AI content and flagging some real content as fake. For now, the company isn’t even attempting to apply AI labels to unwatermarked images.
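To make the mechanics concrete, here is a rough sketch of what checking for that kind of metadata signal can look like. This is a simplified illustration, not Meta’s actual detection pipeline: it assumes an image carries the IPTC “digital source type” marker for AI-generated media (the "trainedAlgorithmicMedia" value used by the industry metadata standards), and it scans the raw file bytes rather than properly parsing the image’s XMP packet or a C2PA manifest.

```python
import sys

# IPTC "digital source type" value that marks media as AI-generated.
# Generators that follow the metadata standard embed this URI fragment
# in the image's XMP/IPTC metadata. (Assumption for this sketch.)
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_labeled(path: str) -> bool:
    """Crude check: scan the raw file bytes for the AI-generation tag.

    A real system would parse the XMP packet (and any C2PA manifest)
    properly; this only shows where the signal lives.
    """
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "AI metadata found" if looks_ai_labeled(path) else "no AI metadata"
        print(f"{path}: {verdict}")
```

Even at this level the fragility is apparent: a generator that never writes the marker, or a re-encode or screenshot that strips the metadata, defeats the check entirely. That is the gap the unwatermarked-detection work is meant to fill.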
Then there’s the “Taylor Swift problem” — the problem of AI-generated imagery being used not to deceive the viewer, but to harass or humiliate the subject of the image.
When fake pornographic images of Taylor Swift circulated on X and other social networks last week, “the question of whether it was authentic or not was not really the point,” said Gili Vidan, a professor of information science at Cornell University. She noted that one of the trending topics on X that led millions of users to the images was “Taylor Swift AI,” indicating that people already knew they were fake. The point was to degrade Swift in a public way, and labeling the images wouldn’t change that.
To be fair, Meta’s platforms were reportedly much quicker than X to remove the images, even without the labels. The company did so under its policies against sexually explicit content, not because the images were made with AI.
Vidan said the Swift episode served as a high-profile example of a much broader problem of AI fakes that target women and girls sexually. Typically, the victims don’t have the fan base or the clout of a Taylor Swift to pressure platforms to take those images down. These kinds of AI fakes remain an old-fashioned content moderation problem, she said — the kind that doesn't lend itself to a tidy technical fix.
That doesn’t mean Meta’s AI labels, or watermarking in general, are a fool’s errand.
Ahead of a big election year, both in the U.S. and globally, labeling huge swaths of AI-generated images created with mainstream tools will, if nothing else, “put more friction into the system” by which AI fakes are generated, said David Broniatowski, an engineering professor at George Washington University. “It’s nice to see that they’re taking the problem of false content at scale seriously.”
But that problem is bigger than just AI, he added. In some ways, both AI and social media suffer from the same basic flaw — they’re systems designed to generate and spread information without regard to whether it’s true.
It’s worth noting here that Meta’s AI labeling announcement came a day after its Oversight Board criticized the company for its “incoherent” and “confusing” policies on manipulated media, as my colleague Naomi Nix reported. That rebuke was prompted by an altered video of President Biden, which Meta had said did not violate its rules.
“Speaking as an engineer,” Broniatowski said, “until we actually start designing systems from the ground up to take into account whether information is true or false, then all we’re doing is putting a Band-Aid on a hemorrhaging wound.”
Hill happenings
White House renews calls on Congress to extend internet subsidy program (Associated Press)
Inside the industry
Meta announces new updates to help teens on its platforms combat sextortion (TechCrunch)
Bluesky, a trendy rival to X, finally opens to the public (By Will Oremus)
Privacy monitor
WhatsApp Chats Will Soon Work With Other Encrypted Messaging Apps (Wired)
Zuckerberg’s Secret Weapon for AI Is Your Facebook Data (Bloomberg)
Workforce report
Amazon Is Laying Off Hundreds in Its Health Care Operation (Bloomberg)
Trending
Everyone is using the Apple Vision Pro all wrong (By Shira Ovide)
Daybook
- The House Ways and Means Committee holds a hearing, “Advancing America’s Interests at the World Trade Organization’s 13th Ministerial Conference,” Wednesday at 9 a.m.
- The Committee on House Administration holds a hearing, “American Confidence in Elections: Confronting Zuckerbucks, Private Funding of Election Administration,” Wednesday at 9 a.m.
- The Senate Finance Committee holds a hearing, “Artificial Intelligence and Health Care: Promise and Pitfalls,” Thursday at 10 a.m.
- FTC Chair Lina Khan speaks at the DCN Next Summit event, Friday at 9:30 a.m.
Before you log off
That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings via Twitter or email.