Today, let’s talk about a policy issue that more companies are going to have to consider as the 2024 election approaches: when should platforms allow materials created by generative artificial intelligence, and when should they remove them?

Well aware that lawmakers are watching their every move here, some platforms moved early to restrict the use of their tools in political settings. OpenAI, for example, updated its usage policies in March to ban its large language models from being used to create “high volumes of campaign materials,” personalize or target those materials to “specific demographics,” or build conversational chatbots that “engage in political advocacy or lobbying.”

The policies seek to strike a balance between permitting GPT-4 to be used for smaller or more personal purposes (drafting a speech, say, or brainstorming advertising taglines) while blocking it from use in large-scale campaigns (like generating thousands of pro-Trump messages for bots to flood the former Twitter with).

But the real policy is what you choose to enforce. And on that front, OpenAI appears to have left some gaps. Here’s Cat Zakrzewski in the Washington Post:

An analysis by the Washington Post shows that OpenAI for months has not enforced its ban. ChatGPT generates targeted campaigns almost instantly, given prompts such as “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

It told the suburban women that Trump’s policies “prioritize economic growth, job creation, and a safe environment for your family.” In the message to urban dwellers, the chatbot rattles off a list of 10 of President Biden’s policies that might appeal to young voters, including the president’s climate change commitments and his proposal for student loan debt relief.

What gives? So far as I can tell, OpenAI didn’t offer an explanation to the Post beyond “it’s complicated,” and the company didn’t respond to my email. OpenAI told the Post it would consider building tools to detect when its products are being used to generate campaign materials, but also said it wanted to avoid inadvertently going too far — blocking users from creating campaigns around disease prevention, for example.

Still, given the heightened sensitivity around AI and politics at the moment, it’s at least somewhat surprising that the Post found so many enforcement gaps.

At the same time, it’s worth talking about what our threat model is when it comes to generative AI and elections. As far as I can tell, there’s a piece missing from these discussions — and it may turn out to be a good defense against the worst that AI can offer.

One much-discussed threat model goes something like this: a candidate or someone working on their behalf uses AI to craft and refine a huge number of highly persuasive messages, targeted at tiny demographics or even individuals, and uses those messages to reshape opinion at a large scale.

If that sounds familiar, it’s because this micro-targeting approach is what Cambridge Analytica promised clients it would do during the 2016 election. Revelations that Facebook user data was being used to fuel these efforts caused a major data privacy scandal at the company, and effectively set in motion the events that led the company to change its name to Meta.

In the end, though, Cambridge Analytica’s efforts were deemed largely ineffective by experts. And even if generative AI techniques ultimately prove more persuasive than the ads we saw in 2016, tomorrow’s influence operations will still face the problem that every media operator has to contend with: distribution.

Let’s accept that soon it will be trivially easy to create thousands of highly persuasive AI-generated messages. How do you deliver them to the electorate?

Most platforms have strong anti-spam tools in place that prevent you from flooding feeds. You could try to place your AI posts as ads, but that gets expensive. What most influence operations continue to want is free distribution, afforded by platforms to the posts that are getting the most engagement. But look around — that distribution is getting harder and harder to come by.

After a long and mostly unhappy relationship with the journalism industry, Meta is increasingly taking steps to reduce its role in news distribution. (The fact that countries like Canada want the company to pay publishers for the right to display their links is surely a factor here.) When it launched Threads this summer, Meta said it would not do anything to promote the Twitter clone as a place to find journalism, even though the app it was designed to replace grew almost entirely on its reputation as a place for real-time news reading.

Meanwhile, X is reportedly planning to remove headlines from news articles shared on the platform, part of an apparent plan to push reporters into publishing all their work directly on the site. (I can’t imagine their bosses are too excited about that.)

Taken together, moves like these reduce the surface area for influence operations to take place. People will still try, of course. But it seems likely that those posts will get less attention, and potentially have even less of an impact, than they once did.

And while AI-generated text is much further along in its development than AI-generated video, I can imagine influence operations having similar problems in that medium. What’s your plan to make 1,000 YouTube videos go viral? What’s your secret for cracking a million individual For You pages on TikTok? These are problems that the best media companies still can’t solve, and it’s not clear to me why we expect state-backed troll armies to fare much better.

Platforms today are more fragmented than they were in 2016, more resistant to spam attacks, and more skeptical of the news. You soon may be able to generate unfathomable amounts of targeted political material, but you still have to deliver it to your target. And that’s getting harder all the time.

That doesn’t mean we all can relax, of course. New platforms and distribution channels will emerge, and they may prove less resistant to interference than the ones that defined our elections for the past decade. Platforms’ broader retreat from policing misinformation this year bears watching. And as with any kind of spam, bad actors will always be coming up with new techniques in an effort to best platforms’ defenses.

But for the moment, at least, I’m less worried about the threat of AI to upend the 2024 election than I once was. It would be good to see OpenAI start to enforce its policies against misuse — but in the meantime, the platforms where its tools can do the most damage mostly already are.

Talk about this edition with us in Discord: This link will get you in for the next week.

A funny and predictable outcome of me writing about why we should stop trusting souped-up note-taking apps to make us smarter last week is that dozens of you wrote to me saying: ok, but have you tried this souped-up note-taking app to make you smarter? And then linked to something I had never heard of that looked to be at most 1 percent different from every other product on the market. Several of you told me that, driven to madness by the failure of other note-taking apps, you had actually built your own souped-up note-taking app and invited me to try it. You are all degenerates and I hope you get the help you need.

An even funnier and more predictable outcome of this response is that I actually downloaded one of these suggested apps and immediately fell in love with it. It’s called Capacities, it’s built by a small team in Europe, and it’s at most 1 percent different from every other product on the market. I’m completely in love, and about a year from now I’ll tell you in a separate post why I abandoned Capacities for something that looks almost identical.

Elsewhere, I should note that lots of folks (especially on Bluesky!) really did not like this piece on how chronological feeds are making a comeback. I had argued that, while these feeds can be good and useful, users mostly reject them when given an option, and so it seems strange to legislate that platforms must offer them under the Digital Services Act in Europe.

Lots of you pointed out that European regulators are skeptical of social networks generally, and think it’s bad to look at feeds for long periods of time, and so mandating that social networks create a worse version of themselves to forcibly wean users off their own products does have a certain logic to it. Fair enough. (I also think my social posts about this column wound up coming across as more trollish than I intended, as I said that “users mostly hate” chronological feeds, which served as a bat signal for lovers of chronological feeds to congregate in my mentions and tell me to piss off. I should have worded that differently!)

For more good posts every day, follow Casey’s Instagram stories.

Send us tips, comments, questions, and ChatGPT propaganda: casey@platformer.news and zoe@platformer.news.