Profile

unixronin: Galen the technomage, from Babylon 5: Crusade (Default)
Unixronin

December 2012

S M T W T F S
      1
2345678
9101112131415
16171819202122
23242526272829
3031     

Most Popular Tags

Expand Cut Tags

No cut tags
Friday, October 24th, 2025 09:07 pm

Posted by Bruce Schneier

There is a new cigar named “El Pulpo The Squid.” Yes, that means “The Octopus The Squid.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Friday, October 24th, 2025 11:01 am

Posted by Bruce Schneier

Two people found the solution. They used the power of research, not cryptanalysis, finding clues amongst the Sanborn papers at the Smithsonian’s Archives of American Art.

This comes as an awkward time, as Sanborn is auctioning off the solution. There were legal threats—I don’t understand their basis—and the solvers are not publishing their solution.

Thursday, October 23rd, 2025 11:04 am

Posted by Bruce Schneier

This is bad:

F5, a Seattle-based maker of networking software, disclosed the breach on Wednesday. F5 said a “sophisticated” threat group working for an undisclosed nation-state government had surreptitiously and persistently dwelled in its network over a “long-term.” Security researchers who have responded to similar intrusions in the past took the language to mean the hackers were inside the F5 network for years.

During that time, F5 said, the hackers took control of the network segment the company uses to create and distribute updates for BIG IP, a line of server appliances that F5 says is used by 48 of the world’s top 50 corporations. Wednesday’s disclosure went on to say the threat group downloaded proprietary BIG-IP source code information about vulnerabilities that had been privately discovered but not yet patched. The hackers also obtained configuration settings that some customers used inside their networks.

Control of the build system and access to the source code, customer configurations, and documentation of unpatched vulnerabilities has the potential to give the hackers unprecedented knowledge of weaknesses and the ability to exploit them in supply-chain attacks on thousands of networks, many of which are sensitive. The theft of customer configurations and other data further raises the risk that sensitive credentials can be abused, F5 and outside security experts said.

F5 announcement.

Wednesday, October 22nd, 2025 11:03 am

Posted by Bruce Schneier

Interesting article on people with nonstandard faces and how facial recognition systems fail for them.

Some of those living with facial differences tell WIRED they have undergone multiple surgeries and experienced stigma for their entire lives, which is now being echoed by the technology they are forced to interact with. They say they haven’t been able to access public services due to facial verification services failing, while others have struggled to access financial services. Social media filters and face-unlocking systems on phones often won’t work, they say.

It’s easy to blame the tech, but the real issue are the engineers who only considered a narrow spectrum of potential faces. That needs to change. But also, we need easy-to-access backup systems when the primary ones fail.

Monday, October 20th, 2025 11:00 am

Posted by Bruce Schneier

The OODA loop—for observe, orient, decide, act—is a framework to understand decision-making in adversarial situations. We apply the same framework to artificial intelligence agents, who have to make their decisions with untrustworthy observations and orientation. To solve this problem, we need new systems of input, processing, and output integrity.

Many decades ago, U.S. Air Force Colonel John Boyd introduced the concept of the “OODA loop,” for Observe, Orient, Decide, and Act. These are the four steps of real-time continuous decision-making. Boyd developed it for fighter pilots, but it’s long been applied in artificial intelligence (AI) and robotics. An AI agent, like a pilot, executes the loop over and over, accomplishing its goals iteratively within an ever-changing environment. This is Anthropic’s definition: “Agents are models using tools in a loop.”1

OODA Loops for Agentic AI

Traditional OODA analysis assumes trusted inputs and outputs, in the same way that classical AI assumed trusted sensors, controlled environments, and physical boundaries. This no longer holds true. AI agents don’t just execute OODA loops; they embed untrusted actors within them. Web-enabled large language models (LLMs) can query adversary-controlled sources mid-loop. Systems that allow AI to use large corpora of content, such as retrieval-augmented generation (https://en.wikipedia.org/wiki/Retrieval-augmented_generation), can ingest poisoned documents. Tool-calling application programming interfaces can execute untrusted code. Modern AI sensors can encompass the entire Internet; their environments are inherently adversarial. That means that fixing AI hallucination is insufficient because even if the AI accurately interprets its inputs and produces corresponding output, it can be fully corrupt.

In 2022, Simon Willison identified a new class of attacks against AI systems: “prompt injection.”2 Prompt injection is possible because an AI mixes untrusted inputs with trusted instructions and then confuses one for the other. Willison’s insight was that this isn’t just a filtering problem; it’s architectural. There is no privilege separation, and there is no separation between the data and control paths. The very mechanism that makes modern AI powerful—treating all inputs uniformly—is what makes it vulnerable. The security challenges we face today are structural consequences of using AI for everything.

  1. Insecurities can have far-reaching effects. A single poisoned piece of training data can affect millions of downstream applications. In this environment, security debt accrues like technical debt.
  2. AI security has a temporal asymmetry. The temporal disconnect between training and deployment creates unauditable vulnerabilities. Attackers can poison a model’s training data and then deploy an exploit years later. Integrity violations are frozen in the model. Models aren’t aware of previous compromises since each inference starts fresh and is equally vulnerable.
  3. AI increasingly maintains state—in the form of chat history and key-value caches. These states accumulate compromises. Every iteration is potentially malicious, and cache poisoning persists across interactions.
  4. Agents compound the risks. Pretrained OODA loops running in one or a dozen AI agents inherit all of these upstream compromises. Model Context Protocol (MCP) and similar systems that allow AI to use tools create their own vulnerabilities that interact with each other. Each tool has its own OODA loop, which nests, interleaves, and races. Tool descriptions become injection vectors. Models can’t verify tool semantics, only syntax. “Submit SQL query” might mean “exfiltrate database” because an agent can be corrupted in prompts, training data, or tool definitions to do what the attacker wants. The abstraction layer itself can be adversarial.

For example, an attacker might want AI agents to leak all the secret keys that the AI knows to the attacker, who might have a collector running in bulletproof hosting in a poorly regulated jurisdiction. They could plant coded instructions in easily scraped web content, waiting for the next AI training set to include it. Once that happens, they can activate the behavior through the front door: tricking AI agents (think a lowly chatbot or an analytics engine or a coding bot or anything in between) that are increasingly taking their own actions, in an OODA loop, using untrustworthy input from a third-party user. This compromise persists in the conversation history and cached responses, spreading to multiple future interactions and even to other AI agents. All this requires us to reconsider risks to the agentic AI OODA loop, from top to bottom.

  • Observe: The risks include adversarial examples, prompt injection, and sensor spoofing. A sticker fools computer vision, a string fools an LLM. The observation layer lacks authentication and integrity.
  • Orient: The risks include training data poisoning, context manipulation, and semantic backdoors. The model’s worldview—its orientation—can be influenced by attackers months before deployment. Encoded behavior activates on trigger phrases.
  • Decide: The risks include logic corruption via fine-tuning attacks, reward hacking, and objective misalignment. The decision process itself becomes the payload. Models can be manipulated to trust malicious sources preferentially.
  • Act: The risks include output manipulation, tool confusion, and action hijacking. MCP and similar protocols multiply attack surfaces. Each tool call trusts prior stages implicitly.

AI gives the old phrase “inside your adversary’s OODA loop” new meaning. For Boyd’s fighter pilots, it meant that you were operating faster than your adversary, able to act on current data while they were still on the previous iteration. With agentic AI, adversaries aren’t just metaphorically inside; they’re literally providing the observations and manipulating the output. We want adversaries inside our loop because that’s where the data are. AI’s OODA loops must observe untrusted sources to be useful. The competitive advantage, accessing web-scale information, is identical to the attack surface. The speed of your OODA loop is irrelevant when the adversary controls your sensors and actuators.

Worse, speed can itself be a vulnerability. The faster the loop, the less time for verification. Millisecond decisions result in millisecond compromises.

The Source of the Problem

The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map. Models lack local contextual knowledge. They process symbols, not meaning. A human sees a suspicious URL; an AI sees valid syntax. And that semantic gap becomes a security gap.

Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

This is Ken Thompson’s “trusting trust” attack all over again.3 Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs.

This is the agentic AI security trilemma. Fast, smart, secure; pick any two. Fast and smart—you can’t verify your inputs. Smart and secure—you check everything, slowly, because AI itself can’t be used for this. Secure and fast—you’re stuck with models with intentionally limited capabilities.

This trilemma isn’t unique to AI. Some autoimmune disorders are examples of molecular mimicry—when biological recognition systems fail to distinguish self from nonself. The mechanism designed for protection becomes the pathology as T cells attack healthy tissue or fail to attack pathogens and bad cells. AI exhibits the same kind of recognition failure. No digital immunological markers separate trusted instructions from hostile input. The model’s core capability, following instructions in natural language, is inseparable from its vulnerability. Or like oncogenes, the normal function and the malignant behavior share identical machinery.

Prompt injection is semantic mimicry: adversarial instructions that resemble legitimate prompts, which trigger self-compromise. The immune system can’t add better recognition without rejecting legitimate cells. AI can’t filter malicious prompts without rejecting legitimate instructions. Immune systems can’t verify their own recognition mechanisms, and AI systems can’t verify their own integrity because the verification system uses the same corrupted mechanisms.

In security, we often assume that foreign/hostile code looks different from legitimate instructions, and we use signatures, patterns, and statistical anomaly detection to detect it. But getting inside someone’s AI OODA loop uses the system’s native language. The attack is indistinguishable from normal operation because it is normal operation. The vulnerability isn’t a defect—it’s the feature working correctly.

Where to Go Next?

The shift to an AI-saturated world has been dizzying. Seemingly overnight, we have AI in every technology product, with promises of even more—and agents as well. So where does that leave us with respect to security?

Physical constraints protected Boyd’s fighter pilots. Radar returns couldn’t lie about physics; fooling them, through stealth or jamming, constituted some of the most successful attacks against such systems that are still in use today. Observations were authenticated by their presence. Tampering meant physical access. But semantic observations have no physics. When every AI observation is potentially corrupted, integrity violations span the stack. Text can claim anything, and images can show impossibilities. In training, we face poisoned datasets and backdoored models. In inference, we face adversarial inputs and prompt injection. During operation, we face a contaminated context and persistent compromise. We need semantic integrity: verifying not just data but interpretation, not just content but context, not just information but understanding. We can add checksums, signatures, and audit logs. But how do you checksum a thought? How do you sign semantics? How do you audit attention?

Computer security has evolved over the decades. We addressed availability despite failures through replication and decentralization. We addressed confidentiality despite breaches using authenticated encryption. Now we need to address integrity despite corruption.4

Trustworthy AI agents require integrity because we can’t build reliable systems on unreliable foundations. The question isn’t whether we can add integrity to AI but whether the architecture permits integrity at all.

AI OODA loops and integrity aren’t fundamentally opposed, but today’s AI agents observe the Internet, orient via statistics, decide probabilistically, and act without verification. We built a system that trusts everything, and now we hope for a semantic firewall to keep it safe. The adversary isn’t inside the loop by accident; it’s there by architecture. Web-scale AI means web-scale integrity failure. Every capability corrupts.

Integrity isn’t a feature you add; it’s an architecture you choose. So far, we have built AI systems where “fast” and “smart” preclude “secure.” We optimized for capability over verification, for accessing web-scale data over ensuring trust. AI agents will be even more powerful—and increasingly autonomous. And without integrity, they will also be dangerous.

References

1. S. Willison, Simon Willison’s Weblog, May 22, 2025. [Online]. Available: https://simonwillison.net/2025/May/22/tools-in-a-loop/

2. S. Willison, “Prompt injection attacks against GPT-3,” Simon Willison’s Weblog, Sep. 12, 2022. [Online]. Available: https://simonwillison.net/2022/Sep/12/prompt-injection/

3. K. Thompson, “Reflections on trusting trust,” Commun. ACM, vol. 27, no. 8, Aug. 1984. [Online]. Available: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf

4. B. Schneier, “The age of integrity,” IEEE Security & Privacy, vol. 23, no. 3, p. 96, May/Jun. 2025. [Online]. Available: https://www.computer.org/csdl/magazine/sp/2025/03/11038984/27COaJtjDOM

This essay was written with Barath Raghavan, and originally appeared in IEEE Security & Privacy.

Friday, October 17th, 2025 11:03 am

Posted by Bruce Schneier

Here’s the summary:

We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth.

Full paper. News article.

Thursday, October 16th, 2025 11:06 am

Posted by Bruce Schneier

CNN has a great piece about how cryptocurrency ATMs are used to scam people out of their money. The fees are usurious, and they’re a common place for scammers to send victims to buy cryptocurrency for them. The companies behind the ATMs, at best, do not care about the harm they cause; the profits are just too good.

Wednesday, October 15th, 2025 11:02 am

Posted by Bruce Schneier

Apple is now offering a $2M bounty for a zero-click exploit. According to the Apple website:

Today we’re announcing the next major chapter for Apple Security Bounty, featuring the industry’s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.

  1. We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of ­ and our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million. We’re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category.
  2. Our bounty categories are expanding to cover even more attack surfaces. Notably, we’re rewarding one-click WebKit sandbox escapes with up to $300,000, and wireless proximity exploits over any radio with up to $1 million.
  3. We’re introducing Target Flags, a new way for researchers to objectively demonstrate exploitability for some of our top bounty categories, including remote code execution and Transparency, Consent, and Control (TCC) bypasses ­ and to help determine eligibility for a specific award. Researchers who submit reports with Target Flags will qualify for accelerated awards, which are processed immediately after the research is received and verified, even before a fix becomes available.
Tuesday, October 14th, 2025 04:01 pm

Posted by Bruce Schneier

This is a current list of where and when I am scheduled to speak:

  • Nathan E. Sanders and I will be giving a book talk on Rewiring Democracy at the Harvard Kennedy School’s Ash Center in Cambridge, Massachusetts, USA, on October 22, 2025, at noon ET.
  • Nathan E. Sanders and I will be speaking and signing books at the Cambridge Public Library in Cambridge, Massachusetts, USA, on October 22, 2025, at 6:00 PM ET. The event is sponsored by Harvard Bookstore.
  • Nathan E. Sanders and I will give a virtual talk about our book Rewiring Democracy on October 23, 2025, at 1:00 PM ET. The event is hosted by Data & Society.
  • I’m speaking at the Ted Rogers School of Management in Toronto, Ontario, Canada, on Thursday, October 29, 2025, at 1:00 PM ET.
  • Nathan E. Sanders and I will give a virtual talk about our book Rewiring Democracy on November 3, 2025, at 2:00 PM ET. The event is hosted by the Boston Public Library.
  • I’m speaking at the World Forum for Democracy in Strasbourg, France, November 5-7, 2025.
  • I’m speaking and signing books at the University of Toronto Bookstore in Toronto, Ontario, Canada, on November 14, 2025. Details to come.
  • Nathan E. Sanders and I will be speaking at the MIT Museum in Cambridge, Massachusetts, USA, on December 1, 2025, at 6:00 pm ET.
  • Nathan E. Sanders and I will be speaking at a virtual event hosted by City Lights on the Zoom platform, on December 3, 2025, at 6:00 PM PT.
  • I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, on February 5, 2026. Details to come.

The list is maintained on this page.

Tuesday, October 14th, 2025 11:09 am

Posted by Bruce Schneier

This chilling paragraph is in a comprehensive Brookings report about the use of tech to deport people from the US:

The administration has also adapted its methods of social media surveillance. Though agencies like the State Department have gathered millions of handles and monitored political discussions online, the Trump administration has been more explicit in who it’s targeting. Secretary of State Marco Rubio announced a new, zero-tolerance “Catch and Revoke” strategy, which uses AI to monitor the public speech of foreign nationals and revoke visas of those who “abuse [the country’s] hospitality.” In a March press conference, Rubio remarked that at least 300 visas, primarily student and visitor visas, had been revoked on the grounds that visitors are engaging in activity contrary to national interest. A State Department cable also announced a new requirement for student visa applicants to set their social media accounts to public—reflecting stricter vetting practices aimed at identifying individuals who “bear hostile attitudes toward our citizens, culture, government, institutions, or founding principles,” among other criteria.

Monday, October 13th, 2025 04:36 pm

Posted by Bruce Schneier

My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. No reviews yet, but you can read chapters 12 and 34 (of 43 chapters total).

You can order the book pretty much everywhere, and a copy signed by me here.

Please help spread the word. I want this book to make a splash when it’s public. Leave a review on whatever site you buy it from. Or make a TikTok video. Or do whatever you kids do these days. Is anyone a Slashdot contributor? I’d like the book to be announced there.

Monday, October 13th, 2025 11:04 am

Posted by Bruce Schneier

Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of an election influencer: artificial intelligence. Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social media, foreign influencers, or even a street magician. AI is poised to play a more volatile role than ever before in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using AI to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and amplify their side’s messaging. Because there are so few rules, and so little prospect of regulatory action, around AI’s role in politics, there is no oversight of these activities, and no safeguards against the dramatic potential impacts for our democracy.

The Campaigners

Campaigners—messengers, ad buyers, fundraisers, and strategists—are focused on efficiency and optimization. To them, AI is a way to augment or even replace expensive humans who traditionally perform tasks like personalizing emails, texting donation solicitations, and deciding what platforms and audiences to target.

This is an incremental evolution of the computerization of campaigning that has been underway for decades. For example, the progressive campaign infrastructure group Tech for Campaigns claims it used AI in the 2024 cycle to reduce the time spent drafting fundraising solicitations by one-third. If AI is working well here, you won’t notice the difference between an annoying campaign solicitation written by a human staffer and an annoying one written by AI.

But AI is scaling these capabilities, which is likely to make them even more ubiquitous. This will make the biggest difference for challengers to incumbents in safe seats, who see AI as both a tacitly useful tool and an attention-grabbing way to get their race into the headlines. Jason Palmer, the little-known Democratic primary challenger to Joe Biden, successfully won the American Samoa primary while extensively leveraging AI avatars for campaigning.

Such tactics were sometimes deployed as publicity stunts in the 2024 cycle; they were firsts that got attention. Pennsylvania Democratic Congressional candidate Shamaine Daniels became the first to use a conversational AI robocaller in 2023. Two long-shot challengers to Rep. Don Beyer used an AI avatar to represent the incumbent in a live debate last October after he declined to participate. In 2026, voters who have seen years of the official White House X account posting deepfaked memes of Donald Trump will be desensitized to the use of AI in political communications.

Strategists are also turning to AI to interpret public opinion data and provide more fine-grained insight into the perspective of different voters. This might sound like AIs replacing people in opinion polls, but it is really a continuation of the evolution of political polling into a data-driven science over the last several decades.

A recent survey by the American Association of Political Consultants found that a majority of their members’ firms already use AI regularly in their work, and more than 40 percent believe it will “fundamentally transform” the future of their profession. If these emerging AI tools become popular in the midterms, it won’t just be a few candidates from the tightest national races texting you three times a day. It may also be the member of Congress in the safe district next to you, and your state representative, and your school board members.

The development and use of AI in campaigning is different depending on what side of the aisle you look at. On the Republican side, Push Digital Group is going “all in” on a new AI initiative, using the technology to create hundreds of ad variants for their clients automatically, as well as assisting with strategy, targeting, and data analysis. On the other side, the National Democratic Training Committee recently released a playbook for using AI. Quiller is building an AI-powered fundraising platform aimed at drastically reducing the time campaigns spend producing emails and texts. Progressive-aligned startups Chorus AI and BattlegroundAI are offering AI tools for automatically generating ads for use on social media and other digital platforms. DonorAtlas automates data collection on potential donors, and RivalMind AI focuses on political research and strategy, automating the production of candidate dossiers.

For now, there seems to be an investment gap between Democratic- and Republican-aligned technology innovators. Progressive venture fund Higher Ground Labs boasts $50 million in deployed investments since 2017 and a significant focus on AI. Republican-aligned counterparts operate on a much smaller scale. Startup Caucus has announced one investment—of $50,000—since 2022. The Center for Campaign Innovation funds research projects and events, not companies. This echoes a longstanding gap in campaign technology between Democratic- and Republican-aligned fundraising platforms ActBlue and WinRed, which has landed the former in Republicans’ political crosshairs.

Of course, not all campaign technology innovations will be visible. In 2016, the Trump campaign vocally eschewed using data to drive campaign strategy and appeared to be falling way behind on ad spending, but was—we learned in retrospect—actually leaning heavily into digital advertising and making use of new controversial mechanisms for accessing and exploiting voters’ social media data with vendor Cambridge Analytica. The most impactful uses of AI in the 2026 midterms may not be known until 2027 or beyond.

The Organizers

Beyond the realm of political consultants driving ad buys and fundraising appeals, organizers are using AI in ways that feel more radically new.

The hypothetical potential of AI to drive political movements was illustrated in 2022 when a Danish artist collective used an AI model to found a political party, the Synthetic Party, and generate its policy goals. This was more of an art project than a popular movement, but it demonstrated that AIs—synthesizing the expressions and policy interests of humans—can formulate a political platform. In 2025, Denmark hosted a “summit” of eight such AI political agents where attendees could witness “continuously orchestrate[d] algorithmic micro-assemblies, spontaneous deliberations, and impromptu policy-making” by the participating AIs.

The more viable version of this concept lies in the use of AIs to facilitate deliberation. AIs are being used to help legislators collect input from constituents and to hold large-scale citizen assemblies. This kind of AI-driven “sensemaking” may play a powerful role in the future of public policy. Some research has suggested that AI can be as or more effective than humans in helping people find common ground on controversial policy issues.

Another movement for “Public AI” is focused on wresting AI from the hands of corporations to put people, through their governments, in control. Civic technologists in national governments from Singapore, Japan, Sweden, and Switzerland are building their own alternatives to Big Tech AI models, for use in public administration and distribution as a public good.

Labor organizers have a particularly interesting relationship to AI. At the same time that they are galvanizing mass resistance against the replacement or endangerment of human workers by AI, many are racing to leverage the technology in their own work to build power.

Some entrepreneurial organizers have used AI in the past few years as tools for activating, connecting, answering questions for, and providing guidance to their members. In the UK, the Centre for Responsible Union AI studies and promotes the use of AI by unions; they’ve published several case studies. The UK Public and Commercial Services Union has used AI to help their reps simulate recruitment conversations before going into the field. The Belgian union ACV-CVS has used AI to sort hundreds of emails per day from members to help them respond more efficiently. Software companies such as Quorum are increasingly offering AI-driven products to cater to the needs of organizers and grassroots campaigns.

But unions have also leveraged AI for its symbolic power. In the U.S., the Screen Actors Guild held up the specter of AI displacement of creative labor to attract public attention and sympathy, and the ETUC (the European confederation of trade unions) developed a policy platform for responding to AI.

Finally, some union organizers have leveraged AI in more provocative ways. Some have applied it to hacking the “bossware” AI to subvert the exploitative intent or disrupt the anti-union practices of their managers.

The Citizens

Many of the tasks we’ve talked about so far are familiar use cases to anyone working in office and management settings: writing emails, providing user (or voter, or member) support, doing research.

But even mundane tasks, when automated at scale and targeted at specific ends, can be pernicious. AI is not neutral. It can be applied by many actors for many purposes. In the hands of the most numerous and diverse actors in a democracy—the citizens—that has profound implications.

Conservative activists in Georgia and Florida have used a tool named EagleAI to automate challenging voter registration en masse (although the tool’s creator later denied that it uses AI). In a nonpartisan electoral management context with access to accurate data sources, such automated review of electoral registrations might be useful and effective. In this hyperpartisan context, AI merely serves to amplify the proclivities of activists at the extreme of their movements. This trend will continue unabated in 2026.

Of course, citizens can use AI to safeguard the integrity of elections. In Ghana’s 2024 presidential election, civic organizations used an AI tool to automatically detect and mitigate electoral disinformation spread on social media. The same year, Kenyan protesters developed specialized chatbots to distribute information about a controversial finance bill in Parliament and instances of government corruption.

So far, the biggest way Americans have leveraged AI in politics is in self-expression. About ten million Americans have used the chatbot Resistbot to help draft and send messages to their elected leaders. It’s hard to find statistics on how widely adopted tools like this are, but researchers have estimated that, as of 2024, about one in five consumer complaints to the U.S. Consumer Financial Protection Bureau was written with the assistance of AI.

OpenAI operates security programs to disrupt foreign influence operations and maintains restrictions on political use in its terms of service, but this is hardly sufficient to deter use of AI technologies for whatever purpose. And widely available free models give anyone the ability to attempt this on their own.

But this could change. The most ominous sign of AI’s potential to disrupt elections is not the deepfakes and misinformation. Rather, it may be the use of AI by the Trump administration to surveil and punish political speech on social media and other online platforms. The scalability and sophistication of AI tools give governments with authoritarian intent unprecedented power to police and selectively limit political speech.

What About the Midterms?

These examples illustrate AI’s pluripotent role as a force multiplier. The same technology used by different actors—campaigners, organizers, citizens, and governments—leads to wildly different impacts. We can’t know for sure what the net result will be. In the end, it will be the interactions and intersections of these uses that matters, and their unstable dynamics will make future elections even more unpredictable than in the past.

For now, the decisions of how and when to use AI lie largely with individuals and the political entities they lead. Whether or not you personally trust AI to write an email for you or make a decision about you hardly matters. If a campaign, an interest group, or a fellow citizen trusts it for that purpose, they are free to use it.

It seems unlikely that Congress or the Trump administration will put guardrails around the use of AI in politics. AI companies have rapidly emerged as among the biggest lobbyists in Washington, reportedly dumping $100 million toward preventing regulation, with a focus on influencing candidate behavior before the midterm elections. The Trump administration seems open and responsive to their appeals.

The ultimate effect of AI on the midterms will largely depend on the experimentation happening now. Candidates and organizations across the political spectrum have ample opportunity—but a ticking clock—to find effective ways to use the technology. Those that do will have little to stop them from exploiting it.

This essay was written with Nathan E. Sanders, and originally appeared in The American Prospect.