Good video.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Here’s the summary:
We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth.
CNN has a great piece about how cryptocurrency ATMs are used to scam people out of their money. The fees are usurious, and they’re a common place for scammers to send victims to buy cryptocurrency for them. The companies behind the ATMs, at best, do not care about the harm they cause; the profits are just too good.
Apple is now offering a $2M bounty for a zero-click exploit. According to the Apple website:
Today we’re announcing the next major chapter for Apple Security Bounty, featuring the industry’s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.
- We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of. Our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million. We’re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category.
- Our bounty categories are expanding to cover even more attack surfaces. Notably, we’re rewarding one-click WebKit sandbox escapes with up to $300,000, and wireless proximity exploits over any radio with up to $1 million.
- We’re introducing Target Flags, a new way for researchers to objectively demonstrate exploitability for some of our top bounty categories, including remote code execution and Transparency, Consent, and Control (TCC) bypasses, and to help determine eligibility for a specific award. Researchers who submit reports with Target Flags will qualify for accelerated awards, which are processed immediately after the research is received and verified, even before a fix becomes available.
This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.
This chilling paragraph is in a comprehensive Brookings report about the use of tech to deport people from the US:
The administration has also adapted its methods of social media surveillance. Though agencies like the State Department have gathered millions of handles and monitored political discussions online, the Trump administration has been more explicit in who it’s targeting. Secretary of State Marco Rubio announced a new, zero-tolerance “Catch and Revoke” strategy, which uses AI to monitor the public speech of foreign nationals and revoke visas of those who “abuse [the country’s] hospitality.” In a March press conference, Rubio remarked that at least 300 visas, primarily student and visitor visas, had been revoked on the grounds that visitors are engaging in activity contrary to national interest. A State Department cable also announced a new requirement for student visa applicants to set their social media accounts to public—reflecting stricter vetting practices aimed at identifying individuals who “bear hostile attitudes toward our citizens, culture, government, institutions, or founding principles,” among other criteria.
My latest book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, will be published in just over a week. No reviews yet, but you can read chapters 12 and 34 (of 43 chapters total).
You can order the book pretty much everywhere, and a copy signed by me here.
Please help spread the word. I want this book to make a splash when it’s published. Leave a review on whatever site you buy it from. Or make a TikTok video. Or do whatever you kids do these days. Is anyone a Slashdot contributor? I’d like the book to be announced there.
Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of an election influencer: artificial intelligence. Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social media, foreign influencers, or even a street magician. AI is poised to play a more volatile role than ever before in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using AI to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and amplify their side’s messaging. Because there are so few rules, and so little prospect of regulatory action, around AI’s role in politics, there is no oversight of these activities, and no safeguards against the dramatic potential impacts for our democracy.
Campaigners—messengers, ad buyers, fundraisers, and strategists—are focused on efficiency and optimization. To them, AI is a way to augment or even replace expensive humans who traditionally perform tasks like personalizing emails, texting donation solicitations, and deciding what platforms and audiences to target.
This is an incremental evolution of the computerization of campaigning that has been underway for decades. For example, the progressive campaign infrastructure group Tech for Campaigns claims it used AI in the 2024 cycle to reduce the time spent drafting fundraising solicitations by one-third. If AI is working well here, you won’t notice the difference between an annoying campaign solicitation written by a human staffer and an annoying one written by AI.
But AI is scaling these capabilities, which is likely to make them even more ubiquitous. This will make the biggest difference for challengers to incumbents in safe seats, who see AI as both a tactically useful tool and an attention-grabbing way to get their race into the headlines. Jason Palmer, the little-known Democratic primary challenger to Joe Biden, successfully won the American Samoa primary while extensively leveraging AI avatars for campaigning.
Such tactics were sometimes deployed as publicity stunts in the 2024 cycle; they were firsts that got attention. Pennsylvania Democratic Congressional candidate Shamaine Daniels became the first to use a conversational AI robocaller in 2023. Two long-shot challengers to Rep. Don Beyer used an AI avatar to represent the incumbent in a live debate last October after he declined to participate. In 2026, voters who have seen years of the official White House X account posting deepfaked memes of Donald Trump will be desensitized to the use of AI in political communications.
Strategists are also turning to AI to interpret public opinion data and provide more fine-grained insight into the perspective of different voters. This might sound like AIs replacing people in opinion polls, but it is really a continuation of the evolution of political polling into a data-driven science over the last several decades.
A recent survey by the American Association of Political Consultants found that a majority of their members’ firms already use AI regularly in their work, and more than 40 percent believe it will “fundamentally transform” the future of their profession. If these emerging AI tools become popular in the midterms, it won’t just be a few candidates from the tightest national races texting you three times a day. It may also be the member of Congress in the safe district next to you, and your state representative, and your school board members.
The development and use of AI in campaigning is different depending on what side of the aisle you look at. On the Republican side, Push Digital Group is going “all in” on a new AI initiative, using the technology to create hundreds of ad variants for their clients automatically, as well as assisting with strategy, targeting, and data analysis. On the other side, the National Democratic Training Committee recently released a playbook for using AI. Quiller is building an AI-powered fundraising platform aimed at drastically reducing the time campaigns spend producing emails and texts. Progressive-aligned startups Chorus AI and BattlegroundAI are offering AI tools for automatically generating ads for use on social media and other digital platforms. DonorAtlas automates data collection on potential donors, and RivalMind AI focuses on political research and strategy, automating the production of candidate dossiers.
For now, there seems to be an investment gap between Democratic- and Republican-aligned technology innovators. Progressive venture fund Higher Ground Labs boasts $50 million in deployed investments since 2017 and a significant focus on AI. Republican-aligned counterparts operate on a much smaller scale. Startup Caucus has announced one investment—of $50,000—since 2022. The Center for Campaign Innovation funds research projects and events, not companies. This echoes a longstanding gap in campaign technology between Democratic- and Republican-aligned fundraising platforms ActBlue and WinRed, which has landed the former in Republicans’ political crosshairs.
Of course, not all campaign technology innovations will be visible. In 2016, the Trump campaign vocally eschewed using data to drive campaign strategy and appeared to be falling way behind on ad spending, but was—we learned in retrospect—actually leaning heavily into digital advertising and making use of new controversial mechanisms for accessing and exploiting voters’ social media data with vendor Cambridge Analytica. The most impactful uses of AI in the 2026 midterms may not be known until 2027 or beyond.
Beyond the realm of political consultants driving ad buys and fundraising appeals, organizers are using AI in ways that feel more radically new.
The hypothetical potential of AI to drive political movements was illustrated in 2022 when a Danish artist collective used an AI model to found a political party, the Synthetic Party, and generate its policy goals. This was more of an art project than a popular movement, but it demonstrated that AIs—synthesizing the expressions and policy interests of humans—can formulate a political platform. In 2025, Denmark hosted a “summit” of eight such AI political agents where attendees could witness “continuously orchestrate[d] algorithmic micro-assemblies, spontaneous deliberations, and impromptu policy-making” by the participating AIs.
The more viable version of this concept lies in the use of AIs to facilitate deliberation. AIs are being used to help legislators collect input from constituents and to hold large-scale citizen assemblies. This kind of AI-driven “sensemaking” may play a powerful role in the future of public policy. Some research has suggested that AI can be as or more effective than humans in helping people find common ground on controversial policy issues.
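To make the “sensemaking” idea concrete, here is a deliberately simple sketch of the underlying mechanism: surfacing the themes that most participants in a deliberation share. Real systems use language models and embeddings; this toy version uses keyword overlap only, and all names and thresholds here are illustrative assumptions, not any deployed tool.

```python
from collections import Counter

# Minimal stopword list; a real system would use a proper NLP pipeline.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and",
             "we", "our", "in", "for", "on", "it", "would"}

def keywords(comment):
    """Lowercase a comment, strip punctuation, and drop stopwords."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    return {w for w in words if w and w not in STOPWORDS}

def common_ground(comments, min_share=0.5):
    """Return keywords mentioned by at least `min_share` of all comments.

    This stands in for the 'find common ground' step of AI-assisted
    deliberation: themes most participants raise, however they phrase them.
    """
    counts = Counter()
    for c in comments:
        counts.update(keywords(c))
    threshold = min_share * len(comments)
    return {w for w, n in counts.items() if n >= threshold}

# Hypothetical constituent comments.
comments = [
    "We need safer bike lanes on Main Street.",
    "Bike lanes would make Main Street safer for kids.",
    "Please fix potholes and add bike lanes downtown.",
]
print(common_ground(comments))  # shared themes across most comments
```

The design point is that the aggregation step, not the text generation step, is what makes these tools useful for deliberation: the system summarizes what a large group already agrees on rather than producing new messaging.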
Another movement for “Public AI” is focused on wresting AI from the hands of corporations to put people, through their governments, in control. Civic technologists in national governments from Singapore, Japan, Sweden, and Switzerland are building their own alternatives to Big Tech AI models, for use in public administration and distribution as a public good.
Labor organizers have a particularly interesting relationship to AI. At the same time that they are galvanizing mass resistance against the replacement or endangerment of human workers by AI, many are racing to leverage the technology in their own work to build power.
Some entrepreneurial organizers have used AI in the past few years as tools for activating, connecting, answering questions for, and providing guidance to their members. In the UK, the Centre for Responsible Union AI studies and promotes the use of AI by unions; they’ve published several case studies. The UK Public and Commercial Services Union has used AI to help their reps simulate recruitment conversations before going into the field. The Belgian union ACV-CVS has used AI to sort hundreds of emails per day from members to help them respond more efficiently. Software companies such as Quorum are increasingly offering AI-driven products to cater to the needs of organizers and grassroots campaigns.
But unions have also leveraged AI for its symbolic power. In the U.S., the Screen Actors Guild held up the specter of AI displacement of creative labor to attract public attention and sympathy, and the ETUC (the European confederation of trade unions) developed a policy platform for responding to AI.
Finally, some union organizers have leveraged AI in more provocative ways. Some have turned it on their managers’ “bossware” AI, hacking it to subvert its exploitative intent or disrupt anti-union practices.
Many of the tasks we’ve talked about so far are familiar use cases to anyone working in office and management settings: writing emails, providing user (or voter, or member) support, doing research.
But even mundane tasks, when automated at scale and targeted at specific ends, can be pernicious. AI is not neutral. It can be applied by many actors for many purposes. In the hands of the most numerous and diverse actors in a democracy—the citizens—that has profound implications.
Conservative activists in Georgia and Florida have used a tool named EagleAI to automate challenging voter registration en masse (although the tool’s creator later denied that it uses AI). In a nonpartisan electoral management context with access to accurate data sources, such automated review of electoral registrations might be useful and effective. In this hyperpartisan context, AI merely serves to amplify the proclivities of activists at the extreme of their movements. This trend will continue unabated in 2026.
Of course, citizens can use AI to safeguard the integrity of elections. In Ghana’s 2024 presidential election, civic organizations used an AI tool to automatically detect and mitigate electoral disinformation spread on social media. The same year, Kenyan protesters developed specialized chatbots to distribute information about a controversial finance bill in Parliament and instances of government corruption.
So far, the biggest way Americans have leveraged AI in politics is in self-expression. About ten million Americans have used the chatbot Resistbot to help draft and send messages to their elected leaders. It’s hard to find statistics on how widely adopted tools like this are, but researchers have estimated that, as of 2024, about one in five consumer complaints to the U.S. Consumer Financial Protection Bureau was written with the assistance of AI.
OpenAI operates security programs to disrupt foreign influence operations and maintains restrictions on political use in its terms of service, but this is hardly sufficient to deter use of AI technologies for whatever purpose. And widely available free models give anyone the ability to attempt this on their own.
But this could change. The most ominous sign of AI’s potential to disrupt elections is not the deepfakes and misinformation. Rather, it may be the use of AI by the Trump administration to surveil and punish political speech on social media and other online platforms. The scalability and sophistication of AI tools give governments with authoritarian intent unprecedented power to police and selectively limit political speech.
These examples illustrate AI’s pluripotent role as a force multiplier. The same technology used by different actors—campaigners, organizers, citizens, and governments—leads to wildly different impacts. We can’t know for sure what the net result will be. In the end, it will be the interactions and intersections of these uses that matter, and their unstable dynamics will make future elections even more unpredictable than in the past.
For now, the decisions of how and when to use AI lie largely with individuals and the political entities they lead. Whether or not you personally trust AI to write an email for you or make a decision about you hardly matters. If a campaign, an interest group, or a fellow citizen trusts it for that purpose, they are free to use it.
It seems unlikely that Congress or the Trump administration will put guardrails around the use of AI in politics. AI companies have rapidly emerged as among the biggest lobbyists in Washington, reportedly dumping $100 million toward preventing regulation, with a focus on influencing candidate behavior before the midterm elections. The Trump administration seems open and responsive to their appeals.
The ultimate effect of AI on the midterms will largely depend on the experimentation happening now. Candidates and organizations across the political spectrum have ample opportunity—but a ticking clock—to find effective ways to use the technology. Those that do will have little to stop them from exploiting it.
This essay was written with Nathan E. Sanders, and originally appeared in The American Prospect.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.
Over the summer, hackers proved the concept, industry institutionalized it, and criminals operationalized it. In June, AI company XBOW took the top spot on HackerOne’s US leaderboard after submitting over 1,000 new vulnerabilities in just a few months. In August, the seven teams competing in DARPA’s AI Cyber Challenge collectively found 54 new vulnerabilities in a target system, in four hours (of compute). Also in August, Google announced that its Big Sleep AI found dozens of new vulnerabilities in open-source projects.
It gets worse. In July, Ukraine’s CERT discovered a piece of Russian malware that used an LLM to automate the cyberattack process, generating both system reconnaissance and data theft commands in real time. In August, Anthropic reported that they disrupted a threat actor that used Claude, Anthropic’s AI model, to automate the entire cyberattack process. It was an impressive use of the AI, which performed network reconnaissance, penetrated networks, and harvested victims’ credentials. The AI was able to figure out which data to steal, how much money to extort out of the victims, and how to best write extortion emails.
Another hacker used Claude to create and market his own ransomware, complete with “advanced evasion capabilities, encryption, and anti-recovery mechanisms.” And in September, Check Point reported on hackers using HexStrike-AI to create autonomous agents that can scan, exploit, and persist inside target networks. Also in September, a research team showed how they can quickly and easily reproduce hundreds of vulnerabilities from public information. These tools are increasingly free for anyone to use. Villager, a recently released AI pentesting tool from Chinese company Cyberspike, uses the Deepseek model to completely automate attack chains.
This is all well beyond AI’s capabilities in 2016, at DARPA’s Cyber Grand Challenge. The annual Chinese AI hacking challenge, Robot Hacking Games, might be on this level, but little is known outside of China.
AI agents now rival and sometimes surpass even elite human hackers in sophistication. They automate operations at machine speed and global scale. The scope of their capabilities allows these AI agents to completely automate a criminal’s command to maximize profit, or structure advanced attacks to a government’s precise specifications, such as to avoid detection.
In this future, attack capabilities could accelerate beyond our individual and collective ability to respond. We have long taken it for granted that we have time to patch systems after vulnerabilities become known, or that withholding vulnerability details prevents attackers from exploiting them. This is no longer the case.
The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance.
The most skilled will likely retain an edge for now. But AI agents don’t have to be better at a human task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. But there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and give average criminals an outsized advantage.
AI technologies can benefit defenders as well. We don’t know how amenable the different technologies of cyber-offense and cyber-defense will be to AI enhancement, but we can extrapolate a possible series of overlapping developments.
Phase One: The Transformation of the Vulnerability Researcher. AI-based hacking benefits defenders as well as attackers. In this scenario, AI empowers defenders to do more. It simplifies capabilities, providing far more people the ability to perform previously complex tasks, and empowers researchers previously busy with these tasks to accelerate or move beyond them, freeing time to work on problems that require human creativity. History suggests a pattern. Reverse engineering was a laborious manual process until tools such as IDA Pro made the capability available to many. AI vulnerability discovery could follow a similar trajectory, evolving through scriptable interfaces, automated workflows, and automated research before reaching broad accessibility.
Phase Two: The Emergence of VulnOps. Between research breakthroughs and enterprise adoption, a new discipline might emerge: VulnOps. Large research teams are already building operational pipelines around their tooling. Their evolution could mirror how DevOps professionalized software delivery. In this scenario, specialized research tools become developer products. These products may emerge as a SaaS platform, or some internal operational framework, or something entirely different. Think of it as AI-assisted vulnerability research available to everyone, at scale, repeatable, and integrated into enterprise operations.
Phase Three: The Disruption of the Enterprise Software Model. If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production—a shift we might call continuous discovery/continuous repair (CD/CR). Third-party risk management (TPRM) offers a natural adoption route, lower-risk vendor testing, integration into procurement and certification gates, and a proving ground before wider rollout.
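The CD/CR idea above can be sketched as a gate in a delivery pipeline: after an AI scanner runs, the gate decides whether a build promotes, gets auto-patched and retested, or blocks for human review. This is a minimal illustrative sketch of that decision logic only; the `Finding` type, severity levels, and outcome names are hypothetical, not any real CI product’s API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability reported by a hypothetical AI discovery stage."""
    vuln_id: str
    severity: str          # "low" | "medium" | "high"
    auto_patchable: bool   # did the tool also produce a candidate patch?

def cd_cr_gate(findings, block_at="high"):
    """Decide the pipeline outcome for a build.

    Returns one of:
      "promote"          - no findings at or above the blocking severity
      "patch-and-retest" - blocking findings, but all have candidate patches,
                           so the pipeline can apply them and re-run (the CR
                           half of continuous discovery/continuous repair)
      "block"            - at least one blocking finding needs a human
    """
    levels = {"low": 0, "medium": 1, "high": 2}
    blocking = [f for f in findings if levels[f.severity] >= levels[block_at]]
    if not blocking:
        return "promote"
    if all(f.auto_patchable for f in blocking):
        return "patch-and-retest"
    return "block"

# Example: one auto-patchable high finding, one non-blocking medium finding.
findings = [
    Finding("CVE-PENDING-1", "high", True),
    Finding("STATIC-042", "medium", False),
]
print(cd_cr_gate(findings))  # "patch-and-retest"
```

The interesting design question the essay raises is exactly the middle branch: whether organizations will trust an automatically generated patch enough to let the pipeline apply it before a vendor fix exists.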
Phase Four: The Self-Healing Network. If organizations can independently discover and patch vulnerabilities in running software, they will not have to wait for vendors to issue fixes. Building in-house research teams is costly, but AI agents could perform such discovery and generate patches for many kinds of code, including third-party and vendor products. Organizations may develop independent capabilities that create and deploy third-party patches on vendor timelines, extending the current trend of independent open-source patching. This would increase security, but having customers patch software without vendor approval raises questions about patch correctness, compatibility, liability, right-to-repair, and long-term vendor relationships.
These are all speculations. Maybe AI-enhanced cyberattacks won’t evolve the ways we fear. Maybe AI-enhanced cyberdefense will give us capabilities we can’t yet anticipate. What will surprise us most might not be the paths we can see, but the ones we can’t imagine yet.
This essay was written with Heather Adkins and Gadi Evron, and originally appeared in CSO.
The company Flock is surveilling us as we drive:
A retired veteran named Lee Schmidt wanted to know how often Norfolk, Virginia’s 176 Flock Safety automated license-plate-reader cameras were tracking him. The answer, according to a U.S. District Court lawsuit filed in September, was more than four times a day, or 526 times from mid-February to early July. No, there’s no warrant out for Schmidt’s arrest, nor is there a warrant for Schmidt’s co-plaintiff, Crystal Arrington, whom the system tagged 849 times in roughly the same period.
You might think this sounds like it violates the Fourth Amendment, which protects American citizens from unreasonable searches and seizures without probable cause. Well, so does the American Civil Liberties Union. Norfolk, Virginia Judge Jamilah LeCruise also agrees, and in 2024 she ruled that plate-reader data obtained without a search warrant couldn’t be used against a defendant in a robbery case.
Citizen Lab has uncovered a coordinated AI-enabled influence operation against the Iranian government, probably conducted by Israel.
Key Findings
- A coordinated network of more than 50 inauthentic X profiles is conducting an AI-enabled influence operation. The network, which we refer to as “PRISONBREAK,” is spreading narratives inciting Iranian audiences to revolt against the Islamic Republic of Iran.
- While the network was created in 2023, almost all of its activity was conducted starting in January 2025, and continues to the present day.
- The profiles’ activity appears to have been synchronized, at least in part, with the military campaign that the Israel Defense Forces conducted against Iranian targets in June 2025.
- While organic engagement with PRISONBREAK’s content appears to be limited, some of the posts achieved tens of thousands of views. The operation seeded such posts to large public communities on X, and possibly also paid for their promotion.
- After systematically reviewing alternative explanations, we assess that the hypothesis most consistent with the available evidence is that an unidentified agency of the Israeli government, or a sub-contractor working under its close supervision, is directly conducting the operation.
News article.
We are nearly one year out from the 2026 midterm elections, and it’s far too early to predict the outcomes. But it’s a safe bet that artificial intelligence technologies will once again be a major storyline.
The widespread fear that AI would be used to manipulate the 2024 US election seems rather quaint in a year where the president posts AI-generated images of himself as the pope on official White House accounts. But AI is a lot more than an information manipulator. It’s also emerging as a politicized issue. Political first-movers are adopting the technology, and that’s opening a gap across party lines.
We expect this gap to widen, resulting in AI being predominantly used by one political side in the 2026 elections. To the extent that AI’s promise to automate and improve the effectiveness of political tasks like personalized messaging, persuasion, and campaign strategy is even partially realized, this could generate a systematic advantage.
Right now, Republicans look poised to exploit the technology in the 2026 midterms. The Trump White House has aggressively adopted AI-generated memes in its online messaging strategy. The administration has also used executive orders and federal buying power to influence the development and encoded values of AI technologies away from “woke” ideology. Going further, Trump ally Elon Musk has shaped his AI company’s Grok models in his own ideological image. These actions appear to be part of a larger, ongoing Big Tech industry realignment towards the political will, and perhaps also the values, of the Republican party.
Democrats, as the party out of power, are in a largely reactive posture on AI. A large bloc of Congressional Democrats responded to Trump administration actions in April by arguing against their adoption of AI in government. Their letter to the Trump administration’s Office of Management and Budget provided detailed criticisms and questions about DOGE’s behaviors and called for a halt to DOGE’s use of AI, but also said that they “support implementation of AI technologies in a manner that complies with existing” laws. It was a perfectly reasonable, if nuanced, position, and illustrates how the actions of one party can dictate the political positioning of the opposing party.
These shifts are driven more by political dynamics than by ideology. Big Tech CEOs’ deference to the Trump administration seems largely an effort to curry favor, while Silicon Valley continues to be represented by tech-forward Democrat Ro Khanna. And a June Pew Research poll shows nearly identical levels of concern by Democrats and Republicans about the increasing use of AI in America.
There are, arguably, natural positions each party would be expected to take on AI. An April House subcommittee hearing on AI trends in innovation and competition revealed much about that equilibrium. Following the lead of the Trump administration, Republicans cast doubt on any regulation of the AI industry. Democrats, meanwhile, emphasized consumer protection and resisting a concentration of corporate power. Notwithstanding the fluctuating dominance of the corporate wing of the Democratic party and the volatile populism of Trump, this reflects the parties’ historical positions on technology.
While Republicans focus on cozying up to tech plutocrats and removing the barriers around their business models, Democrats could revive the 2020 messaging of candidates like Andrew Yang and Elizabeth Warren. They could paint an alternative vision of the future where Big Tech companies’ profits and billionaires’ wealth are taxed and redistributed to young people facing an affordability crisis for housing, healthcare, and other essentials.
Moreover, Democrats could use the technology to demonstrably show a commitment to participatory democracy. They could use AI-driven collaborative policymaking tools like Decidim, Pol.Is, and Go Vocal to collect voter input on a massive scale and align their platform to the public interest.
It’s surprising how little these kinds of sensemaking tools are being adopted by candidates and parties today. Instead of using AI to capture and learn from constituent input, candidates more often seem to think of AI as just another broadcast technology—good only for getting their likeness and message in front of people. A case in point: British Member of Parliament Mark Sewards, presumably acting in good faith, recently attracted scorn after releasing a vacuous AI avatar of himself to his constituents.
Where the political polarization of AI goes next will probably depend on unpredictable future events and how partisans opportunistically seize on them. A recent European political controversy over AI illustrates how this can happen.
Swedish Prime Minister Ulf Kristersson, a member of the country’s Moderate party, acknowledged in an August interview that he uses AI tools to get a “second opinion” on policy issues. The attacks from political opponents were scathing. Kristersson had earlier this year advocated for the EU to pause its trailblazing new law regulating AI and pulled an AI tool from his campaign website after it was abused to generate images of him appearing to solicit an endorsement from Hitler. Although arguably much more consequential, neither of those stories grabbed global headlines in the way the Prime Minister’s admission that he himself uses tools like ChatGPT did.
Age dynamics may govern how AI’s impacts on the midterms unfold. One of the prevailing trends that swung the 2024 election to Trump seems to have been the rightward migration of young voters, particularly white men. So far, YouGov’s political tracking poll does not suggest a huge shift in young voters’ Congressional voting intent since the 2022 midterms.
Embracing—or distancing themselves from—AI might be one way the parties seek to wrest control of this young voting bloc. While the Pew poll revealed that large fractions of Americans of all ages are generally concerned about AI, younger Americans are much more likely to say they regularly interact with, and hear a lot about, AI, and are comfortable with the level of control they have over AI in their lives. A Democratic party desperate to regain relevance for and approval from young voters might turn to AI as both a tool and a topic for engaging them.
Voters and politicians alike should recognize that AI is no longer just an outside influence on elections. It’s not an uncontrollable natural disaster raining deepfakes down on a sheltering electorate. It’s more like a fire: a force that political actors can harness and manipulate for both mechanical and symbolic purposes.
A party willing to intervene in the world of corporate AI and shape the future of the technology should recognize the legitimate fears and opportunities it presents, and offer solutions that both address and leverage AI.
This essay was written with Nathan E. Sanders, and originally appeared in Time.