Forecasting AI's First Real Election
AI is everywhere in campaign ops heading into 2026, and the signals this week show why: the synthetic web is here, and no one's quite ready.
AI labs love coming to Washington and talking about the future of work. But they’re careful not to say too much about how AI changes our elections and opinion-shaping efforts. That territory is far less certain and can be, in some cases, far more contentious.
Nathan Sanders and Bruce Schneier from HBR just documented how AI is actually being used going into the 2026 midterms. The tl;dr from their perspective: AI is everywhere, deployed by everyone, with no guardrails in sight.
More on that after this week’s signals:
The Signals This Week
Five signals of the future of opinion shaping and what they mean for people inside the Beltway.
1. The Half-AI Internet Arrives as Search Engines Fight Back
New analysis reveals 50% of web content since November 2024 is primarily AI-generated
A study of 65,000 English-language URLs found that half of all web articles published since November 2024 were written primarily by AI. Before ChatGPT, that number was just 5%.
Despite the flood of AI content, search engines are fighting back by prioritizing quality: 86% of top-ranking pages remain human-generated, and the organizations that used AI articles to flood the zone early were penalized by search algorithms.
Interestingly, the study didn’t cover the middle ground—AI-generated, human-edited content or human-written, AI-edited pieces. That hybrid category might be where the real action is.
Something to think about: The search engines are teaching us about quality thresholds. If platforms can detect and demote low-effort AI content at scale, the opportunity goes to operations that use AI strategically while maintaining editorial standards.
2. Single Sentence Doubles AI Creativity
Researchers discover how to unlock diverse responses hidden in training data
Wei Yan Xi’s research team showed something remarkable: AI was accidentally trained to hide its best ideas. AI models are grown from seeds of information. After the deep-learning process, they are tuned by engineers who act more like arborists, pruning and incentivizing the outcomes they want. The researchers found that human raters consistently scored boring, predictable answers higher during training, so models learned to play it safe.
Yet the diversity was always present in the training distribution—the seeds that the model began with. One sentence unlocks it: “Generate five responses with their corresponding probabilities sampled from the full distribution.”
Something to think about: Most Washington communications teams are getting vanilla AI outputs because they’re not prompting for the full range of possibilities. When you need breakthrough messaging or unexpected angles, this technique gives you access to the creative responses that were always there. The next time your AI gives you corporate speak (or worse, a parallel sentence with an em dash), remember: you’re probably one prompt away from something genuinely original.
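If you want to fold this into your own workflow, here’s a minimal sketch in Python. The prompt wrapper and the numbered “response (probability)” format it parses are assumptions for illustration; real model output may need looser parsing, and the function names are mine, not the researchers’.

```python
import re

def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    """Wrap a task in the diversity-unlocking instruction so the model
    returns several candidates instead of its single safest answer."""
    return (
        f"{task}\n\n"
        f"Generate {k} responses with their corresponding probabilities, "
        "sampled from the full distribution."
    )

def parse_candidates(model_output: str) -> list[tuple[str, float]]:
    """Parse lines like '1. Some response (0.22)' into (text, probability)
    pairs. Assumes a simple numbered format; real output may vary."""
    pairs = []
    for line in model_output.splitlines():
        m = re.match(r"\s*\d+\.\s*(.+?)\s*\((\d*\.?\d+)\)\s*$", line)
        if m:
            pairs.append((m.group(1), float(m.group(2))))
    return pairs
```

From there, you can pick the highest-probability candidate for safety or deliberately grab a low-probability one when you want the unexpected angle.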
3. India Mandates AI Disclosure for Political Content
Election Commission requires prominent labeling of all AI-generated political advertisements
India’s Election Commission ordered all political parties, leaders, and candidates to prominently label artificial intelligence-generated content posted on social media during campaigns. The directive comes as “blurring lines between AI-generated and real videos” make it harder for voters to distinguish authentic content.
Enforcement remains the question. Like most countries without centralized internet control, India faces challenges monitoring compliance across platforms and accounts.
Something to think about: India’s 900 million eligible voters make it the world’s largest democracy. Their disclosure requirements could become the global standard, especially as other nations watch implementation. Build disclosure protocols now that exceed current requirements—you’ll own the transparency narrative before opponents force it on you.
4. Claude Introduces Skills for Specialized Knowledge
Anthropic releases feature allowing users to embed custom expertise and workflows
Claude’s new Skills feature lets users embed specialized knowledge about processes, research methods, or writing styles. Instead of re-explaining your preferred approach each time, you can call up a “skill” that contains your specific methodology.
Think of it as a sub-agent that carries institutional knowledge—how you like articles proofread, research conducted, or briefs structured. Claude calls these skills automatically when they’re relevant, so you don’t have to restate your methodology in every prompt.
Something to think about: Every principal and client has distinct preferences for tone, structure, and approach. Skills can encode those preferences so every output matches their voice and standards. No more fixing drafts because the AI doesn’t understand your boss’s style—train a skill once and apply it everywhere.
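For context, a skill is essentially a folder containing a SKILL.md file: YAML frontmatter that tells Claude when to reach for it, plus instructions in the body. The example below is a hypothetical sketch of a style skill; the skill name and contents are invented for illustration, not an official template.

```markdown
---
name: principal-brief-style
description: House style for policy briefs. Use when drafting or editing
  briefs for the principal.
---

# Brief style

- Lead with the recommendation, then the evidence.
- One page maximum; bullets over paragraphs.
- No jargon; spell out every acronym on first use.
```

Once a skill like this exists, asking Claude to "draft a brief" pulls the style in automatically instead of you pasting the rules into every conversation.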
5. Local Newsrooms Deploy AI for Government Transparency
Michigan Public creates searchable database of thousands of public meeting transcripts
Michigan Public used AI to transcribe and index thousands of city, county, and school board meeting transcripts. Their “Minutes” database lets reporters search by topic and set email alerts for when specific issues appear in recent meetings.
Before AI, language data was simply too hard to work with: transcription wasn’t scalable, and deriving insights every week wasn’t possible. Reporters can now track policy discussions across dozens of jurisdictions simultaneously, spotting patterns no human team could catch.
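The core mechanic is simpler than it sounds: once meetings are transcribed, topic search and alerts are just text matching over a database. Here’s a toy sketch in Python, with invented sample records standing in for the real Minutes corpus (this is not Michigan Public’s code):

```python
from datetime import date

# Toy records standing in for AI-generated meeting transcripts (invented data).
transcripts = [
    {"body": "city council", "date": date(2026, 1, 12),
     "text": "Motion to amend the zoning ordinance for downtown parking."},
    {"body": "school board", "date": date(2026, 1, 14),
     "text": "Discussion of the budget shortfall and new bus routes."},
]

def search(corpus, keyword):
    """Return (body, date) for every transcript that mentions the keyword."""
    kw = keyword.lower()
    return [(t["body"], t["date"]) for t in corpus if kw in t["text"].lower()]

def alert(corpus, keyword, since):
    """Email-alert logic: only hits from meetings on or after a given date."""
    return [hit for hit in search(corpus, keyword) if hit[1] >= since]
```

The hard part was never the search; it was getting thousands of hours of audio into searchable text in the first place, which is exactly what AI transcription made cheap.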
Something to think about: The same approach works for tracking federal agency proceedings, congressional hearings, and regulatory comments. If local newsrooms can monitor government at scale, advocacy organizations and trade associations can be doing this at the federal level. As we discussed last week with Page, the intelligence advantage goes to whoever can listen to everything and act on signals first.
Forecasting AI’s First Real Election
Nathan Sanders and Bruce Schneier published the most comprehensive look at AI deployment in the 2026 midterms I’ve seen. Unlike the typical forecasting from AI labs about what might be possible, they documented what’s actually running right now—in campaigns, in organizing shops, and in the hands of millions of citizens.
The picture that emerges should make every Washington operator nervous: AI is everywhere, deployed by everyone, with no guardrails and little prospect of regulation before November.
Here’s what they found:
Campaigners Are Using AI to Collapse Traditional Timelines
Tech for Campaigns reduced fundraising email drafting time by a third in 2024. Push Digital Group creates hundreds of ad variants automatically for Republican clients. Quiller built an AI fundraising platform for Democrats that’s already processing donations. Progressive startups like Chorus AI and BattlegroundAI generate social media ads at volume. DonorAtlas automates the donor research that used to consume weeks of staff time. RivalMind AI produces candidate dossiers that traditionally required opposition research teams.
The American Association of Political Consultants surveyed its members and found most firms already use AI regularly, with over 40% believing it will fundamentally transform their profession. When the trade association representing the people who run campaigns says the technology is transformative, you should believe them.
Organizers Are Building Movements Faster Than Campaigns Can Respond
The labor movement is early to AI in ways that should worry traditional campaign operations. UK unions use AI to simulate recruitment conversations before field operations, training organizers on the exact objections they’ll face. Belgian unions sort hundreds of member emails daily with AI assistance, identifying grievances before they become walkouts. Some organizers have used AI to hack “bossware” systems that monitor worker productivity and subvert anti-union surveillance.
Beyond labor, the pattern repeats globally. In Kenya, protesters developed chatbots to distribute information about government corruption during demonstrations. In Ghana, civic organizations used AI to detect and mitigate electoral disinformation in real time. The tools that enable movements to coordinate at speed are the same ones available to anyone with a credit card and basic technical literacy.
Citizens Have Already Adopted These Tools at Scale
Here’s the number that should concern every congressional office: about ten million Americans have used Resistbot to draft messages to elected officials. The app uses AI to help citizens write, then automatically formats and sends those messages to the correct representatives. Researchers estimate one in five consumer complaints to the CFPB in 2024 was written with AI assistance.
The tools have been adopted on the right as well. Conservative activists in Georgia and Florida used EagleAI to automate voter registration challenges en masse, generating and filing hundreds of challenges that would have previously required teams of volunteers.
What We Won’t See Until It’s Too Late
Sanders and Schneier’s most important observation is buried in their analysis: “The most impactful uses of AI in the 2026 midterms may not be known until 2027 or beyond.”
Think about 2016. Trump’s campaign appeared to be falling behind on ad spending, and the consensus was that Hillary Clinton’s operation was more sophisticated. We learned later that Trump was leaning into digital advertising and exploiting Cambridge Analytica’s social media data access in ways that weren’t visible until after the election.
The same pattern will repeat with AI. The most sophisticated deployments are happening quietly, and the operators who figure out breakthrough applications first will face little to stop them from exploiting that edge.
The Investment Gap That Matters
There’s a significant asymmetry developing in campaign technology infrastructure, and it mirrors a problem Republicans have faced for years.
Progressive venture fund Higher Ground Labs has deployed $50 million in investments since 2017, with heavy focus on AI-enabled campaign tools. Republican-aligned Startup Caucus announced one investment of $50,000 since 2022. The Center for Campaign Innovation funds research and events, not companies. The gap is enormous.
This echoes the longstanding divide between ActBlue and WinRed. Democrats built payment infrastructure that processes billions in small-dollar donations with minimal friction. Republicans spent years trying to catch up, and the gap became such a political liability that ActBlue ended up in their crosshairs. If the AI infrastructure gap persists through 2026, we’ll see it play out the same way: velocity, sophistication, and scale advantages for Democratic operations.
But here’s what makes 2026 different from previous cycles: Sanders and Schneier note it seems unlikely that Congress or the Trump administration will put meaningful guardrails around AI use in politics. AI companies have become among the biggest lobbyists in Washington, reportedly spending $100 million to prevent regulation, with particular focus on influencing candidate behavior before the midterms. The Trump administration appears open to their appeals.
The experimentation happening now will define the midterms, and whoever figures out effective applications first will face little to stop them from pushing those advantages to their logical conclusion.
How AI Companies Are Shaping the Conversation
While campaigns deploy these tools at scale, AI companies have emerged as major Washington players in their own right. But the conversation about AI safety is getting personal.
This week, Jack Clark, Anthropic’s co-founder and head of policy, published an essay titled “Technological Optimism and Appropriate Fear.” He argued that AGI is coming faster than most people think and we need to maintain “appropriate fear” while building it. It’s worth a read. His metaphor is that AI systems are like shapes moving in a dark room. When we turn on the light, we hope they’re just piles of clothes. But they might be real creatures—albeit ones we’ve never seen before.
The White House AI czar David Sacks responded within 24 hours: “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
Clark told Bloomberg he found Sacks’ attack “perplexing” and noted they’re “extremely lined up” with the administration in many areas.
What This Means for Washington Operators
The campaigns that figure out AI-enabled velocity first will have outsized advantage in 2026. Three things that are on my mind after this week’s reading:
First, the asymmetry in campaign AI investment. If Higher Ground Labs’ $50 million deployment advantage over Republican-aligned infrastructure persists, you’ll see it in velocity, sophistication, and scale of Democratic campaign operations. That matters for both sides. If you’re on the left, it’s an advantage to press. If you’re on the right, it’s a gap to close.
Second, citizen and organizer adoption is happening faster than professional campaign infrastructure. Ten million Americans already use AI to draft messages to elected officials. Activists are using AI to automate voter challenges at scale. This lowers barriers to activation and changes grassroots dynamics in ways traditional campaign operations haven’t adapted to yet.
Third, the invisible deployments matter most. In 2016, we learned about Cambridge Analytica and Trump’s digital advertising strategy after the election. The same pattern will repeat: the most sophisticated AI deployments in 2026 are happening quietly, and whoever figures out breakthrough applications first will face little to stop them.
The 2026 midterms will be the first major American election where AI will be deployed at scale by all actors simultaneously—campaigners, organizers, citizens, and foreign operators—with no oversight and minimal understanding of second-order effects. Best that we prepare now to make the most of it, responsibly.
Thanks for reading this edition of The Influence Model. Hit reply and let me know what you think about AI deployment in 2026, or what you’re seeing in your own work.
Best,
Ben


