The election technology community has a patching problem, and AI just made it urgent.
Over the past week, the cybersecurity world has been rocked by the implications of Anthropic's Project Glasswing—a program built around a new AI model capable of autonomously discovering zero-day vulnerabilities across every major operating system and browser. My friend and resident Very Smart Person™, Mike Alvarez, Caltech's Flintridge Foundation Professor and co-director of the Caltech/MIT Voting Technology Project, laid out the election security implications clearly on his Substack: we need to assume that adversarial nations are building similar capabilities, election infrastructure is a target, and we have limited time to fortify.
I understand where Mike is coming from, and his first recommendation—keep all software systems up to date as security patches flow—is exactly right. His second recommendation, pushing to get election technologies and election technology firms into Project Glasswing, deserves more consideration before the elections community embraces it. AI's role in cybersecurity is likely inevitable, but we're not there yet—and the risks of rushing in are real.
There is something the elections community can do right now that doesn't require handing election system source code to AI companies or waiting for anyone else to act.
The threat to election technology
The implications for elections are straightforward. If AI can discover and exploit vulnerabilities at this scale, the attack surface for election technology—particularly internet-connected systems like electronic pollbooks, voter registration systems, election night reporting, and electronic blank ballot transmission—is significantly broader than it was even a year ago. The New York Times noted that these capabilities "can often be triggered by amateurs with simple prompts." That's the part that should concern election officials most.
Before we hand AI the keys
I understand the desire to push the elections community to persuade Anthropic to add election technologies to Project Glasswing's scope. If a tool this powerful exists, why not point it at our systems? But we need to be more cautious about what that entails.
Project Glasswing would require giving Anthropic—or any AI company running similar programs—access to election technology source code and system architectures. That's a significant trust decision, and the track record so far doesn't inspire confidence. Just last month, Anthropic accidentally included Claude Code's source code in a public release. Within days, researchers found a critical vulnerability in the leaked code that could bypass its security controls entirely. OpenAI had user data exposed through a breach of Mixpanel, a third-party analytics provider.
There are broader issues with AI models. Prompt injection—i.e., providing inputs to the model that cause it to behave in unintended ways, leading to data leakage, privilege escalation, or unethical responses—remains a well-documented and largely unsolved problem across frontier models. Even without sketching a full threat model, the security risks of exposing election system internals to AI companies are significant.
And the rulebook for safely using AI on critical infrastructure hasn't been written yet. NIST is actively developing frameworks for governing AI systems that operate with increasing autonomy, and the SL5 Standard—which aims to set safety benchmarks for the most capable AI models—is still in draft. Think of using AI today like deciding to take a newly built ship across the Atlantic without ever conducting a sea trial. These frameworks will eventually help us evaluate when and how to trust AI with sensitive infrastructure, but they're still in development and haven't been tested.
To save time and space, I'll save all the policy concerns with using frontier AI for another article.
None of this means AI won't play an important role in cybersecurity in the future. It almost certainly will. But "inevitable" and "ready to trust with election infrastructure today" are two very different things. The elections community should be thoughtful about the pace at which it adopts these tools, especially when source code access is involved.
RABET-V™: a program built for this moment
The good news is that the elections community doesn't need to wait for AI standards to mature or for Glasswing to expand its scope. RABET-V already exists, is operational, and was built for exactly this kind of threat environment.
For those new to our work, RABET-V is a technology verification program developed by the Center for Internet Security and administered and operated by The Turnout. We've written extensively about the program before, but the short version is that RABET-V verifies technology products through three core activities:
- Organizational Assessment: evaluates the technology provider's software development maturity, including their processes for governance, security, and operations.
- Architecture Assessment: evaluates the product's software architecture to assess security, design quality, and risk from changes.
- Product Verification: a point-in-time compliance and security test of the product itself, with testing rigor calibrated to the product's maturity and the nature of changes.
Most importantly, none of these activities requires access to source code.
RABET-V was originally built for the election technology ecosystem and designed to match the speed of development and be iterative and continuous—not a one-and-done certification. That continuous nature is exactly what makes it relevant to AI-driven vulnerability discovery. It isn't a one-time event. It's an ongoing escalation. The programs we use to verify election technology need to match that pace.
How RABET-V addresses the threat
Let me walk through a few specific ways RABET-V addresses the concerns.
Patch management and system integrity
Systems need to be kept up to date, and RABET-V is designed to ensure that technology providers have the organizational maturity to do exactly that. Our security requirements for system integrity explicitly require that vendors deploy the latest stable security updates for operating systems and third-party software. Products aren't verified in a static snapshot—they're expected to maintain those standards over time.
When the flood of patches from Glasswing-tested systems starts arriving, election technology providers verified through RABET-V will already have the processes in place to apply them quickly and reliably.
Architecture resilience and defense-in-depth
Zero-day vulnerabilities exploit individual flaws. A well-architected system limits the damage any single flaw can cause. RABET-V's Architecture Assessment scores products on reliability, modularity, isolation, and depth of control coverage—the qualities that determine whether exploiting a single vulnerability gives an attacker access to the entire system or only to a contained component.
In a world where AI can find vulnerabilities at scale, defense-in-depth isn't optional. It's the difference between a breach and a contained incident.
Continuous automated security testing
RABET-V doesn't just assess products at a point in time. We provide registered technology providers with access to static application security testing (SAST), dynamic application security testing (DAST), and software composition analysis (SCA) tools for continuous scanning between assessments. As my colleague John Dziurłaj wrote in his piece on security testing on repeat, these tools enable frequent testing with every new build, and our software bill of materials analysis catches known vulnerabilities in third-party libraries.
This kind of continuous vigilance is precisely what's needed when adversaries may be using AI to discover new vulnerabilities daily.
Internet-connected systems
Internet-connected election systems are the systems most exposed to the threats Mythos Preview represents—electronic pollbooks, voter registration databases, election night reporting platforms, and electronic blank ballot transmission solutions. Unlike air-gapped systems, these are reachable over a network, which means an attacker armed with an AI-discovered vulnerability doesn't need physical access—just a network path and an exploit. RABET-V's security requirements include targeted provisions for hosted and web components: boundary protections, intrusion detection and prevention systems, DDoS mitigation, network segmentation, and secure configurations. These are the controls that matter most for internet-facing systems, and RABET-V tests for them.
What the community should do now
So what can we do? Here's what I'd recommend:
- Government officials: Require RABET-V verification in your procurement processes. Ensure that the vendors you work with apply patches as they become available. The sample procurement language we created to integrate RABET-V into your procurement and security review processes can serve as a starting point, since security often starts at procurement. Work with cybersecurity experts, including the Center for Internet Security—directly or via the MS-ISAC and EI-ISAC—and the Election Security Exchange to determine the best ways to protect your organizations from cybersecurity risks.
- Election technology providers: Enroll in RABET-V and leverage the continuous scanning tools between assessments. When AI-discovered vulnerabilities become public, you need processes and tools to respond.
- State and local government agencies: You can work with RABET-V directly to verify your homegrown or procured technology—and not just elections technology, as RABET-V is available for any software-based solution—and not just elections technology. RABET-V is discounted for government agencies, making it more accessible to the jurisdictions that need it most.
- Policymakers: Fully fund election security. The federal cybersecurity landscape has shifted significantly: services that state, local, tribal, and territorial (SLTT) entities once relied on—including threat intelligence sharing through the Multi-State and Elections Infrastructure ISACs—have transitioned to paid models following changes in federal cooperative agreements. These transitions create real gaps for under-resourced jurisdictions at the worst possible time. Whether through direct federal funding or sustained support for programs like RABET-V, SLTT entities need the resources to defend election infrastructure against an escalating threat environment. Securing elections is not a partisan issue; it's a prerequisite for public trust in democracy. Additionally, if the elections community eventually engages with AI-driven security programs like Project Glasswing, policymakers should ensure that AI security standards are in place first and that the terms of engagement protect the election system's source code and intellectual property.
- Anthropic and the tech industry: Demonstrate that you can secure your own systems before asking the elections community to trust you with ours. Share relevant vulnerability findings with the elections community without requiring access to source code.
We don't have to start from scratch
The threat environment is real and urgent. My push back is on the idea that the path forward runs through AI companies. The elections community doesn't have to start from scratch or take on new, poorly understood risks to respond to this moment. RABET-V is operational today, with products actively being verified and technology providers using continuous scanning tools between assessments. The framework, the security requirements, and the assessor network already exist.
The 2026 midterms are approaching, and the threat environment is here. AI in cybersecurity may be inevitable, but so is caution. The elections community should strengthen what it can control now, and approach new tools with the rigor they deserve.
If you'd like to learn more about RABET-V or how it can help strengthen your security posture, reach out to us at team@rabetv.org or visit rabetv.org.
RABET-V™ is a trademark of the Center for Internet Security Inc.
Jared Marcotte