In a first-of-its-kind political intervention, two AI-aligned super PACs have taken opposing sides in a single congressional race, a fight centered not on AI ethics or deployment but on transparency and accountability. The candidate is Alex Bores, a Democratic challenger in New York’s 12th district, who introduced the RAISE Act (Responsible AI Safety and Education Act) during his tenure in the New York State Assembly. The bill would require AI developers to publicly disclose safety testing protocols and report “serious system incidents” (failures causing harm, deception at scale, or unauthorized autonomous action) to an independent federal office.
One PAC, “AI Forward,” disclosed in FEC filings that it received $1.2 million from Anthropic—a company whose leadership has publicly endorsed “constitutional AI” and regulatory guardrails—then spent over $400,000 supporting Bores’ campaign. Meanwhile, a rival group, “Tech Innovation Now,” ran digital ads attacking Bores as “anti-innovation” and “regulation-first,” citing unspecified “chilling effects on open-model development.” Tech Innovation Now has not disclosed its donors, though TechCrunch notes its ad targeting, messaging cadence, and legal counsel overlap with firms tied to large inference-focused startups—notably those with no public safety reporting frameworks in place.
This isn’t abstract policy theater. RAISE is narrow: it mandates disclosure and incident reporting, not bans, caps, or licensing. It mirrors existing requirements for clinical trial disclosure (FDA Form 1572) and aviation incident reporting (NTSB Form 6120.1/2), both of which coexist with robust innovation. Yet the intensity of the pushback suggests something else is at stake: control over narrative, timing, and definition. As TechCrunch reports, neither PAC cites empirical evidence that RAISE-like rules have slowed R&D elsewhere, nor do they reference the EU AI Act’s similar transparency obligations, in force for general-purpose AI systems since August 2025, with no documented drop in EU-based foundation model releases.
What this means for thinking
This episode reveals how quickly “AI safety” and “AI innovation” are being weaponized as rhetorical binaries—even when the actual policy under debate is modest, precedent-based, and technically lightweight. RAISE doesn’t ban models, restrict compute, or mandate audits. It asks developers to say what they tested for—and when something went wrong. That’s not radical; it’s baseline professional accountability. Yet framing it as “pro- or anti-AI” flattens real trade-offs: e.g., whether speed of release should outweigh traceability of harm, or whether opacity serves users—or shareholders.
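To make “technically lightweight” concrete: the bill, as summarized above, specifies what must be reported, not how. A minimal sketch of what a compliant incident record could look like follows; it is hypothetical, every field name and type is an assumption rather than anything RAISE prescribes, and it exists only to show how small the disclosure surface is.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IncidentCategory(Enum):
    """The three failure classes in RAISE's definition of a
    'serious system incident,' as summarized in this piece."""
    HARM = "failure causing harm"
    DECEPTION_AT_SCALE = "deception at scale"
    UNAUTHORIZED_AUTONOMY = "unauthorized autonomous action"

@dataclass
class IncidentReport:
    """Hypothetical minimal disclosure record. RAISE does not
    prescribe a format; all fields below are illustrative."""
    developer: str                 # reporting organization
    model_identifier: str          # which system failed
    discovered_on: date            # when the failure was found
    category: IncidentCategory     # which statutory class it falls under
    summary: str                   # plain-language account of what happened
    safety_tests_run: list[str] = field(default_factory=list)  # protocols disclosed pre-release
    mitigation: str = ""           # what the developer did in response
```

Filling out a record like this is closer in burden to an NTSB incident form than to a licensing regime, which is precisely the distinction the attack ads collapse.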
Critics rightly note that disclosure alone won’t prevent misuse, but neither does silence. And while some argue voluntary frameworks (like the Frontier Model Forum’s “Safety Framework”) are sufficient, that forum includes only four companies, provides no third-party verification, and defines “serious incident” more narrowly than RAISE (excluding, for example, coordinated disinformation campaigns unless they involve physical infrastructure). In contrast, RAISE’s definition aligns with NIST’s AI Risk Management Framework v2.0 (Section 3.2), which explicitly includes “harm to democratic processes” as a high-impact risk category.
Also worth flagging: TechCrunch’s account is a news report, not a press release, but it relies heavily on unnamed “campaign insiders” and doesn’t independently verify claims about RAISE’s economic impact. Anthropic’s funding of AI Forward is transparently disclosed; Tech Innovation Now’s opacity raises questions about who benefits from painting transparency as obstruction. This isn’t about “more regulation vs. less.” It’s about who gets to define what counts as responsible behavior, and whether that definition emerges from public law or private consensus documents written behind closed doors.
None of this proves RAISE will work perfectly. But treating basic incident reporting as a partisan litmus test—rather than a minimal step toward legibility—distorts the conversation before it begins. Real accountability starts with knowing what happened. Not speculating what might happen.