Let's Put an End to Lies & Disinformation

10/29/2025 · 10 min read

Radio Changed Everything in 1927. AI Is Doing It Again. Here's What We Learned.

When radio could reach millions instantly, America created rules. When AI can deceive millions instantly, we're paralyzed. History shows us the way forward.

Two days before the 2024 New Hampshire primary, thousands of voters answered their phones to hear President Biden's voice telling them not to vote. The message was clear, the voice unmistakable, the intent obvious: suppress Democratic turnout.

But it wasn't Biden.

It was artificial intelligence—a deepfake commissioned by political consultant Steve Kramer, who paid a magician $150 to clone the president's voice. The call reached between 5,000 and 25,000 voters. It urged them to "save" their vote for November, giving the false impression that voting in the primary would somehow disqualify them from the general election.

The consequences? Kramer now faces a $6 million FCC fine and 26 criminal charges, including 13 felony counts of voter suppression. The telecom company that transmitted the calls paid $1 million to settle. FCC Chair Jessica Rosenworcel called it "unnerving" and warned that "any one of us could be tricked into believing something that is not true."

But here's what should terrify you more than the crime itself: the technology that created this fake cost $150 and took eight minutes to produce.

Think about that. Eight minutes. One hundred fifty dollars. Democracy on the line.

And if you think this was an isolated incident, a one-off stunt that got shut down—you're not paying attention.

I've Watched This Movie Before

In 1938, Orson Welles convinced panicked listeners across the country that Martians were invading Earth with a single radio broadcast. The panic was real. The chaos was immediate. And the lesson was clear: when a new technology can reach millions instantly, society needs rules.

But we didn't learn that lesson in 1938. We learned it eleven years earlier.

Before 1927, American radio was chaos. Anyone could broadcast on any frequency. Stations drowned each other out. Powerful transmitters obliterated weaker ones. Important communications—ship distress signals, emergency broadcasts—disappeared into static. Without government control, the medium was becoming useless, a cacophony of competing voices where none could be clearly heard.

So Congress passed the Radio Act of 1927.

The Act created the Federal Radio Commission and gave it a simple mandate: grant broadcasting licenses only when doing so served the "public convenience, interest, or necessity." By 1929, the FRC had articulated what would become known as the Fairness Doctrine: "The public interest requires ample play for the free and fair competition of opposing views."

This wasn't censorship. It was accountability.

Radio could reach millions instantly. That power came with responsibility. Broadcasters were given access to a scarce public resource—the airwaves—in exchange for serving the public good. They couldn't just blast propaganda. They couldn't drown out opposing voices. They had to present controversial issues fairly, giving reasonable opportunity for contrasting viewpoints.

In 1949, the FCC formalized these principles in "In the Matter of Editorializing by Broadcast Licensees." In 1959, Congress enshrined the Fairness Doctrine into law, amending the Communications Act to require that "a broadcast licensee shall afford reasonable opportunity for discussion of conflicting views on matters of public importance."

The Supreme Court upheld it in 1969 in Red Lion Broadcasting Co. v. FCC, ruling that the public's right to be informed outweighed a broadcaster's desire to push only one viewpoint.

Why? Because democracy requires informed citizens, not manipulated masses.

The Fairness Doctrine survived until 1987, when the Reagan administration abolished it, citing a "chilling effect" on free speech and arguing that cable television and multiple media outlets made it unnecessary. Whether that decision was wise is debatable—many trace today's hyperpartisan media landscape directly to the doctrine's repeal.

But here's what's not debatable: the pattern.

Transformative communication technology → chaos → regulation → democratic stability.

Radio could reach millions instantly, so we created rules. Television could shape public opinion overnight, so we created standards. The internet could connect the world, so we created... well, we're still figuring that part out.

And now AI can generate believable lies in eight minutes for $150, and we're doing... what, exactly?

I've spent three decades as a CEO, COO, and systems engineer. I built AI-powered, camera-guided robots in 1991—back when "AI" wasn't a buzzword, it was just really hard programming. I've watched every major technology paradigm shift since the 1980s. I've seen this movie before.

The question isn't whether we'll regulate AI's influence on democracy. It's whether we'll do it before it's too late.

Here's What's at Stake

Let me be blunt: we're not facing a hypothetical future threat. We're living in it right now.

Deepfake attempts increased 3,000% in 2023, according to fraud detection specialists. That's not a typo. Three thousand percent. And 2024 was the biggest election year in human history—3.7 billion eligible voters in 72 countries went to the polls while AI-generated misinformation flooded their feeds.

The New Hampshire Biden robocall wasn't an anomaly. It was a preview.

In Gabon, a deepfake video of the president sparked a coup attempt by eroding trust in public institutions. In Taiwan's 2024 election, China deployed AI-generated disinformation through Taiwanese proxies specifically to undermine faith in democracy itself. In India, deepfakes showed Bollywood celebrities criticizing Prime Minister Modi and endorsing opposition candidates—spread virally through WhatsApp and YouTube. In the United States, AI-generated images showed Trump with Black supporters he never met, Kamala Harris in Soviet garb, and hurricane victims in disaster areas that didn't exist.

Gary Marcus, a cognitive scientist at NYU who studies AI, explains the economics that make this so dangerous: "Anybody who wants to do this stuff... can make more of it at very little cost, and that's going to change their dynamic. Anytime you make something that was expensive cheaper, that has a huge effect on the world."

Translation: propaganda used to require a state-sponsored troll farm, professional video editors, and significant resources. Now it requires a laptop, an internet connection, and pocket change.

But the scale isn't even the worst part. The worst part is that it works.

Research shows that people cannot reliably distinguish AI-generated news from authentic journalism. AI-generated propaganda achieves measurable psychological manipulation. When people view content on smartphones, they blame poor quality on bad cellular service rather than questioning authenticity. When deepfakes confirm existing biases, people don't scrutinize them at all.

And then there's what I call the "liar's dividend"—the most insidious effect of all.

Once deepfakes become commonplace, politicians can dismiss real evidence by claiming it's AI-generated. Donald Trump already does this, questioning crowd sizes at Kamala Harris rallies and suggesting the footage is fake. The phrase "that's probably AI" becomes a get-out-of-jail-free card for anyone caught saying or doing something damning—even when the evidence is real.

When everything can be faked, nothing can be trusted. When nothing can be trusted, democracy becomes theater. When democracy becomes theater, eventually, it becomes tyranny.

We're not just losing political arguments. We're losing the ability to have arguments based on facts. We're losing shared reality. We're losing the foundation on which democratic discourse depends: the basic agreement that some things are true and some things are false.

This isn't a technology problem. It's an existential threat to self-governance.

Every File Has a Signature

Here's the good news: the technology to solve this already exists. We just lack the will to implement it.

Every digital file carries metadata—embedded information about its origin, creation date, editing history, and authenticity. Audio files, video files, images—all of them can carry these digital fingerprints. Multiple rounds of editing degrade or eliminate this source data, which is both the problem and the potential solution.
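To see why editing destroys a file's fingerprint, consider a minimal sketch (the byte strings and the `fingerprint` helper are invented for illustration): any change to a file, however small, produces a completely different cryptographic digest, which is why authenticity has to be captured at creation, before edits erase the trail.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

original = b"video frame bytes straight from the camera"
reencoded = b"video frame bytes straight from the camera "  # one byte changed

# The same bytes always produce the same fingerprint...
assert fingerprint(original) == fingerprint(original)
# ...but any edit, however small, produces a different one.
assert fingerprint(original) != fingerprint(reencoded)
```

This is also why verification can't be bolted on after the fact: once a file has been re-encoded a few times, the original fingerprint is unrecoverable.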

Some companies are trying to address this. Meta, YouTube, and TikTok now require users to disclose when they post AI-generated content. Meta is working with OpenAI, Microsoft, and Adobe to develop industry-wide standards for automatically labeling AI-generated images. Biometric AI tools can verify the authenticity of voices, faces, and other identifying characteristics—fighting AI with AI.

But voluntary compliance doesn't work. It never has.

Why? Because there are no consequences for violations. No standardization across platforms. No enforcement mechanisms. And most importantly, bad actors can easily strip metadata or circumvent disclosure requirements.

The current approach is like asking bank robbers to please wear name tags during heists.

What we need is mandatory authentication protocols—not suggestions, not guidelines, but requirements with teeth.

Here's what that looks like:

Mandatory authentication for anyone claiming to report "news." If you're going to present yourself as a journalistic entity, you must verify the authenticity of audio and video before publication. Blockchain-style verification systems that preserve the editing chain. Real-time flagging of AI-generated content. Heavy penalties—criminal and civil—for organizations that knowingly strip authentication data or publish unverified deepfakes.
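To make "blockchain-style verification that preserves the editing chain" concrete, here is a minimal Python sketch. Everything in it (the record fields and the `link` and `verify` helpers) is invented for illustration, not an existing standard: each edit record stores the hash of the record before it, so rewriting history anywhere in the chain is immediately detectable.

```python
import hashlib
import json

def _digest(prev: str, action: str, content: str) -> str:
    """Deterministically hash one record's fields (sorted keys for stability)."""
    payload = json.dumps({"prev": prev, "action": action, "content": content},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def link(prev_hash: str, action: str, content_digest: str) -> dict:
    """Create one tamper-evident record in an editing chain."""
    return {"prev": prev_hash, "action": action, "content": content_digest,
            "hash": _digest(prev_hash, action, content_digest)}

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and confirm each record points at its predecessor."""
    prev = "genesis"
    for rec in chain:
        if rec["prev"] != prev:
            return False
        if rec["hash"] != _digest(rec["prev"], rec["action"], rec["content"]):
            return False
        prev = rec["hash"]
    return True

chain = [link("genesis", "capture", "abc123")]
chain.append(link(chain[-1]["hash"], "crop", "def456"))
assert verify(chain)

chain[0]["action"] = "fabricate"   # tampering with history...
assert not verify(chain)           # ...is immediately detectable
```

Real provenance efforts such as the C2PA standard build on this idea, adding cryptographic signatures so a record can be tied to a specific camera, editor, or publisher rather than merely chained to its predecessor.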

I built AI-powered systems in the 1990s before most people knew what machine learning was. Trust me: the technology to authenticate digital content exists. Voice analysis can detect synthetic speech patterns. Video forensics can identify AI-generated frames. Metadata verification can track content from creation to publication.

What's missing isn't the technical capability. What's missing is the regulatory framework that forces everyone to use it.

We have the tools. We need the rules.

We Need Standards for the Digital Age

So here's what a modern AI Truth in Broadcasting Standard would actually look like—a framework based on the principles that worked in 1927, adapted for the challenges of 2025.

First, clear definitions.

What constitutes "news" versus "entertainment" versus "opinion"? Right now, we have entities calling themselves news organizations while operating as propaganda outlets. The line between journalism and performance art has been deliberately blurred. A new standard would define who gets to claim journalistic authority—and what responsibilities come with that claim.

If you want the privileges of press protection, you accept the obligations of press standards.

Second, mandatory disclosures.

Twenty-four states now regulate political deepfakes, but they do so inconsistently, creating a patchwork that sophisticated bad actors can easily exploit. Minnesota prohibits political deepfakes within 90 days of an election. Arizona requires disclosure for AI-generated political material in the same timeframe. Florida and other states have their own variations.

This state-by-state approach is well-intentioned but ultimately futile. We need a federal standard that supersedes the patchwork—one that applies uniformly across all platforms, all states, all election cycles.

Third, strict verification requirements.

News organizations must verify source authenticity before publication. AI-generated content must be clearly labeled—not in fine print, not in metadata only tech-savvy users can access, but prominently and unmistakably. Edited content must preserve original metadata showing the chain of modifications. Think of it as a chain of custody for digital evidence.

Fourth, real enforcement mechanisms.

The FCC has already established that it can regulate AI-generated communications. In February 2024, shortly after the Biden robocall, the agency declared AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act. Steve Kramer's $6 million fine and 26 criminal charges sent a clear message: election interference through AI carries serious consequences.

We need to expand that precedent.

Heavy fines for violations. Criminal penalties for intentional deception. Civil liability for platforms that knowingly distribute deepfakes. Revocation of broadcasting licenses for repeat offenders. The consequences must be severe enough to deter bad actors and wealthy organizations alike.

Fifth, critical exceptions to protect free speech.

This is where critics will scream "First Amendment violation!" So let's be crystal clear about what this isn't.

This isn't censorship. It's truth in labeling.

Satire and parody remain protected—but must be clearly labeled as such. Artistic expression is protected. Private communications are protected. News organizations operating in good faith retain Section 230 protections with reasonable reforms that balance platform immunity with accountability.

The Radio Act wasn't censorship—it was accountability. It didn't restrict what could be said; it restricted who could monopolize the saying of it, and it required those with access to public airwaves to serve the public interest.

Similarly, an AI Truth Standard wouldn't restrict speech. It would require authenticity in representation. It would mandate transparency about content origin. It would create consequences for deliberately deceiving the public about who's speaking, what's real, and what's manufactured.

Freedom of speech doesn't include the freedom to lie about who's speaking. It never has.

The Radio Act survived constitutional challenges precisely because it served a compelling public interest: ensuring that transformative communication technology didn't become a weapon against democracy itself. That same principle applies to AI today—arguably even more urgently.

The Window Is Closing

Let me tell you what keeps me up at night.

It's not that AI can generate convincing deepfakes. It's that we're acting as if this will somehow solve itself. As if market forces will naturally create a functioning system. As if the same technology companies that profit from engagement at any cost will suddenly prioritize truth over clicks.

The Brennan Center for Justice put it starkly: "AI-fueled deception could become an enduring feature of political campaigns, eroding the very foundation of democratic governance." Could become. Not will become. The difference between possibility and inevitability is action.

And the window for action is closing.

Every day, the technology improves. Deepfakes become harder to detect. Generation becomes faster and cheaper. Distribution becomes more automated. Foreign adversaries develop more sophisticated influence operations. The 2026 midterms are barely a year away. The 2028 presidential election is just beyond that.

How many more Biden robocalls do we need before we admit we have a problem? How many fabricated disasters? How many fake endorsements? How many foreign interference campaigns?

But here's what gives me hope: we've done this before.

The Radio Act passed in 1927 when chaos demanded it. The Clean Air Act passed when pollution threatened public health. Securities regulations passed when fraud endangered markets. Food safety standards passed when contamination killed people.

The pattern is always the same: Crisis. Recognition. Action.

We're at the recognition phase. The crisis is undeniable. The question is whether we'll act while democracy still works—or wait until AI has made "truth" meaningless.

History offers a roadmap. In 1927, Congress faced a transformative communication technology that could reach millions instantly. Radio was creating chaos, drowning out truth, enabling bad actors to manipulate public opinion on an unprecedented scale.

Congress acted. It created rules. It established standards. It mandated accountability. And democracy adapted.

AI is radio on steroids. It's faster, cheaper, more sophisticated, more accessible, and infinitely more dangerous. It doesn't just amplify messages—it fabricates reality. It doesn't just shape opinion—it manufactures false consent.

If radio required the Radio Act, AI demands the AI Truth Standard.

We know what needs to be done. The technology exists. The precedent exists. The constitutional framework exists. What's missing is political will.

So here's what you can do:

Contact your representatives. Demand federal AI truth and labeling standards. Support organizations like the Brennan Center, the Electronic Frontier Foundation, and state-level groups fighting for election integrity. Practice digital literacy—verify before you share, question before you believe, and refuse to engage with unlabeled AI content.

Treat unverified claims the way you'd treat unsigned letters: with suspicion.

If enough people demand accountability, politicians will deliver it. If enough voters refuse to be manipulated, manipulation becomes less effective. If enough citizens insist on truth in digital communication, we can build systems that enforce it.

But we have to act. Now. Not after the next election. Not after the next scandal. Not after deepfakes have destroyed what's left of our shared reality.

Radio changed everything in 1927, seemingly overnight. We adapted by creating rules that balanced innovation with responsibility, free speech with accountability, technological power with democratic values.

AI is changing everything faster—and the stakes are immeasurably higher.

We can adapt again.

But only if we stop pretending this will solve itself.

The technology exists to verify truth. The legal framework exists to require it. The historical precedent exists to justify it. What we need now is the courage to demand it—before democracy becomes just another word we used to believe in.

The question isn't whether we need this. History already answered that in 1927.

The question is whether we'll act while democracy still works—or wait until AI has made "truth" meaningless.

Radio changed everything overnight. We adapted.

AI is changing everything faster. We can adapt again.

But only if we act now.