How California Will Regulate Chatbots | EP 158

Hard Fork Oct 17, 2025

Audio Brief

This episode examines new California laws regulating AI and social media, corporate intimidation tactics against AI critics, and the flood of AI-generated content reshaping the internet. Three key takeaways emerge from the discussion. First, state-level regulation, particularly from California, is now the primary force shaping US AI and tech policy. Second, the conflict between AI developers and safety advocates is intensifying, with powerful companies potentially using legal tactics to intimidate critics. Third, the internet is being flooded with AI-generated "slop," requiring users to develop new critical awareness.

California is emerging as a de facto national leader in tech policy due to federal legislative inaction. New state laws cover AI transparency, deepfake liability, and enhanced protections for minors online. These measures mandate safety protocols for AI companions, create liability for deepfake pornography, and require age verification and mental health warnings on social media platforms.

The podcast also highlights aggressive legal strategies employed by major AI labs. OpenAI, for instance, served a subpoena to an AI safety advocate at his home, raising concerns about attempts to silence dissent. This points to a growing tension between nonprofit groups advocating for AI safety and the profit-driven motives of large tech companies.

The hosts introduce "The Hard Fork Review of Slop," a new segment analyzing the proliferation of AI-generated content. "Slop" encompasses the vast amount of low-quality or bizarre AI content flooding the internet, from malicious hoaxes to humorous art. This deluge of misinformation, exemplified by a hoax about Dolly Parton's death, demands a new level of critical awareness for distinguishing harmless content from malicious deception and genuinely useful applications.

Ultimately, the episode underscores the challenge of regulating a rapidly evolving technology amid a fragmented policy landscape and a transforming digital information environment.

Episode Overview

  • The podcast examines a series of new California laws aimed at regulating AI and social media, establishing the state as a de facto national leader in tech policy due to federal inaction.
  • It features an interview with an AI safety advocate who was served a subpoena at his home by OpenAI, sparking a discussion on corporate intimidation tactics against critics.
  • The hosts introduce a new segment, "The Hard Fork Review of Slop," to analyze the growing wave of AI-generated content, from malicious hoaxes and bizarre art to novel projects with positive intentions.

Key Concepts

  • California as a Regulatory Leader: Due to a lack of federal action, California is setting the national standard for tech regulation with new laws covering AI transparency, deepfakes, and protections for minors.
  • Corporate Accountability and Intimidation: The episode highlights the aggressive legal strategies used by major AI labs like OpenAI, which served a subpoena to a critic at his home, raising concerns about attempts to silence dissent.
  • AI Safety vs. Commercial Interests: There is a growing tension between non-profit groups advocating for AI safety and the profit-driven motives of large tech companies, as seen in both OpenAI's legal actions and its public statements on safety.
  • The Proliferation of "Slop": The hosts use the term "slop" to describe the vast amount of low-quality or bizarre AI-generated content flooding the internet, ranging from malicious hoaxes to weirdly satisfying videos and humorous art.
  • Impact of AI Hoaxes: The discussion covers the real-world consequences of AI-generated misinformation, such as a hoax about Dolly Parton's death that required public debunking from multiple celebrities.
  • New Tech Legislation: The hosts break down specific California bills, including those mandating safety protocols for AI companions, creating liability for deepfake pornography, and requiring age verification and mental health warnings on social media.

Quotes

  • At 20:59 - "God, I wish we had a Congress that could do something about this. We need federal lawmakers paying attention to it too." - Kevin Roose lamenting the lack of federal action on tech regulation.
  • At 26:55 - "When I opened the door there was a sheriff's deputy... who was there to serve me a subpoena from OpenAI." - Nathan Calvin describing the surreal moment he was legally served at his home.
  • At 36:27 - "I believe that that is what they were doing. That is my best guess, and that is how I received it." - Nathan Calvin confirming that he perceived OpenAI's actions as a form of intimidation.
  • At 47:31 - "...we should introduce a new segment that we are calling the Hard Fork Review of Slop." - Kevin Roose officially kicking off the new segment dedicated to reviewing AI-generated content.
  • At 58:26 - "Because why is Santa throwing ass?" - A TikTok user commenting on a bizarre AI-generated image on a Walmart cookie tin that depicts Santa Claus in a strange pose.

Takeaways

  • State-level regulation, particularly from California, is currently the primary force shaping AI and tech policy in the United States.
  • The conflict between AI developers and safety advocates is intensifying, with powerful companies potentially using legal tactics to intimidate their critics.
  • The internet is being flooded with AI-generated "slop," which requires users to develop a new level of critical awareness to distinguish between harmless fun, malicious deception, and useful content.