How A.I. Is Shaping the Minneapolis Narrative

Hard Fork, Jan 30, 2026

Audio Brief

This episode explores the evolving relationship between state power, citizen surveillance, and the erosion of truth, arguing that governments are now adopting influencer tactics to win narrative wars against citizens. There are three key takeaways from the discussion.

First, the concept of the Liar’s Dividend suggests that the biggest danger of AI isn't just people believing fake images, but that the existence of deepfake technology allows bad actors to dismiss real evidence as fabricated. As the line between truth and fiction blurs, public trust erodes, enabling authorities to claim that genuine documentation of abuse or violence is merely AI-generated. This creates a post-truth environment where the provenance of evidence matters more than the visual proof itself.

Second, civil unrest has shifted into what the hosts call Phone-to-Phone Combat. We are moving away from citizens simply filming the state for accountability and toward a form of symmetrical warfare where the state also films citizens to identify and intimidate them. In this landscape, social media strategy has become a primary objective of government agencies, often superseding traditional policy goals as they prioritize creating viral content to secure a narrative advantage.

Third, the computing landscape is shifting from an App Model to a Genie Model through agentic AI. Instead of users opening specific software for specific tasks, the future interface involves stating a desired outcome to an AI agent that executes complex tasks across a system. While this promises to abstract away friction, adoption faces significant institutional roadblocks. A wide gap is forming between power users granting AI full autonomy and the general corporate world, where IT policies and bureaucracy block even basic tools.

The episode concludes by noting that while the technology for autonomous agents is accelerating, real-world implementation will likely be slower than expected due to necessary safety protocols and organizational risk aversion.

Episode Overview

  • This episode explores the evolving relationship between state power, citizen surveillance, and the erosion of truth, arguing that governments are now adopting "influencer tactics" to win narrative wars against citizens.
  • It introduces the "Liar's Dividend," a phenomenon where the mere existence of deepfake technology allows bad actors to dismiss genuine evidence of abuse as AI-generated fabrications.
  • The discussion shifts to the cutting edge of personal AI, specifically comparing local, open-source AI agents against corporate cloud models, and the security risks versus privacy benefits involved.
  • The hosts analyze the "Genie Model" of computing, predicting a move away from individual apps toward agentic AI that executes complex tasks across your system, while acknowledging the current "productivity theater" and bugs.

Key Concepts

  • The "Liar’s Dividend" The danger of deepfakes isn't just that people believe fake images; it's that the existence of AI tools allows authorities to dismiss real evidence as fabricated. When the line between truth and fiction blurs, public trust erodes, allowing the state to claim that genuine documentation of abuse or violence is merely AI-generated.

  • Phone-to-Phone Combat (Symmetrical Surveillance): Civil unrest has shifted from citizens filming the state for accountability (e.g., the George Floyd protests) to a symmetrical warfare of documentation. Citizens film police for protection, while the state now films citizens to identify, dox, or intimidate them. The smartphone is no longer a tool of passive observation but an active weapon used by both sides to secure a narrative advantage.

  • Social Media as State Policy: "Winning on social media" has become a primary objective of government agencies, often superseding traditional policy goals. Agencies are operating like content houses, employing producers to create viral clips, rather than just law enforcement bodies. This represents the "weaponization of spectacle," where crises are exploited specifically to generate content that serves a political agenda.

  • Tech as Political Infrastructure: Tech platforms are not neutral spaces for debate; they are the infrastructure providers for conflict. CEOs of these companies act as "politicians" managing varied constituencies (users, employees, the White House), often releasing vague statements to avoid political retaliation while managing the algorithms that prioritize outrage.

  • The "Genie" vs. "App" Operating Model Agentic AI proposes a shift from the "App Model" (users opening specific software for specific tasks) to a "Genie Model." In this future, the user states a desired outcome ("book a reservation"), and the AI figures out which tools to use. This aims to abstract away the friction of managing individual apps, though current iterations remain buggy.

  • Persistent Memory via Markdown: To solve the "goldfish memory" problem of current large language models (LLMs), newer local agents use a workaround: they write important information and preferences into a local Markdown file. The agent reads this file before executing tasks, effectively simulating long-term memory without relying on limited context windows. (A minimal sketch appears after this list.)

  • The AI Adoption Gap: A widening chasm exists between "Wireheads" (power users granting AI full autonomy) and the general corporate world, where IT policies often block even basic tools. This "institutional friction" suggests that while technology accelerates, societal adoption will be slowed by bureaucracy and safety protocols.
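
A minimal sketch of the "Genie Model" routing idea described above, assuming a trivial keyword router in place of a real planning model: the user states an outcome, and the agent picks a tool rather than the user opening an app. The tool names (make_website, book_reservation) are hypothetical, not from the episode.

```python
# Sketch of the "Genie Model": state an outcome, let the agent route it to a
# tool. A real agent would ask an LLM to plan and choose tools; here a simple
# keyword match stands in so the sketch runs on its own.
from typing import Callable

def make_website(request: str) -> str:
    return f"Scaffolding a site for: {request!r}"

def book_reservation(request: str) -> str:
    return f"Searching for reservation slots: {request!r}"

# Tool registry: the "apps" the genie can reach for.
TOOLS: dict[str, Callable[[str], str]] = {
    "website": make_website,
    "reservation": book_reservation,
}

def genie(request: str) -> str:
    """Route a stated outcome to a tool instead of asking the user to open an app."""
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            return tool(request)
    return "No tool matched; a real agent would fall back to model planning."

if __name__ == "__main__":
    print(genie("I wish for you to make me a website"))
    print(genie("Book a reservation for two on Friday"))
```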
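
And a minimal sketch of the Markdown-memory workaround, under the assumption that the agent simply appends notes to a local .md file and re-reads it before each task; the file name and note format here are illustrative, not the actual layout used by the tool discussed in the episode.

```python
# Sketch of "Markdown memory": append notes to a local .md file, then re-read
# the file before each task, simulating long-term memory outside the model's
# context window.
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("memory.md")  # hypothetical location

def remember(note: str) -> None:
    """Append a dated bullet to the memory file."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def recall() -> str:
    """Read the whole memory file, to be prepended to the next prompt."""
    return MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""

def build_prompt(task: str) -> str:
    """The 'read memory, then act' step performed before each task."""
    return f"## Remembered context\n{recall()}\n## Task\n{task}\n"

if __name__ == "__main__":
    remember("User built a CSV-cleaning tool yesterday; prefers pandas.")
    remember("User's site runs on a static generator.")
    print(build_prompt("Extend yesterday's tool to handle TSV files."))
```

Pasting the recalled block at the top of a new chat session is the manual equivalent suggested in the Takeaways below.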

Quotes

  • At 0:02:08 - "There is actually now state power, the government, that is using these tools in ways that we have not seen before in America." - Context: Highlighting that the government is no longer just regulating the internet, but actively using influencer tactics and content creation as a weapon.
  • At 0:04:30 - "These guys are politicians too. They are in their own way heads of state... they represent hundreds of millions of users and giant employee bases. And so they have to get in there and play politics." - Context: Explaining why tech CEOs release lukewarm statements on violence; they are balancing regulatory risk against employee morale.
  • At 0:09:29 - "Winning on social media has become almost the entire point... Yes, there are policy objectives here, but in a very real way, they seem secondary to getting the most retweets." - Context: Identifying a shift where law enforcement strategy is driven by engagement metrics rather than public safety.
  • At 0:12:08 - "Because people know that evidence can be fabricated, no matter what evidence you see now, you're always wondering, 'Was this maybe fabricated?' And that just erodes trust in our society more broadly." - Context: Defining the core danger of the "Liar's Dividend"—it is a war on shared reality.
  • At 0:12:34 - "But when the administration has been asked... 'Why are you sharing these obviously doctored images?' a spokesman for the White House just said, 'The memes will continue.'" - Context: Illustrating the government's adoption of internet troll culture as official communication policy.
  • At 0:17:36 - "There's this recognition in the Trump administration that being filmed and having your video put on social media is dangerous to you... We've started to see a lot of threats against people who are doing this." - Context: Marking a policy pivot where the state attempts to reframe the constitutional right to film police as a form of "doxing."
  • At 0:28:43 - "What Multibot does is it just writes memories to a markdown file and then it continuously revisits that. In my experience, it has been a little bit better at understanding... if you built a tool with it the previous day." - Context: Explaining the technical workaround allowing local AI agents to "remember" user preferences across sessions.
  • At 0:35:25 - "What if instead of having a bunch of apps on your computer... there was just a genie who lived inside your computer. And every time you had a wish, you could go to the genie and say... 'I wish for you to make me a website.'" - Context: Articulating the "Genie Model" as the future interface of computing, replacing discrete apps.
  • At 0:39:19 - "People in SF are putting multi-agent Claude swarms in charge of their lives... people elsewhere are still trying to get approval to use Copilot in Teams, if they're using AI at all." - Context: Highlighting the extreme inequality in AI adoption based on corporate risk tolerance.
  • At 0:42:09 - "The diffusion of AI technologies will be slower than the accelerationists think because it will just run into a lot of institutional roadblocks and bottlenecks along the way. Like here in the real world, we do have IT policies." - Context: A reality check on AI hype, noting that bureaucracy is the main governor of technological speed.
  • At 0:45:33 - "[Andrej Karpathy said] 'Easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks.'" - Context: Validating that for specific verticals like coding, productivity gains are already transformative and real.

Takeaways

  • Expect skepticism of all visual evidence: Prepare for a "post-truth" environment where even genuine video evidence of misconduct will be dismissed as AI-generated. Verify sources rigorously before accepting viral content as fact.
  • Learn the "Genie Model" workflow now: Even if current AI agents are buggy, start practicing the workflow of delegating complex tasks to AI rather than doing them manually. The skills of prompting and managing agents will remain relevant even as the tools change.
  • Recognize the "Liar's Dividend" in arguments: Be aware that in disputes (legal, political, or corporate), bad actors will increasingly use the possibility of AI manipulation to cast doubt on real evidence. Documentation chains and metadata will become more critical than the image itself.
  • Adopt "Markdown Memory" for personal AI use: If you use AI tools extensively, manually maintain a "preferences" or "context" file (in Markdown or text) that you can paste into new chat sessions to simulate long-term memory and skip the onboarding repetition.
  • Evaluate security vs. convenience in local AI: Before running local AI agents that have access to your file system, understand the security risks (like prompt injection). The privacy of keeping data local comes with the risk of giving an autonomous agent control over your files.
  • Anticipate "Institutional Friction": If you work in a large organization, expect a significant lag in AI adoption due to IT policies. You may need to advocate for "safe" AI tools or find compliant workarounds to maintain a productivity edge over competitors who are blocked by bureaucracy.