Government at the Speed of Progress

Roots of Progress Institute · Feb 17, 2026

Audio Brief

Transcript
This episode explores Jennifer Pahlka's argument that the government's industrial-era operating model is failing to meet the demands of a modern, complex world. There are three key takeaways from the discussion of Recoding America. First, digitizing bureaucracy without simplifying it is a trap. Second, procedural bloat is actively destroying state capacity. Third, artificial intelligence must be used to reduce complexity, not just manage it.

Pahlka argues that simply moving paper forms to a website is insufficient. Instead of dragging industrial processes into the internet era, governments must leapfrog directly to an AI-driven model. The focus should be on "work simplification," an Eisenhower-era concept that interrogates whether a task is necessary before automating it. If a process requires nineteen steps for approval, the goal should be to use political capital to reduce those steps to three, rather than building expensive software to manage all nineteen.

The conversation highlights a dangerous cycle known as the procedure fetish. Anxiety about government legitimacy leads to more oversight rules, which paradoxically slow down service delivery and cause failure. Pahlka illustrates this with the Oakland Police Department, where officers spend eighty-five percent of their time on paperwork due to nine different oversight bodies. The result is a government that drives with the gas pedal down on funding while the emergency brake of compliance is fully engaged.

Regarding artificial intelligence, Pahlka offers a crucial warning. There is a strong temptation to use large language models to simply handle existing complexity, such as summarizing thousands of pages of regulations. This is the wrong approach. Instead, AI should be deployed to identify, untangle, and remove contradictory or obsolete regulations. Using technology to automate a bad process will only accelerate government failure, whereas using it to deconstruct legal frameworks can restore actual state capacity. Ultimately, modernization is not about buying software, but about shifting from rigid waterfall planning to a test-and-learn operating model that prioritizes outcomes over compliance.

Episode Overview

  • Jennifer Pahlka, author of Recoding America, argues that the current "industrial era" operating model of government is fundamentally incompatible with the speed and complexity of the modern world.
  • Through examples ranging from insect extinction permits to failed 911 responses, she illustrates how "procedural bloat" and rigid adherence to outdated processes are destroying state capacity.
  • The talk outlines a critical pivot: rather than simply digitizing existing bureaucracy, we must use AI and modern methodologies to deconstruct and simplify the underlying legal and regulatory frameworks.
  • This episode serves as a wake-up call for technologists and policymakers, warning that using AI to merely manage complexity—rather than reduce it—will accelerate government failure.

Key Concepts

  • State Capacity vs. Policy Intent

    • Pahlka distinguishes between the intent of a law (e.g., getting unemployment checks to people) and state capacity (the actual ability to deliver that outcome). She argues that policymakers often ignore the "plumbing," resulting in a collapse of implementation where grand promises are made but the machinery to fulfill them does not exist.
  • The Procedure Fetish (Procedural Bloat)

    • This is a cyclical trap where anxiety about government legitimacy leads to the creation of more oversight rules and procedures. Ironically, these added layers slow down service delivery and cause failure, which increases public anxiety, leading to demands for even more procedures. Pahlka uses the example of the Oakland Police Department having nine different oversight bodies, forcing officers to spend 85% of their time on paperwork rather than patrolling.
  • The "Operating Model" Mismatch

    • Government currently operates on an industrial-era model defined by waterfall planning, rigid adherence to 20-year-old project plans, and a focus on compliance over outcomes. Pahlka argues we cannot fix this by simply "buying technology." We must shift to a digital-era operating model characterized by iterative testing, "skunkworks" teams, and empowering civil servants to make judgment calls rather than blindly following rigid statutes.
  • The Danger of AI in Bureaucracy

    • A critical distinction is made regarding the use of Large Language Models (LLMs). There is a temptation to use AI to handle complexity (e.g., using AI to read 7,000 pages of regulations). Pahlka warns this is the wrong approach, likening it to Mickey Mouse in Fantasia losing control of the broomsticks. Instead, AI should be used to reduce complexity—to identify, untangle, and simplify contradictory regulations so they are human-readable and executable.

Quotes

  • At 3:06 - "A system that is this at odds with the needs of society actually cannot hold, and if you don't disrupt it, someone else will. And you may not like how someone else disrupts it." - explains the urgency of reform, framing bureaucratic inefficiency not just as an annoyance, but as an existential threat to democratic stability.

  • At 11:13 - "In order to have that state capacity, we need to update the operating model of government... Do you have the right people? Are they focused on the right work? Do you have purpose-fit systems for the work? And are we able to operate in test-and-learn frameworks?" - clarifies that "modernization" is not about buying software, but about changing the four fundamental pillars of how government functions.

  • At 14:50 - "When we constantly add and never remove, that increased proceduralism progressively overburdens the state and exacerbates the exact problems that you were trying to solve... creating inaction, poor service, bad outcomes, and increasing that anxiety." - describes the negative feedback loop that traps government agencies in a state of paralysis, preventing them from serving the public.

  • At 20:00 - "It's very easy to let AI solve the problems of complexity by just handling the complexity, and that is the wrong choice. We have to actually change that... we have to reduce that complexity." - provides a crucial warning about the application of AI, arguing that automating a bad process is dangerous and that technology should instead be used to simplify the underlying requirements.

Takeaways

  • Leapfrog to the AI Era, Don't Just Digitize

    • Stop trying to drag industrial-era processes into the internet era (e.g., digitizing paper forms that shouldn't exist). Skip the "digitization" phase and move directly to using AI to interrogate the necessity of the work itself. Use tools to analyze regulatory codebases and identify contradictions, redundancies, and obsolescence that human staffers no longer have the capacity to review. A minimal sketch of what this kind of review could look like appears after this list.
  • Rebalance "Stop Energy" and "Go Energy"

    • When evaluating government projects or reforms, assess the balance of constraints versus enablement. Pahlka notes we currently drive with the gas pedal down (funding/mandates) while the emergency brake is fully engaged (compliance/oversight). To apply this, identify specific compliance requirements (like the Paperwork Reduction Act or NEPA reviews) that can be simplified or removed to release the "brake."
  • Implement "Work Simplification" before Automation

    • Before applying technology to a government problem, apply the Eisenhower-era concept of "work simplification." Audit the workflow to see if the underlying steps are necessary. If a process requires 19 steps to approve a decision, use political capital to reduce the steps to 3 before building the software to manage those steps.
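
As a concrete illustration of the first takeaway above, here is a minimal sketch of how a team might point a large language model at a regulatory corpus to flag candidate contradictions, redundancies, and obsolete provisions for human review. Everything here is an assumption for illustration: the chunking scheme, the prompt wording, and the complete() helper, which stands in for whatever LLM API the team actually uses. Nothing in this sketch comes from the episode itself.

```python
"""Sketch: using an LLM to surface candidate contradictions in a body of regulations.

Assumptions (not from the episode): regulations are stored as plain-text files in a
directory, and complete(prompt) is a thin wrapper around whatever LLM API is
available. The model only flags issues; human reviewers decide what to simplify.
"""

from pathlib import Path


def complete(prompt: str) -> str:
    """Hypothetical LLM call. Replace with your provider's chat-completion client."""
    raise NotImplementedError("Wire this up to a real LLM API.")


def chunk(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long regulation into model-sized pieces on paragraph boundaries."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current:
        chunks.append(current)
    return chunks


def review_regulations(corpus_dir: str) -> list[dict]:
    """Flag contradictory, redundant, or obsolete provisions in each file for triage."""
    findings = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        for i, piece in enumerate(chunk(path.read_text())):
            prompt = (
                "You are auditing government regulations for simplification. "
                "List any provisions in the excerpt below that (a) contradict one "
                "another, (b) duplicate another provision, or (c) reference obsolete "
                "technology, offices, or deadlines. Cite section numbers.\n\n"
                f"Excerpt from {path.name}:\n{piece}"
            )
            findings.append({"file": path.name, "chunk": i, "flags": complete(prompt)})
    return findings
```

In keeping with Pahlka's warning, the model's output here is only raw material for the "work simplification" step: a queue of candidate provisions for humans to consolidate or repeal, not an automation layer wrapped around the existing process.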