Measuring the impact of AI on software engineering – with Laura Tacho
Audio Brief
This episode uses data-driven insights to challenge the hype around AI in software engineering, separating reality from media narratives.
There are three key takeaways from this discussion. First, automating enjoyable coding tasks with AI can paradoxically decrease developer satisfaction by increasing the proportion of less desirable administrative "toil." Second, the most significant time-saving benefits of AI tools come from debugging and refactoring existing code, not simply generating new code. Third, organizations must measure AI's true impact through Developer Experience and take a structured, experimental approach to adoption.
Expanding on these points, the "AI Paradox" suggests that while AI streamlines creative coding, it leaves developers with a higher percentage of meetings and administrative work, potentially lowering job satisfaction. Companies should focus AI implementation on eliminating genuine toil like debugging and boilerplate rather than just automating creative tasks.
Regarding AI adoption, internal support doesn't guarantee widespread use; weekly adoption hovers around 65%. A significant barrier isn't developer resistance but practical issues, such as organizations failing to provide licenses to all developers. Audit these practical barriers before assuming developer pushback.
Data reveals that stack trace analysis and refactoring existing code are AI's most impactful time-saving applications, outperforming standard mid-loop code generation. Training should emphasize these high-value use cases to maximize realized benefits.
Finally, justifying AI investments requires framing them as tools to improve developer experience, which directly links to better business outcomes and increased velocity. Companies must establish baseline metrics for productivity and satisfaction *before* rolling out new tools to accurately quantify AI's benefits and ensure clear ROI. This approach also encourages treating AI strategy as a portfolio of experiments, rather than rigid roadmaps.
In conclusion, a data-driven, experimental approach to AI adoption, focused on genuine toil reduction and developer experience, will yield the most effective outcomes for software engineering teams.
Episode Overview
- The podcast challenges the hype around AI in software engineering, focusing on data-driven insights to separate reality from media narratives.
- It explores the "AI Paradox," where automating enjoyable coding tasks can decrease developer satisfaction by leaving developers with more administrative "toil."
- It identifies that the most significant time-saving benefits of AI tools come from debugging and refactoring existing code, rather than simply generating new code.
- The discussion provides a framework for leaders to measure AI's true impact through Developer Experience (DevEx) and advocates for a structured, experimental approach to adoption.
Key Concepts
- The AI Paradox: Automating enjoyable creative tasks like coding can inadvertently lower developer job satisfaction by increasing the proportion of less desirable work, such as meetings and administrative toil.
- AI Adoption Barriers: Even with strong internal support, weekly adoption of AI tools hovers around 65%, with a significant reason being practical issues like organizations not making licenses available to all developers.
- High-Value AI Use Cases: Data shows that the most impactful time-saving applications for AI assistants are stack trace analysis and refactoring existing code, which outperform standard mid-loop code generation.
- Measuring ROI Through DevEx: The most effective way to justify AI investment is to frame and measure it as a tool for improving developer experience, which is a direct precursor to better business outcomes like increased velocity.
- The Need for Baselines: To accurately quantify the benefits of AI, organizations must first establish baseline metrics for developer productivity and satisfaction before rolling out new tools.
- AI as an Architectural Catalyst: The rise of AI is prompting a positive architectural shift, encouraging companies to create cleaner service interfaces and better documentation that serves both human developers and AI agents.
- Speed vs. Stability Trade-off: While AI can increase development velocity, it can also harm system stability by encouraging larger, more complex code changes that introduce new risks (see the measurement sketch after this list).
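To make the speed-versus-stability trade-off measurable, here is a minimal sketch of tracking DORA-style change failure rate around an AI tool rollout. The episode names the risk, not this implementation; the `Deployment` record shape, the rollout date, and all data below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    caused_incident: bool  # True if the deploy was linked to a rollback or incident

def change_failure_rate(deploys: list[Deployment]) -> float:
    """DORA change failure rate: the share of deployments that caused a failure."""
    if not deploys:
        return 0.0
    return sum(d.caused_incident for d in deploys) / len(deploys)

# Hypothetical records; in practice, join your deploy log with incident tickets.
ROLLOUT = date(2025, 3, 1)  # illustrative AI-tool rollout date
deploys = [
    Deployment(date(2025, 2, 10), False),
    Deployment(date(2025, 2, 20), True),
    Deployment(date(2025, 3, 5), False),
    Deployment(date(2025, 3, 12), True),
    Deployment(date(2025, 3, 19), True),
]
before = [d for d in deploys if d.day < ROLLOUT]
after = [d for d in deploys if d.day >= ROLLOUT]
print(f"Change failure rate before rollout: {change_failure_rate(before):.0%}")
print(f"Change failure rate after rollout:  {change_failure_rate(after):.0%}")
```

A rate that climbs after rollout is exactly the signal the takeaways below warn about: velocity gains arriving via larger, riskier changes.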
Quotes
- At 0:07 - "What they found was that many developers were actually feeling less satisfied." - Laura Tacho shares a counterintuitive finding from DORA research on developer sentiment after adopting AI tools.
- At 0:15 - "What was left over was more stuff that they didn't enjoy: the toil, the meetings, the administrative work." - Tacho describes the less desirable tasks that remain after the enjoyable parts of the job are automated by AI.
- At 19:26 - "We're saying that about 65% of devs use it weekly... So that means 35% are still like, 'Nope, I'm good. I'm just going to, you know, do what I did before, right?'" - Gergely Orosz highlights the significant portion of developers who do not adopt AI tools even with company-wide support.
- At 20:47 - "Some of it is just that the organization doesn't make a license available to them. They would like to use it, but the licenses aren't available." - Abi Noda provides a key, practical reason for lower-than-expected adoption rates of AI tools.
- At 24:34 - "Interestingly enough, code generation, like mid-loop code generation, is the third highest use case for saving time. But actually stack trace analysis and refactoring existing code were saving more time than the mid-loop code generation." - Abi Noda reveals that AI's primary time-saving benefits come from analysis and maintenance.
- At 27:35 - "Typing speed has never been the bottleneck in development. But now we have all this code generated faster than we can type, that's great. But it still takes me time to review that code." - Abi Noda explains that AI shifts time from typing to cognitive tasks like code review.
- At 37:31 - "Should we be writing documentation for AI or for humans? And my... answer to that question is, yes, both." - Abi Noda highlights the new imperative for clear, structured documentation.
- At 43:48 - "AI is a tool to improve developer experience. When you improve developer experience, you have better outcomes. It is, it follows like that." - Laura Tacho shares her core advice on how companies should reason about AI's role and impact.
- At 45:15 - "Start measuring now... you're just delaying success." - Laura Tacho advises companies hesitant to start measuring developer productivity, emphasizing the need for a baseline.
- At 57:32 - "I think roadmaps are on their way out in the age of AI... I think the companies that are going to win with AI are not ones that think about things in roadmaps... but think about it more as experiment portfolios." - Laura Tacho predicts a strategic shift in software development planning.
- At 67:41 - "Data beats hype every time." - Laura Tacho gives her closing advice for engineering leaders, stressing the importance of relying on metrics and evidence.
Takeaways
- To maintain developer satisfaction, focus AI tool implementation on eliminating genuine toil (e.g., debugging, boilerplate) rather than just automating the creative coding work developers enjoy.
- Before assuming developers are resistant to AI, audit practical adoption barriers and ensure that everyone who wants a license has access to one.
- Prioritize AI training on its most effective use cases—debugging stack traces and refactoring code—as this is where the most significant time savings and value are realized.
- Justify AI investments by measuring their impact on developer experience (DevEx), and establish a performance baseline before tool rollout to demonstrate clear ROI (see the baseline-snapshot sketch after this list).
- Use the adoption of AI as a catalyst to improve your team's core engineering practices, such as creating better documentation and cleaner APIs.
- When implementing AI tools to boost speed, simultaneously monitor stability metrics like change failure rate to ensure that increased velocity doesn't compromise quality.
- Adopt a flexible, experimental approach to AI strategy by treating it as a "portfolio of experiments" rather than a fixed, long-term roadmap.
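To ground the baseline advice above, below is a minimal sketch of snapshotting a few productivity and satisfaction metrics before a rollout. Every metric name, record, and score here is a hypothetical placeholder rather than data from the episode; the point is to capture the same measurements before and after the rollout so the delta is attributable.

```python
import statistics
from datetime import datetime

def median_lead_time_hours(prs: list[tuple[datetime, datetime]]) -> float:
    """Median open-to-merge time in hours, one simple throughput baseline."""
    hours = [(merged - opened).total_seconds() / 3600 for opened, merged in prs]
    return statistics.median(hours)

# Hypothetical PR records (opened_at, merged_at); in practice, pull these from
# your Git hosting API and pair them with survey-based satisfaction scores.
prs = [
    (datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 7, 15, 0)),
    (datetime(2025, 1, 8, 10, 0), datetime(2025, 1, 10, 11, 0)),
    (datetime(2025, 1, 13, 14, 0), datetime(2025, 1, 14, 9, 0)),
]

baseline = {
    "median_pr_lead_time_h": round(median_lead_time_hours(prs), 1),
    "weekly_ai_adoption": 0.0,   # measured before any licenses are rolled out
    "satisfaction_score": 3.8,   # e.g., mean of a 1-5 survey question (made up)
}
print(baseline)  # snapshot now; re-measure the same way after the rollout
```

Re-running the same snapshot after rollout turns "AI made us faster" into a measured before-and-after comparison rather than an impression.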