Meta’s “creepy” new Ray Bans and a visit from Lovable CEO Anton Osika! | E2181

This Week in Startups | Sep 18, 2025

Audio Brief

This episode centers on an in-depth conversation with Lovable CEO Anton Osika about the future of AI-driven software development and security. Four takeaways stand out:

  • Secure code from AI: Within the next 18 months, AI running on opinionated, full-stack platforms is expected to produce more secure code for standard applications than human experts. Lovable’s strategy is a controlled, end-to-end environment that gives the AI critical guardrails, treating security as foundational and pointing toward a future where AI systems become the standard for secure software engineering.
  • "Spec work 3.0": AI tools let individuals rapidly build and prototype functional software, empowering young entrepreneurs and intrapreneurs to demonstrate value with minimal resources, whether they are launching a startup or accelerating their career. Proactively using these tools inside an organization can significantly boost visibility and career advancement.
  • AI-assisted coding is not cheating: Like the calculator, once dismissed as a shortcut and now a standard tool, AI-assisted coding is a fundamental efficiency gain and will become a foundational skill for modern engineers.
  • Nuanced regulation: Thoughtful AI regulation is essential to prevent regulatory capture by large incumbents and to keep innovation flowing. Overly restrictive or premature policies, such as those seen in some regions, risk stifling competition and slowing technological progress; smart regulation should create a balanced environment for growth and responsible development.

The conversation underscores AI's transformative potential in software development, security, and career empowerment, while highlighting the need for thoughtful regulatory approaches.

Episode Overview

  • The episode kicks off with an analysis of Meta's new AI-powered Ray-Ban glasses, comparing them to the failed Google Glass and discussing the social and technological hurdles of wearable AR.
  • A deep dive with Anton Osika, founder of the AI development platform Lovable, focuses on the future of AI-generated code, with a bold prediction that AI will write applications twice as secure as human experts within 18 months.
  • The conversation broadens to cover audience questions on AI regulation and regulatory capture, the secrets behind Stockholm's successful startup scene, and the hosts' take on recent tech news, including Google's AI integration in Chrome.

Key Concepts

  • Meta's AI-Powered Ray-Ban Glasses: The new version's key upgrade is a built-in heads-up display, representing the technological vision that Google Glass aimed for a decade ago.
  • Social Acceptance of Wearable Tech: The hosts discuss the "creepy" factor and privacy concerns of covert recording, suggesting features like a clear recording indicator are necessary for social adoption.
  • Persistence in Technology: The failure of Google Glass is used as a case study to emphasize that long-term perseverance is often required to bring ambitious technological visions to fruition.
  • AI and Application Security: Security is positioned as the highest priority for AI-generated applications, with the argument that AI will soon be more reliable than humans at writing secure code due to the inevitability of human error.
  • AI Regulation and Regulatory Capture: A discussion on how to regulate powerful AI without stifling innovation from smaller companies, focusing on placing the burden on large entities to prevent them from solidifying their market dominance.
  • Stockholm's Startup Ecosystem: The city's success in creating unicorns is attributed to a deep pool of serious, grounded, and product-focused engineering talent with a long-term building mindset.
  • The "Lovable Mafia": The concept that successful and innovative companies like Lovable will eventually produce a new generation of founders and startups, similar to the "PayPal Mafia."
  • Antitrust and AI Integration: Analysis of Google's integration of Gemini into Chrome, suggesting the company feels legally empowered to bundle its services more aggressively following recent antitrust court battles.

Quotes

Twelve notable quotes from across the episode, with timestamps and context.

  • At 2:30 - "They're incredibly creepy. I have a friend Amanda who brings them out at dinner and turns them on and tries to record the dinner and we all boo her when she does." - Jason Calacanis describes the social awkwardness and privacy concerns associated with the previous generation of Meta's Ray-Ban glasses.
  • At 4:30 - "In the technology business, not quitting is how you succeed." - Jason Calacanis uses Google Glass as an example to make a broader point about the importance of persistence for long-term success in the tech industry.
  • At 21:47 - "We take security as the top priority because if you are building something that's not secure, you should not have built it at all." - Anton Osika states his company's foundational principle regarding application security.
  • At 22:21 - "Humans always make errors, and we're going to see the same thing happen to computer security, where you're just simply not going to trust a human writing secure code. You need an AI system to write secure code." - Anton Osika argues for the inevitability of AI surpassing humans in writing secure code.
  • At 23:59 - "I think on average... it'll be more secure, 2x more secure, in the coming 18 months." - Anton Osika makes a bold prediction about his platform's ability to generate code twice as secure as a top 10% human security expert.
  • At 25:04 - "There are actually many more steps than you might imagine... what we're doing is to build out that full platform where we make it as easy as possible for the AI to take the decisions within that domain." - Anton Osika explains that Lovable is building a complete, opinionated ecosystem to support the AI, not just a code generator.
  • At 46:04 - "I hate, like, over-regulating things. I think Europe is a good example of where we're maybe too happy to pull the trigger on over-regulating before you see a problem." - Anton Osika expresses his concern about premature regulation in response to a question about AI.
  • At 47:06 - "Can we do it in a way that prevents regulatory capture? And I think the answer is yes." - Anton Osika argues that AI can be regulated in a way that doesn't stifle smaller players and only places the burden on large companies.
  • At 51:17 - "Specifically, there is just a deep pool of engineers here who are very curious, grounded, and serious about building good products." - Anton Osika attributes Stockholm's startup success to a culture of dedicated and product-focused engineers.
  • At 54:11 - "In 10 years, people will talk about the 'Lovable Mafia.'" - Alex Wilhelm references Anton Osika's prediction that successful startups will spawn a new generation of founders.
  • At 58:04 - "Your boss is gonna name drop you when they take your work one level up... and suddenly that person who never knew who you were, knows exactly who you are. Huge for your career." - Alex Wilhelm highlights the career benefits of proactively using new tools like Lovable to create value.
  • At 1:00:22 - "This certainly feels like Google now does not have to divest Chrome... So to me this is Google saying, 'We've shaken off the courts, and now we're going to go straight pedal-to-chest.'" - Alex Wilhelm speculates that Google's new AI features for Chrome are a direct result of feeling legally unburdened after recent antitrust rulings.

Takeaways

  • To succeed with ambitious technology, persistence is paramount; giving up too early, as Google did with Glass, can cede a future market to a more patient competitor.
  • For wearable technology to gain mass adoption, developers must solve for social acceptance and privacy by building in transparent features that create trust.
  • Make security a foundational, non-negotiable component of any application from the very beginning, especially when using AI for development.
  • Engineers should embrace AI as a tool for writing more secure and robust code, as AI systems are on a trajectory to surpass human capabilities in security.
  • Advocate for smart regulation that fosters innovation by targeting large, dominant players, thereby preventing "regulatory capture" that harms startups.
  • Cultivating a deep culture of serious, product-focused, long-term engineering is a more potent recipe for a successful tech ecosystem than chasing hype.
  • Proactively investing in and using new tools to create value at your job is a powerful way to gain visibility with leadership and accelerate your career.