The Algorithm That IS The Scientific Method [Dr. Jeff Beck]

Machine Learning Street Talk, Dec 31, 2025

Audio Brief

In this conversation, Dr. Jeff Beck argues for Bayesian inference as the normative framework for intelligence, critiques the limitations of the current AI paradigm, and proposes a new path toward truly general AI. There are four key takeaways from this episode. First, the brain operates on Bayesian principles, a framework that offers the optimal approach to empirical inquiry. Second, current AI's singular focus on scaling function approximators is insufficient for general intelligence and has reached its limits. Third, a new AI architecture must be cognitively inspired, using object-centric, composable models grounded in macroscopic physics to enable continual, interactive learning. Fourth, AI alignment can be achieved by letting systems infer human values through communication, rather than relying on brittle, pre-specified reward functions.

The episode opens by establishing Bayesian inference as the foundation for how the brain models the world. The framework formalizes the scientific method, constantly updating beliefs in light of new data, and in Beck's view provides the "only right way" to approach empirical inquiry. From there, a significant critique is leveled against the current AI paradigm: its over-optimization for function approximation and scaling, primarily with models like transformers, has hit a wall. Simply making these models bigger will not achieve Artificial General Intelligence, a realization the major AI labs now acknowledge.

The proposed path forward is to build cognitively inspired systems that mimic how the brain works: "lots of little models" that are object-centric, composable, and grounded in the macroscopic physical world rather than in abstract language alone. This architecture supports continual, interactive learning, letting AI adapt to novel situations after deployment. Finally, a novel solution to the AI alignment problem is presented: instead of pre-specifying reward functions, an AI should infer human values by observing actions and communicating with users, disentangling human beliefs from goals. The result is a more robust and adaptable method for aligning AI with human intentions. Taken together, the episode challenges the prevailing approach to AI development, advocating for cognitively inspired, interactive, grounded systems as the route to true general intelligence and alignment.

Episode Overview

  • The episode begins with the foundational argument that the brain operates on Bayesian principles, using evidence and uncertainty to model the world, and that this framework is the normative approach to all empirical inquiry.
  • The conversation critiques the current AI paradigm, arguing that its singular focus on scaling function approximators (like transformers) has hit a wall and is insufficient for creating general intelligence.
  • A new path forward for AI is proposed, centered on building cognitively-inspired systems with "lots of little models" that are object-centric, composable, and grounded in macroscopic physics rather than just language.
  • This new architecture enables continual, interactive learning and offers a more robust solution to the AI alignment problem by allowing AI to infer human values through communication, rather than relying on brittle, pre-specified reward functions.

Key Concepts

  • Bayesian Inference as a Normative Framework: The core philosophical stance that Bayesian inference is the "only right way" to think about the empirical world, formalizing the scientific method of updating beliefs based on new data.
  • The Bayesian Brain Hypothesis: The theory that the brain is an inference machine, supported by behavioral evidence such as "optimal cue combination," in which the brain efficiently weighs noisy sensory inputs by their reliability (a minimal code sketch follows this list).
  • Critique of Modern AI: The argument that the field has over-optimized for function approximation and scaling, neglecting other key aspects of intelligence and leading to a dead end where simply making models bigger will not achieve AGI.
  • Cognitively-Inspired AI Architecture: A proposed shift from monolithic models to building "brain-like models at scale." This involves creating systems grounded in the macroscopic, physical world we evolved in, not just abstract language data.
  • Object-Centric Models ("Lots of Little Models"): An architectural approach where AI builds a world model from a library of composable, sparse models, each representing an object and its potential interactions, similar to assets in a video game engine (sketched after this list).
  • Continual and Interactive Learning: The concept that AI systems must keep learning from new data after deployment, adapting to novel situations, rather than being static, pre-trained artifacts in which "learning is turned off" (see the continual-update sketch after this list).
  • Alignment via Communication: A novel approach to the AI alignment problem that moves beyond manually specified reward functions. Instead, the AI infers a user's true values by observing their actions and communicating with them to disentangle their beliefs from their goals (see the value-inference sketch after this list).
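
A note on the math behind "optimal cue combination": for independent Gaussian cues, the Bayes-optimal estimate weights each cue by its precision (inverse variance), so the more reliable signal dominates and the fused estimate is more certain than either cue alone. The sketch below is illustrative only; the modalities and numbers are invented, not taken from the episode.

```python
import numpy as np

def fuse_gaussian_cues(means, sigmas):
    """Precision-weighted fusion of independent Gaussian cues.

    Each cue i reports a noisy estimate means[i] with standard
    deviation sigmas[i]. Under a flat prior, the Bayes-optimal
    combined estimate weights each cue by its precision (1/variance).
    """
    precisions = 1.0 / np.square(sigmas)
    fused_mean = np.sum(precisions * means) / np.sum(precisions)
    fused_sigma = np.sqrt(1.0 / np.sum(precisions))  # tighter than any single cue
    return fused_mean, fused_sigma

# Hypothetical example: vision locates an object at 10.0 cm (sd 1.0),
# touch at 12.0 cm (sd 2.0). Vision is 4x more precise, so it dominates.
mean, sigma = fuse_gaussian_cues(np.array([10.0, 12.0]), np.array([1.0, 2.0]))
print(f"fused estimate: {mean:.2f} cm, sd {sigma:.2f}")  # 10.40 cm, sd 0.89
```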
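
The "lots of little models" picture can be caricatured in a few lines of code: a world model as a library of object assets, each carrying its own state and local dynamics, composed into a scene the way a game engine composes entities. The classes and physics below are hypothetical placeholders for illustration, not Beck's actual architecture.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    """One 'little model': an object plus its own local dynamics."""
    name: str
    position: float
    velocity: float

    def step(self, dt: float) -> None:
        self.position += self.velocity * dt  # object-local update rule

@dataclass
class Scene:
    """A composable world model: a library of object models."""
    objects: list = field(default_factory=list)

    def step(self, dt: float) -> None:
        for obj in self.objects:
            obj.step(dt)  # simulate each asset, game-engine style

scene = Scene([ObjectModel("ball", 0.0, 1.0), ObjectModel("cart", 5.0, -0.5)])
scene.step(dt=0.1)
print(scene.objects[0].position)  # 0.1
```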
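
Continual learning, in the Bayesian setting, is just posterior updating applied one observation at a time. The Beta-Bernoulli model below is a deliberately tiny, assumed example: the point is that the posterior keeps moving after "deployment," in contrast to a pre-trained model whose weights are frozen.

```python
class BetaBernoulli:
    """Continually updated belief about a binary event's success rate."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha, self.beta = alpha, beta  # Beta prior pseudo-counts

    def update(self, outcome: int) -> None:
        """Fold in one new observation; learning is never turned off."""
        self.alpha += outcome        # success count
        self.beta += 1 - outcome     # failure count

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

belief = BetaBernoulli()
for outcome in [1, 1, 0, 1]:         # data arriving after deployment
    belief.update(outcome)
print(round(belief.mean, 3))         # posterior mean = 0.667
```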
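
Value inference can be gestured at with a toy inverse-inference loop: maintain a posterior over candidate goals and update it as the user's actions (or answers to questions) arrive. The goals, actions, and likelihood table below are entirely made up for illustration; a real system would also have to model the user's beliefs, which this sketch omits.

```python
import numpy as np

goals = ["wants_coffee", "wants_tea"]
posterior = np.array([0.5, 0.5])     # uniform prior over candidate goals

# P(action | goal): invented numbers for how likely each observed
# action would be if the user held each goal.
likelihood = {
    "walks_to_kettle":    np.array([0.4, 0.9]),
    "grabs_french_press": np.array([0.8, 0.1]),
}

for action in ["walks_to_kettle", "grabs_french_press"]:
    posterior *= likelihood[action]  # Bayes rule: prior times likelihood
    posterior /= posterior.sum()     # renormalize

for goal, p in zip(goals, posterior):
    print(f"P({goal} | actions) = {p:.2f}")  # coffee 0.78, tea 0.22
```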

Quotes

  • At 0:13 - "Bayesian inference provides us with like a normative approach to empirical inquiry and encapsulates the scientific method writ large." - Introducing his central thesis on the fundamental importance of the Bayesian framework.
  • At 1:40 - "...humans and animals do optimal cue combination. We're surprisingly efficient in... using the information that comes into our brains with regards to again, these low-level sensory motor tasks." - Citing the key behavioral evidence for the "Bayesian Brain" hypothesis.
  • At 20:35 - "And they turned it into an engineering problem. It made it possible to experiment with different architectures, different networks, different nonlinearities, different structures..." - The speaker describes the critical shift that allowed for the rapid advancement of neural networks.
  • At 21:46 - "What got lost in the mix though was the notion that there's more to artificial intelligence than just like function approximation. We got really good function approximators, but that's not the only thing you need to develop like proper AI." - He identifies the core limitation of the current AI paradigm.
  • At 22:23 - "...starting to see them not living up to the hype... AGI is no longer a huge priority... because they've begun to realize that just function approximation isn't going to deliver." - He observes that major AI labs are acknowledging the limits of the scaling-is-all-you-need approach.
  • At 23:10 - "Let's do the same thing for cognitive models. Let's take what we know about how the brain actually works... and start building an artificial intelligence that thinks like we do by incorporating these principles." - He outlines his proposal to apply the "engineering problem" mindset to building brain-like, cognitive architectures.
  • At 32:13 - "What's the right domain in which to ground your models in order to get them to think like we do?... We want to ground it in something that is a good model of our world, and that's why we've chosen to focus on models that are grounded in the domain of macroscopic physics as opposed to language." - He argues that models must be grounded in the physical world to achieve human-like intelligence.
  • At 46:55 - "This is something that really doesn't exist in contemporary AI... when you build your big model, you've spent millions of dollars training it, and then you're done." - He critiques the current paradigm of training large, static models that cannot adapt after their initial training is complete.
  • At 48:29 - "From a simulation perspective, the way that this gets simulated is remarkably like the way a video game engine simulates the world." - He explains that his proposed architecture for an AI's world model is analogous to a game engine built on discrete objects.
  • At 51:43 - "This is like one of the critical missing elements in the robotics space, is that... training models in a simulated environment does not translate really very well to real-world environments." - He attributes the sim-to-real gap to AI models lacking the proper internal structure to represent the world.
  • At 59:55 - "The beliefs that these systems have... are not the same as our beliefs. And the reward functions that we specify for these artificial agents are definitely not the same as our reward functions." - He explains the fundamental misalignment between current AI systems and humans.
  • At 63:20 - "The way that we solve this problem as people is we talk about our beliefs." - He proposes a solution to the alignment problem where an AI can infer human values by communicating with people to disentangle their beliefs from their goals.

Takeaways

  • Adopt a Bayesian mindset to reason about uncertainty, constantly updating your beliefs with new evidence rather than holding static views.
  • Recognize that simply scaling up current AI models is not a viable path to AGI; true intelligence requires more than just powerful function approximation.
  • To build more general and robust AI, shift focus from monolithic, language-based models to composable, object-centric models grounded in the physical world.
  • Design AI systems for continual, interactive learning, allowing them to adapt to new information after deployment instead of remaining static.
  • Conceptualize complex systems as a library of modular components that can be combined, rather than a single entity, to improve efficiency and flexibility.
  • To solve the sim-to-real problem in robotics, focus on building AI with internal world models that accurately reflect the structure of reality.
  • Approach AI alignment by enabling systems to learn human values through communication and inference, rather than relying on brittle, hand-coded reward functions.