The Gradient Podcast - Kevin Dorst: Against Irrationalist Narratives

The Gradient Jul 18, 2024

Audio Brief

This episode challenges the popular irrationalist narrative of human thinking, proposing instead that everyday cognition often represents sophisticated computational feats underestimated by conventional psychology. There are four key takeaways from this discussion. First, practice epistemic empathy by resisting the urge to dismiss opposing viewpoints as simply irrational. Second, appreciate the immense computational power of everyday human cognition. Third, distinguish whether new evidence challenges the reliability of your initial information or the rationality of your own reasoning process. Fourth, observe the behavior of advanced AI as a potential tool for understanding human rationality.

Epistemic empathy involves recognizing that differing views often arise from complex, though perhaps hidden, reasoning, rather than mere irrationality. Simplistic narratives tend to erode this crucial understanding. Furthermore, what appears simple in human cognition is often incredibly complex. Apparent errors in thinking may actually be features of a sophisticated system operating under real-world constraints, not simple flaws.

When re-evaluating a belief, it is critical to distinguish between evidence that undermines the reliability of your initial information and evidence that challenges the soundness of your own reasoning process. These are distinct issues requiring different philosophical responses. If advanced AI models replicate human biases, this could indicate that these traits are not just flaws, but functional aspects of intelligence, offering a unique mirror for human cognition.

Ultimately, the conversation urges a re-evaluation of human rationality, suggesting a more nuanced and appreciative view of our cognitive capabilities informed by insights from artificial intelligence.

Episode Overview

  • This episode challenges the popular "irrationalist narrative" that portrays human thinking as inherently flawed, arguing that people are often more rational than they are given credit for.
  • It explores an alternative perspective, inspired by AI and computer science, which reframes everyday human cognition as a series of "fantastic feats of computation" that we consistently underestimate.
  • The conversation distinguishes between different types of higher-order evidence, specifically evidence that undermines the reliability of our information versus evidence that challenges our own rationality.
  • It examines the philosophical implications of sociological influences on belief and proposes that the behavior of advanced AI might provide evidence that human "biases" are not simple errors but features of a rational system.

Key Concepts

  • Irrationalist Narratives: The pervasive psychological view that human thinking is systematically flawed and biased, which is contrasted with an AI-inspired perspective that highlights the sophistication of human cognition.
  • Epistemic Empathy: The ability to understand the reasoning or evidence that leads others to hold different beliefs, an ability that is said to be eroded by simplistic irrationalist narratives.
  • Reflective Equilibrium: A philosophical method that seeks to balance formal principles and theorems with intuitive judgments about specific cases of what is and is not rational.
  • Moravec's Paradox: The principle that tasks easy for humans (like perception and movement) are hard for AI, and vice versa, which underscores the underestimated complexity of everyday human cognitive abilities.
  • Higher-Order Evidence: Evidence about your own evidence or your reasoning process, which can challenge either the reliability of your information or the soundness of your rationality.
  • Reliability vs. Rationality: A crucial distinction between evidence suggesting your information is an unreliable guide to the truth (like a red light making a wall look red) and evidence suggesting your reasoning process itself was flawed (like a pilot suffering from hypoxia).
  • The Nihilism Problem: The argument that fully accepting the irrationalist view of belief formation (i.e., that beliefs are shaped by arbitrary factors) leads to a paralyzing nihilism in which one must abandon one's own controversial convictions.
  • AI as a Philosophical Mirror: Using the behavior of advanced AI, like large language models, to test theories of human rationality; if AIs exhibit the same "biases" as humans, it may suggest these are not simple errors.

Quotes

  • At 1:07 - "Your opponents, and even you, just might be more rational than you think." - The host summarizes the core thesis of Kevin Dorst's work, which challenges the prevailing narrative of human irrationality.
  • At 28:03 - "There's a different tradition, more inspired by computer science and AI, which says, 'Look at the things people do every day and realize how they are... these fantastic feats of computation and engineering.'" - Dorst introduces the alternative perspective that frames human cognition as powerful and sophisticated.
  • At 30:06 - "Whatever is going on to explain why people come to believe the crazy things they do... it's not going to be as simple as just 'they use this really dumb, stupid rule.'" - He states his core thesis that explanations for seemingly irrational beliefs must be more complex and nuanced.
  • At 96:14 - "That way lies nihilism about beliefs... or any sort of interesting, controversial, religious, or political beliefs." - Dorst explains that if we accept that arbitrary influences make beliefs irrational, we would have to become nihilistic about our own deeply held convictions.
  • At 103:59 - "It does the reverse. That's got to be evidence that these things aren't biases." - Speaking hypothetically, Dorst explains his counter-intuitive argument that if an incredibly "smart" AI still made the same "mistakes" as humans, it would suggest those "mistakes" are not simple errors of irrationality.

Takeaways

  • Practice "epistemic empathy" by resisting the urge to dismiss opposing viewpoints as simply irrational; instead, consider the complex, though perhaps hidden, reasoning that might underlie them.
  • Appreciate the immense computational power of everyday human cognition; what seems simple is often incredibly complex, and apparent "errors" may be features of a sophisticated system operating under constraints.
  • When re-evaluating a belief, distinguish whether new evidence challenges the reliability of your initial information or the rationality of your own reasoning process, as these are distinct issues.
  • Observe the behavior of advanced AI as a potential tool for understanding human rationality; if AI models replicate human "biases," it may indicate these are not mere flaws but functional aspects of intelligence.