
What can AI teach us about the mind? - Many Minds

    Title
    What can AI teach us about the mind?
    Description
    Everyone is talking about AI these days. Often these conversations are about how AI might upend education, or work, or social life, or maybe civilization itself. But among cognitive scientists and psychologists the conversation inevitably drifts toward other questions. What does this latest generation of AI tell us about the human mind? Is it putting old ideas and theories to rest? Is it ushering in new ones? Will AI—in other words—also upend cognitive science?

    My guests today are Dr. Mike Frank (https://web.stanford.edu/~mcfrank/) and Dr. Gary Lupyan (https://psych.wisc.edu/staff/lupyan-gary/). Mike is a Professor of Psychology at Stanford University, where his lab (https://langcog.stanford.edu/) focuses on language learning and cognition in children. Gary is a Professor of Psychology at the University of Wisconsin–Madison, where his lab (http://sapir.psych.wisc.edu/) studies language and its role in augmenting human cognition. Both Gary and Mike have recently been thinking a lot about AI and how it is challenging and deepening our understanding of the human mind.

    In this conversation, we talk about being interested in AI as cognitive scientists—while also being concerned about the technology as people. We discuss the linguistic abilities of frontier LLMs compared to the linguistic abilities of adult humans. We talk about a glaring "data gap" here—the fact that, even though LLMs often rival human abilities, they require orders of magnitude more data to do so. We contrast the capabilities of large language models with so-called BabyLMs. We consider the fact that, as LLMs master language, they also master other abilities—capacities for mathematical reasoning, causal understanding, possibly theory of mind, and more. And we talk about why language might be an especially potent form of input for an AI. Along the way, we touch on reference and the symbol grounding problem, the Platonic Representation Hypothesis, stimulus computability, confabulated citations, pattern matching and jabberwocky, the poverty of the stimulus argument, congenital blindness, Quine's topiary, the limits of in-principle demonstrations, the WEIRD problem, and what the astonishing sophistication of disembodied AIs might suggest about the role of bodily experience in human cognition.

    Before we get to it, one small request: we're currently running a short survey of our listeners (https://forms.gle/TKxYMphGheakcjfT9). You can find the link in our show notes. If you have a few minutes, we'd really love your input!

    Alright friends, here's my conversation with Mike Frank and Gary Lupyan. I think you'll enjoy it!

    Notes

    5:00 – For more discussion of "stochastic parrots" and other ways of framing AI systems, see our recent episode with Melanie Mitchell (https://disi.org/seven-metaphors-for-ai/). For the "octopus test," see https://kottke.org/23/03/the-octopus-test-for-large-language-model-ais.

    8:00 – "BabyLMs" are—in contrast to large LMs (aka LLMs)—models that are trained on a more human-scale amount of linguistic input. For more on the BabyLM community, see https://www.thetransmitter.org/neuroai/the-babylm-challenge-in-search-of-more-efficient-learning-algorithms-researchers-look-to-infants/.

    12:00 – For broad discussion of the use of AIs as "cognitive models,"
    Publication Date
    2026-03-26T19:45:00+00:00
    Status
    completed
    Website
    https://manyminds.libsyn.com/what-can-ai-teach-us-about-the-mind
    Length
    81:18
    File
    /podcasts/Many Minds/1774554300-5275.mp3
    Size
    111.65 MB
    Bitrate
    187 kbps (CBR)
    Channels
    1
