
Erik Larson on AI’s Myths and Limits: Why Machines Can’t Think Like Humans
Explore the thought-provoking insights of Erik J. Larson on the myths surrounding AI, its limitations, and the future of human and machine intelligence
DukeRem

Welcome to an exciting new interview, exclusively for Turtles AI. Today, we have the pleasure of featuring Erik J. Larson, the author of “The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do”, a book published in 2021, when AI was still a topic for “the few” and ChatGPT was at least a year away from appearing on the screens of millions of people. He writes the popular Substack Colligo, where he explores the intersection of technology and society, moving “Toward a humanistic theory in an age of data”.

Erik J. Larson is not only an accomplished author but also a seasoned computer scientist and entrepreneur. His Ph.D. at the University of Texas at Austin was interdisciplinary, drawing on faculty from computer science, philosophy, and linguistics. He has made significant contributions to artificial intelligence, particularly in natural language processing and machine learning. He founded two DARPA-funded startups and was involved in the Cyc project at Cycorp, an ambitious effort to build a comprehensive commonsense knowledge base. His extensive experience in AI research and development, particularly in hierarchical classification techniques and supervised machine learning methods, has informed his critical perspective on the limitations of AI and the myths surrounding its potential.

For this very reason, the interview aims to explore a range of themes discussed in that “ante litteram” book, bringing to the forefront contemporary questions about AI of a technological, semiotic, and psychological nature.

Additionally, Erik is currently working on a new book that will explore the new concept of “Elusive Intelligence”, a topic that, from its name alone, deeply intrigues me.

I leave you with this exclusive interview, which, as always, is a source of enrichment for me as well, even though I’ve been working in machine learning for over 25 years.


Duke Rem: Let’s start “softly”… How do you differentiate between human intuition and machine ingenuity, and why do you believe machines can’t replicate intuition?

Erik Larson: I learned of the distinction from what we might call the “philosophy of mathematics”; Turing developed the concepts somewhat in his Ph.D. thesis. Roughly (and as Turing intended), “intuition” is that faculty of cognition that “sees” something to focus attention on. In the case of math, the mathematician’s intuition guides him or her toward interesting problems. Intuition, according to Turing, sits outside the formal system the mathematician will be working in. Intuition sets the stage, you might say. By contrast, ingenuity is the cognitive ability to crank through the problem and solve it. I’m not sure machines can replicate human intuition in the unconstrained way that I think Turing meant. One problem with ascribing this type of faculty to machines is that it seems to involve mindful concepts like “interest” and “motivation.” Whether this can be replicated by a machine is, I suppose, still an open question, but I’m skeptical that machine intelligence reaches every corner of human minds—we are not just cognition but also this quality of attention, focus, and interest.

 

Duke Rem: In discussing “The Intelligence Error”, you critique the view that human intelligence can be reduced to problem-solving. Can you elaborate on the philosophical implications of this error?

Erik Larson: Philosophically, reducing intelligence to problem-solving is an oversimplification. Problem-solving is certainly a component of intelligence, but it’s not the whole picture. Human intelligence is deeply intertwined with our subjective experiences, our ability to understand context, and our capacity for abstract thinking, which are not easily reducible to algorithms or computational processes. True, AI keeps making progress on aspects of intelligence that aren’t directly tied to this reductionist view, but there’s a curious inversion at work here, where the yardstick of problem-solving is applied to us, seemingly because it’s well-defined for computers.

Really, we should think about intelligence in terms of our largest and most expansive judgments about our own potential, and then look at computers as a technology and an artifact that we hope can capture more and more of what we care about. But the idea that we start with “problem-solving” because that fits well with straightforward interpretations of AI programs isn’t very inspiring. It limits our conception of intelligence to what is computationally tractable, rather than what is humanly meaningful. Our view of intelligence should be expansive—Einstein’s reconceptualization of space and time was a bit more than “problem-solving.”

 

Duke Rem: What are your thoughts on the potential for machines to achieve general intelligence, given the current limitations of AI as outlined in your book? Please feel free to update your thoughts in light of the (many) advancements of the last three years.

Erik Larson: ChatGPT certainly moved the needle in my field, natural language processing. There’s no question about that. But—and I get this question a lot, as you might expect—I don’t think it changes anything substantive about my argument. The easiest way to see this is to venture out into the real physical world, where AI faces major, unsolved challenges. While robotics is making progress, it would be pure folly to release the most advanced robot onto a busy street in a major city. For similar reasons, we don’t hear much anymore about self-driving cars, largely because there’s no real progress to report toward Level 5, fully autonomous driving.

The difference? Unlike in the cyber realm of the world wide web, where ChatGPT “lives”, the actual world is open-ended, and the scenarios one might encounter are effectively infinite. Even in the more controlled environment of web data and transformers, we see that the difficulty in token-level systems is bridging to the logical and conceptual. Some AI researchers are deeply concerned about hallucinations—instances where models generate incorrect or nonsensical information. Initially, I assumed these hallucinations would be ironed out over time, but nearly two years since the launch of ChatGPT, it appears that hallucinations are baked into the approach.

This strongly signals that we’re not on a path to achieving artificial general intelligence (AGI). Without true understanding at the conceptual level, it will be impossible to reliably reproduce human-like intelligence. AGI requires not just pattern recognition or problem-solving within a defined set of data but the ability to navigate and understand an open-ended, unpredictable world—something current AI, despite its advancements, still struggles with fundamentally.

 

Duke Rem: You mention that current AI advancements are akin to picking "low-hanging fruit" (I really like this metaphor, btw). What do you believe are the most significant challenges that AI must overcome to advance beyond narrow applications?

Erik Larson: Well, see my prior response! But more broadly, it’s extremely unlikely that we can keep feeding deep neural networks more tokenized data and expect to make progress toward AGI. I used to avoid open-ended words and phrases like “understanding”, but after ChatGPT showed the ability to effectively simulate without “understanding”, I’m more inclined to say that the challenge AI faces is to develop a convincing proof of concept for, basically, an “understanding” machine. We can see that LLMs can answer questions about cause and effect and even simulate abductive inference, or reasoning from an observed effect to a plausible cause. But in the errors we can see that the system hasn’t achieved a concept-level understanding of what it’s talking about. That’s the challenge, and just as I wrote in the Myth, we really don’t have a clue how to do that.

 

Duke Rem: How do you envision the role of abductive reasoning in achieving human-like intelligence, and why do you think it’s been largely ignored in AI research?

Erik Larson: Abductive reasoning, often described as inference to the best explanation, is crucial for achieving human-like intelligence because it allows us to generate hypotheses and make educated guesses in situations of uncertainty. It’s a way of thinking that goes beyond mere pattern recognition or deduction; it involves creatively filling in gaps when data is incomplete, which is something humans do naturally but AI struggles with. The reason it’s been largely ignored in AI research is that abduction is inherently difficult to formalize within the rigid frameworks that AI typically relies on. Most AI models excel at processing large amounts of data to find patterns or optimize outcomes based on clear rules, but abductive reasoning requires a flexibility and contextual understanding that current models lack. This oversight may stem from the focus on developing systems that can handle well-defined problems efficiently, rather than those that can navigate the ambiguous and often messy nature of real-world situations. As a result, AI systems today are still far from replicating the human capacity for creative problem-solving and insight, which is deeply tied to our ability to reason abductively.
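To make the contrast concrete, here is a deliberately minimal sketch, not drawn from the interview, of how abduction is often framed computationally: given an observed effect, candidate explanations are scored by prior plausibility and explanatory fit, and the best-scoring one is selected. All hypothesis names and numbers are invented for illustration, and the sketch captures only the selection step; generating good candidate hypotheses in the first place, in an open-ended world, is the part that resists formalization.

```python
# Toy framing of abduction as "inference to the best explanation":
# rank invented candidate hypotheses by how well they account for an observation.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    name: str
    prior: float     # plausibility before seeing the observation (made-up number)
    explains: float  # how well the hypothesis accounts for the observation (made-up number)


def best_explanation(hypotheses: list[Hypothesis]) -> Hypothesis:
    # Score = prior * explanatory fit; return the highest-scoring candidate.
    return max(hypotheses, key=lambda h: h.prior * h.explains)


# Observed effect: the lawn is wet this morning.
candidates = [
    Hypothesis("it rained overnight", prior=0.30, explains=0.90),
    Hypothesis("the sprinkler ran", prior=0.20, explains=0.95),
    Hypothesis("a water main burst nearby", prior=0.01, explains=0.99),
]

print(best_explanation(candidates).name)  # -> it rained overnight
```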

 

Duke Rem: Can you expand on your concept of "technological kitsch" and how it relates to public and scientific perceptions of AI?

Erik Larson: "Technological kitsch" refers to the oversimplified, sentimental, or aesthetically pleasing representations of technology that mask its underlying complexities and limitations. In the context of AI, this concept highlights how both the public and scientific communities can be seduced by flashy demonstrations, sleek user interfaces, and exaggerated promises of what AI can achieve. This kitsch presents AI as a magical solution to almost any problem, fostering unrealistic expectations and obscuring the genuine, often nuanced challenges involved in developing truly intelligent systems. The danger of technological kitsch is that it encourages a superficial understanding of AI, leading to both overhyped optimism and misplaced fears. It reduces AI to a series of impressive tricks rather than a field of serious inquiry with profound implications for society. This has the effect of distorting public perception and even influencing research agendas, where the focus shifts to what looks good or sells well rather than what is scientifically and ethically sound.

 

Duke Rem: What are the dangers of the cultural myth surrounding AI, now that AI is reaching most people in various forms?

Erik Larson: The cultural myth surrounding AI—that it is on the verge of becoming as intelligent as or even surpassing human beings—poses significant dangers as AI reaches more people in various forms. This myth can lead to a false sense of security, where people overtrust AI systems in critical areas like healthcare, finance, and law, assuming these systems possess a level of understanding and judgment that they do not. It can also stoke unwarranted fears, such as the belief that AI will inevitably take over all jobs or even pose existential threats to humanity. Both extremes distract from the real issues, such as the ethical implications of data usage, the reinforcement of biases in AI models, and the widening gap between those who benefit from AI and those who are marginalized by it. Moreover, this myth can divert resources and attention from addressing the actual challenges in AI development, such as creating systems that are transparent, accountable, and aligned with human values. In essence, the myth of AI as a near-human intelligence can distort public understanding, policy-making, and research priorities, leading to societal impacts that are both misguided and potentially harmful.

 

Duke Rem: How do you see the relationship between AI and human cognition evolving in the (near and far) future?

Erik Larson: In the near future, I see AI complementing human cognition by serving as a powerful tool that can enhance our abilities to process information, identify patterns, and make decisions. AI will likely continue to excel in specific, well-defined tasks, acting as an augmentation to human intelligence rather than a replacement. For example, AI can assist in analyzing large datasets, suggesting new avenues for research, or even providing real-time recommendations in complex scenarios like medical diagnostics. However, this relationship will be one of partnership rather than parity—AI will amplify our cognitive capabilities, but it will still rely on human judgment, creativity, and ethical reasoning to navigate the complexities of the real world.

In the far future, the relationship between AI and human cognition could become more integrated, with AI systems becoming more deeply embedded in our daily lives and decision-making processes. However, significant challenges will remain in achieving true human-like cognition, particularly in areas that require understanding, empathy, and adaptability in open-ended environments. I’m frankly not sure how far this integration can go and still represent a safe and fruitful direction for society. But the ongoing evolution will likely raise important ethical and societal questions, especially concerning the boundaries of AI’s role in decision-making and the preservation of human autonomy and agency.

 

Duke Rem: What insights from your analysis of Gödel’s incompleteness theorems do you think are most relevant to the future development of AI?

Erik Larson: Gödel’s incompleteness theorems reveal that within any sufficiently complex formal system, there are truths that cannot be proven within the system itself. Though I didn’t expound on this in the book, I suspect that it has profound implications for AI, particularly in the pursuit of artificial general intelligence (AGI). The most relevant insight is that AI systems, which are ultimately based on formal rules and algorithms, will always encounter limitations in their ability to fully replicate human reasoning and understanding.

For AI development, a “Gödel-aware” position might be the recognition that there are inherent limitations in what can be achieved through algorithmic approaches alone. Of course, the argument applying Gödel’s theorem to the question of AI is now decades old, and, as you might expect, the experts still disagree. For my part, I’m comfortable believing that the implications of incompleteness in formal systems suggest that mechanism has limits; as Turing once argued, intuition is “outside the system.”

 

Duke Rem: You often suggest (without stating it directly) that human intelligence is inherently social and contextual. How (and why) does this perspective challenge current AI methodologies?

Erik Larson: Human cognition is deeply embedded in social interactions, cultural norms, and the specific contexts in which we operate. We learn, reason, and make decisions not just in a vacuum, but in dynamic environments where understanding others’ intentions, emotions, and perspectives plays a crucial role. Importantly, the social nature of human intelligence means that much of what we know and how we act is influenced by our interactions with others. One obvious challenge to extant AI research here is that today’s AI doesn’t dynamically learn from a conversation. So there’s really no possibility of capturing the “social” in a real conversation. Making progress here takes us into deep water, and I’m not sure what will prove useful or not. I think the question of dynamic learning is at least one important component.

 

Duke Rem: How do you interpret the term “Elusive Intelligence”, which you plan to explore in your next book? Can you give us a glimpse?

Erik Larson: I’m working with Chee-We Ng, a venture capital investor in Los Altos with a strong background in machine learning and AI from MIT and in the business aspects of AI from Harvard Business School. We teamed up to write Elusive Intelligence after discovering that we share a deep intuition: AI is incredibly powerful and useful, but intelligence itself remains an ongoing scientific mystery. Too often, “business as usual” in AI ignores this fact, stalling progress both in the field and in society at large.

AI has made remarkable strides, but even its biggest advocates admit that intelligence remains elusive. The field risks stagnation if it doesn’t break out of its current paradigms. We argue that a deeper understanding of neuroscience and the brain will be essential, especially in the quest for AGI. Indeed, the brain likely holds the keys to unlocking the next level of intelligent systems. We look at some recent work on the neocortex and explore themes like natural versus artificial learning.

Culturally, we also see the discussion about AI today as missing the mark. Progress in AI is often framed as a race to replace human intelligence, but it underscores just how extraordinary natural intelligence is. The real opportunity lies in using AI to amplify what makes us uniquely human. In the book we advocate for a bold approach: not to mimic human intelligence, but to elevate it. If we get this synergy right, AI and human intelligence together could achieve the extraordinary.

 

Duke Rem: In your book, you discuss the limitations of machine learning. What are the most significant misconceptions about machine learning and AI in popular discourse?

Erik Larson: A major misconception about machine learning and AI is the belief that they are close to replicating human intelligence. Machine learning is fundamentally based on inductive inference, which allows it to recognize patterns within narrow, predefined tasks. However, induction alone is not adequate for achieving general intelligence—it lacks context, common sense, and genuine reasoning. Another myth is that AI can make autonomous decisions in complex, real-world scenarios; in truth, these models are only as effective as the data they’re trained on and can falter in unfamiliar situations.

There’s also a false notion that more data and computational power will naturally lead to smarter AI. While these can enhance performance, they don’t solve fundamental issues like the lack of true understanding or the inability to generalize across domains. The hype often obscures these limitations, leading to inflated expectations and a misunderstanding of the profound complexity of human intelligence.
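The point about induction and unfamiliar situations can be illustrated with a small, purely illustrative sketch, not taken from the book: a model fitted to data from one narrow range can look accurate there and still be badly wrong outside it, and more data from the same range would not change that. The function and ranges below are arbitrary.

```python
# Illustrative only: a rule induced from narrow training data fails outside that range.
import numpy as np

rng = np.random.default_rng(0)

# The "world" is y = x**2, but the training data only covers x in [0, 1],
# where a straight line happens to fit quite well.
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = x_train ** 2

coeffs = np.polyfit(x_train, y_train, deg=1)  # induce a linear rule from the data
model = np.poly1d(coeffs)

print(model(0.5))   # roughly 0.33 vs. a true value of 0.25: tolerable in-distribution
print(model(10.0))  # roughly 9.8 vs. a true value of 100: badly wrong outside it
```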

 

Duke Rem: You critique the idea of "superintelligence" as presented by Nick Bostrom and others. What are the foundational flaws in this concept according to your analysis?

Erik Larson: The concept of “superintelligence”, as presented by Nick Bostrom and others, has several foundational flaws. First, there is no evidence to support the idea that an intelligent system can create a more intelligent version of itself, leading to an “intelligence explosion.” This scenario seems far-fetched; what’s more likely are machines that replicate themselves to perform specific tasks. However, even in this case, human designers will be responsible for overseeing and guiding the process, rather than passively observing it from the sidelines.

Bostrom also appears to operate under a linear progress model, where intelligence can be continually "added" to systems as if it’s a straightforward accumulation. This is an oversimplification of how intelligence actually develops and functions. Moreover, Bostrom introduces the notion of motivations and autonomy in AI—suggesting that machines might one day have goals of their own—which is more philosophical flair than anything grounded in the reality of engineering. In practice, we don’t see such attributes emerging in computational systems; motivations and autonomy are distinctly human characteristics that are not inherently present in AI. These oversights lead to a speculative narrative that, while intriguing, doesn’t hold up under closer scrutiny.

 

Duke Rem: How do you view the recent emergence of generative AI models, such as GPT, in the context of your arguments about the limitations of AI? Do you believe they represent a meaningful step towards general intelligence, or are they another form of "technological kitsch"?

Erik Larson: I would not go as far as technological kitsch, since LLMs and other transformer-based approaches to AI have indeed moved the needle on many tasks and tests in natural language processing. I doubt they are a meaningful step toward AGI, for the simple reason that nearly everyone who interacts with them—while clearly impressed—realizes that the systems simulate understanding. An AGI system can’t be a scaled-up version of a simulated intelligence, but must have cracked the secret to gaining insight and understanding of a domain or topic. As far as I can tell, that’s still an engineering mystery; it’s not more of the same.

This distinction is crucial. While LLMs excel at generating coherent and contextually relevant text, they lack the underlying mechanisms to truly grasp meaning or generate insights. AGI, by definition, would require a level of comprehension and adaptability that goes far beyond pattern recognition. Until we unlock the engineering principles that enable genuine understanding, we’re still dealing with highly advanced tools, not the kind of general intelligence that can autonomously reason and learn across diverse domains.

 

Duke Rem: How does the historical context of AI’s development, as influenced by figures like Alan Turing and Kurt Gödel, impact current research paradigms?

Erik Larson: I think this question overlaps with others we’ve discussed, so forgive me if I hit on the same point. The historical context of AI’s development, particularly the contributions of figures like Alan Turing and Kurt Gödel, has had a lasting impact on current research paradigms. Turing’s work laid the groundwork for computational theory, introducing the idea that machines could perform tasks traditionally associated with human intelligence. His concept of the Turing Test still influences how we think about machine intelligence today, framing the challenge of AI as one of simulating human-like behavior.

However, Gödel’s incompleteness theorems remind us that there are inherent limitations to formal systems—limitations that are often overlooked in the push to develop more powerful AI. Gödel showed that even within a logically consistent system, there are truths that cannot be proven within that system, highlighting the inherent complexity and limitations of any computational approach to intelligence. This insight is crucial because it underscores the difference between human intelligence, which operates beyond formal logic, and AI, which remains constrained by it.

 

Duke Rem: What role do you see for creativity in AI development, especially in light of your discussion on intuition and ingenuity?

Erik Larson: Creativity plays a crucial role in AI development, but it’s a role that is often misunderstood or oversimplified. In the context of AI, creativity involves more than just generating novel outputs—it requires intuition and ingenuity to tackle complex, open-ended problems in ways that go beyond brute computational force. Current AI systems, while capable of producing creative outputs like art or music, do so by recombining existing patterns and data rather than through any genuine understanding or inspiration.

True creativity, as we see in human intelligence, involves the ability to leap beyond the given, to make connections that are not immediately obvious, and to intuit solutions in the face of uncertainty. This kind of creativity is deeply tied to human intuition and the capacity to grasp the essence of a problem in a way that AI, as it stands, cannot replicate. Developing AI that can truly engage in creative thinking would require a breakthrough in how we understand and model intelligence itself, moving beyond current approaches that rely on pattern recognition and induction.

In the development of AI, creativity from human researchers is indispensable. It’s their ingenuity and intuition that drive the progress of AI, after all. But until AI can develop its own form of true creative thinking, its role will remain limited to that of a powerful tool, in my view.

 

Duke Rem: How would you respond to AI proponents who argue that advancements in hardware will eventually lead to true AI?

Erik Larson: Advancements in hardware will undoubtedly improve the efficiency and capabilities of AI systems, but they won’t inherently lead to true AI. While better hardware allows for more powerful computations and larger models, it doesn’t address the fundamental limitations of current AI approaches, as I’ve discussed above. True AI requires breakthroughs in how we model and replicate the complexities of human intelligence, not just more computational power. Simply scaling up hardware might enhance performance in narrow tasks, but it won’t bridge the gap to achieving general intelligence.

 

Duke Rem: You discuss the "narrowness trap" in AI development. Can you provide examples of how this has manifested in recent AI technologies?

Erik Larson: The basic idea behind the “narrowness trap” is that AI actually succeeds by its designers picking a problem that computation can solve, and then engineering a solution to it. The narrowness is baked in, as it’s the problem and its solution that drive the engineering process. All of AI exhibits this so far, including LLMs. LLMs can provide conversational AI capability across topics or domains, of course, but their designers now call their initial training “pretraining”, and the rest of the training is tailored to a specific problem or user need. Narrowness, narrowness. In some sense, it is not even a “trap”, as I put it in the Myth, but again, it’s how AI succeeds. If we’re taking AGI seriously, though, continuing to engineer narrow solutions to problems most certainly is a trap.
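The “pretrain broadly, then specialize narrowly” pattern described here can be sketched in generic PyTorch. The PretrainedBackbone class below is an invented placeholder rather than any real pretrained model; the point is simply that the broad component is frozen while all further training targets one narrow, predefined task.

```python
# Minimal sketch of the "pretrain, then specialize" pattern.
# PretrainedBackbone is a stand-in for any large model trained on broad data;
# everything after it is narrow, task-specific engineering.
import torch
import torch.nn as nn


class PretrainedBackbone(nn.Module):
    """Placeholder for a large model pretrained on broad data."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)


backbone = PretrainedBackbone()
for p in backbone.parameters():      # freeze the broad, "general" part
    p.requires_grad = False

task_head = nn.Linear(768, 3)        # the narrow part: 3 labels for one specific task
optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny fake "narrow" dataset: 32 examples, 3 task-specific labels.
features = torch.randn(32, 768)
labels = torch.randint(0, 3, (32,))

for _ in range(5):                   # tune only the task head on the narrow task
    optimizer.zero_grad()
    logits = task_head(backbone(features))
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
```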

 

Duke Rem: What psychological or semiotic theories do you think best explain the human tendency to anthropomorphize AI (btw, something that I personally hate)?

Erik Larson: The human tendency to anthropomorphize AI stems from our innate drive to see patterns and assign meaning where there is none. We naturally attribute human-like qualities to AI because it mimics human interactions, even if only superficially. The Theory of Mind also plays a role, as we instinctively project our own thoughts and emotions onto these machines. This can be frustrating because it leads to misconceptions, making us believe that AI systems have human-like understanding or consciousness when they do not. I’m not sure there’s an easy answer here, as there are probably good reasons we have these tendencies in the first place.

As I’m sure readers who read my book or follow me online know, I spill quite a bit of ink discussing this issue, as it’s the source of much confusion. My main frustration here is, ironically, that the actual design and development of AI systems requires a clear view of what machines are and what we’re doing when we program them. In a sense, the anthropomorphizing actually burdens the field of AI, to my mind.

 

Duke Rem: How do you envision the future of AI research if the current mythologies surrounding it are dismantled?

Erik Larson: If the current mythologies surrounding AI are dismantled, I envision a future where AI research becomes more grounded and realistic. Instead of chasing the elusive dream of replicating human intelligence, researchers might focus on developing AI as a powerful tool that complements human capabilities. This shift could lead to more practical applications, where AI is used to solve specific, real-world problems without the burden of inflated expectations. By acknowledging the limitations of AI, we can direct resources and innovation toward areas where AI can genuinely make a difference, fostering collaboration between AI and human intelligence rather than competition. This pragmatic approach could ultimately lead to more meaningful and impactful advancements in the field.

 

Duke Rem: Is there anything that you’d like to add, by asking one more question to yourself?

Erik Larson: One more thing I’d like to add is that I’m continuing the discussion I started with The Myth of Artificial Intelligence over at my Substack, Colligo. The platform has grown in ways I hadn’t anticipated, and it’s given us the space to really dig into the important topics surrounding AI and its broader impact on society. The community there is incredibly engaged, and it’s been deeply rewarding to explore these conversations together, in real-time, with readers who genuinely care about the future of technology and its role in our lives. I encourage your readers to find me there and join the discussion.

 

Duke Rem: A huge thank you, Erik!