Will multimodal large language models ever achieve deep understanding of the world?

Abstract

Despite impressive performance across various tasks, large language models (LLMs) are subject to the symbol grounding problem; from a cognitive science perspective, one can therefore argue that they are merely statistics-driven distributional models without a deeper understanding. Modern multimodal versions of LLMs (MLLMs) try to avoid this problem by linking language knowledge with other modalities such as vision (Vision-Language Models, VLMs) or action (Vision-Language-Action models, VLAs), for instance when a robotic agent is acting in the world. If eventually successful, MLLMs could be taken as a pathway to symbol grounding. In this work, we explore the extent to which MLLMs integrated with embodied agents can achieve such grounded understanding through interaction with the physical world. We argue that closing the gap between symbolic tokens, neural representations, and embodied experience will require deeper developmental integration of continuous sensory data, goal-directed behavior, and adaptive neural learning in real-world environments. We raise the concern that MLLMs do not currently achieve a human-like level of deep understanding, largely because their random learning trajectory deviates significantly from human cognitive development. Humans typically acquire knowledge incrementally, building complex concepts upon simpler ones in a structured developmental progression. In contrast, MLLMs are often trained on vast, randomly ordered datasets. This non-developmental approach, which circumvents structured simple-to-complex conceptual scaffolding, inhibits the ability to build a deep and meaningfully grounded knowledge base and poses a significant challenge to achieving human-like semantic comprehension.

Publication
2025 Frontiers in Systems Neuroscience
Stefan Wermter
Networking Lead Expert