LLMs, AGI and Embodied Intelligence: A Reflection Beyond the Hype

What is intelligence? Now that the hype has cooled down, I can finally talk about it. Just a few weeks ago, our feeds were flooded with news about “AI replacing us,” stock markets and finance riding the wave, companies boasting about AGI, market and process transformations powered by AI… a revolution… all because of LLMs (Large Language Models)…
But what are these LLMs really? They are large-scale language models, trained on massive amounts of text to predict the next word in a sentence. Behind this apparent simplicity, however, lies unprecedented computational power. These systems don’t understand in the human sense, but they simulate understanding. As Marvin Minsky, one of the founding fathers of artificial intelligence, warned us: intelligence is not a single thing, but a “society of mind,” a collection of simple processes working together.
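The "predict the next word" idea can be illustrated with a toy far smaller than any real LLM: a bigram model that simply counts which word tends to follow which. This is a minimal sketch for intuition only (real models use neural networks over subword tokens, not raw word counts); the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str):
    """Return the most frequent continuation seen during training."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once
```

The point of the toy: the model "knows" nothing about cats or mats; it only reproduces statistical regularities of its training text, which is the sense in which LLMs simulate rather than possess understanding.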
After a few months, poof: the scaling problem of LLMs emerged. Making models bigger with today's hardware and architectures no longer yields proportionally better performance, and sometimes the opposite. Yet on social media, creators of all kinds, from finance gurus to food coaches, kept showcasing AI's "amazing abilities" that would soon replace us all:
"they are more expert than me," "they know all the recipes, the techniques," "they do data analysis better" (that one might even be true)…
But the question remains: when we replace people with numbers… what is really better?
If this question feels unanswerable to us, data analysts and statisticians see it differently: yes, numbers are better when it comes to managing the massive scale of society, from global logistics and financial markets to databases, goods, and yes, also people, citizens, inhabitants. Even for local administration (think of the City of Milan, that "small" metropolis which to Italians feels like a global New York), AI could become a fundamental organizational tool.
But let’s rewind. First, what is artificial intelligence (AI)? The term was coined in 1956 at Dartmouth College by pioneers such as John McCarthy. The idea was simple yet revolutionary: build machines capable of simulating certain human cognitive functions like reasoning, planning, learning, and perception. Today’s weak AI focuses on specific tasks — image recognition, answering questions — while AGI (Artificial General Intelligence) is still unrealized: a system with general cognitive abilities similar to humans.
For many, AGI is either utopia or marketing distortion, as cognitive scientist Domenico Parisi argued. For decades he studied evolutionary AI models. In his pioneering experiments in the 1990s at Italy’s CNR Institute of Cognitive Sciences and Technologies, Parisi built simulated robots — artificial creatures placed in virtual environments — to study the evolution of intelligence through environmental interaction. Using artificial neural networks and simulated Darwinian evolution, these agents learned to move, perceive, choose, adapt — in a word, survive.
According to Parisi, intelligence cannot exist without body and environment. It’s not just an algorithm, but emergent behavior shaped by sensorimotor experience and emotions. He introduced simulated emotional inputs into his agents — signals like “pleasure” and “pain” — showing how emotions play a functional role in guiding learning and decision-making. These inputs aren’t embellishments: they are cognitive modulators that help systems select more adaptive behaviors. In other words, there is no thinking without feeling.
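Parisi's idea of emotions as cognitive modulators can be sketched as a reward signal steering learning. The following toy is a loose illustration, not a reconstruction of his experiments: a one-state agent whose "pleasure" (+1) and "pain" (-1) signals gradually shape which action it prefers. The action names, reward values, and learning rule are all invented for the example.

```python
import random

def run_agent(episodes: int = 500, lr: float = 0.1, seed: int = 0) -> dict:
    """Toy agent: emotional signals ('pleasure' +1, 'pain' -1) modulate
    learned action values via a simple value-update loop."""
    random.seed(seed)
    # illustrative emotional outcomes attached to each action
    reward = {"approach_food": 1.0, "touch_fire": -1.0}
    value = {action: 0.0 for action in reward}
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit learned values, sometimes explore
        if random.random() < 0.1:
            action = random.choice(list(value))
        else:
            action = max(value, key=value.get)
        # the felt outcome nudges the agent's estimate of that action
        value[action] += lr * (reward[action] - value[action])
    return value

values = run_agent()
# after learning, the "pleasurable" action is valued above the "painful" one
```

The sketch shows the functional role Parisi attributed to emotions: pleasure and pain are not decorations but the signals that make one behavior more adaptive than another.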
In this view, a language model like an LLM, however advanced, is a mind without a body: incapable of acting, perceiving, or suffering, and far from intelligence as evolutionary biology understands it. For Parisi, AI will truly be intelligent only when it is embodied, situated, and emotionally motivated. This isn't just a technical issue but a paradigm shift: predicting the next word isn't enough; one must live in the world.
This brings us to the second question: what is AGI really? Artificial General Intelligence is a system able to operate autonomously across different contexts, adapting as a human would. But what does autonomy mean? And more importantly, what does intelligence really mean?
Even we humans are not autonomous in isolation; we are shaped by our environment. Intelligence does not arise in a vacuum. To simplify, we are made of multiple layers: biological, familial, cultural, social. Autonomy remains a complex concept even for humans as individuals, though it is the very drive that defines us as a species: the pursuit of emancipation, autonomy, freedom… perhaps even from being human.
Back to the theme… artificial intelligence.
Between those claiming we already have AGI and those calling it a “stochastic parrot” — nothing like intelligence, just a probabilistic abacus — the debate is heated.
And yet, it’s clear: there is no single definition of intelligence. We can limit ourselves to the human kind, but that doesn’t mean we shouldn’t fear AI, even if it were “just” a stochastic parrot.
Human intelligence, as Minsky noted, is not one entity, but a complex system of cognitive, emotional, and perceptual micro-processes. It is the ability to learn, adapt, interpret. It is logical-mathematical, but also musical, bodily-kinesthetic, interpersonal, as Howard Gardner suggested in his theory of multiple intelligences.
And we already know the truth: language models are just one layer, the part of a brain that analyzes words and interprets reality through them. When there is a body, perceptual hardware, a symbolic system, and emotional drivers reinforcing inputs from the environment (for example, the entire network of connected devices), then the gap will shrink. But it will still exist.
So let’s not fear AI — let’s fear humanity and its will. From its appearance on Earth, through social changes, discoveries, and technological revolutions, humankind has constantly reshaped reality and its own nature…








