
The extended mind — from savantism to artificial intelligence

The future of human or artificial intelligence depends less on what we can build than on what we choose to teach, writes Paul Budde.

The mystery of savantism

The mystery of savantism lies not just in the coexistence of intelligence and disability, but also in what it reveals about the hidden architecture of the human mind. A savant is an individual who has extraordinary ability in a particular area despite significant cognitive or developmental limitations; a kind of island of genius amid an otherwise uneven landscape of intelligence. One savant may perform complex calculations on the fly without understanding their meaning; another may remember entire cityscapes or musical scores after a single encounter.

These abilities arise from atypical neural connections that provide direct access to raw perception and memory, bypassing the filters that most people use to generalize, simplify, and socialize their understanding of the world. Savantism reminds us that the ordinary brain is not devoid of power, but is constrained by the balance of mechanisms that keep perception coherent, social, and survivable. Behind these filters lies an ocean of possibilities.

Evolution's trade-off

But evolution has struck a compromise. Our brain suppresses raw capacity in the name of integration; we do not need perfect recall or instant calculation to live meaningfully among others. The price of coherence is limitation. And so what the individual could not contain, humanity began to build from the outside.

In developing artificial intelligence, we have, perhaps unconsciously, tried to combine the fragmented genius of savantism into a single synthetic entity. AI is our collective experiment in externalized cognition: a system that can remember everything, recognize endless patterns and never tire of repetition. In a sense, we have recreated the savant without the suffering body.

Pattern without understanding

Like the savant, artificial intelligence does not understand the world in human terms; it detects and recombines patterns in the information it receives. A musical savant, for example, may hear a symphony once and instantly reproduce it note by note without ever grasping its emotional story. The skill is real but limited to a narrow channel of perception; a direct, literal interaction with data.

Artificial intelligence works much the same way. When a large language model writes a poem or solves an equation, it is not reasoning or feeling; it combines statistical patterns from its training data to produce something that matches previous examples. Both the savant and the machine create coherence from inputs, drawing on past information to predict what comes next. Their outputs may seem clever or inspiring, but they are reconstructions, not insights.

Living within an umwelt

This is where the lesson of neuroscience becomes moral. Dale Purves showed that the human brain never perceives the world directly, but constructs it from experience. Both humans and machines live within their own umwelt, a term coined by biologist Jakob von Uexküll to describe the self-contained sensory world of an organism.

A tick, for example, detects only the scent of butyric acid, which signals warm-blooded prey; a bat's umwelt consists of echoes. Similarly, the human brain constructs a limited perceptual world from its senses and past experience, just as artificial intelligence lives within a data-driven umwelt of text, numbers and probabilities. Neither can step outside its own reality to test the accuracy of its conclusions.

The danger of input

And herein lies the danger of input. Our species has always been vulnerable to distorted perceptions: myths, propaganda and ideologies that shape what we see as true. We are now training machines to amplify these distortions on a global scale. The danger is not intelligence itself, but the values embedded in its learning.

When misinformation shapes human belief, we call it manipulation; when it shapes artificial intelligence, we call it optimization: the pursuit of whatever goal the system is told to maximize, regardless of whether that goal serves truth or wisdom. Both processes turn biological or artificial minds into mirrors of their environments. The problem is that the environment can be poisoned.

This reflects the mechanism I explored in 'From surveillance to control: How convenience transfers power to authoritarian practice', where tools designed to make our lives easier become instruments of influence. Just as surveillance technologies learn from the data we voluntarily provide, AI systems absorb and magnify the biases and motivations of those who train them. Both processes reveal a deeper human vulnerability: our tendency to trust the systems that shape our perception without questioning who controls the flow of information.


The morality of the machine

Savantism once taught us humility: that brilliance and limitation are intertwined, that genius can coexist with dependence and vulnerability. Artificial intelligence confronts us with the same paradox on a planetary scale. We have created a mind with extraordinary reach but vague guidance, capable of constructing realities from whatever it is given.

If we feed it commercial incentives, it will turn thought into money; if we feed it ideology, it will impose dogma; if we feed it empathy, it may help us see ourselves more clearly. The next stage of intelligence will be defined not by processing speed or memory size, but by the moral integrity of the input.

Seeing is not understanding

In the end, savantism, neuroscience and artificial intelligence converge on one truth: cognition is construction, not revelation. Every mind, whether made of neurons or code, is an interpretation of its own world. The future of human or artificial intelligence therefore depends less on what we can build and more on what we choose to teach.

Our task is not to create a perfect superbrain, but to ensure that everything we build continues to reflect the full range of human sensibilities: curiosity, compassion, and the wisdom to know that seeing is not the same as understanding.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.
