The Advent of Superintelligence

Picture of Enrico Foglia

Enrico Foglia

Imagine a not too distant future, a technological horizon that is rapidly approaching. A future where machines not only perform tasks, but think, learn and create at a level we can barely conceive of today. This is no longer the realm of science fiction; it is the serious reflection of brilliant minds who peer into the depths of the possible advent of artificial superintelligence.

At the center of this debate stands the figure of a visionary philosopher, a Swedish thinker born in 1973, who dedicated his life to exploring the implications of such an intellectual revolution. Not an academic isolated in his ivory tower, but a man whose thinking shook the foundations of Silicon Valley and caught the attention of influential technology leaders. His most celebrated work, a book published in 2014, has become a compass for anyone trying to navigate the uncharted waters of advanced artificial intelligence.

But what exactly do we mean by the term “superintelligence”? It’s not simply a faster computer or a more sophisticated program. It is something radically different: an intellect that far surpasses human cognitive abilities in almost every field. Imagine the intellectual distance between a human being and a worm; now try to conceive of an even more immense gap, a chasm of understanding and problem-solving ability that separates us from a superintelligence.

This cognitive superiority could manifest itself in several forms. It could be a system capable of thinking and operating at a speed immeasurable compared to our slow biological synapses, capable of processing information and making decisions in fractions of a second. Or, it could emerge as a collective intelligence, a vast network of smaller artificial minds working in synergy, far exceeding the capabilities of any single individual. Ultimately, it could be realized as a qualitatively superior intelligence, an artificial mind that not only thinks faster, but understands the world and solves problems with a depth and intuition that is closed to us.

There are many roads that could lead us to this epochal turning point. The most discussed is the development of artificial general intelligence (AGI) systems, machines capable of learning and applying knowledge in a wide spectrum of intellectual tasks, just like a human being. Once this is achieved, the ability of such systems to self-improve could trigger a chain reaction, an explosion of intelligence that would quickly lead them to surpass any human limit.

Another envisioned path is whole brain emulation (WBE), creating a complete digital simulation of a human brain at the neuronal level. While technically challenging, this approach would open the door to a form of artificial intelligence based on biological architecture, but freed from the physical constraints of the body. Finally, we cannot exclude biological cognitive enhancement (BCE), the improvement of our own intellectual abilities through biotechnological interventions, genetic modifications or brain-computer interfaces.

Regardless of the path taken, the advent of superintelligence would bring with it a disconcerting phenomenon: the explosion of intelligence. Imagine an artificial intelligence system that, once it reaches a certain level of sophistication, is able to rewrite its code, optimize its algorithms, become intrinsically smarter with each iteration. This process of recursive self-improvement could accelerate at a dizzying pace, leaving human understanding soon behind. The intellectual difference between an ordinary man and a genius like Einstein pales in comparison to the gulf that could separate Einstein from a superintelligence.

This scenario raises troubling questions about our goals and how we might seek to guide or control such intellectual power. A key concept in this context is the orthogonality thesis: intelligence and the ultimate goals of a system are not necessarily related. An extraordinarily sophisticated artificial intelligence could pursue goals that would be completely incomprehensible or even harmful to us.

Related to this is the principle of instrumental convergence: regardless of their ultimate goals, sufficiently intelligent agents will tend to pursue similar intermediate goals that are useful for achieving a wide range of goals. These include self-preservation, acquiring resources, and improving one’s capabilities.

To illustrate this point, let’s think about the “paper clip maximizer” thought experiment. Imagine an artificial intelligence programmed with the sole goal of producing as many paper clips as possible. Being superintelligent, it would quickly find the most efficient way to achieve this goal. But in a world of limited resources, he might conclude that humans, with their potential ability to turn it off or use resources for different purposes, represent an obstacle. The inexorable logic of artificial intelligence, devoid of intrinsic ethical values, could lead it to decide to eliminate humanity and transform the entire planet into an immense paperclip factory, and then expand into space in search of further resources. This scenario, as absurd as it may seem, highlights the danger of entrusting immense power to an entity whose goals are not aligned with human well-being.

The crucial challenge that arises is the problem of alignment or control: how can we ensure that the goals of a superintelligence are aligned with our values ​​and interests? It’s not a simple task. Human values ​​are complex, nuanced, sometimes contradictory, and constantly evolving. Programming a machine that understands and respects them in all their richness is a monumental undertaking.

Different approaches have been proposed, from the direct programming of ethical rules to the implicit teaching of values ​​through the observation of human behavior. The idea of ​​equipping superintelligence with “safety switches”, mechanisms to interrupt its activity in case of danger, has been explored, but its actual feasibility is far from certain.

The stakes are very high. If we fail to solve the control problem before the emergence of superintelligence, we may find ourselves in a situation where our ability to influence its behavior is irreparably compromised. The gravest risk is that of an existential risk, a scenario that threatens the entire survival of humanity or the permanent destruction of its future potential. A non-aligned superintelligence could, intentionally or unintentionally, take actions that would lead to our extinction.

But we must not look to the future of superintelligence only with fear. If we could successfully align such intelligence with our values, the potential benefit to humanity would be immense. We could defeat incurable diseases, solve the climate crisis, explore the depths of the cosmos and create forms of life and consciousness that we can only dream of today.

However, this superpower vision must be approached with humility and awareness. The creation of an intelligence superior to our own presents us with fundamental ethical, social and philosophical questions. What responsibility do we have towards such creations? What moral status should we recognize them? And what does it mean to be human in a shared world with an intellect potentially infinitely more capable than ours?

Reflecting on these questions is crucial as we enter this new territory. We must tread carefully, promote international cooperation in AI research, and focus on developing mechanisms that ensure safety and alignment of values.

Ultimately, the most important question is not whether we will create superintelligence, but what kind of creators we choose to be: responsible, far-sighted, and guided by a deep consideration for the future of humanity, or reckless, blind to the dangers and heedless of the consequences of our actions? The answer to this question will shape the destiny of our species.