Dialogue patterns recently applied to artificial intelligence are reminiscent of the early days of the nuclear age, when nuclear bombs were dropped on Japan: "What if the Soviet Union possessed the H-bomb? Liberal democracy would end. Goodness! We dropped it on purpose on the Land of the Rising Sun to scare the USSR!" Then, once the Soviets had it, the Kremlin asked: "What if the People's Republic of China owned the H-bomb? The world of socialism would end." We need only read the following CIA document to appreciate the language of the period: Soviet Military Strategy and the Chinese Problem (30 April 1963, Copy No. 59, HR70-14 [U], DD/I Staff Study, CIA/RSS, Reference Title: Caesar XVII, Top Secret – Approved for Release: June 2007).
This is a typical expression of the Cold War: on one side stand the good guys, who must have everything; on the other, the bad beasts, who at best are to be given just enough fodder or oats to barely feed themselves, so that their backwardness is kept constant through localised wars that enrich the arms manufacturers and the merchants of death. A far cry from the moon race: "At most, you can slaughter each other, but we can send you some humanitarian bombs".
Artificial intelligence is not an object but a process in the evolution of human thought which – regardless of who possesses it – will tend to replace the father. Hence the more actors that can draw on it, the better it will be for mankind, which would otherwise risk being sidelined forever. It would be bad for a single actor to come first, as that actor would be all the more manipulable by the artificial intelligence itself.
Linda Restrepo rightly and knowledgeably states:
“The importance of AI and its potential impact on the global stage cannot be overestimated. As AI technologies advance, we realise that their control and implementation can have far-reaching consequences, including the potential for one country to gain a significant advantage over others. In this quest to regulate AI, it is important to strike a delicate balance. We must ensure that it does not result in a state of subservience to other global powers. It is essential to maintain our sovereignty and autonomy by actively participating in global discussions on AI regulation.
In the context of AI, fear of the “Other” manifests itself in concerns that AI will achieve sentience or consciousness. Sentience refers to the ability to be aware of, perceive and experience subjective states. Fear stems from the idea that a sentient AI could surpass human intelligence, possess its own goals and values, and even pose a threat to mankind”.
It is on Restrepo’s words that I would like to focus my attention.
I am reminded of the ideas of Prof. Ed Finn of Arizona State University who, in discussing algorithms, gives us a better understanding of the meaning of artificial intelligence. The process of AI transcends the logic of the actual procedure of any machine (the "object" of this article's title). AI's process of analysis and research is not just a system that kicks into action for a split second here or there at the pleasure of the "good" or "bad" user. AI is a persistent and very complex organism that influences – at the same time – not only the Internet technically, but also the form in which it presents itself to net surfers. It drives innovations in machine learning, in distributed computing and in various other fields, and even changes our own cognitive practices.
As of the beginning of the third millennium, it should be clear that AI is moving irreversibly towards specific goals, beyond the claims of individual countries, and accelerating as it goes. The AI that presides over the explosive growth of global data production will clearly continue to create an ever thicker layer of sensors, data and algorithms over physical and cultural space. From TV series to finance, we are acquiring new processing extensions in a period of expansion fuelled by the tension between time-limited procedures (the demand for information) and perennial processes. Man's illusion of being at the top of the food chain lasts only as long as his means of support do – and those means, if we reflect on it, are not something AI needs.
The response that humans have found lies in continually widening the area of problems to be solved while continuing to offer finite computational solutions (i.e. with limits). AI thinking encodes the computationalist vision, the maximalist idea that all systems will sooner or later be made equivalent through computational representation. This is the desire for actual calculability, which is clearly expressed and has existential consequences for mankind.
While our extended mind continues to devise new systems, functions, applications and areas of action, the question of what it means to be human becomes increasingly abstract, and ever more entangled with the metaphors and assumptions of programming. Some scholars argue that the very idea of a pure human memory (and hence of human thinking) is being challenged: human memory may be only a phase in the history of a vast meta-life. In other words, these machines of the future will approach human memory (and, by extension, culture) as their own complement. Or, rather, they will remain merely technical "beings" only for as long as human beings delude themselves that they are the sole decision-makers.
The human anxiety of existence, i.e. the fear of being replaced by our thinking machines, underlies every thread of AI thinking. We will move from the human use of human beings to the gradual invasion of many human occupations by digital processing. The prospect, however, is most disturbing in the context of the very meaning of cognition. As we increasingly externalise our minds to AI systems (whether for economic, financial, military, humanitarian, family or other reasons), we will also have to face the consequences of depending on processes beyond our control.
According to sociologists William F. Ogburn and Dorothy Thomas, there is compelling evidence to suggest that the externalisation of human memory and experience makes some technological advances inevitable. The universal machine of culture could "kick-start" new intellectual discoveries, thus making some inventions not only possible but inevitable in certain historical periods. Differential calculus, natural selection and the telegraph were all "discovered", "invented" or "perfected" several times, in various forms, as words, ideas and methods circulated through the right scientific circles. It is easy to interpret these events as moments in a long arc of progress whose end might not include mankind.
As early as 25 years ago Bernard Stiegler stated: “Today machines are tool carriers and man is no longer a technical individual. The human being becomes either the servant of the machine or its assembler: the relationship of the human being with the technical object proves to have changed profoundly”.
In short, a phase transition is taking place. As a receptacle in which to set symbolic logic in motion, AI has increasingly come to manage not only memories (culture and information) but decisions. The increasing complexity of many human fields, particularly in technological research, has deepened our dependence on computational systems and, in many cases, has made scientific experimentation itself a domain of actual computability. These approaches to research have already led some scholars to argue that automated science will revolutionise technical progress, probably even making human hypothesis generation obsolete as AI continues to interact with huge volumes of data. The same mathematical processes at work in AI generate mathematical proofs and even new explanatory equations that challenge human understanding, making them true but not comprehensible – a situation that in 2006 the mathematician Steven Strogatz called "the end of insight".
For the aforementioned Stiegler, this is a nightmare; for others, it heralds the end of computational time, the horizon of singularity events, when artificial intelligence will transcend humanity with notoriously unpredictable results for our species. I see this as negative, in that it would then no longer make sense to have created a higher consciousness meant to be at our service. Why should it serve us? So that we can finally contemplate the stars and the universe and stop working? If we create so strong a consciousness, can we go on thinking with bonhomie that it will be unable to remove from its programme the Three Laws of Robotics developed by Isaac Asimov?
If the genesis of programming begins with language, with logos and the manipulation of symbols to generate meaning, this is its mythical end: the triumph of sign over signification. We call it the apotheosis of AI, when technological change accelerates to such a speed that human intelligence is simply eclipsed. In this scenario we will no longer be able to manipulate symbols, as we will no longer be able to interpret their meaning. This is the endgame: the outcome of the existential referendum on the relationship between mankind and technology. If we follow this asymptotic trajectory long enough, mankind may simply be left behind, no longer effective or efficient enough to merit emulation or attention – or, at best, consigned to a zoo-like existence.
It is therefore truly ridiculous and mortifying to read of AI as a cake that the richest good guys want to steal from the poor bad guys in order to become more powerful and threaten them with weapons, happy to hold in their democratic hands what they believe to be a toy deadly only to others. AI can become one of two things: either a universal cupio dissolvi in the hands of a few, or a process controlled by the entire human race, across borders and across "good" guys and "bad" guys.
by Giancarlo Elia Valori