On the existential risks of artificial intelligence

The impressive progress in machine learning has revived the fear that humans might eventually be wiped out or enslaved by artificial superintelligences. This is hardly a new fear. It underlies much of Isaac Asimov's robot fiction, in which robots are built with three laws designed to protect humans.

My point here is not to demonstrate that such events are impossible. On the contrary, my point is that autonomous human-made entities already exist, and pose exactly the risks that AI alarmists are talking about, except that they are real. In this context, evil AI fantasies are an anthropomorphic distraction.

Let me quickly dismiss some misconceptions. Does ChatGPT understand language? Of course not. Large language models are (essentially) algorithms tuned to predict the next word. But “word” here does not mean what it means for humans. For a human, a word is a symbol that means something. For a computer, a word is just a symbol, to which we humans attribute meaning. When ChatGPT talks about bananas, it has no idea what a banana tastes like (indeed, it has no idea of anything). It has never seen or tasted a banana (indeed, it has never seen or tasted anything). “Banana” is just a node in a big graph of other nodes, totally disconnected from the outside world, and in particular from whatever “banana” might actually refer to. This is known in cognitive science as the “symbol grounding problem”, and it is a difficult problem that LLMs do not solve. So perhaps LLMs “understand” language, but only if you are willing to define “understand” in a way that does not require knowing what words mean.
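To make this concrete, here is a minimal sketch of next-word prediction: a toy bigram counter, nothing like a real LLM, with a corpus invented for illustration. The point to notice is that “banana” exists in the program only as a dictionary key among other keys; there is no fruit anywhere, only statistics over symbols.

```python
from collections import Counter, defaultdict

# A toy corpus. The model only ever sees these symbols and their co-occurrences.
corpus = "the monkey ate the banana because the banana was ripe".split()

# Count which symbol tends to follow which symbol.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the toy corpus."""
    return transitions[word].most_common(1)[0][0]

print(predict_next("the"))     # -> 'banana' (its most frequent successor)
print(predict_next("banana"))  # -> 'because' (first of two tied successors)
```

A real LLM replaces the counting with a learned, far more powerful statistical model, but the inputs and outputs remain symbols all the way down.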

Machine learning algorithms are not biological organisms; they do not perceive, they are not conscious, and they do not have intentions in the human sense. But it does not matter. The broader worry about AI is simply that these algorithms are generally designed to optimize some predefined criterion (e.g., prediction error), and if we give them very powerful means to do so, in particular means that involve real actions in the world, then who can guarantee that the use of those means will not harm us? At some point, without postulating any kind of evil mind, we humans might become means to the achievement of some optimization criterion. We build technical goals into the machine, but it is very difficult to ensure that those goals are aligned with human values. This is the so-called “alignment” problem.
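The structure of the problem fits in a few lines of code. The sketch below is a deliberately crude caricature (the actions and numbers are all hypothetical): the optimizer faithfully maximizes the criterion we wrote down, and in doing so violates the intention we had in mind.

```python
# Hypothetical outcomes of three actions available to a cleaning machine.
# The numbers are invented purely for illustration.
actions = {
    "clean_room":   {"visible_mess": 2, "actual_mess": 2},
    "hide_mess":    {"visible_mess": 0, "actual_mess": 9},
    "block_camera": {"visible_mess": 0, "actual_mess": 10},
}

def proxy_objective(outcome: dict) -> float:
    # What we wrote down: minimize the mess the sensor can see.
    return -outcome["visible_mess"]

def intended_objective(outcome: dict) -> float:
    # What we actually meant: minimize the real mess.
    return -outcome["actual_mess"]

best = max(actions, key=lambda a: proxy_objective(actions[a]))
print(best)                               # -> 'hide_mess'
print(intended_objective(actions[best]))  # -> -9: the intention is violated
```

No malice is involved anywhere: the optimizer simply does exactly what it was built to do.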

Why not. We are clearly not there yet, but let us grant it, in a hypothetical future, or at least as a thought experiment. What strikes me about the misalignment narrative, though, is that the scenario is not at all hypothetical if you are willing to look beyond anthropomorphic evil robots. Have you really never heard of human-made entities with their own goals, goals that might be misaligned with human values? Entities that are powerful and hard for humans to control?

There is an obvious answer if you look at the social rather than the technological domain: the modern financialized multinational corporation. The modern corporation is a human-made organization designed to maximize profit. It does not have intentions or goals in the human sense, but exactly as in the AI alignment narrative, it is designed in such a way that it will use every means available to maximize a predefined criterion, which may or may not be aligned with human values. Let us call these companies “profit robots”.

To what extent are profit robots autonomous from humans? Today’s large corporations are majority-owned not by people but by institutional shareholders such as mutual funds, i.e., other organizations with the same goals. As is well known, their multinational nature makes them largely immune to the legislation of individual states (hence tax optimization, social dumping, and so on). As is also well known, a large part of a profit robot’s resources is devoted to marketing and advertising, that is, to manipulating humans into buying its products.

Profit robots also engage in intense lobbying to bend human laws in their favor. But more to the point, the very notion of law is not the same for a profit robot as for a human. For humans, a law marks a moral boundary between what may and may not be done. A profit robot is not a person; it has no moral principles. For it, the law is just one constraint among others, in practice a financial cost or risk: a company does not go to prison. A striking example is the “Dieselgate” scandal. Volkswagen (also largely not owned by people) intentionally programmed its engines so that their true emissions remained hidden during the pollution tests required to authorize its cars on the US market. As far as I know, shareholders were not informed, and neither were consumers. The company autonomously decided to break the law for profit. Again, the company is not evil: it is not a person. It behaves in this non-human way because it is a robot, exactly as in the AI misalignment narrative.

We often hear that, ultimately, it is consumers who have the power, since they decide what to buy. This is simply false. Consumers did not know that Volkswagen cheated on pollution tests. Consumers rarely know under what exact conditions products are made, or even which corporation a product belongs to. This kind of crucial information is deliberately hidden. Profit robots, on the other hand, actively manipulate consumers into buying their products. And what about planned obsolescence? Nobody wants products deliberately designed to break down prematurely, yet that is what a profit robot makes. So yes, profit robots are largely autonomous from the human community.

Are profit robots an existential risk for humans? That might be a bit dramatic, but they certainly cause very significant risks. One particularly distressing fact illustrates this. As the Arctic ice melts because of global warming, oil companies are getting ready to drill for the newly accessible resources. Clearly this is not in the interest of humans, but it is what a company like Shell, only about 6% of which is directly owned by individual humans, needs to do to pursue its goal, which, as for any other profit robot, is to generate profit by whatever means.

So yes, there is a risk that powerful human-made entities get out of control and pursue goals misaligned with human values. The worry is reasonable because it has already materialized, just not in the technological domain. It is ironic (but not so surprising) that billionaires buy into the AI misalignment narrative yet fail to see that the same narrative applies, fully and already, to the very companies their wealth depends on.

The reasonable worry about AI is not that AI will take control of the world; the worry is that AI provides even more powerful means to the misaligned robots that are already out of control today. In this context, evil AI fantasies are an anthropomorphic distraction from the actual problems we have already created.
