Tireless efforts to achieve ethical artificial intelligence abound these days. We have celebrated the anniversary of ChatGPT, the chatbot that, no more and no less, has managed to insert itself into the conversation at the neighborhood café.
In a turbulent period like few others, there is an undisputed new actor: artificial intelligence, a tool that has come to change everything, including ourselves. We are going through an unprecedented identity crisis; although we find it hard to admit, AI threatens to push us from the center of the stage with impunity, without even asking our permission. In the words of Ray Kurzweil, the technological singularity (the moment when human intelligence is surpassed by artificial intelligence) is near.
In this crisis of human identity, moral debates are the order of the day: do I want technology? Yes, but to what extent and at what cost? More questions than answers.
For decades, transhumanism has promoted reflection on the physical and psychological improvement of the human condition through the incorporation of technology. The pinnacle: the posthuman, a being with vastly extended capabilities. Robotic prostheses, or discoveries like CRISPR combined with artificial intelligence (AI), bring this closer every day.
It is precisely the recent neurorights, the new generation of human rights, that are trying to put a stop to this. The danger does not lie in what is conscious and palpable. One example will suffice: Elon Musk and his company Neuralink, already authorized by the US FDA to carry out tests on humans, want to place a chip in our brains and fuse us with AI before "it becomes something so powerful that it destroys the human race." The real risk lies in the imperceptible: algorithms that manage to influence our tastes, our partners, and our vote without our being able to notice.
We have arrived at ChatGPT's birthday party with a surprise gift: FreedomGPT, a model similar to ChatGPT but with one rather "peculiar" difference: it has no censorship of any kind. If we ask it to describe a business model for marketing weapons, it will do so. Where will this lead us?
The road sign warns that the road leads toward absolute personalization. There is PinAI, a just-launched device the size of a smartphone, a kind of tamagotchi connected to the mobile network with which we can perform countless functions through an AI voice assistant. Or AI-equipped glasses (Ray-Ban Smart Glasses), which will constitute a permanent filter between reality and what we see, and will even be able to read our iris. FreedomGPT is the icing on the cake of this fairy tale: a chatbot in our image and likeness to which, without any concealment, we will be able to make our worst confessions and voice our guiltiest desires. This dialogue will not only lay bare our shameful miseries; more dangerous still, it will amplify them (the well-known "pernicious feedback loop").
Thus, increasingly personalized AI presents a central dilemma. It is not an alter ego, but rather our stunt double. What would Freud think of this? A duplicate of our self is created, an other self that rests on an algorithm that is indecipherable and, above all, invisible to ordinary mortals. Today, more than ever, "what is essential is invisible to the eyes."
Would we agree to have our responses monitored in order to finally become aware of our prejudices? How?
The first obstacle is the question of privacy: how can we consent to oversight by a system that knows more about us than we know ourselves? Do we really want to be aware of our biases?
Another question concerns how this symbiosis with our other self affects us psychically (for example, our self-perception) and in our relationships, and how it contributes to the radicalization of our thinking (for example, tunnel vision). The concern grows when we think of children and adolescents who are building their personalities in a digital world and never had the contrast with the analog world that we millennials, some centennials, and earlier generations experienced.
We can propose two solutions. On the one hand, education. Cases like that of Almendralejo, where children from several schools used AI to create nude images of their classmates, teach us that prohibition alone is absurd. Educating and raising awareness is imperative.
On the other, regulation (such as the European Union's AI Act). The reluctance to accept it in some cases is terrifying. It is not about prohibiting at random, but about preventing harm and punishing non-compliance. Innovation is not in danger if we create standards built on an interdisciplinary approach and flexible enough to keep pace with technology's exponential growth. Nor should our part of the world be left out. Paradoxically, the most heavily regulated industries (banking, pharmaceuticals) are the ones that profit and innovate the most.
The way forward is to audit high-risk algorithms. We need to evaluate them to know what principles guide them, what variables govern them, and what kinds of predictions they make. Today this seems unthinkable; in the future, it will be the rule.