The appearance of Sam Altman, the CEO of OpenAI, at the US Capitol renewed the conversation about the dangers of artificial intelligence and the need to regulate it. Altman warned, "My worst fears are that we… the technology industry, cause significant harm to the world," and somehow his words brought me back to that book by Benjamín Labatut about some of the most significant scientific advances of the twentieth century and their risks.

In When We Cease to Understand the World (Un verdor terrible), the Chilean writer moves the reader between reality and fiction around the lives and works of scientists as complex as Erwin Schrödinger, Karl Schwarzschild, Werner Heisenberg, and Alexander Grothendieck. His book is hard to sum up in a column; as the author confessed in an interview, the work "was born of my obsession with certain mysteries in history, science, physics, and mathematics," and, after speaking of the singularity of the black hole and the horrors of the world wars, he added that "all these things defy our comprehension and open a door facing the abyss."
As for artificial intelligence, for months now an open letter from the Future of Life Institute has been telling us that perhaps we should pause the training of the most powerful systems, because we find ourselves "in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." And I, unable to understand what this means, know only that I feel an infinite and inexplicable fear that may never disappear, one that carries me back to those dark abysses so beautifully recreated by Labatut…