The Royal Swedish Academy of Sciences will present its prestigious Nobel Prizes on December 10 in Stockholm. The Physics prize will be awarded to John Hopfield and Geoffrey Hinton. The latter is a pioneer of deep learning, the machine learning method underlying many of today’s best-known Artificial Intelligence (AI) models. Hopfield, for his part, invented a type of artificial neural network, which bears his name, inspired by the workings of the brain’s associative memory: it can store images (and other kinds of patterns in data) and reconstruct them from only partial information.
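The idea of associative memory can be made concrete with a toy sketch in Python (a hypothetical minimal example, not the laureates’ actual code): a Hopfield network stores binary patterns in a symmetric weight matrix via a Hebbian rule, and recall amounts to letting the network settle into the nearest stored attractor, even when the starting state is a corrupted fragment of the pattern.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: sum of outer products of the stored patterns,
    with the diagonal zeroed so no neuron reinforces itself."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, max_steps=10):
    """Repeatedly threshold the weighted input until a fixed point
    (an attractor, i.e. a stored pattern) is reached."""
    for _ in range(max_steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one pattern of +1/-1 values, then corrupt part of it.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])

corrupted = pattern.copy()
corrupted[:2] *= -1  # flip two bits: the "partial information"
restored = recall(W, corrupted)
print(np.array_equal(restored, pattern))  # True: full pattern recovered
```

With only one stored pattern the dynamics converge in a single update; storing several patterns works the same way as long as the network’s capacity (roughly 0.14 patterns per neuron for random patterns) is not exceeded.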
The Nobel Prize in Chemistry, meanwhile, will go to David Baker, a biochemist at the University of Washington, who will share it with Demis Hassabis, co-founder and CEO of DeepMind, and with John Jumper, a director at the same company. The DeepMind researchers were honored for developing AlphaFold, a tool that predicts the three-dimensional structure of proteins, and Baker for his work on designing new proteins using AI. The implications of the three laureates’ contributions are enormous for understanding biology, health and disease. Their findings open the door to new and more effective therapies, offering hope for innovative treatments for various pathologies.
Both the Nobel Prize in Physics and Chemistry have rewarded advances in AI
In the computing world, when the scientific community seeks progress on a difficult and important problem, periodic international competitions often measure that progress. In 2012, Hinton’s team surprised the world by winning an image recognition contest with an artificial neural network model, known as AlexNet, which marked a before and after in image recognition. Similarly, DeepMind swept the CASP (Critical Assessment of protein Structure Prediction) competitions for protein structure prediction in 2018 and 2020. With its AlphaFold software, it demonstrated an unprecedented ability to predict protein structures with an accuracy rivaling that of expensive experimental methods such as X-ray crystallography, electron microscopy and nuclear magnetic resonance, at a small fraction of the cost and time.
These achievements are not solely the result of algorithmic innovations. Progress has been possible thanks to the confluence of several factors, such as access to large volumes of data and improvements in hardware, especially the use of graphics processing units (GPUs), which considerably accelerated model training times. All these developments are the fruit of a multidisciplinary effort: in important application fields, the problems to be solved are identified and, with the collaboration of specialists in those fields, successes have been achieved like those that gave rise to these Nobel Prizes.
The Nobel Prize could amplify Hinton’s absurd statements about a supposed superintelligence that threatens humanity
Computer science in general, and machine learning in particular, as an integral part of AI, are key pieces in a great deal of multidisciplinary research. They have become transversal and necessary tools for achieving new discoveries. This is nothing new: mathematics has long served to advance other disciplines. One need only recall some Nobel laureates in Economics.
In 1973 Wassily Leontief won the prize for the input-output method of finding economic equilibria, which applies matrix algebra. In 1990 Harry Markowitz was awarded for developing the theory of portfolio choice. John Forbes Nash developed the theory of non-cooperative games and negotiation processes, for which he received the Nobel Prize in Economics in 1994; his case is perhaps the most popularly known because his life was portrayed in the film “A Beautiful Mind” (2001). Later, in 2015, Nash also won one of the most prestigious awards in mathematics, the Abel Prize, for his contributions to the theory of partial differential equations.
Multidisciplinary work advances knowledge towards solving problems in specific fields. But transversal tools such as mathematics or AI, sometimes driven by the need to solve problems in particular areas, can also produce spectacular advances that are then successfully applied elsewhere. The AI recognized by the 2024 Nobel Prizes has changed the rules of the game in multiple scientific fields. Neural networks and, more recently, language models based on transformers, like the one that powers ChatGPT, are examples of these transversal tools.
The progress of AI in recent years has attracted enormous media and popular attention to the discipline. Possibly, if AI were not so fashionable, the Nobel Prize in Physics would not have gone to Hopfield and Hinton, considering, moreover, that their contributions are not to physics. What must not happen, in any case, is for this Nobel Prize in Physics to serve to amplify Hinton’s absurd statements about a supposed future superintelligence that endangers humanity. On the contrary, we are convinced that the future of AI remains to be written. That is to say, it is up to us whether or not it serves to make the world a better place.
AI needs well-curated and standardized data repositories to continue advancing for the benefit of humanity
To finish, we want to reflect on the role of data. We are convinced that one of the most important challenges for the future development of AI will be to ensure that models are trained with high-quality data. The success of AlphaFold has been possible, in large part, thanks to the Protein Data Bank: a repository of highly curated and standardized protein data. Unfortunately, in many other areas, the available data is not of the quality needed for AI, however sophisticated, to produce results as spectacular as those achieved with AlphaFold. AI is not magic; it depends entirely on the work of a broad community of researchers who have created the tools that are transforming science today.
Looking ahead, it is essential to ensure that AI continues to advance in a way that is rigorous and based on high-quality data. Only in this way can its transformative potential continue to benefit humanity.
Ramón López de Mántaras is a Research Professor at the Artificial Intelligence Research Institute of the CSIC.
Pedro Larrañaga is a professor at the Polytechnic University of Madrid.
Antonio Bahamonde is a professor at the University of Oviedo.