Most of the physical consequences of global warming are still far off. And because we do not suffer them directly, it is hard for us to internalize the extreme gravity of the climate crisis the planet is going through. The Quebec AI Institute (MILA) wants to do its bit to make the public aware of the world we are heading toward. A team of researchers from the prestigious center directed by Yoshua Bengio, considered one of the fathers of neural networks, has developed a tool that lets anyone visualize the effects of floods, fires or pollution clouds anywhere in the world. The simulation is carried out by a generative adversarial network (GAN), a type of machine-learning algorithm used, for example, to generate realistic images from scratch, such as the faces of non-existent people on thispersondoesnotexist.com or deepfake videos.
The project, on which some thirty scientists have worked for two years, in fact has a name inspired by the iconic fake-people website: This Climate Does Not Exist. Open to the public since this Thursday, the user only has to type an address or select a point covered by Google Street View, then indicate which type of catastrophe they want to see: flood, wildfire or pollution cloud. The algorithm works its magic and returns the image with the requested effect. Of course, the images are not intended to be an exact portrait of what would happen in each specific place if no action is taken, but rather a recreation that shows the worst possible effects in the scenario the user chooses.
The realism is particularly striking in the flood option, the one that has caused Bengio’s team the most headaches. The algorithm takes the image proposed by the user, automatically places a layer of water on it and then paints it with reflections of the surrounding scene. The result is hyper-realistic.
“One of the most important challenges has been making the algorithm capable of simulating floods in a wide variety of images,” explains Alex Hernandez-Garcia, one of the project’s principal investigators. “Simplifying a lot, one module of the algorithm is in charge of detecting which parts of the image should be covered with water and another module is in charge of generating the texture of the water, incorporating the context of the image, for example the reflection of the buildings. Finally, the results of these two modules are combined to generate the final image.”
To detect which areas to cover with water and which to leave dry, Hernandez-Garcia and his colleagues combined various computer-vision and machine-learning techniques. “On the one hand, we generated a virtual city by computer, which gave us a set of images with and without water. We also trained an algorithm capable of making good predictions in that virtual world, detecting the different parts of a scene: ground, cars, buildings, trees, people, and so on,” he illustrates. “On the other hand, the algorithm must be able to make good predictions in real images (those from Google Street View).” For the latter they used generative adversarial networks (GANs).
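To make the two-module idea concrete, here is a minimal Python sketch of the general masker-and-painter pattern, not MILA’s actual system: a placeholder mask marks where the water should go, a crude vertical flip plus blue tint stands in for the GAN that paints reflective water, and the two results are composited. All function names and heuristics here are illustrative assumptions.

```python
import numpy as np

def placeholder_flood_mask(image):
    """Stand-in for the segmentation module: simply flood the lower third
    of the frame. The real masker is a learned network."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[int(2 * h / 3):, :] = True
    return mask

def placeholder_water_texture(image):
    """Stand-in for the painter module: mirror the scene vertically to fake
    reflections, then tint it blue. The real system uses a GAN conditioned
    on the image context."""
    reflection = image[::-1, :, :].astype(float)
    blue_tint = np.array([30.0, 60.0, 120.0])
    water = 0.6 * reflection + 0.4 * blue_tint
    return water.astype(image.dtype)

def composite_flood(image):
    """Combine the two modules: paint water only where the mask says so."""
    mask = placeholder_flood_mask(image)
    water = placeholder_water_texture(image)
    out = image.copy()
    out[mask] = water[mask]
    return out

if __name__ == "__main__":
    # Synthetic 256x256 RGB stand-in for a Street View photo.
    street = np.random.randint(0, 255, size=(256, 256, 3), dtype=np.uint8)
    flooded = composite_flood(street)
    print(flooded.shape)  # (256, 256, 3)
```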
The process takes a few seconds and, before the result is shown, the user is given information on the causes and consequences of the selected phenomenon and its relationship to climate change. For example, if you opt for a flood, it points out that flash floods kill about 5,000 people a year, that sea level is expected to rise two meters by the end of the century and that this major alteration of the planet will forever change the lives of at least one billion people by 2050. “If we do nothing soon, we will face major climate catastrophes,” says Professor Bengio, MILA’s scientific director. “This website makes the risks of climate change much more real and personal for people,” he says.
Generative adversarial networks
Artificial intelligence took a leap in quality around a decade ago with the emergence and consolidation of machine learning and deep learning. These techniques are based on training the machine so that, after drawing certain conclusions on its own, it is capable of performing complex tasks. For example, if you want the algorithm to distinguish between muffins and Chihuahuas, the programmer provides a series of labeled examples from each category and then feeds it thousands of unclassified images. The machine establishes its own associations and, when it makes a mistake and the error is brought to its attention, it refines its criteria for telling them apart.
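As a rough illustration of that example-and-correction loop (and not of any particular production system), the following Python snippet trains a simple classifier on invented two-number descriptions of pictures; a real system would learn from the pixels themselves, and the feature values are made up for the example.

```python
from sklearn.linear_model import LogisticRegression

# Invented two-feature descriptions of labeled pictures:
# [roundness, fur_texture], purely illustrative numbers.
examples = [
    [0.9, 0.1],  # muffin
    [0.8, 0.2],  # muffin
    [0.3, 0.9],  # chihuahua
    [0.2, 0.8],  # chihuahua
]
labels = ["muffin", "muffin", "chihuahua", "chihuahua"]

# Training is where the machine establishes its own associations.
classifier = LogisticRegression()
classifier.fit(examples, labels)

# New, unclassified pictures (again as invented feature pairs).
unseen = [[0.85, 0.15], [0.25, 0.85]]
print(classifier.predict(unseen))  # expected: ['muffin' 'chihuahua']
```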
Bengio won the 2018 Turing Award, considered the Nobel Prize of computer science, along with Geoffrey Hinton and Yann LeCun for their contributions to the development of neural networks. Neural networks are a further step in machine learning and try to imitate the functioning of the human brain: they apply several simultaneous layers of processing, which increases their performance. They are behind the most complex classification systems, such as voice assistants and advanced prediction models.
Generative adversarial networks (GANs) go a step further. They were also invented at MILA, in 2014, and are capable of generating new content that, to human eyes, seems totally real. They are behind the increasingly sophisticated computer-generated hyper-realistic fake videos (deepfakes) in which well-known figures can be made to say or do whatever one wants. They work thanks to competition between two neural networks: one tries to make images as realistic as possible and the other tries to detect whether they are real or an invention. That happens thousands or millions of times, and in the process the generating network learns to create ever more convincing images. When the first manages to fool the second, we have the winning image. For example, the Plaza de Cibeles flooded or the Capitol submerged in a toxic cloud.
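That generator-versus-discriminator competition can be sketched generically in a few lines of PyTorch. The toy model below learns to generate two-dimensional points rather than images, and its architecture and hyperparameters are arbitrary illustrative choices; it is not the model behind This Climate Does Not Exist.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to fake 2-D "data points".
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how likely a point is to be real.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Toy "real" data: points clustered around (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(1000):
    # 1) Train the discriminator to tell real points from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Generated samples should drift toward the "real" cluster at (2, 2).
print(generator(torch.randn(5, 8)))
```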
“We are using a new type of GAN, developed by ourselves, to generate the climate-change images that can be seen on our website,” the MILA team explains. “In general, the limited availability of images and the need to adapt the algorithm to a multitude of situations have been the main technical challenges we have faced,” says Hernandez-Garcia.