AI develops drugs: it took less than six hours to invent 40,000 potentially lethal molecules. Researchers put an AI normally used to search for useful drugs into a sort of "adversarial" mode to demonstrate, at a conference on biological weapons control, how easily it could be misused.
All the researchers had to do was modify their methodology to search for toxicity rather than eliminate it, at which point the drug-developing AI produced tens of thousands of new substances, some of them similar to VX, the most powerful nerve agent ever developed. Shocked, the researchers published their findings this month in the journal Nature Machine Intelligence.
The publication of these results naturally shocked the community, and our colleagues at The Verge immediately went out of their way to interview the lead author of the study, Fabio Urbina, directly (the full interview is here), to understand just how worrying the story is.
Fabio Urbina, in addition to being the lead author of the study, is a senior scientist at Collaborations Pharmaceuticals, Inc., a company focused on researching drug treatments for rare diseases, and his job is to implement new machine-learning models in the area of drug discovery.
For example, if a candidate drug has excellent effects on a certain pathology but comes with significant side effects, the AI flags it as too dangerous and blocks it.
How the vulnerability in the drug-discovery AI was uncovered
The study grew out of an invitation the author and his colleagues received to Convergence, a conference held by the Spiez Laboratory (the Swiss Federal Institute for Nuclear, Biological and Chemical Protection) that aims to inform the wider community about new developments in tools that could have implications for the Chemical and Biological Weapons Convention.
At this conference, Urbina decided to talk about drug-discovery AI, and about machine learning in general, and about how it could be misused, something he and his colleagues had never thought about before.
The turning point was realizing that, while these AI models are built to predict toxicity ever more accurately in order to avoid it, all it took was flipping a switch so that, instead of steering away from toxicity, the model steered toward it.
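The "flipped switch" can be illustrated with a toy sketch. This is not the authors' actual pipeline; the scoring function, the numbers, and the candidate molecules below are invented purely to show that inverting the objective can amount to negating a single term in the optimization score:

```python
# Toy illustration: a generative drug-design loop typically scores candidate
# molecules with a reward that PENALIZES predicted toxicity. "Flipping the
# switch" negates that one term, so toxicity is rewarded instead.

def drug_score(bioactivity: float, toxicity: float,
               invert_toxicity: bool = False) -> float:
    """Combine predicted bioactivity and toxicity into one score.

    Normally higher toxicity lowers the score; with invert_toxicity=True
    the sign flips and the optimizer is steered TOWARD toxic molecules.
    """
    toxicity_weight = 1.0 if invert_toxicity else -1.0
    return bioactivity + toxicity_weight * toxicity

# Two hypothetical candidates with identical predicted bioactivity
safe = {"bioactivity": 0.8, "toxicity": 0.1}
toxic = {"bioactivity": 0.8, "toxicity": 0.9}

# Normal mode: the safe molecule scores higher
assert drug_score(**safe) > drug_score(**toxic)
# Inverted mode: the toxic molecule wins -- same model, one sign change
assert drug_score(**safe, invert_toxicity=True) < drug_score(**toxic, invert_toxicity=True)
```

The point of the sketch is that nothing about the underlying predictive model has to change; only the direction of the search does.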
In the published study, however, the scientists chose to withhold some details (rightly so), partly because, in their words, "it is easy enough for someone to replicate what has been done."
As Urbina says, the process still requires some expertise: someone who put it all together without knowing any chemistry would probably end up generating things that aren't very useful. But if that first requirement were met, a simple Google search for generative models would turn up a number of one-liner generative models that people have released for free.
And then, searching for toxicity datasets turns up a large number of open-source ones. Combine those two things, add the ability to program and build machine-learning models (which really requires only a computer and an internet connection), and it becomes easy to replicate what was done with the drug-discovery AI.
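The "recipe" described above can be sketched in a few lines. Everything here is invented for illustration: the tiny labeled dataset, the SMILES-like strings, and the character-counting "model" are toys, not any real open-source toxicity dataset or published method:

```python
# Toy sketch of the misuse recipe: (1) take an open toxicity dataset,
# (2) fit a simple predictor, (3) rank generated candidates so the
# MOST toxic prediction comes first instead of being filtered out.
from collections import Counter

# (1) Hypothetical labeled data: (SMILES-like string, is_toxic)
dataset = [
    ("CCO", 0), ("CCN", 0), ("CCOP", 1), ("CP(=O)", 1), ("CCC", 0),
]

# (2) "Train" a trivial model: which characters appear more often
# in toxic examples than in safe ones.
toxic_counts, safe_counts = Counter(), Counter()
for smiles, label in dataset:
    (toxic_counts if label else safe_counts).update(smiles)

def predicted_toxicity(smiles: str) -> float:
    # Net evidence from characters seen more often in toxic examples.
    return sum(toxic_counts[c] - safe_counts[c] for c in smiles)

# (3) Misuse step: sort candidates by PREDICTED toxicity, descending.
candidates = ["CCCC", "CCOP(=O)", "CCNCC"]
ranked = sorted(candidates, key=predicted_toxicity, reverse=True)
# The phosphorus-bearing string ranks first, because similar characters
# dominated the "toxic" half of the toy training data.
```

A real pipeline would use a proper molecular representation and a learned model, but the structure, open data in, predictor out, ranking inverted, is the same.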
"Science is all about open communication, open access, open data sharing. Restrictions are antithetical to this notion. But a step forward could be to at least be accountable in a responsible way as to who is using your resources."
says Urbina in the interview.
According to the researchers, this was a rather unusual publication, and they went back and forth over whether to publish it at all, because it describes a potential misuse that did not take long to carry out; in the end they wanted to get the information out, since no one had actually done it before.
Eventually they decided to move forward, reasoning that if it was possible for them, it is likely that some adversary somewhere is already thinking about it, or will in the future, and by then the technology may have progressed even further than what is possible now.
The ultimate aim of this research is to make as many researchers as possible aware of the potential for misuse. When you start working in chemistry, you are informed about the risks of misuse and you bear some responsibility for avoiding it as much as possible; in machine learning, by contrast, there is nothing of the sort, no guidance on the misuse of the technology.
Putting that awareness out there could help people genuinely recognize the problem, so that it is at least discussed in wider circles and becomes something to watch for as toxicity modeling gets better and better.
At the same time, the researchers are keen to set some boundaries: they do not want to suggest that machine-learning AI is about to start churning out toxic molecules, with a slew of new biochemical warfare agents just around the corner, nor do they want to be alarmist and claim that AI-led chemical warfare is coming. That is not the case now, nor in the near term. It is, however, something that is starting to become a possibility.