José Luis Hernández Ramos (Murcia, 37 years old) is a Marie Skłodowska-Curie researcher at the University of Murcia, where he completed his bachelor's, master's and doctoral degrees in Computer Science. "When I was a child and played with little machines like the Game Boy, I wondered how it was possible that all those images came out when I inserted the cartridges," he confesses. Over the course of his career he has been a scientific officer at the European Commission and has published more than 60 research articles, in addition to collaborating for five years with the European Union Agency for Cybersecurity (ENISA) and the European Cyber Security Organisation (ECSO), where his interest in the real-world impact of his work arose. "Any researcher has to ask themselves what artificial intelligence can do in their field," he says. Now his Gladiator project has been chosen as one of the 58 selected for the BBVA Foundation's Leonardo 2023 grants, endowed with up to 40,000 euros, to develop an artificial intelligence tool capable of detecting cybersecurity threats and analyzing malicious software.
Q. How would you summarize the goal of the Gladiator project?
A. The research seeks to apply large language models, such as ChatGPT, Bard or Llama, to cybersecurity problems. When we want to adapt one of these models to a specific domain such as cybersecurity, we need to adjust the model to the terminology of that discipline. We want to understand how to adapt these models to detect an attack and, at the same time, fine-tune them with cybersecurity information to improve their performance on problems of this type. The project will run until March 2025.
Q. How will the language models be adapted to the project's needs?
A. Cybersecurity-related information is collected, using databases that contain information on threats, and the model is trained or fine-tuned with it. In this way, the model improves its understanding of what a cybersecurity threat is.
Q. How does artificial intelligence detect cybersecurity threats and combat them?
A. Many AI-based systems rely on learning what is and is not an attack, for example using datasets of network connections from a given environment. What we are looking for with the project is to analyze cybersecurity information that comes in text format, whether related to vulnerabilities or found on social networks and other sources, and determine whether it represents a threat or not.
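The text-based classification he describes can be illustrated with a toy example. The sketch below is a minimal naive Bayes classifier in pure Python with invented training data; the project itself works with large language models, not this simplified technique.

```python
from collections import Counter
import math

# Toy labeled examples (all invented for illustration).
TRAIN = [
    ("remote code execution vulnerability exploited in the wild", "threat"),
    ("phishing campaign steals credentials via fake login page", "threat"),
    ("malware sample exfiltrates data to command and control server", "threat"),
    ("scheduled maintenance window for the mail server this weekend", "benign"),
    ("new feature release notes for the mobile application", "benign"),
    ("team meeting moved to friday afternoon", "benign"),
]

def train(examples):
    """Count word frequencies per class (a tiny naive Bayes model)."""
    word_counts = {"threat": Counter(), "benign": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the most likely class, using add-one (Laplace) smoothing."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts = train(TRAIN)
print(classify("new phishing page steals login credentials", word_counts, class_counts))
# prints "threat"
```

An LLM fine-tuned on threat intelligence plays the same role as this classifier, but can exploit context and terminology far beyond simple word counts.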
Q. What is the difference between earlier cybersecurity systems and those used now?
A. Security systems need to become increasingly intelligent at detecting possible threats, drawing on artificial intelligence techniques. Previously, these systems detected attacks by searching databases of known threats. They need to evolve to identify attacks they are not yet aware of.
Q. And what types of attacks could be prevented or identified?
A. The application of artificial intelligence techniques in cybersecurity will allow us to improve the identification and detection of a wide variety of attacks. A phishing attack is a clear example of how language models can help, by analyzing the text or content of an email. We can also identify whether multiple devices are colluding to launch an attack, that is, whether it comes not from a single source but from several.
Q. And from home, how can artificial intelligence be used to combat attacks?
A. We now have 24-hour access to artificial intelligence through tools like ChatGPT, which gives us the ability to promote cybersecurity education and awareness. Anyone can ask the tool how to protect themselves or how to configure a device to make it less vulnerable. It is important to know that the models are not perfect; the results and answers they offer still need to be cross-checked.
Q. Would artificial intelligence help detect if an application has been altered?
A. Absolutely. It would help detect, for example, whether an application is fake or malicious. In fact, in this kind of analysis of applications, code and software in general, we are also seeing initiatives that use language models to analyze software code.
Q. Is artificial intelligence capable of detecting data theft or misinformation?
A. Yes, although attackers tend to be increasingly creative and naturally make better use of these tools.
Q. Does artificial intelligence help both those who want to create disinformation and those who want to combat it?
A. Yes, it is a double-edged sword. In the wrong hands, it can be used to launch increasingly sophisticated attacks. There is also the danger posed by the general access we now have to artificial intelligence systems such as ChatGPT and others.
Q. How can a tool like ChatGPT be used to define or generate an attack?
A. When you ask a system like ChatGPT to generate an attack, the first thing it tells you is that it will not do so because it could lead to a cybersecurity problem. However, it is very easy to trick the tool by telling it that you need to know about the attack because you want to make your system more robust, or because you want to teach it in a class. In those cases the system does give you the answer.
Q. Will the project system allow for the design of a tool in which sensitive data does not have to be shared?
A. The research aims to understand the problems and limitations involved in adjusting the language model in a decentralized way. Right now, a model is trained and tuned with various sources of information, including what I give it myself when I interact with the system. The idea is to decentralize this process: instead of having to share sensitive information, related information can be shared, without sending details of the specific vulnerability, so that the system can still identify an attack.
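The decentralized tuning he describes resembles federated learning, where each party trains on its own data and shares only model updates, never the raw records. The following is a minimal, illustrative sketch of federated averaging in pure Python (toy linear model and invented data, not the project's actual system):

```python
# Illustrative federated averaging: each client updates a copy of the model
# on its private data and shares only its weight vector; the server averages
# the vectors, so sensitive raw records never leave a client.

def local_update(weights, private_examples, lr=0.1):
    """One gradient-descent step of a linear model y = w . x, done locally."""
    new_weights = list(weights)
    for x, y in private_examples:
        pred = sum(w * xi for w, xi in zip(new_weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            new_weights[i] -= lr * err * xi
    return new_weights

def federated_average(client_weight_lists):
    """Server step: average the clients' weight vectors coordinate-wise."""
    n = len(client_weight_lists)
    return [sum(ws) / n for ws in zip(*client_weight_lists)]

# Two clients with private datasets (invented toy numbers).
global_weights = [0.0, 0.0]
client_data = [
    [([1.0, 0.0], 1.0)],   # client A's private examples
    [([0.0, 1.0], 1.0)],   # client B's private examples
]
for _ in range(20):  # communication rounds
    updates = [local_update(global_weights, data) for data in client_data]
    global_weights = federated_average(updates)
print([round(w, 2) for w in global_weights])
# prints [0.64, 0.64]
```

Fine-tuning a large language model this way faces the limitations the project studies, such as the communication cost of exchanging billions of parameters and the risk of leaking information through the updates themselves.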
Q. What is the goal you would like to achieve with your project when it ends, in 2025?
A. To improve our ability to use language models to address cybersecurity problems, and to create a system that helps us identify an attack, understand how and why it occurred, and look for relationships between different attacks that help predict whether my system is going to be attacked. We also want to know whether artificial intelligence is capable of generating a countermeasure that addresses and resolves a cybersecurity attack, for example by automatically applying a security patch.