Cybersecurity authorities in the UK and US have joined forces to create a common set of guidelines for the secure development of artificial intelligence (AI). The measures aim to mitigate threats arising from the rapid spread of this disruptive technology. In a joint press release, the UK’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) announced the guidelines, describing them as the first of their kind to be agreed globally. Endorsed by 18 countries so far, the guidelines are the result of collaboration with 21 international ministries and take a holistic approach to AI security. The main objective of this security protocol is to help developers build systems that are secure by design, evaluating the security of AI throughout its entire life cycle, from initial development through deployment and updating.
The UK government highlighted that this initiative will help make cybersecurity an essential condition for AI systems and an integral part of the development process from start to finish. In other words, security will no longer be a secondary consideration but a fundamental requirement. The guidelines establish four key areas of assessment: secure design, secure development, secure deployment, and secure operation and maintenance. The UK, drawing on its expertise in cybersecurity, has committed to giving the highest priority to transparency and accountability, with the aim of creating a secure AI infrastructure and, in turn, making AI tools safer for users.
#ArtificialIntelligence #USA #Global #Security