Microsoft has announced a set of new capabilities under an initiative called Trustworthy AI, aimed at addressing the challenges of AI security and privacy. The move is part of the company's broader push for responsible and secure AI development.
The new features include a mechanism for correcting AI "hallucinations" that can produce inaccurate results, along with improved evaluation and classification tools intended to strengthen security.
Microsoft has also launched a new feature called Confidential Inferencing for Azure OpenAI, which is designed to protect data while it is being processed during inference.
According to Microsoft officials, these initiatives respond to growing concerns about the security and reliability of AI systems, and the company says it aims to deliver trustworthy technologies across a range of fields.
By strengthening these measures, Microsoft positions itself at the forefront of responsible AI development and reinforces its standing in the cloud computing and AI markets.