There is an almost instinctive fear of disruptive technologies and their impact on the labor market, since they destroy some jobs while creating new ones. It happened with electrification and with computing, and artificial intelligence (AI) is no exception. In the latter case, however, there is a paradox: AI can be a powerful tool for building employment profiles that until now were overlooked, producing more precise profiles that make candidates easier to hire.
Christened EMI, the Galician government presented last June its pioneering proposal to bring AI into the job search: a tool that lets job seekers create a profile that takes their abilities and skills into account and suggests jobs that match them. “A tool that in no way replaces a job counselor, but does help refine the guidance on which occupations the job seeker should pursue,” says Zeltia Lado, general director of Training and Qualification for Employment.
It is one more step in the application of so-called generative AI, capable of creating images, text or video (in this case, profiles), since ‘machine learning’ has been used by administrations since the 1990s: a more basic form of AI applied to specific tasks, such as calculating the risk of financial operations or the probability of loan default, in the case of the Ministry of Finance. But EMI, besides producing a job profile, can predict future trends in the labor market, in this case in Galicia, even recommending training aimed at filling positions that will be in demand.
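The loan-default scoring mentioned above is typically a simple classifier rather than generative AI. As a minimal sketch (not the ministry's actual model; the feature names, weights, and thresholds below are invented for illustration), a logistic model might look like this:

```python
import math

def default_probability(income, debt_ratio, late_payments,
                        weights=(-0.00005, 3.0, 0.8), bias=-1.0):
    """Score a loan applicant with a logistic model.

    Features and weights are hypothetical; a real system would
    learn them from historical repayment data.
    """
    # Linear score: higher debt ratio and more late payments raise risk,
    # higher income lowers it.
    z = bias + (weights[0] * income
                + weights[1] * debt_ratio
                + weights[2] * late_payments)
    # Sigmoid squashes the score into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))
```

With these made-up weights, an applicant with high income, low debt, and no late payments scores a lower default probability than one with low income, high debt, and several late payments; a lender would then apply a cutoff to that probability.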
For now, the Xunta de Galicia reports that 20,000 people have already been profiled through the tool, helping 3,700 unemployed people return to the labor market, and it expects to reach 70,000 profiles by September 2025.
But the qualitative leap involved in integrating generative AI does not come without challenges, as Juan Ignacio Rouyet, professor of Artificial Intelligence at the International University of La Rioja (UNIR), points out. He highlights two types of errors: bias on the one hand and hallucinations on the other. The first is easily understood through examples, such as when an AI asked to create images of lawyers returns only men, 80% of them white. This happened in a Bloomberg study, which also noted bias against men: when asked for images of teachers, 70% were women. All this in a study carried out in English, where both ‘lawyer’ and ‘teacher’ are gender-neutral words, used equally for men and women.
Hallucinations, on the other hand, occur when the AI gives a result that apparently does not follow from its training data. Rouyet cites a case in which ChatGPT, asked how to obtain silicon in the diet because of its benefits, answers that the solution is to ingest crystals because they contain a lot of silicon. For this reason, the professor stresses the need for constant human supervision to catch inappropriate or harmful behavior. He recounts how his classes have held numerous debates on whether processes should ever be fully automated, and his answer is blunt: “No, especially if we are talking about public administration.”
While waiting to see EMI's full potential, the Xunta is focused on developing new features, such as self-assessment tests that let the unemployed add soft skills to their profile from their mobile phones, as José González, Minister of Employment, Commerce and Immigration, showed on Friday during the presentation of EMI at the Casa de Galicia in Madrid.
Regulation against past errors
After the European Artificial Intelligence Act entered into force in August of this year, it is now up to the Member States to adapt the regulation to their national law and begin incorporating it into their administrations. The legislation establishes four risk levels (from minimal to unacceptable) to prevent harm to Union citizens. This is hardly surprising, since there have already been controversial cases within Europe, as well as outside the Union, such as China's social credit system, which assigns each person a score depending on how good a citizen they are deemed to be. There are also cases such as the use of AI in the Netherlands to estimate the probability that a citizen will commit a crime, with all the social and judicial implications of judging individuals for actions they have not committed.