It is well known that research is a main engine of economic and social progress, but it must work properly if it is to truly improve our lives. Unfortunately, there is clear evidence that some substantial aspects of it need repair: the lack of adequate funding, the low importance given in our country to interdisciplinary research, the need for a state pact on research, the evaluation of researchers, and the attraction and retention of talent. In this article we focus on the last two: evaluation and talent.
In Spain, the research of researchers and university professors, and of those who aspire to be such, is evaluated almost exclusively on bibliometric criteria, that is, by an accounting of the volume of publications, citations, and other numerical measures. With few exceptions, peer evaluation is excluded: the practice of having experts on the subject issue well-argued judgments about a researcher's most relevant contributions to the advancement of knowledge.
Proponents of bibliometric evaluation criteria argue that they are more “objective” and less costly than peer evaluation. The key measure here is the so-called impact factor of the journals where researchers publish their results. It is calculated for each journal as follows: the impact factor for a given year is the total number of citations received that year by the articles the journal published in the two previous years, divided by the number of articles the journal published in those two years.
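In symbols (a restatement of the definition just given, where $C_y(k)$ denotes the citations received in year $y$ by the articles the journal published in year $k$, and $N_k$ the number of articles it published in year $k$):

\[
\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\]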
The reality is that these indirect measures are not adequate for evaluating researchers' scientific contributions. A study by Philip Campbell, then editor of Nature (see “Escape from the impact factor”, Ethics in Science and Environmental Politics, Vol. 8, 2008), revealed that three quarters of the articles published in that prestigious journal do not contribute to the calculation of its impact factor. In fact, 80% of the articles published in Nature in a given year receive fewer than 20 citations over the next two years.
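This skew is a consequence of the impact factor being a mean, which a handful of highly cited articles can dominate. As a stylized illustration with invented numbers: suppose a journal published 100 articles over the two-year window, of which 10 received 100 citations each and the remaining 90 received 5 each. Then

\[
\mathrm{IF} = \frac{10 \times 100 + 90 \times 5}{100} = 14.5,
\]

even though the typical article received only 5 citations.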
Moreover, it is known that some journal editors, during the review process, ask the authors of an article to cite other articles recently published in the same journal, thereby artificially inflating the journal's impact factor.
In short, publishing an article in a high-impact journal does not by itself mean that the article is a high-quality scientific contribution. One would need to analyze in detail the relevance of the citations the article receives and the reasons behind them, since an article can accumulate many citations precisely because it contains erroneous results. The best-known example is the paper that appeared to demonstrate that nuclear fusion was possible at room temperature, the so-called cold fusion.
Malpractice
Measuring academic production by weight has brought with it a good number of tricks characteristic of the picaresque so common in our country. Some examples of this malpractice follow.
A researcher can inflate their numbers by making a publication pact with others, who appear as co-authors on the researcher's articles in exchange for the researcher appearing as a co-author on theirs. Such pacts often extend to reciprocal citation.
Then there is the well-known clickbait: articles with titles designed to attract the attention of other researchers because they deal with a fashionable topic, even though the articles themselves are contributions of no interest. Another strategy is to write articles that summarize the state of the art in a field (survey articles), since they are more likely to receive citations than articles containing original contributions.
Other times a scientific result is divided into pieces that do not exceed the minimum publishable unit, a trick also known as salami publishing. This artificially increases the number of publications and citations.
EL PAÍS, like other media, has recently published various news items reporting some of these bad practices that pervert the evaluation system, reaching extreme cases such as articles of one or two pages and no scientific relevance that contain hundreds of self-citations unrelated to their content. Our colleague José Luis Verdegay also recently published an excellent article on this subject (“Productivity and scientific quality: two sides of the same coin?”, University, 04/02/2024).
What should be done?
A reasonable way to approach a more faithful measurement of the quality of scientific production is to use criteria that already work internationally in the most scientifically advanced countries: for example, the Declaration on Research Assessment (DORA, San Francisco, 2013), the Joint Statement of Informatics Research Evaluation (Informatics Europe, 2020), or the Malaga Declaration of the Scientific Computer Society of Spain (SCIE, 2020). All of them insist that evaluation must include peer review of scientific contributions, analyzing in detail what those contributions add beyond the so-called “state of the art”, that is, beyond current knowledge of the subject under investigation, as well as how likely they are to help other researchers obtain more and better results, in line with Isaac Newton's famous phrase: “If I have seen further it is by standing on the shoulders of giants.”
But, above all, these evaluation methods need to be complemented by something that seems obvious: the responsibility of the person who uses the evaluation to make decisions.
The consequences of this way of evaluating
We want to highlight the negative consequences of the current system, which sets the goal that researchers must pursue to progress in their professional careers, and thereby give reasons to change things.
In too many cases, efforts focus on producing articles with the aim of accumulating citations, not necessarily on making contributions that matter in any way. We have verified that this nonsense is a reality. With this as the goal, many young people starting their scientific careers are tempted to join the citation game. Unfortunately, most of the time they are not invited to participate in interesting research that poses an intellectual challenge through which to grow as researchers. The result is that the most talented young prospects escape to places far from the Spanish academic research system. This is especially serious and common in our field, Artificial Intelligence, given the many offers from other countries and private companies with better resources and salary conditions.
The flight of talent frustrates our options for growth. In a certain sense, it destroys the hopes we had placed in a future that the new generations should lead. Our graduates need prospects for career progress that motivate their interest in being an essential part of our R&D&I system. It is essential to offer stable contracts and decent working conditions, including competitive salaries that recognize the value of their work. Simply publishing potentially citable material should not be a factor in attracting talent. In short, attracting and retaining research talent in Spain requires a paradigm shift in how the knowledge generated is managed.
We do not want to end these lines without pointing out some solutions for attracting and retaining talent, unoriginal though they may be. We suggest looking at the systems of nearby countries that have been successful, for example the British model. There, the funding of research centers and university departments depends on external peer evaluations, and the results of those evaluations affect researchers' salaries, including those of young doctoral students. Managers' responsibility for making their institutions grow therefore includes measures not only to retain talent but also to attract it. Beyond salary conditions, the other key piece is the ability to excite young people with the quality of teaching and of research projects.
We have left many important questions unanswered. For example, is it desirable for there to be universities where research is not a priority objective? Should research in large private-sector companies be funded with public money? These and other questions should be the subject of debate if we want an R&D&I system comparable in quality to those of the most scientifically advanced countries.