Quality of scientific publications beyond standard output indicators

We all know that it is impossible to determine the quality of scientific publications in absolute or categorical terms. There are, however, quantitative indicators that allow an approximate assessment of their impact on the scientific community. These indicators are useful for knowing which publications enjoy greater or lesser prestige in the different scientific disciplines and, more importantly but also more questionably, for making decisions about third parties based on the prestige of a publication (for example, the promotion of faculty). In this context, a relevant role is played by the so-called standard indicators of scientific production, whose common denominator is the citation of works published in journals and their presence, or absence, in reputable databases. To name only the best known, we could highlight the impact factor, the immediacy index, the h-index, the g-index, the Eigenfactor (Journal Citation Reports of Web of Science), the SCImago Journal & Country Rank, etc.
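To illustrate how purely citation-based these indicators are, here is a minimal sketch (in Python, with invented citation counts, offered only as an illustration) of how one of them, the h-index, is computed: it is the largest h such that h of the published papers have at least h citations each.

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # highest-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

# Invented citation counts for the papers of a hypothetical journal or author.
example = [25, 17, 12, 9, 7, 6, 1, 0]
print(h_index(example))  # -> 6 (six papers with at least six citations each)
```

Everything the indicator "sees" is the list of citation counts; nothing in the calculation reflects how the cited studies were actually conducted.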

We should ask, at this point, whether the greater or lesser prestige of a journal that disseminates scientific papers should rest on indicators based solely and exclusively on citations and on presence or absence in reputable databases. Contrary to many opinions, I believe that we could consider other, supplementary types of indicators that give an approximate idea, from a methodological perspective, of the quality of the papers a journal publishes. Leaving theoretical works aside, most scientific journals also publish works of an empirical nature, so it seems obvious, or at least reasonable, to ask what methodological quality these works possess.

Each discipline has its own methodological idiosyncrasies and jargon, although all of them, without exception, share a common core: the scientific method. Adapting that method, more or less orthodoxly, to each disciplinary context is not only imperative; it also enriches the methodological spectrum and protects it from stagnation. In this sense, what I defend here is that, beyond choosing between technical-rational or positivist methodological proposals and more flexible, open, and dialectical ones, attention has to focus on how these are implemented and whether or not they conform to methodological-scientific legitimacy.

There are, therefore, no good or bad, scientific or unscientific methodologies, even though, deep down, we know that the distinction between hard sciences, assumed to be methodologically rigorous, and social and human sciences, assumed to lack rigor, remains a firmly held and unjust belief. A methodological approach is more adequate the better it is adjusted to the research objectives or hypotheses formulated; that is, when it follows, to a greater or lesser extent, the steps and stages of the scientific method, within a tolerable degree of discretion.

It therefore seems reasonable to propose methodological indicators that can gauge the quality of the papers published in a journal beyond those based on citation counts. I have proposed a set of indicators of a methodological nature that would approximate the quality of published works, in principle in journals in the field of social sciences, but extrapolatable, with the relevant adaptations, to other fields.

These are simple, surface-level indicators that do not delve into more complex methodological issues. Even so, this first set of indicators could be very useful for denoting the methodological-analytical strength of the articles published in a journal, since they focus on determining whether quality criteria (reliability and validity) are reported, what the sample characteristics are, and so on. In this scenario, the findings and conclusions of studies are a direct consequence of the sound development of the methodological process implemented and, as Guba and Lincoln (2012) indicate in "Paradigmatic Controversies, Contradictions, and Emerging Confluences", will be more or less credible (credibility), extrapolatable (transferability), dependable (the study can be replicated with similar results), and confirmable (the results are not biased by the researcher's filters), insofar as the methodological-analytical process implemented in those studies is more consistent and rigorous. If this is fundamental in the social sciences and humanities, imagine the importance of findings in biomedicine (genetics, immunology, oncology, etc.), robotics, computer science, and other fields in which citizens' health and future well-being are at stake.
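As a hypothetical illustration, not part of the proposal itself, of the kind of quality criterion such indicators would look for, the short sketch below computes Cronbach's alpha, a widely used reliability coefficient, for an invented matrix of item responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 5 respondents answering 4 Likert-type items.
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 2))  # -> 0.93 for this invented data
```

A methodological indicator of the kind discussed here would not recompute such coefficients; it would simply record whether the published article reports them at all.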

I will not finish without emphasizing the dangers that this proposal could entail and the caution and moderation with which its results would have to be handled. Since these are novel indicators that could leave journals consecrated by the standard indicators in a bad position, it would be advisable to ask for the understanding of editors and directors, in the conviction that the good use of the results can be an excellent improvement strategy. Rigor, objectivity, and moderation in the interpretation of results and conclusions, together with understanding and a desire for improvement, are what this proposal most requires.

Clemente Rodríguez-Sabiote

Full professor in the Department of Research Methods and Diagnosis in Education at the University of Granada, Spain. His research focuses on institutional evaluation and advances in the analysis of computerized data. E-mail: clerosa@ugr.es
