Investigación bibliotecológica

On-line version ISSN 2448-8321; print version ISSN 0187-358X

Investig. bibl. vol. 33 no. 79, Ciudad de México, Apr./Jun. 2019, Epub Jan 08, 2020

https://doi.org/10.22201/iibi.24488321xe.2019.79.57930 

Articles

The binomial center-periphery and the evaluation of science based on indicators

El binomio centro-periferia y la evaluación de la ciencia con base en indicadores

Dirce Maria Santin* 

Sônia Elisa Caregnato* 

*Universidade Federal do Rio Grande do Sul, Brazil dirce.santin@ufrgs.br, sonia.caregnato@ufrgs.br


Abstract

The evaluation of science based on bibliometric indicators can generate greater visibility for the science of the peripheries or build and sustain peripheral situations in the scientific system. This article reflects on the center-periphery binomial in science, the division between mainstream and peripheral science, and the use of indicators to evaluate peripheral spaces. It discusses the limited scope of mainstream science metrics for evaluating the peripheries and the need to adapt indicators to the fields and contexts where phenomena occur, recognizing the objectives of science and technology systems. It concludes by pointing out the main challenges of science evaluation in peripheral spaces, putting emphasis on the creation of data sources more representative of the science of these countries and on the search for more inclusive indicators, with a plural and contextual approach capable of representing peripheral science more broadly.

Keywords: Science Evaluation; Scientometrics; Bibliometric Indicators; Center-Periphery; Peripheral Science

Resumen

La evaluación de la ciencia con base en indicadores bibliométricos puede generar mayor visibilidad para la ciencia de las periferias o construir y sostener situaciones periféricas en el sistema científico. Este artículo reflexiona sobre el binomio centro-periferia en la ciencia, la división entre la ciencia mainstream y periférica y el uso de indicadores para la evaluación de espacios periféricos. Discute el alcance limitado de las métricas de la ciencia mainstream para evaluar las periferias y la necesidad de adaptar los indicadores a los campos y contextos en que ocurren los fenómenos, con el reconocimiento de los objetivos de los sistemas de ciencia y tecnología. Concluye apuntando los principales desafíos de la evaluación de la ciencia en espacios periféricos poniendo énfasis en la creación de fuentes de datos más representativas de la ciencia de esos países y en la búsqueda de indicadores más inclusivos, con enfoque plural y contextual, capaces de representar más ampliamente la ciencia de las periferias.

Palabras clave: Evaluación de la Ciencia; Cientometría; Indicadores Bibliométricos; Centro-Periferia; Ciencia Periférica

Introduction

The evaluation of science based on bibliometric indicators is an increasingly common practice in countries around the world. These indicators are potentially useful for revealing science configurations in the most diverse contexts, but their scope is limited and they are often guided by paradigms and instruments from mainstream science. Science is an extremely complex system, with considerable differences among countries, regions and fields of knowledge. Therefore, the evaluation of science based on bibliometric indicators can either generate greater visibility for the science of peripheral countries or build and sustain peripheral situations.

Recent years have been marked by important movements toward the more appropriate use of indicators in science evaluation. The San Francisco Declaration on Research Assessment (DORA), launched in 2012 during the Annual Meeting of the American Society for Cell Biology, recognized the need to improve the way in which the outputs of scholarly research are currently evaluated. It is a worldwide initiative covering all disciplines and including funders, publishers, professional societies, institutions and researchers (“San Francisco”..., 2012). The Leiden Manifesto for Research Metrics, launched in 2015 by a group of experts from the Centre for Science and Technology Studies (CWTS) of Leiden University and other institutions, has also emerged as an important reflection on the appropriate use of science metrics. The manifesto proposes ten key principles and aims to have them adopted by managers, researchers and experts as best practices in the use of metrics to evaluate research performance (Hicks et al., 2015). In addition, several studies have discussed the use of indicators in peripheral countries and criticized the current competition regime in science and the indiscriminate use of rankings based on the notion of academic excellence (Ràfols et al., 2012; Stilgoe, 2014; Vessuri, Guédon, and Cetto, 2014; Ràfols et al., 2016a).

In this context, understanding the center-periphery relationship and the multiplicity of aspects related to the quantitative evaluation of peripheral science is essential in order to broaden the debate on the use of indicators in peripheral contexts, to support the development of evaluation policies, programs and studies at the various levels, and to promote the theoretical development of Scientometrics, in addition to encouraging new perspectives for the study of science in the peripheries.

Based on the literature, this article reflects on the center-periphery binomial in science, on the division between mainstream and peripheral science, and on the use of bibliometric indicators in diverse geographical and disciplinary spaces. Its objective is to review and discuss center-periphery relations and the use of bibliometric indicators to evaluate science in peripheral spaces of the international scientific system. Special attention is given to the appraisal of peripheral science through the use of indicators, considering the principles of The Leiden Manifesto for Research Metrics (Hicks et al., 2015). Two main perspectives are brought to bear on the evaluation of peripheral science based on indicators: the first is associated with social and political issues of the periphery, focusing on the center-periphery binomial and its influence on the evaluation of science in less favored contexts of world science; the second contemplates technical issues regarding the use of indicators, such as the restrictive coverage of databases and the limited scope of mainstream science metrics for appraising peripheral science.

The main challenges for evaluating science in peripheral spaces are also discussed in order to reflect on the use of bibliometric indicators in these contexts. Special emphasis is given to the need to develop data sources sufficiently representative of peripheral science, without which its results remain permanently under-represented, and to propose more inclusive indicators with a plural and contextual approach, capable of representing the science of peripheral regions and fields more widely. Finally, other challenges of peripheral science are also discussed to inform reflection on the contextualized evaluation of the activity and scientific impact of peripheral spaces.

The Center-Periphery Binomial in Science

Ràfols et al. (2016a) define as peripheral those countries that are “following” rather than “leading” in many scientific fields. From this perspective, peripheral countries can be identified in various ways: their researchers tend to study or receive training in more central countries; they tend to be under-represented on the editorial boards of international journals; their national journals are often under-represented in mainstream bibliographic databases; and they usually give more citations than they receive. In summary: “they have a dependent asymmetrical relation in mobility and communication patterns” (Ràfols et al., 2016a: 1).

Other authors also portray peripheral science through aspects such as: the absence of a viable scientific community; limited access to scientific information and inadequate communication within the local and international community; long delays in joining emerging research fronts; weak institutional infrastructures; excessive dependence on science from other countries for growth and sustainability; and an insubstantial contribution to the world’s knowledge base, reflected in particular by citation data from publications (Argenti, Filgueira, and Sutz, 1990; Arunachalam, 1992; Fink et al., 2014; Salager-Meyer, 2015; Chinchilla-Rodríguez, Miguel, and Moya-Anegón, 2015).

The terms “center” and “periphery”, denoting a relationship of dependence, were quite common in the economic literature of the second half of the 20th century. In Sociology, the theme gained visibility with the publication of the study Centre and Periphery by the American sociologist Edward Shils, for whom societies constitute quite similar structures in which it is possible to recognize a dominant central zone and several peripheral zones (Mueller and Oliveira, 2003). According to Shils (1975), the central zone is the center of the order of symbols, values and beliefs that govern society in its various aspects.

As happens with the economy and society, the center-periphery dichotomy is also present in science. In all these cases, the periphery tends to be dependent on the center. Thus, the central value system constitutes the central area of science. Central values are pursued to a greater or lesser degree by the peripheral zones, which see in the center a model to be followed, with values and beliefs to be incorporated. Mainstream science can therefore be described as the set of agents and structures legitimized by the central value system (Arunachalam, 1995; Guédon, 2011).

The structure of social organizations described by Shils (1975) is easily perceptible in the scientific field, where the central zone accumulates most of the knowledge and the best means of promotion, producing a larger and, above all, more relevant body of new knowledge. It is also from this center that the central value system arises, which controls the most influential scientific journals, indexes and databases, in addition to establishing the evaluation criteria for scientific communities (Mueller and Oliveira, 2003).

The dichotomy in science presupposes the existence of a center that concentrates power and establishes a system of values recognized and adopted by consensus, though not unanimously, by peripheral regions and countries (Mueller and Oliveira, 2003). This supposedly more creative center attracts the perspectives of the periphery and manifests its authority over it. In so doing, it establishes a value system that not only determines the norms of mainstream science, but also sets the basis for its own legitimation and for the maintenance of power structures in the scientific field (Shils, 1975; Bourdieu, 1988).

On the other hand, the more dispersed the peripheral spaces are, the smaller their opportunities to influence the central order of mainstream science seem to be. The center itself is not cohesive, and the complexity of the present time contributes to the establishment of smaller centers within the fragmented space of mainstream science. The dominant center may at any time lose power in the presence of another center that overcomes it, because being a center is not a permanent or peaceful condition, but rather one imposed by authority (Shils, 1975; Mueller and Oliveira, 2003). The dominant position is always in dispute, which seems to reinforce Bourdieu’s view of fields as spaces of struggle (Bourdieu, 1988). Moreover, there is no longer a single center that reigns absolutely, but several centers that coexist and exert greater or lesser influence in certain fields or geographical spaces (Schott, 1998).

The center-periphery conflict is probably more complex in the contemporary scenario. The center’s peak is no longer so high, and perhaps no longer unique, and the periphery is no longer so distant, although it does not thereby become less peripheral. The current configurations of society and of the scientific field do not substantially change the center-periphery relationship. While some countries, fields and social groups are closer to the center, others are further away. Globalization, although increasing the integration of the scientific community, can also crystallize the center-periphery distance and generate new difficulties for peripheral areas.

The peripheral condition implies being far from the innovative center, having more limited means of production and dissemination of science, and having lower international visibility (Mueller and Oliveira, 2003). The peripheries generally have unequal access to power and irregular relationships in several spaces. They make use of analytical categories from the core (mainstream science), but have little room to influence the main themes or the research agendas (Ràfols et al., 2016b). Peripheries sometimes establish irregular relationships based on the individual efforts of researchers, which therefore tend to be less stable over time due to the lack of continuity that agreements between institutions and countries would encourage or guarantee.

Being at the periphery does not only mean being outside the central zone; it also means being attracted to and influenced by the center’s perspective, even if only partially. Nevertheless, the center’s power is not continuous and absolute, nor does it fully govern the principles and relationships established in global, regional or local science. The periphery makes its own “choices” in relation to science and technology, although they are partially dependent on the center.

The center-periphery binomial implies a relationship that can generate dependence or revolt. The peripheries can remain as such or else make efforts to change their positions in relation to the center. In this perspective, bibliometric indicators can either contribute to the maintenance of central and peripheral positions or offer alternative or new perspectives on peripheral science.

The indexing of journals in international databases is an example of a topic that stimulates the debate about the centrality of mainstream science and its strategic role in maintaining the power structures of world science. The foundation of the Institute for Scientific Information (ISI) in 1960 contributed to the strengthening of the paradigm of mainstream science, which considers the publication of articles in indexed, high-impact journals one of the most representative indicators of scientific productivity. The Science Citation Index (SCI) is a clear representation of mainstream science, radically separating the main science from the rest of the publications and reinforcing the division of science into central and peripheral axes. The influence of databases and international publishers goes beyond this, however, because while it promotes mainstream science, it also contributes to the emergence and choice of scientific vocations, as well as to the definition of regional, national and institutional research agendas (Morales Gaitán and Aguado López, 2010; Guédon, 2011; Aguado Lopez et al., 2014; Vessuri, Guédon, and Cetto, 2014).

The Impact Factor is also a representative indicator of mainstream science. Expressed as the average number of citations received by the articles a journal published in the previous two years, the indicator has been widely used in scientific evaluation processes, although much criticized. Criticism refers particularly to the coverage bias in favor of journals published in English, the over-representation of the hard sciences, and the minimal presence of journals from the Social Sciences and Humanities and from several regions of the world, such as Africa, Latin America and the Caribbean (Aleixandre Benavent, 2009; Torres-Salinas and Jiménez-Contreras, 2010; Aguado Lopez et al., 2014).
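
Written as a formula (in LaTeX notation), the two-year Impact Factor described above is the following ratio; this is the standard textbook rendering of the definition given in the text, while the precise citation window and the notion of “citable items” are set by the index producer and are not specified in this article:

    \mathrm{IF}_{y} = \frac{\text{citations received in year } y \text{ by items published in } y-1 \text{ and } y-2}{\text{number of citable items published in } y-1 \text{ and } y-2}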

The first decade of the 2000s was marked by the emergence of new international databases, such as Google Scholar, Microsoft Academic Search and especially Scopus, as well as by the expansion of the Web of Science’s coverage. The increase in the number of regional journals in international databases, which at first glance could indicate increased production in peripheral regions, reflects the opening of these databases’ editorial policies more than substantial changes in regional communities, production patterns or publication strategies. Although the emergence of new databases challenges the major indexes of international science, demanding that they expand their scope, it does not really reconfigure the center-periphery division (Santa and Herrero-Solana, 2010; Guédon, 2011; Collazo-Reyes, 2014).

Another concern associated with the division between mainstream and peripheral science is the proliferation of rankings based on excellence indicators (Stilgoe, 2014; Vessuri, Guédon, and Cetto, 2014). Even when they use objective indicators, sometimes without adequate normalization, rankings can mask distortions and place countries, institutions, fields and journals in disadvantaged positions, even if they produce and disseminate relevant, quality research. Journal rankings, for example, may have negative implications for interdisciplinary fields by discouraging interdisciplinarity in evaluation systems that systematically reward disciplinary research of excellence (Ràfols et al., 2012).

Indicators of excellence may also show systematic biases in favor of countries and institutions from the center, with serious implications for research management and the allocation of financial resources in diverse contexts, including peripheral spaces.

The publication of scientific journals by major publishers is also a relevant topic in the debate over the center-periphery relationship. In a study on the scientific publishing industry in the digital age, Larivière, Haustein, and Mongeon (2015) pointed out the existence of an oligopoly of academic publishers, with emphasis on five commercial publishers that concentrate more than 50% of the publications indexed in the Web of Science, followed by the major international scientific societies, which maintain their strength despite the progressive reduction of their presence in some fields. The study clearly indicates a decline in the proportion of journals published by small and medium-sized publishers, with significant differences among fields. This movement implies not only an increase in the major publishers’ share of world scientific production, but also the expansion of their power and control in defining the publishing lines of mainstream science, which favors the high-impact themes and journals characteristic of the central area of science.

Although the global changes of recent decades have led to reconfigurations in scientific communication and publishing strategies, including open access, as well as in internationalization policies and in the dynamics of communication in the digital age, the paradigm of mainstream science remains in effect in the scientific system and poses a great challenge for peripheral countries and regions worldwide. This perspective also applies to the evaluation of science based on bibliometric indicators, whose efforts should be directed towards different contexts in order to understand the patterns and practices of publication and citation of different fields, geographic spaces and social groups.

Scientometrics and Science Evaluation

Scientometrics comprises the quantitative analysis of science based on the products and results of science and on the processes of production and use of scientific knowledge. It includes studies of scientific activity, collaboration and citation, and various other indicators of science and technology. It also examines the development of science and applies analyses based on historical, economic and social aspects. The evaluation of science, in turn, has a broader focus and contemplates the processes, activities, results and impacts of science in various contexts (Velho, 1990; Spinak, 1998; Maricato and Noronha, 2013).

Scientometrics developed from the bibliometric studies carried out since the beginning of the 20th century in order to measure phenomena related to information. The pioneering studies of the 1920s to 1940s used bibliographical statistics to understand the general characteristics of scientific information and to predict communication patterns. During this period the three fundamental laws of bibliometrics were also established: Lotka’s Law, which evaluates the productivity of authors by means of the frequency distribution of papers in a set of documents; Bradford’s Law, which identifies the core and dispersion zones in a set of journals; and Zipf’s Law, which accounts for the frequency of use of words in a set of texts (López López, 1996).
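
For reference, the three laws are commonly written as follows (standard textbook formulations in LaTeX notation; the article describes the laws only verbally, so the exponents and constants below are the conventional ones rather than the authors’ own):

    A(n) = \frac{A(1)}{n^{2}} \quad \text{(Lotka: number of authors publishing } n \text{ papers)}

    1 : k : k^{2} \quad \text{(Bradford: relative numbers of journals in successive zones yielding equal numbers of articles)}

    f(r) \approx \frac{C}{r} \quad \text{(Zipf: frequency of the word ranked } r \text{ in a text)}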

The 1950s and 1960s were marked by the development of bibliometric studies aimed at assessing scientific activity, in particular through the contributions of the science historian Derek de Solla Price and the emergence of the Institute for Scientific Information, created by Eugene Garfield in 1955, which gave rise to the SCI, currently integrated into the Web of Science. In the same period, Price developed studies on the growth of science and related it to the increase in publications, formulating the Law of Exponential Growth of Science. Price also noted increased collaboration among scientists, especially within the “invisible colleges” (Price, 1986; Callon, Courtial, and Penan, 1995).

Scientometrics was established as the discipline that studies the structure and properties of scientific information and the general laws of science communication. Considered the “science of science” by Price (1986), it inherited the quantitative dimension of the tradition of Robert Merton’s Sociology of Science, grounding the evaluation of scientific activity in the theoretical and epistemological assumptions of that paradigm (Velho, 1990; Spinak, 1998).

Scientometrics is related to the Sociology of Science, but it also comprises, beyond indicators themselves, other applications for the development and evaluation of science policies. The evaluation of science, however, has a broader focus and includes the monitoring of research in countries and institutions and the dissemination of their contributions at the local, national or global level; accountability to funders for the benefits obtained; support for research funding decisions regarding the allocation of resources and the definition of investments for the promotion of science; and understanding of the patterns and trends of science and their impact on the creation of new knowledge and on economic and social development, among others (Maricato and Noronha, 2013; Penfield et al., 2013).

Science evaluation is divided into two major and complementary approaches: qualitative, based on peer review, and quantitative, based on bibliometric indicators. Peer review has a strong tradition in science and essentially addresses aspects relating to the quality of publications. It originated in the 17th century with the establishment of the first scientific societies and the creation of the journals Philosophical Transactions, founded by the Royal Society of London in 1665, and the Journal des Sçavans, founded by Denis de Sallo in 1665. In the 20th century, peer review was consolidated as the central method for evaluating quality in science. The system is based on parity, plurality of ideas and, in most cases, the anonymity of authors and evaluators. It stands at the basis of the social control of science and of the reward system, since it guarantees not only the quality of registered knowledge, but also the recognition of the priority of discoveries and the autonomy of the scientific fields (Maltras Barba, 2003; Stumpf, 2008).

For obvious reasons of time and cost, it would be unthinkable to use peer review to evaluate the entire output of a national or institutional research system. On the other hand, bibliometric indicators cannot cover the entire range of publications resulting from research. Another concern refers to the use of publication counts as the sole indicator of productivity and of citations as an indicator of the quality of science, since productivity alone may not reveal significant aspects of science, and citations do not always reflect the quality of the publication. Bibliometric indicators can also be affected by data manipulation, which raises questions about their use in the evaluation of institutions and research fields. Conversely, the subjective judgments of reviewers may be influenced by positive or negative attitudes of one researcher toward another, which means that intentional bias can occur both in objective analyses and in the qualitative judgments of peer review (Abramo and D’Angelo, 2011).

The complementary use of qualitative and quantitative approaches thus emerges as the most likely path to a balanced assessment of science. The objectives, context and variables of an evaluation can shift the weight of preference in favor of one or the other method. Previous studies have shown a positive, albeit moderate, correlation between the quality estimates attributed by peer review and the citations received by publications, which reinforces the complementary nature of the two methods (Abramo and D’Angelo, 2011; Schroder et al., 2014).

Science Evaluation in Peripheral Spaces

The last decades have been marked by the growth of evaluation policies and practices in countries around the world. In this scenario, the evaluation of science and technology has received the attention of peripheral and developing countries, which have used indicators and shown an interest in developing methodologies suited to the evaluation of science in peripheral contexts. For instance, a number of Latin American and Caribbean countries are looking for answers for their evaluation systems, although many of the available indicators cover mainstream science more widely and do not adequately address the region’s research themes and agendas (Russell, 2000; Velho, 2004).

The plurality and heterogeneity of science require that evaluation policies and processes follow the specificities of each field, country or institution, with their scientific profiles and publication cultures. The universalistic perspective of science evaluation (Chavarro, 2016) may conflict with the social and political demands of local knowledge, which do not necessarily align with international science (Ràfols et al., 2016b). Despite the supposed universality of the indicators, their use requires careful adaptation to the social, political and economic context in which the phenomena occur, as well as recognition of the objectives that guide each institution or science and technology system. This attention is particularly important for small countries, developing economies and countries with limited experience with science and technology indicators (Argenti, Filgueira, and Sutz, 1990).

Bibliometric indicators are quantitative measures of science based on publication and citation data (Price, 1986). They are characterized by a quantitative approach and by evaluation scales, which can be macro, meso or micro; they reveal the scientific performance of a particular field, country, institution or research group and allow the analysis of the configurations of science over time (Glänzel, 2003). The main sources for these data are the citation indexes, which gather information about the academic literature and its impact. In addition to supporting bibliographical research and access to scientific information, the indexes favor the understanding of the characteristics and dynamics of scientific output and its impact, and support the processes of science evaluation.

Indicators are potentially useful instruments for the management and evaluation of science and technology systems because they reduce time and cost, increase objectivity and transparency, and reduce the complexity of results, making them more accessible to different audiences (Ràfols et al., 2016b). The increasing use of quantitative methods for the evaluation of science accompanies the need for greater governance in science and can be associated, according to Gläser and Laudel (2007), with three main factors: lower cost and greater agility in the face of an increasing demand for evaluation; greater objectivity and reliability than peer review; and easier interpretation, making the results more accessible to non-specialists. The advantages of indicators over qualitative evaluation lie not so much in greater effectiveness in evaluating results, but in the possibility of evaluating large volumes of data. This characteristic confers robustness, accuracy, validity and functionality on metric studies, as well as feasibility in terms of time and cost for the evaluation of science.

Bibliometric indicators are important for the evaluation of scientific activity and impact, but they do not capture many aspects of science. Some phenomena may be better understood through qualitative evaluation, and others require multiple and complementary approaches. Traditional and alternative indicators (altmetrics) may also complement each other. Criticism of indicators, however, is not limited to the quantitative approach or to the exclusive use of traditional metrics; it also refers to the dominant focus of indicators, marked by the domain of mainstream science and predominantly based on academic excellence.

When referring to the scope of indicators, Ràfols et al. (2016b) proposed a scheme to illustrate the limited coverage of science and technology evaluation indicators. The scheme consists of three concentric circles that illustrate the space of problems (large circle), the space of research (intermediate circle) and the space of research “illuminated” by the indicators (small circle). The figure reveals the breadth of science and scientific problems and the limited scope of indicators for understanding phenomena, while at the same time indicating the possible exclusion of activities and contexts due to the lack of “illumination” by indicators. Further developments of the scheme of Ràfols et al. (2016b) also indicate that the scope of the indicators is limited to the aspects they can reveal, considering geographical, cognitive, linguistic, sectoral and social spaces. In addition, a given indicator may be sufficiently representative of the results of science in some countries and prove inadequate for other contexts, especially in peripheral countries or for topics of local or regional interest.

The scheme reveals not only the limits of the indicators, but also suggests the existence of a wide space to be explored in order to ensure greater comprehensiveness in the evaluation of science, especially in peripheral areas. Peripheral science is characterized by the use of local languages in publications, by the non-indexing of regional journals in international databases and by the low impact of publications, among other aspects. These attributes characterize the science of peripheral spaces, but they are not what renders that science invisible in evaluation processes; the invisibility stems, to a large extent, from the limited scope of the indicators themselves, as shown in the figure proposed by Ràfols et al. (2016b).

Several regions of the world are considered peripheral, as are fields of knowledge and groups of lower visibility. Peripheries tend not to be adequately covered or targeted by mainstream science indicators. Each periphery has its own systems of generation and use of knowledge, and its evaluation may require different types of indicators, or multiple indicators capable of contemplating local and regional potentialities. The simple transposition of indicators from mainstream science to peripheral spaces tends to generate inadequate analyses and harmful effects on science, with possible consequences for the science and technology systems of countries and regions, as well as implications at the individual and institutional levels (Vessuri, Guédon, and Cetto, 2014; STI Conference, 2016).

As for the center, it is important for the periphery to have and to value elite research. Research excellence, in the pursuit of scientific and technological advancement, is not in question here. What is being argued is that such research need not supplant local and regional interests, and that the latter should not be underestimated, but preserved and valued. This valuation and care concern not only scientists; they also involve those responsible for scientific policies and evaluation systems, who are equally responsible for promoting relevant research in peripheral spaces.

The level of scientific development of a field, country or region is not measured simply by publications indexed in mainstream science databases and by the impact of their citations. It is equally important to evaluate the results of local and regional research work in order to understand the configurations of science and their importance in each context. The broader view of science’s universal character contrasts sharply with the artificial character of the division between mainstream and peripheral science, as Guédon (2011: 155) reflects: “The borderline separating SCI journals from the others is the result of human decisions, not a natural law of scientific publication”.

Attention to regional or peripheral science is also advocated in the principles of The Leiden Manifesto for Research Metrics (Hicks et al., 2015), which seeks to make bibliometricians, managers and researchers aware of good practices in science evaluation through the use of bibliometric indicators. The second principle of the manifesto, “Measure performance against the research missions of the institution, group or researcher” (Hicks et al., 2015: 430), refers to the need to tailor performance indicators to the objectives of science and technology programs and to the socioeconomic and cultural context. This principle states that there is no single evaluation model applicable to all contexts and that the mission of the evaluated groups should be at the basis of evaluations.

The importance of context requires that the objectives of science and technology systems be stated in the evaluation and that the indicators be clearly linked to those objectives. The choice of methodology and indicators should consider the broader socioeconomic and cultural context in which phenomena occur. The evaluation may be focused on public policies, industry or citizens in general. The manifesto also proposes moving beyond merit based exclusively on academic notions of excellence and considering the importance of science for other sectors of society and for the community in general. Science is contextual; there is therefore no single evaluation model that applies to all contexts (Hicks et al., 2015).

The third principle of The Leiden Manifesto, “Protect excellence in locally relevant research” (Hicks et al., 2015: 430), warns of the importance of local and regional production, in contrast to the bias of mainstream science, published in English and conveyed in high-impact journals. The problem is more serious in the Social Sciences and Humanities, but it is also reflected in other fields or themes characterized by a local or regional dimension. The manifesto proposes the evaluation of science based on pluralism and on the social relevance of research results, with more inclusive science and technology indicators defined on the basis of local and regional scientific communication policies and strategies.

Ràfols et al. (2016b) warn of the risks of undervaluing science in peripheral spaces when evaluation is based on indicators. In particular, the authors draw attention to the conflicts between the “universalist” perspective of indicators and “local” research practices, and between the “universal” vision of excellence and “local” research missions. For Stilgoe (2014), the prioritization of excellence can perpetuate the reproduction of scientific elites and the concentration of research in particular disciplines and places, thus reinforcing the Matthew Effect, adapted to the domain of science by Robert Merton, whereby eminent scientists tend to receive proportionately greater credit for their research (Merton, 1968).

The supposedly universal character of science does not override the context and social issues of regional or disciplinary communities, nor should it alone define the directions taken by the peripheries. Evaluation practices and systems need to value science results geared to local and regional needs, even if they are more difficult to measure. This reinforces the need to propose new indicators and to use multiple indicators that reveal more fully the value of the science produced in these spaces.

Factors inherent to the fields of knowledge also influence the visibility of research and its integration with the center or the periphery. The concern with field characteristics is expressed in the sixth principle of The Leiden Manifesto: “Account for variation by field in publication and citation practices” (Hicks et al., 2015: 430-431). The principle calls for caution regarding the differences in publication and citation practices among fields, as well as regarding aspects related to the basic or applied nature of the research and its local, regional or international reach.

In this perspective, the best evaluation practices seem to be those that include a set of possible indicators and allow the fields to choose those that are most appropriate (Hicks et al., 2015). Another important practice is the use of relative and field-normalized indicators in which publications and citations are weighted against broader contexts and reflect positions based on reference standards of the fields or disciplines themselves (Schubert and Braun, 1986).
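
To make the idea of field normalization concrete, the sketch below computes a mean normalized citation score in the spirit of the relative indicators cited above. It is only an illustration under assumed data: the paper records, the field_baseline values and the function name are hypothetical and are not drawn from Schubert and Braun (1986) or from any particular evaluation system.

    from statistics import mean

    def mean_normalized_citation_score(papers):
        # Divide each paper's citations by the expected (mean) citation rate
        # of its own field and publication year, so that fields with different
        # citation cultures become comparable, then average the ratios.
        return mean(p["citations"] / p["field_baseline"] for p in papers)

    # Hypothetical records: observed citations and field/year baselines.
    papers = [
        {"citations": 12, "field_baseline": 8.0},
        {"citations": 3, "field_baseline": 1.5},
        {"citations": 0, "field_baseline": 2.0},
    ]

    # A score of 1.0 means "cited exactly as expected for the field";
    # values above or below 1.0 indicate relative over- or under-performance.
    print(round(mean_normalized_citation_score(papers), 2))

Weighting each publication against its own field’s reference standard, rather than comparing raw counts across fields, is what distinguishes relative indicators from the absolute counts criticized elsewhere in this article.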

The heterogeneity of research fields needs to be perceived and respected in evaluations, as well as in the definition of science and technology policies and in research promotion programs. Science evaluation should avoid the exclusive use of single or absolute indicators, broadening its gaze to multiple aspects capable of indicating the strengths of each field based on its patterns of production, communication and use of information.

Reflections on the paradigm of mainstream science and on center-periphery relations provide elements for thinking about scientific policies and evaluation systems and their possible influence on research agendas and ways of doing science in peripheral spaces. The evaluation of peripheral science needs to situate the problems in the objectives and context of the peripheries, which are the basis of the evaluation processes and must be considered from the conception of policies or programs; that is, they precede the definition of indicators. In addition to knowing the policies and evaluation systems, it is important to be clear about who the evaluation agents are, what the evaluation is for, according to which parameters, and in view of which principles and interests.

Final Considerations

Some principles and challenges for the evaluation of science based on bibliometric indicators are common to a variety of contexts, such as adherence to the objectives of scientific systems, transparency of data and processes, and the review and updating of indicators. These and other challenges tend to be more intense in peripheral countries and regions, where the configurations of science require closer attention to the objectives, methods and contexts of evaluation.

Two main challenges are posed to peripheral spaces when it comes to science evaluation through the use of bibliometric indicators. The first relates to the absence of data sources sufficiently representative of peripheral science, without which the science results from these spaces remain permanently under-represented. The need is urgent and is not new in discussions of the data sources that support bibliometric and scientometric studies. Garfield, the founder of the SCI, drew attention early on to the need to create databases for regional journals in order to ensure a multidimensional picture of regional science (Garfield, 1995). The importance of and reasons for creating national and regional citation indexes were also discussed by Pislyakov (2007) and Yadav and Yadav (2014).

Expanding the coverage of databases and using multiple sources in the analysis of peripheral science, including national, regional and international indexes, remain important challenges for peripheral science. Even the creation and maintenance of these sources is a challenge for these countries. Important efforts have been made in recent decades, and some regional databases are even hosted on the Web of Science platform, such as the SciELO Citation Index, the Chinese Science Citation Database and the Russian Science Citation Index. Other initiatives also promote open access and the scientific production of specific regions and fields, such as SciELO and Redalyc in Latin America. However, these initiatives are still limited and quite restricted in their coverage of regional journals and of less visible social groups and fields, and they are insufficient for the evaluation of peripheral science. There is evidence of the need for joint efforts by peripheral regions, states and international organizations to develop more comprehensive science databases in peripheral spaces.

The expansion of international database coverage has a direct influence on the visibility of regional science, while also contributing to broadening the scope of mainstream science. International databases are important for evaluating regional science and have been widely used in bibliometric studies. They should not, however, be perceived as exclusive sources of local and/or regional science results. Regional sources also play an important role, especially in the Social Sciences and Humanities, traditionally under-represented in the international indexes of mainstream science (Meneghini, Mugnaini, and Packer, 2006; Aguado Lopez et al., 2014; Velez-Cuartas, Lucio-Arias, and Leydesdorff, 2016; Hicks et al., 2015).

The discussion of the low representation of peripheral regions in the main indexes of mainstream science is not recent and deserves to be continuously expanded. Latin America and the Caribbean, for example, is an important region for science, but despite the scientific potential of its countries, regional science remains under-represented in international databases, especially in Scopus and the Web of Science, but also in less consolidated sources. Even among the countries of the region represented in databases, the increase in the number of indexed journals in recent years is asymmetric, indicating a concentration of Brazilian journals among the region’s titles. The asymmetry is also revealed among the different fields, reproducing a historical discrepancy among disciplines in the main indexes of mainstream science (Russell, 2000; Aguado Lopez et al., 2014; Collazo-Reyes, 2014).

The second challenge for the evaluation of science in peripheral spaces consists in proposing more inclusive studies and indicators, with a plural and contextual approach, capable of representing more broadly peripheral science and its configurations, its strengths and the areas where performance needs to improve at the local, regional or global level. The adaptation of indicators transposed to peripheral contexts can also generate better evaluation results, as can the use of multiple indicators.

The use of multiple approaches and indicators for science evaluation is widely advocated in the literature, especially in the peripheral context (Velho, 2004; Ràfols et al., 2012; Vessuri, Guédon, and Cetto, 2014). Indicators are usually partial measures and tend not to contemplate all aspects of phenomena individually. The potential of these measures, however, is amplified in relative and multidimensional analyses, which can generate more complete portrayals of the phenomena evaluated. Furthermore, new indicators need to be proposed for the analysis of peripheral science, covering aspects left uncovered by traditional science metrics and even by altmetrics, such as the social use of science results and the impact of findings on social and economic development.

Proposing bibliometric indicators that are representative of peripheral science is not an easy task. This challenge requires a continuous effort by bibliometricians, managers and the scientific community itself. New indicators also need to be extensively tested, as do traditional metrics when transposed to peripheral spaces.

The challenges of peripheral science are not limited to the inclusion of local journals in databases and the adequacy of evaluation indicators. They also refer to the constitution of scientific communities, the training and retention of human resources, limited investments, weak institutional infrastructure, inadequate access to information and excessive dependence on international science, as well as the quality of research and of regional journals (Argenti, Filgueira, and Sutz, 1990; Arunachalam, 1992; Fink et al., 2014; Salager-Meyer, 2015; Chinchilla-Rodríguez, Miguel, and Moya-Anegón, 2015).

The problems are complex and multifaceted, as the literature indicates, and they do not have simple solutions. There are different levels of peripherality, and within a country or region there can be significant differences between fields and territories, for example (Arunachalam, 1992). These configurations reinforce the importance of defining appropriate policies and procedures for science evaluation in peripheral spaces, respecting their objectives and characteristics and using the appropriate tools. The basis of any evaluation is the context in which phenomena occur. Science evaluation should be sustained by plural and contextual views, capable of revealing more broadly the configurations of science itself in central or peripheral contexts.

These are times of change in science and scientific communication, driven by technological advances, by the emphasis on collaborative processes and by open access to knowledge. The broader perspective of open science points to knowledge being transparent and available, openly and quickly, to all. Reflection and debate on the evaluation policies and practices of peripheral spaces are essential in this scenario, and indicators should serve to boost the science of the peripheries rather than constitute obstacles that compromise its development.

References

Abramo, Giovanni and Ciriaco Andrea D’Angelo. 2011. “Evaluating research: from informed peer review to bibliometrics”. Scientometrics 87 (3): 499-514. https://doi.org/10.1007/s11192-011-0352-7 [ Links ]

Aguado Lopez, Eduardo, Arianna Becerril Garcia, Miguel Leal Arriola, and Nestor Daniel Martinez-Dominguez. 2014. “Ibero-America in mainstream science (Thomson Reuters/Scopus): a fragmented region”. Interciencia 39 (8): 570-79. [ Links ]

Aleixandre Benavent, Rafael. 2009. “Factor de impacto, competencia comercial entre Thomson Reuters y Elsevier, y crisis económica.” Anuario ThinkEPI 3: 27-29. [ Links ]

Argenti, Gisela, Carlos Filgueira, and Judith Sutz. 1990. “From standardization to relevance and back again: science and technology indicators in small, peripheral countries”. World Development 18: 1555-1567. [ Links ]

Arunachalam, Subbiah. 1992. “Peripherality in science: what should be done to help peripheral science get assimilated into mainstream science.” In Science indicators for developing countries, edited by Jacques Gaillard and Rigas Arvanitis, 67-76. Paris: ORSTOM. [ Links ]

Arunachalam, Subbiah. 1995. “Science on the periphery: can it contribute to mainstream science?” Knowledge and Policy 8 (2): 68-87. https://doi.org/10.1007/BF02825969 [ Links ]

Bourdieu, Pierre. 1988. Homo academicus. Stanford: Stanford University Press. [ Links ]

Callon, Michel, Jean-Pierre Courtial, and Hervé Penan. 1995. Cienciometría: la medición de la actividad científica. Gijón: Ediciones Trea. [ Links ]

Chavarro, Diego Andrés. 2016. “Universalism and particularism: explaining the emergence and growth of regional journal indexing systems.” Doctoral thesis, University of Sussex, Science and Technology Policy Research Unit. [ Links ]

Chinchilla-Rodríguez, Zaida, Sandra Miguel, and Félix de Moya-Anegón. 2015. “What factors affect the visibility of Argentinean publications in humanities and social sciences in Scopus? Some evidence beyond the geographic realm of research.” Scientometrics 102 (1): 789-810. https://doi.org/10.1007/s11192-014-1414-4 [ Links ]

Collazo-Reyes, Francisco. 2014. “Growth of the number of indexed journals of Latin America and the Caribbean: the effect on the impact of each country.” Scientometrics 98 (1): 197-209. https://doi.org/10.1007/s11192-013-1036-2 [ Links ]

Fink, Daniel, Youngsun Kwon, Jae Jeung Rho, and Minho So. 2014. “S&T knowledge production from 2000 to 2009 in two periphery countries: Brazil and South Korea.” Scientometrics 99 (1): 37-54. https://doi.org/10.1007/s11192-013-1085-6 [ Links ]

Garfield, Eugene. 1995. “Quantitative analysis of the scientific literature and its implications for science policymaking in Latin America and the Caribbean.” Bulletin of the Pan American Health Organization 29 (1): 87-95. [ Links ]

Glänzel, Wolfgang. 2003. Bibliometrics as a research field. Course handouts. [ Links ]

Gläser, Jochen and Grit Laudel. 2007. “The social construction of bibliometric evaluations.” In The changing governance of the sciences: the advent of research evaluation systems, edited by Richard Whitley and Jochen Gläser, 101-23. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-1-4020-6746-4_5 [ Links ]

Guédon, Jean-Claude. 2011. “El acceso abierto y la división entre ciencia ‘principal’ y ‘periférica.’” Crítica y Emancipación 3 (6): 135-80. [ Links ]

Hicks, Diana, Paul Wouters, Ludo Waltman, Sarah de Rijcke, and Ismael Ràfols. 2015. “Bibliometrics: the Leiden manifesto for research metrics.” Nature 520 (7548): 429-31. https://doi.org/10.1038/520429a [ Links ]

Larivière, Vincent, Stefanie Haustein, and Philippe Mongeon. 2015. “The oligopoly of academic publishers in the digital era.” PLOS One 10 (6): 1-15. https://doi.org/10.1371/journal.pone.0127502 [ Links ]

López López, Pedro. 1996. Introducción a la bibliometría. Valência: Promolibro. [ Links ]

Maltras Barba, Bruno. 2003. Los indicadores bibliométricos: fundamentos y aplicación al análisis de la ciencia. Madrid. [ Links ]

Maricato, João de Melo and Daysi Pires Noronha. 2013. “Indicadores bibliométricos e cientométricos em CT&I: apontamentos históricos, metodologias e tendências de aplicação.” In Bibliometria e cientometria: reflexões teóricas e interfaces, edited by Maria Cristina Piumbato Hayashi and Jacqueline Leta, 59-82. São Carlos: Pedro & João Editores. [ Links ]

Meneghini, Rogerio, Rogerio Mugnaini, and Abel L Packer. 2006. “International versus national oriented Brazilian scientific journals: a scientometric analysis based on SciELO and JCR-ISI Databases.” Scientometrics 69 (3): 529-38. https://doi.org/10.1007/s11192-006-0168-z [ Links ]

Merton, Robert K. 1968. “The Matthew Effect in science.” Science 159 (3810): 56-63. https://doi.org/10.1126/science.159.3810.56 [ Links ]

Morales Gaitán, Katia Andrea and Eduardo Aguado López. 2010. “La legitimación de la ciencia social en las bases de datos científicas más importantes para América Latina.” Latinoamérica - Revista de Estudios Latinoamericanos (51): 159-88. [ Links ]

Mueller, Suzana Pinheiro Machado and Hamilton Vieira Oliveira. 2003. “Autonomia e dependência na produção da ciência: uma base conceitual para estudar relações na comunicação científica.” Perspectivas Em Ciência Da Informação 8 (1): 58-65. [ Links ]

Penfield, Teresa, Matthew J. Baker, Rosa Scoble, and Michael C. Wykes. 2013. “Assessment, evaluations, and definitions of research impact: a review.” Research Evaluation 23(1): 21-32. https://doi.org/10.1093/reseval/rvt021 [ Links ]

Pislyakov, Vladimir. 2007. “Why should we create national citation indexes?” Science and Technical Libraries (2): 65-71. [ Links ]

Price, Derek John de Solla. 1986. Little science, big science...and beyond. New York: Columbia University Press. [ Links ]

Ràfols, Ismael, Loet Leydesdorff, Alice O’Hare, Paul Nightingale, and Andy Stirling. 2012. “How journal rankings can suppress interdisciplinary research: a comparison between innovation studies and business & management.” Research Policy 41 (7): 1262-82. https://doi.org/10.1016/j.respol.2012.03.015 [ Links ]

Ràfols, Ismael, Jordi Molas-Gallart, Diego Chavarro, and Nicolás Robinson-García. 2016a. “On the dominance of quantitative evaluation in ‘peripheral’ countries: auditing research with technologies of distance.” SSRN Electronic Journal, May. http://dx.doi.org/10.2139/ssrn.2818335 [ Links ]

Ràfols, Ismael, Jordi Molas-Gallart, Richard Woolley, and Diego Andrés Chavarro. 2016b. “Capturando a investigação invisível: por indicadores mais inclusivos de ciência e tecnologia.” Paper presented at the 5º Encontro Brasileiro de Bibliometria e Cientometria, São Paulo, July 6-8. [ Links ]

Russell, Jane M. 2000. “Publication indicators in Latin America revisited.” In Web of Knowledge: A Festschrift in Honor of Eugene Garfield, edited by Blaise Cronin and Helen Barsky Atkins, 233-250. Medford: Information Today. [ Links ]

Salager-Meyer, Francoise. 2015. “Peripheral scholarly journals: from locality to globality”. Ibérica 30: 11-36. [ Links ]

“San Francisco Declaration on Research Assessment”. 2012. https://sfdora.org/read. [ Links ]

Santa, Samaly and Victor Herrero-Solana. 2010. “Cobertura de la ciencia de América Latina y el Caribe en Scopus vs Web of Science.” Investigación Bibliotecológica: Archivonomía, Bibliotecología e Información 24 (52): 13-27. [ Links ]

Schott, Thomas. 1998. “Ties between center and periphery in the scientific world-system: accumulation of rewards, dominance and self-reliance in the center.” Journal of World-Systems Research 4 (2): 112-44. https://doi.org/10.5195/jwsr.1998.148 [ Links ]

Schroder, Stefan, Florian Welter, Ingo Leisten, Anja Richert, and Sabina Jeschke. 2014. “Research performance and evaluation: empirical results from collaborative research centers and clusters of excellence in Germany.” Research Evaluation 23 (3): 221-232. https://doi.org/10.1093/reseval/rvu010 [ Links ]

Schubert, András and Tibor Braun. 1986. “Relative indicators and relational charts for comparative assessment of publication output and citation impact.” Scientometrics 9 (5-6): 281-91. https://doi.org/10.1007/BF02017249 [ Links ]

Shils, Edward. 1975. Centre and periphery: essays in macrosociology. Chicago: University of Chicago Press. [ Links ]

Spinak, Ernesto. 1998. “Indicadores cienciometricos”. Ciência da Informação 27 (2): 141-148. [ Links ]

Stumpf, Ida Regina Chittó. 2008. “Avaliação pelos pares nas revistas de Comunicação: visão dos editores, autores e avaliadores.” Perspectivas Em Ciência Da Informação 13 (1): 18-32. http://dx.doi.org/10.1590/S1413-99362008000100003 [ Links ]

STI Conference. 2016. “Conference theme: peripheries, frontiers and beyond.” Proceedings of the 21st International Conference on Science and Technology Indicators, València, 14-16 September. [ Links ]

Stilgoe, Jack. 2014. “Against excellence.” https://www.theguardian.com/science/political-science/2014/dec/19/against-excellence. [ Links ]

Torres-Salinas, Daniel and Evaristo Jiménez-Contreras. 2010. “Introducción y estudio comparativo de los nuevos indicadores de citación sobre revistas científicas en Journal Citation Reports y Scopus.” El Profesional de La Información 19 (2): 201-7. [ Links ]

Velez-Cuartas, Gabriel, Diana Lucio-Arias, and Loet Leydesdorff. 2016. “Regional and global science: Latin American and Caribbean publications in the SciELO Citation Index and the Web of Science.” El Profesional de La Información 25 (1): 35-46. [ Links ]

Velho, Lea. 1990. “Indicadores científicos: em busca de uma teoria.” Interciencia 15 (3): 139-145. [ Links ]

Velho, Lea. 2004. Science and technology in Latin America and the Caribbean: an overview. Maastricht: United Nations University. [ Links ]

Vessuri, Hebe, Jean-Claude Guédon, and Ana María Cetto. 2014. “Excellence or quality? Impact of the current competition regime on science and scientific publishing in Latin America and its implications for development.” Current Sociology 62 (5): 647-65. https://doi.org/10.1177/0011392113512839 [ Links ]

Yadav, Bharti and Meera Yadav. 2014. “Resources, facilities and services of the Indian citation index (ICI).” Library Hi Tech News 31 (4): 21-29. https://doi.org/10.1108/LHTN-02-2014-0008 [ Links ]

To cite this text:

Santin, Dirce Maria and Sônia Elisa Caregnato. 2019. “The binomial center-periphery and the evaluation of science based on indicators”. Investigación Bibliotecológica: archivonomía, bibliotecología e información 33 (79): 13-33. http://dx.doi.org/10.22201/iibi.24488321xe.2019.79.57930

Received: March 24, 2018; Accepted: December 03, 2018

This is an open-access article distributed under the terms of the Creative Commons Attribution License.