Consulting for Science Evaluation Design

Aimed at organizations that need to evaluate scholarly output, impact, and other dimensions of science.

There is growing pressure from stakeholders across the scientific community to adopt responsible metrics in the evaluation of science. The term refers to the ethical and appropriate use of citation-based metrics and so-called alternative metrics, such as mentions in blogs and social networks. It also calls for redefining the concept of "impact" in scientific research so that it reflects the originality and social relevance of the knowledge produced. However, the organizations responsible for evaluations face great challenges in designing meaningful evaluations that give valid answers to their questions without causing undesired systemic effects, such as loss of academic autonomy, creativity, and innovation, or shifts in scientific communication patterns and research agendas.

We have more than 15 years of experience in scientometrics, providing analysis, insights, and recommendations on science evaluation and the responsible use of metrics. We have provided consulting services to government organizations, research funding agencies, and leading universities in Latin America on strategic issues such as identifying and measuring research excellence, international visibility, and regional and gender inequality, as well as phenomena such as academic mobility and brain drain.

We adopt a participatory approach, building on the experience and knowledge of the various actors involved to leverage their competencies and develop new capacities for the design of science evaluations.

Our science evaluation design consultancy typically includes the following activities:
Initial definitions

Governance: identification of key stakeholders, definition of their participation in the project, and development of the consultancy's work plan.

Definition of the evaluation's objectives, purpose, values, and subjects (e.g., educational and/or research institutions for funding allocation, or individuals for hiring or promotion).

Strategic analysis

Risk analysis of potential systemic effects on the evaluated subjects, processes, and institutions, and planning of actions to mitigate risks and undesired effects.

Conducting a pilot to ensure that the evaluation is robust and dependable, actually measures what is intended, is not easily manipulated, and will not lead to undesired effects on the evaluation subjects.

Planning

Definition of the significant dimensions to be evaluated and of the qualitative and quantitative approaches to be applied.

Selection of data sources (especially critical for evaluations involving interdisciplinary research, the humanities, and the social sciences), metrics, indicators, analyses, and reports.

Development of the evaluation plan, defining all steps, timelines, and responsibilities.

Evaluation of the evaluation

Interviews with and questionnaires for participants and subjects of the evaluation to determine whether it achieved its objectives, was formative, and added value to the processes evaluated.

Compilation of best practices and lessons learned during the evaluation, and development of a case study.