The responsible use of research metrics

How UOW uses research metrics responsibly

UOW's Principles on the Responsible Use of Research Metrics ("Principles") provide guidance on the appropriate use of quantitative research metrics to evaluate research activity. The Principles reflect and promote existing best practice, and serve as a roadmap for future practice across all disciplines and activity indicators at UOW.

UOW values its diverse community of excellent researchers, who contribute to our strategic goal to ‘create knowledge for a better world’. It is therefore critical that information used to assess research activity is applied appropriately and transparently.

Research metrics can provide useful evidence to help inform decision-making when combined with qualitative and expert assessment. They are used to inform research reporting and to support grant and promotion applications.

Research metrics data must be used in compliance with all UOW policies, including but not limited to:

Discretion should be applied when reporting, circulating, distributing, or sharing metrics aggregated by group, particularly where the group consists of fewer than ten (10) individuals, to avoid inadvertent identification of, or disadvantage to, individuals.
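The small-group rule above can be expressed as a simple suppression check before any aggregate figure is released. This is an illustrative sketch only; the threshold constant and function name are hypothetical, not part of any UOW system.

```python
# Minimal sketch of small-cell suppression for group-aggregated metrics.
# MIN_GROUP_SIZE reflects the "fewer than ten individuals" guidance above;
# the name and implementation are illustrative assumptions.
MIN_GROUP_SIZE = 10

def safe_aggregate(values):
    """Return the mean of a group's metric values, or None when the
    group is too small to report without risking identification."""
    if len(values) < MIN_GROUP_SIZE:
        return None  # suppress: group below the reporting threshold
    return sum(values) / len(values)
```

For example, a school of twelve researchers could report a mean citation count, while a research team of six would have its aggregate suppressed.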

In accordance with recommendations from the Metric Tide (2015), metrics used to assess research activity at UOW must be considered in terms of their:

  • robustness (using the best available data in terms of accuracy and scope);
  • humility (recognising that quantitative evaluation can complement, but does not replace, expert assessment);
  • transparency (keeping the collection of data and its analysis open to scrutiny);
  • diversity (reflecting a multitude of research and researcher career paths); and
  • reflexivity (recognising and anticipating the systemic and potential effects of indicators and updating them in response). 

Research metrics have their limitations and can be incorrectly reported or misinterpreted, particularly where assumptions of data validity are made. Where appropriate, quantitative indicators can be used to inform judgements and challenge preconceptions, but not to replace expert judgement. 

Global frameworks supporting responsible use of research metrics

The Leiden Manifesto provides ten principles for the appropriate use of metrics in research evaluation. These principles can be used to maintain accountability of both evaluators and the indicators they use in metrics-based research assessment.

The Hong Kong Principles (2019) aim to improve research by recognising and rewarding researchers for behaviours that lead to trustworthy research. They set out five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognising all contributions to research and scholarly activity. The principles highlight the importance of ethical and open research, as well as the diverse contributions that drive scientific progress.

The San Francisco Declaration on Research Assessment (DORA, 2012) aims to improve how research outputs are evaluated by emphasising responsible metrics use, promoting transparency, and advocating for diverse research contributions. It encourages moving away from journal-based metrics like the Journal Impact Factor and instead focusing on the quality and impact of the research itself, to foster fairer, more accurate assessments of scientific work across all disciplines.

The Metrics Toolkit provides information about a range of research metrics, how they are calculated, their limitations and examples of appropriate use.

UOW's key principles for research metrics

The following four Principles, informed by the Leiden Manifesto (2015), DORA (2012), and the Hong Kong Principles (2019), provide guidance on the appropriate use of research metrics at UOW.


A consistent and discipline-appropriate range of established quantitative information (e.g. outputs, citations, grant applications, research income, engagement activities) and qualitative information (e.g. esteem indicators, impact narratives, peer reviews, testimonials) must be used to support research evaluation, where available. Multiple indicator types should be used in combination to provide a robust, credible and holistic assessment.

The potential limitations and sources of bias in various indicators must be acknowledged and addressed (e.g. differences between disciplines, career stages and full-time equivalent (FTE) status, performance relative to opportunity, and equity, diversity and inclusion considerations). In particular, journal metrics and author metrics must be evaluated carefully: citation rates are influenced by many factors, such as publication volume and the citation characteristics of the subject area and journal type. The Journal Impact Factor can complement expert opinion and informed peer review, but in academic evaluation for promotion or probation it is inappropriate to use a journal-level metric as a proxy measure for individual researchers, institutions, or articles.

Disciplinary differences in research inputs, processes, contributions, activities, and outputs must be taken into account. This includes recognition of community engagement and activities that support meaningful and sustained relationship building with communities. For example, working with some communities may require longer consultation time or specialised personnel, so recognition of specified milestones or project contributions may be appropriate in addition to other research outputs. Any disciplinary biases in the indicators used must be considered and addressed; this is particularly important for Indigenous researchers, and for research projects and programs with Indigenous partners.

Because the research environment continues to evolve, any metrics used should be evaluated, reviewed and revised as necessary.

Data used to assess research activity must be reliable (i.e. accurate, reputable, transparent and relevant), and open to scrutiny prior to assessment. Evaluation methods should be clearly explained and inherent limitations must be recognised. Data sources and reference periods used to assess research activity must be disclosed.

UOW seeks to maximise opportunities and support for researchers, and to ensure all research activity is appropriately recognised and attributed. Researchers have an important role to play in ensuring their research activity data held by the University is accurate and up to date. This includes:

  • Information about research outputs, income and HDR student supervision
  • Appropriate attribution of credit for outputs based on scholarly contribution, in accordance with the UOW Authorship Policy
  • Notifying supervisors of any inappropriate use of metrics.

To simplify this process, Individual Research Activity dashboards will aim to enable academics to review their research activity data easily and will provide a mechanism to rectify any issues. It is anticipated that any information made available will be subject to continuous improvement processes.

A combination of quantitative and qualitative indicators should be used to assess research activity, both to ensure numerical metrics are not used out of context and to counter biases in peer review. An approach that considers an individual's expertise, experience, activities and influence is best.

Judgements of a work based solely on external factors, such as the reputation of its authors, journal, or publisher, should be avoided. For example, journal-level metrics should not be used as a sole measure of the quality of an individual work; the work itself must be considered on its merits. NOTE: It is important to recognise the rise of predatory publishers and paper mills. It is expected that works considered are published only in reputable outlets.

The scale of the research activity must be considered, and caution is needed when interpreting quantitative indicators in small scale assessments, such as the assessment of an individual researcher.

Indicators of research quality and significance vary between disciplines. UOW values a diverse range of research outputs, and recommends they be considered in a context that reflects the needs and diversity of individual research fields. Quantitative indicators should be clearly defined and normalised by discipline to account for such variation. Research quality is multifaceted and cannot be captured by a single indicator used in isolation.
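One common way to normalise a quantitative indicator by discipline is to divide an output's citation count by the average expected for its field (and, in fuller schemes, its publication year and document type). The sketch below illustrates the idea only; the baseline figures are invented for illustration and are not UOW data.

```python
# Illustrative discipline normalisation of citation counts.
# The baseline values (expected citations per output in each field)
# are hypothetical placeholders, not real benchmarks.
field_baselines = {"chemistry": 12.0, "history": 2.0}

def normalised_citation_score(citations, field):
    """Ratio of actual to field-expected citations. A score of 1.0
    means 'at the field average', making outputs from fields with
    very different citing conventions comparable."""
    return citations / field_baselines[field]
```

Under these assumed baselines, a chemistry paper with 12 citations and a history monograph with 2 citations both score 1.0, which is why raw citation counts should not be compared across fields.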

Caveats regarding differences between research fields should be acknowledged in any analysis. It should be recognised when it is not appropriate to provide certain types of quantitative data, e.g. citation data are often unreliable for humanities and social sciences disciplines. Contextual information should be used to inform research assessment (e.g. average grant sizes, common publication routes, citing conventions).

Sources and further reading:

University examples