
{"id":432,"date":"2021-10-26T14:47:07","date_gmt":"2021-10-26T14:47:07","guid":{"rendered":"http:\/\/blogs.plymouth.ac.uk\/research\/?p=432"},"modified":"2021-10-26T14:47:07","modified_gmt":"2021-10-26T14:47:07","slug":"an-introduction-to-responsible-metrics-open-access-week","status":"publish","type":"post","link":"https:\/\/blogs.plymouth.ac.uk\/research\/2021\/10\/26\/an-introduction-to-responsible-metrics-open-access-week\/","title":{"rendered":"An introduction to responsible metrics (Open Access Week)"},"content":{"rendered":"<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-395\" src=\"http:\/\/blogs.plymouth.ac.uk\/research\/wp-content\/uploads\/sites\/30\/2021\/10\/images-150x150.png\" alt=\"the open access symbol\" width=\"80\" height=\"80\" srcset=\"https:\/\/blogs.plymouth.ac.uk\/research\/wp-content\/uploads\/sites\/30\/2021\/10\/images-150x150.png 150w, https:\/\/blogs.plymouth.ac.uk\/research\/wp-content\/uploads\/sites\/30\/2021\/10\/images-160x160.png 160w, https:\/\/blogs.plymouth.ac.uk\/research\/wp-content\/uploads\/sites\/30\/2021\/10\/images.png 225w\" sizes=\"auto, (max-width: 80px) 100vw, 80px\" \/>This post is part of a series of blogs in celebration of <a href=\"http:\/\/blogs.plymouth.ac.uk\/research\/2021\/10\/19\/open-access-week-2021\/\">Open Access Week 2021<\/a>. Keep an eye out for more posts throughout the week or follow our Twitter account,\u00a0<a href=\"https:\/\/twitter.com\/OpenResPlym\">@OpenResPlym<\/a>, to keep up to date with OA Week events.<\/strong><\/p>\n<h2><\/h2>\n<h2><strong>Responsible metrics in a nutshell<\/strong><\/h2>\n<p>Research metrics\u00a0are used to &#8216;measure&#8217; the influence or impact\u00a0of researchers and their publications. 
Authors use journal metrics to decide where they want to publish; article metrics are used to assess the &#8216;quality&#8217; of a research output, or group of outputs; institutions use author metrics to inform the recruitment, probation, or promotion of researchers.<\/p>\n<p>Most research metrics come in the form of quantitative measurements. Many well-known metrics are citation-based, from citation counts and percentiles, <a href=\"https:\/\/www.metrics-toolkit.org\/metrics\/field_weighted_citation_impact\/\">Field-Weighted Citation Impact<\/a> (FWCI), the <a href=\"https:\/\/www.metrics-toolkit.org\/metrics\/h_index\/\">h-index<\/a>, and the <a href=\"https:\/\/www.metrics-toolkit.org\/metrics\/journal_impact_factor\/\">Journal Impact Factor (JIF)<\/a>; beyond these, metrics might include anything from views, downloads, mentions, or sales, to collaboration metrics or research grant income.<\/p>\n<p>Problems with the use of metrics arise when these <em>quantitative<\/em> metrics are used as a proxy for measuring something more complex and <em>qualitative<\/em>, such as the overall calibre of a researcher or a research output. Quantitative metrics can also be biased or manipulated, particularly if too much emphasis is placed on specific metrics as evaluation criteria.<\/p>\n<p>Responsible metrics is a movement which advocates for the ethical, appropriate use of numerical metrics when evaluating research. The idea is not to do away with quantitative metrics, but rather to ensure that they are used in appropriate situations, applied alongside qualitative information wherever possible, and that they are not used as inadequate proxies or arbitrary measures.<\/p>\n<p>&nbsp;<\/p>\n<h2><strong>What\u2019s so bad about quantitative metrics?<\/strong><\/h2>\n<p>Quantitative metrics are not inherently \u2018bad\u2019. 
The problems arise when these metrics are <em>used<\/em> badly.<\/p>\n<p>Citation metrics, for example, measure exactly what they set out to measure (within the parameters of the available data). They indicate how many citations were received by an output or group of outputs within a certain period of time; this measurement may be weighted, expressed as a percentile, or calculated in relation to another metric. These data certainly tell a part of a research output\u2019s story, and they can be useful. They do not, however, tell the <em>whole<\/em> story: a high volume of citations does not guarantee that a piece of research is of high quality, nor vice versa. It is therefore not the metrics that are the problem, but rather the assumptions which are made about what these metrics can sufficiently \u2018measure\u2019.<\/p>\n<p>Some of the ways in which research metrics can be insufficient, biased, or misconstrued include:<\/p>\n<h4>Lack of context<\/h4>\n<p>There are many reasons publications get cited, and not all of them are good. Generally, citation metrics make no distinction between \u2018good\u2019 citations and neutral or negative ones. Similar problems can arise when using social media attention as an indicator of research quality.<\/p>\n<h4>False proxies<\/h4>\n<p>The Journal Impact Factor (JIF) measures the average citations per document in an entire journal over two years. It is unreasonable to use the JIF of the journal an article was published in as a surrogate for its quality as an individual publication \u2013 or even for its citation impact: papers published in high JIF journals are not guaranteed to be more highly cited.<\/p>\n<h4>Bias and gaming<\/h4>\n<p>There are many ways in which research metrics can be biased. 
Some speak to broader problems (for example, some studies have shown female authors are less likely to be cited than their male colleagues),<a href=\"#_ftn1\" name=\"_ftnref1\">[1]<\/a> while other metrics can introduce bias when applied in the wrong situations (for example, the h-index disfavours younger researchers, and doesn\u2019t account for author order). Metrics can be deliberately manipulated, too \u2013 publishers can game their impact factor through publishing only in particular areas, avoiding certain output types, or even through participating in citation coercion and \u2018citation cartels\u2019.<a href=\"#_ftn2\" name=\"_ftnref2\">[2]<\/a><\/p>\n<h4>Skewed incentives<\/h4>\n<p>Too much emphasis on one metric encourages goal displacement and distortion of behaviour. Researchers may feel the need to prioritise what they are being measured by (e.g. impact factor) over anything else (e.g. more suitable publishing venues and\/or open access opportunities).<\/p>\n<h4>Suitability<\/h4>\n<p>Some metrics may be used as appropriate indicators in certain situations but are entirely inappropriate in others. The FWCI, for example, becomes less stable the smaller the sample size, so it is not a suitable metric for smaller groups of outputs.<a href=\"#_ftn3\" name=\"_ftnref3\">[3]<\/a> It also takes time to stabilise (since citations accrue over time), so it is less accurate when applied to newer publications.<\/p>\n<p>Other common problems with metrics can include database reliance (since different databases will produce varying results) and failure to account for variance in practice between different disciplines.<br \/>\n&nbsp;<\/p>\n<h2><strong>So, what\u2019s the solution?<\/strong><\/h2>\n<p>It is not practical to do away with the use of research metrics entirely, nor does the responsible metrics movement advocate for this. 
It is possible, however, to avoid certain practices that are more likely to discriminate, and to introduce measures which can help to offset some of the problems outlined above.<\/p>\n<p>Some general rules for good practice in research assessment include:<\/p>\n<ul>\n<li>Not judging research solely on the journal it was published in<\/li>\n<li>Avoiding arbitrary measures and \u2018false precision\u2019 \u2013 uncertainty and error margins must be taken into account<\/li>\n<li>Avoiding reliance on any one metric, and using qualitative measures in conjunction with quantitative metrics whenever possible<\/li>\n<\/ul>\n<p>Some questions one might ask before applying research metrics could be:<\/p>\n<ul>\n<li><strong>What are the risks associated with the application of metrics in this situation?<\/strong> Am I using metrics to make an impactful decision (such as hiring or promotion), or for an activity less likely to have an impact on the entities under examination (such as studying publication patterns at a national or institutional level)?<\/li>\n<li><strong>Am I using this metric as a proxy for something else?<\/strong> What am I really trying to measure, and what can these metrics actually tell me?<\/li>\n<li><strong>Are the metrics I am using appropriate in this particular\u00a0situation? <\/strong>Do I understand what it is they are measuring, and their limitations?<\/li>\n<li><strong>Am I using an appropriate range of metrics or other methods of analysis?<\/strong> How can I best ensure this assessment is well-rounded?<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h2><strong>Responsible metrics manifestos and statements<\/strong><\/h2>\n<p>There are four key documents associated with the responsible metrics movement. 
Each has its own set of principles, but all outline some of the ways in which researchers or institutions can work to use metrics more responsibly.<\/p>\n<p>The documents are:<\/p>\n<ol>\n<li><a href=\"https:\/\/sfdora.org\/read\/\">DORA \u2013 the San Francisco Declaration on Research Assessment (2012)<\/a><\/li>\n<li><a href=\"https:\/\/www.nature.com\/articles\/520429a\">The Leiden Manifesto (2015)<\/a><\/li>\n<li><a href=\"https:\/\/webarchive.nationalarchives.gov.uk\/ukgwa\/20210823214948\/https:\/re.ukri.org\/sector-guidance\/publications\/metric-tide\/\">The Metric Tide Report (2015)<\/a><\/li>\n<li><a href=\"https:\/\/www.wcrif.org\/guidance\/hong-kong-principles\">The Hong Kong Principles (2019)<\/a><\/li>\n<\/ol>\n<p>Many thousands of individuals, research institutions, scientific organisations, and funders alike have signed DORA or aligned themselves with the principles of the Leiden Manifesto, thereby committing themselves to adopt responsible practices as outlined by these documents.<br \/>\n&nbsp;<\/p>\n<h2><strong>Gaining momentum<\/strong><\/h2>\n<p>Support from initiatives such as <a href=\"https:\/\/www.coalition-s.org\/\">Plan S<\/a> has recently given the responsible metrics movement some additional momentum. 
The UKRI Research Councils are all signatories to DORA, and the Wellcome Trust <a href=\"https:\/\/wellcome.org\/grant-funding\/guidance\/open-access-guidance\/open-access-policy#responsible-and-fair-research-assessment-dcc7\">now expects Wellcome-funded organisations to publicly commit to responsible research evaluation<\/a> as a condition of their grants.<\/p>\n<p>The University of Plymouth is in the process of working towards an official policy on responsible metrics.<br \/>\n&nbsp;<\/p>\n<h2><strong>Useful links<\/strong><\/h2>\n<ul>\n<li>University of Plymouth guidance on <a href=\"https:\/\/plymouth.libguides.com\/research\/impact\">responsible metrics<\/a><\/li>\n<li><a href=\"https:\/\/sfdora.org\/read\/\">San Francisco Declaration on Research Assessment (DORA)<\/a><\/li>\n<li><a href=\"https:\/\/www.nature.com\/articles\/520429a\">The Leiden Manifesto<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\">[1]<\/a> See for example: Paula Chatterjee and Rachel M. Werner, \u2018Gender Disparity in Citations in High-Impact Journal Articles\u2019, <em>JAMA Network Open<\/em> (2021), &lt;<a href=\"https:\/\/doi.org\/10.1001\/jamanetworkopen.2021.14509\">https:\/\/doi.org\/10.1001\/jamanetworkopen.2021.14509<\/a>&gt; [accessed 25\/10\/2021]; Neven Caplar, Sandro Tacchella &amp; Simon Birrer, \u2018Quantitative evaluation of gender bias in astronomical publications from citation counts\u2019, <em>Nature Astronomy<\/em> (2017), &lt;<a href=\"https:\/\/doi.org\/10.1038\/s41550-017-0141\">https:\/\/doi.org\/10.1038\/s41550-017-0141<\/a>&gt; [accessed 25\/10\/2021]<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\">[2]<\/a> Allen W. Wilhite and Eric A. 
Fong, \u2018Coercive Citation in Academic Publishing\u2019, <em>Science<\/em>, 335 (2012), &lt;<a href=\"https:\/\/doi.org\/10.1126\/science.1212540\">https:\/\/doi.org\/10.1126\/science.1212540<\/a>&gt; [accessed 25\/10\/2021]<\/p>\n<p><a href=\"#_ftnref3\" name=\"_ftn3\">[3]<\/a> Ian Rowlands, \u2018SciVal\u2019s Field weighted citation impact: Sample size matters!\u2019, <em>The Bibliomagician<\/em> (2017), &lt;<a href=\"https:\/\/thebibliomagician.wordpress.com\/2017\/05\/11\/scivals-field-weighted-citation-impact-sample-size-matters-2\/\">https:\/\/thebibliomagician.wordpress.com\/2017\/05\/11\/scivals-field-weighted-citation-impact-sample-size-matters-2\/<\/a>&gt; [accessed 25\/10\/2021]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post is part of a series of blogs in celebration of Open Access Week 2021. Keep an eye out for more posts throughout the week or follow our Twitter account,\u00a0@OpenResPlym, to keep up to date with OA Week events. Responsible metrics in a nutshell Research metrics\u00a0are used to &#8216;measure&#8217; the influence or impact\u00a0of researchers&hellip; <a class=\"more-link\" href=\"https:\/\/blogs.plymouth.ac.uk\/research\/2021\/10\/26\/an-introduction-to-responsible-metrics-open-access-week\/\">Continue reading <span class=\"screen-reader-text\">An introduction to responsible metrics (Open Access 
Week)<\/span><\/a><\/p>\n","protected":false},"author":10,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-432","post","type-post","status-publish","format-standard","hentry","category-team","entry"],"_links":{"self":[{"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/posts\/432","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/comments?post=432"}],"version-history":[{"count":20,"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/posts\/432\/revisions"}],"predecessor-version":[{"id":452,"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/posts\/432\/revisions\/452"}],"wp:attachment":[{"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/media?parent=432"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/categories?post=432"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.plymouth.ac.uk\/research\/wp-json\/wp\/v2\/tags?post=432"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}