Measuring academic performance

How should we be assessing our academic performance at the end of the year?

As it is the end of the year, it is an opportune moment to look back and consider what we have all achieved. But how should we be assessing our research activities? What constitutes a success or achievement for us as academics? Should we still be using traditional metrics, or should we concentrate on newer assessment measures based on an open research ethos?

Academia is currently in a time of transition - we are moving towards a more open research approach, and this is muddying the water in terms of how research impact and researchers should be, and are being, assessed. The Declaration on Research Assessment, commonly known as DORA, was developed in 2012 (quite a while ago now!) with the vision of advancing practical and robust approaches to research assessment globally and across all scholarly disciplines. This is happening, but at a slow pace, which leaves us as researchers in a halfway house: some organisations and funders are drawing up new research assessment criteria to align with open research, while others lag behind.

This year there have been major steps forward in the adoption of open research and the recognition of wider research outputs. We have seen major funders shifting towards open access as a requirement, such as the UKRI Open Access Policy (published in August 2021 and to be implemented from April 2022), and UNESCO announced its Recommendation on Open Science, which underlined a global shift towards open research.

It seems that most organisations are currently focusing on implementing open science practices, which is a slow process, and that aligning research metrics with this new approach will come further down the track. This is demonstrated by the European University Association (EUA) Open Science Survey 2020-21, which found that 34% of the surveyed institutions did not report using any open science elements in their academic assessments.

The UNESCO open science recommendation does mention research assessment, stating the aim of ‘encouraging responsible research and researcher evaluation and assessment practices, which incentivize quality science, recognizing the diversity of research outputs, activities and missions’. And some institutions are drawing up and starting to implement guidelines: Utrecht University, for example, is one of the first universities to abandon traditional research metrics, announcing the TRIPLE model, which recognises and rewards teamwork, quality research, societal impact and the openness of research. See also this article from Nature about Dutch universities abandoning the journal impact factor.

What are we moving from and to?

Traditional research metrics are based on publishing in high-impact journals, citations of your articles, university rankings and acquiring research funding. This was highlighted in the EUA 2019 survey results, which showed that researchers give the highest importance to research publications and attracting external research funding (rated very important by 80% and 57% respectively) for progressing in their research careers.

The use and abuse of these metrics is discussed in Chapman et al. 2019, ‘Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia’, in which the authors point out that even though there is awareness of the abuse of these metrics, and rightly a movement away from them by some institutions and funding bodies, there is still ‘strong pressure on individuals, particularly young researchers, to play the game to advance their scores’.

Research metrics that are traditionally used include:

  • Journal impact factor (JIF) - The JIF is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is used to measure the importance or rank of a journal by counting the times its articles are cited (see the sketch after this list).
  • H-index - The h-index attempts to measure both the scientific productivity and the apparent scientific impact of a scientist. It is based on the researcher’s most-cited papers and the number of citations those papers have received: the largest number h such that h papers have each been cited at least h times.
  • The Research Excellence Framework (REF) - The REF is a research impact evaluation of British higher education institutions.
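
To make the two numerical metrics above concrete, here is a minimal sketch of how they are calculated. This is purely an illustration, not anyone’s official implementation - the function names and the example figures are hypothetical:

```python
def journal_impact_factor(citations: int, citable_items: int) -> float:
    """JIF for year Y: citations received in Y by what the journal
    published in years Y-1 and Y-2, divided by the number of citable
    items it published in those two years."""
    return citations / citable_items


def h_index(citations_per_paper: list[int]) -> int:
    """h-index: the largest h such that the researcher has h papers
    with at least h citations each."""
    ranked = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


# Hypothetical figures: a journal whose 200 citable items from the
# previous two years drew 500 citations this year has a JIF of 2.5,
# and a researcher whose papers have been cited [10, 8, 5, 4, 3] times
# has an h-index of 4 (four papers with at least four citations each).
print(journal_impact_factor(500, 200))  # 2.5
print(h_index([10, 8, 5, 4, 3]))        # 4
```

Both boil down to counting, which is exactly why they are criticised as proxies for research quality.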

So how is research assessment changing? This is currently hard to answer, as it is still evolving, but examples such as the TRIPLE model mentioned above are good places to look. Moving away from simply counting things would be a good start. To align with open science, metrics would need to recognise the quality of research rather than the quantity, and to give equal recognition to a much wider range of research outputs, including open datasets, open methods and code. Activities such as open education (teaching, training and mentoring) and knowledge transfer outside of academia (outreach, citizen science, writing for wider audiences) also need to be given more value, as they can have as much, if not more, societal impact than research publications.

Current uncertainties in research assessment and how we can tackle them

Where does this leave researchers who are trying to navigate their academic careers? Do we now focus all our efforts on open research, embracing the team science and slow science ways of working? Do we stop worrying about the old (but still persisting!) publish-or-perish academic ethos?

Uncertainties and some suggested solutions

1. What research outputs and activities should we put on CVs, job and funding applications?

It is rare to have an academic career that is a straight upward progression within one institution. As academics we often hold many different short-term positions before (hopefully!) securing a permanent job, so we have to think about what goes on our CV to show research impact, successes and career progression. The current state of change in research assessment makes this harder than ever. You cannot align your research with one particular set of measures of impact or success from one institution, or even one country or region - as the EUA survey showed, not all institutions are changing their research assessment measures at the same rate. Research assessment measures are now more diverse than ever, so how do we keep up? Do we need to do even more work than we already are to cover all these metrics?

Suggested solutions - Research is moving to more open ways of working, so start upskilling yourself and doing it. You will then have a wider range of research outputs to put on your CV. Also, as you should when applying for anything, be smart and research the organisation or funder you are applying to - do they have an open research policy or specific open research criteria for the grant, or are they all about high-impact publications? If you know this, you can adapt your CV or application to highlight the activities or outputs they place most value on. It also depends on what you apply for: if it is a job or funding related to open science, you should put as much of your open work on the application as possible. And if you want to be an open researcher, look for the positions that will help you progress with this research ethos - these will add the most activities and outputs to your CV to cover open research metrics.

2. How do we work in collaborative teams if we all need different research outputs to be successful?

From discussions with colleagues who work at different organisations and in different countries, the current picture of researcher assessment is, as I suspected, mixed. It seems that traditional research metrics are still being used to assess personal and project performance in many countries, and even at different institutions within the same country. These metrics are then used for hiring researchers into new roles, promoting researchers and awarding research funding. So if we work in a team in which individuals have different research output priorities, how do we deal with this?

Suggested solutions - Producing a wide range of research outputs should cover all your team’s needs: it allows an open research approach while producing the publications that let everyone on the team take the credit they need. It is important to discuss research outputs and authorship at an early stage in your project planning, to assess your team members’ personal priorities alongside the project’s impact requirements. You can then accommodate the needs of your team by deciding what research outputs will be produced and who is going to be the main contributor or author. Discussions around authorship are often difficult, but making decisions about first authors and authorship order early on, taking everyone’s opinions into consideration and making the process fully transparent, is your best bet at getting this right. Aiming for an inclusive authorship model is also a good way of giving maximum credit for all contributions to a research project.

3. Will we all be judged equally?

As we are in a phase of transition, it is not just organisations that are slowly shifting to open research: individual researchers are also moving their practices across gradually. There is no quick switch that researchers can flip to adopt open science practices - it involves upskilling through a process of taking small steps towards more transparent and open workflows. This means researchers will be at different stages in the transition, so how can we fairly and equally judge their research impact and successes?

Suggested solutions - I’m not sure there is a solution to this! As policies move slowly but surely towards research metrics that reflect open research, I hope researchers who embrace this approach will be rewarded now in terms of funding and positions. I do think, though, that this will only happen in organisations that are themselves embracing open research, so again it is about choosing the position and organisation that suit your research ethos.

4. Will new research assessment policies actually be enforced?

I have fears about the implementation of these new research assessment policies - it is very likely they won’t be fairly or strictly implemented! There is a clear example of this in publishing, where journals have author policies that require open data or reproducible research. Some of these policies appear pretty strict, but the articles that appear in these journals do not meet the requirements - most are nowhere near transparent enough to be reproducible. This happens partly because reviewers, editors and publishers do not enforce the policies, but it really stems from a lack of understanding of open science practices, caused by a lack of training for authors, reviewers, editors and the publishers themselves. So I’m sure the same will happen on funding boards, job interview panels and anywhere else that open science practices have to be assessed.

Suggested solutions - As open researchers, we need to get involved in research assessment. Join initiatives to draw up new research metrics, sit on job interview panels or review funding applications, and be the person who asks about open research.

Covering all the bases

What I am saying here is that it is not easy at the moment to know how you will be assessed as a researcher. It is very hard to cover all the bases when it comes to research assessment - we can’t do everything. And I think this is OK. You need to decide which research activities and outputs are going to be most beneficial for your career, and think about this for now, for next year and maybe for five years’ time.

I’ve been on quite a journey in the last few years coming back into academia, but one thing I decided early on was that I wanted to take an open research approach, so that’s what I’m concentrating on. It’s important to me, so all the research I do, the research outputs I produce and the organisations I collaborate with are focused on this aim. I think this is reflected well in my research activities and outputs this year.

It does mean that I have had to compromise, though, and if you were to judge me by the old research metrics this year, I don’t look that great.

  • I have not published any research articles - shock horror! I did think I would have at least one, as I have had a paper in review since August 2020, so I hadn’t particularly focused on finishing up any other articles. That article has now been accepted, but it was in review for more than six months and, even though I have returned my revisions, it is still not published. I also have quite a few articles that are nearly done, so 2022 might be a bumper publication year for me.
  • I have won some research funding as a PI - a big tick for me!
  • I have also won two fellowships - another tick, or I suppose two ticks, for me!
  • I’ve formed an international committee within an international society - that should count for something, I think.
  • I’m also now fully employed in research, at least for a bit, which is a change from working on short-term funding grants and contracts.

But I’ve also done loads this year that I’m really proud of and that I think has a lot of academic impact and value. All of this has been done during a pandemic, and as the main carer for my two children (which involved homeschooling two kids for the first part of the year while working).

  • I’ve mentored 11 amazing researchers through Open Life Science, my own projects and within The Alan Turing Institute.
  • I’ve reviewed five research articles, and six rounds of applications for projects, fellowships and hackathons.
  • I’ve given 11 presentations at meetings and conferences.
  • I’ve written 16 (now 17) blogs.
  • I’ve co-written 23 sub-chapters for The Turing Way.
  • I’ve run 7 training workshops.
  • I’ve been part of two association committees and one journal board.
  • I’ve managed a community of researchers.
  • I’ve worked collaboratively on many of the above activities with at least 13 different organisations.

I think my activities for this year do reflect my focus on open research and my leaning towards wider scientific communication, community building, collaborative working and training - all of which are normally undervalued by research assessment metrics.

I just want to finish by thanking all of the wonderful people (you know who you are!) that I’ve worked with this year, for all the help and support they have given me and for the passion and enjoyment they have brought to the research we are doing together.

Wishing you all a great 2022!