Dear all,
I hope you are all doing well and assessing well! Although it took me a while to get around to it, I wanted to share the column I read in the afternoon at the Recognition & Rewards Festival. Thank you all once again for 'listening'! :)
Column presented at the third Recognition & Rewards Festival
13th of April 2023
I prepared this column to bring a bit of an international perspective to the discussion. Just a bit of background about me: I am Canadian – Québécoise to be precise – and I moved to Europe a decade ago to work in research integrity and publication ethics. That led me to do a PhD on research success and research assessment in Belgium, after which I moved to England and worked with the Netherlands, Belgium, and eventually also England on topics of research culture, research environment, and research assessment. I always struggle to answer when I am asked where I work or where I'm from, and this shapes the way I think about research.

When I started my research journey as a PhD student, I tried to understand research success by listening to the voices of those involved in research. My project focused on the Flemish research system. As I went along, I realised that the perspectives of different stakeholders did not always align. Each had a different role in science, which influenced what they wanted. Researchers wanted to advance knowledge, but they also had a career to survive in, so they were frustrated by the elements of their career that did not encourage 'good science'. Funders and policy makers wanted to promote good science, but they also had a country to convince about the value of research and the return on investments. Publishers also wanted to share research of the highest quality, but they had a journal to maintain and needed to attract readership... and so on. Yet despite these different perspectives, I also noticed that all stakeholders agreed that there were issues with research assessments and that changes were needed.
These complex dynamics are not unique to Flanders, quite the contrary. After the PhD, I continued to work on similar topics, this time adding the Netherlands and the United Kingdom to my Canado-Flemish perspective. I found similar issues everywhere. In all four countries, assessors had fallen in love with the simplicity of the journal impact factor, the H-index, and the number of papers as ways to assess researchers. In all four countries, groups of academics worried about and denounced the perverse effects that the focus on these metrics created. As a result, all four countries are now trying new ways of assessing researchers, some at a national level, others through funders, groups of funders, or specific institutions.
Yet in all four countries, the decades of metric-focused and paper-focused assessments have also left behind a culture of success that cannot change overnight. Even here, where the reform has been underway for some years already, there are academics who don't really know about it. And how could they? They are already overstretched, often burnt out; how can we expect them to keep up with these changes? And even academics who are aware of the reform will have to fight deeply ingrained habits and conceptions of success for another decade at least, myself included.
All of these points hold true well beyond the four countries I have been involved in. The number and spread of the institutions signing up to DORA show that the worry about research assessment is truly global. I have worked directly on the topic of research assessment for a few years now, and at first I would read about a new initiative, position paper, or conference on the topic every month, then every week, and eventually almost every day. It has now reached a point where I cannot even keep up anymore (so don't quiz me, I'm really out of date now!), and that's very good news.
But in speaking a bit further with those around me, including many of you who were there at the onset of these initiatives, I understood that a core element was still missing before we could call these changes in research assessment a proper 'reform'. To call it a reform, we needed coordination of initiatives so that stakeholders, institutions, and countries move together. Without coordination, the fear of the first mover's disadvantage creeps in. Institutions fear that changing research assessments will reduce their rankings, their funding, and their attractiveness to researchers. Researchers worry that focusing less on the good old metrics will damage their mobility and competitiveness in other institutions or countries.
That’s where efforts like the Dutch Recognition and Rewards Programme come in. In my view, the programme was so successful because it brought together the voices of public knowledge institutions and funders. Others have followed similar paths. In Norway, Universities Norway, which groups 32 universities and university colleges, built a national framework and proposed a toolbox for the recognition and rewards of academic careers. In the UK, the Future Research Assessment Programme is acting at a different level, targeting the way research institutions are funded in order to align changes. In Latin America, the Latin American Forum on Research Assessment (FOLEC-CLACSO) provides a platform for discussion and debate among stakeholders on the meanings, policies, and practices of research assessment in the region. And I could go on.
Yet another key ingredient is important here: coordination at an international level. On this, we cannot deny DORA’s role in raising awareness and uniting voices towards responsible research assessment. Other global groups such as UNESCO, the G7, and the Global Research Council have also embarked on the debate. But the initiative you have probably all heard of the most is the Coalition for Advancing Research Assessment – CoARA, for friends – which now includes organisations from over 40 countries and which is chaired by someone who has particular knowledge of the Dutch Recognition and Rewards Programme. CoARA is special because it commits signatories to act and reform their assessment procedures within an agreed timeframe, so it really moves a huge community of stakeholders from ideology to action. Signatories move together, in consultation with and support of one another.
So, is this the end point? Is it time to sit back and enjoy how far we’ve come? … Of course not. The work is just starting. On this note, I thought of framing three points of action that I think may help us in the next steps.
First, the reform needs to make active efforts to be more inclusive of forgotten countries so that everyone takes part. Some countries have only just managed to move away from quantity indicators and financial rewards for each paper published to a system that focuses on so-called ‘quality indicators’ measured through citations and journal impact factors. We need to make sure we do not bypass and ignore their challenges, and instead support them in moving to yet another assessment approach. We also need to carefully probe for unintended consequences that the new approaches to assessment may have. For example, something as simple as valuing open access – which is basically the new normal for us – still creates a lot of anxiety in countries where APC waivers do not apply and where the costs of publishing open access threaten their participation in the production of knowledge. We need to make sure that we involve and listen to other countries so that everyone is able to embark on this reform. This point also applies to the Recognition and Rewards Programme, even though it is a Dutch initiative. As you may have guessed from my complicated background, I really think mobility is an important part of academic life. For the reform to work, it needs a truly international scope so that it does not block the mobility of academics going out or coming in.
Second – and this is a point that I think is especially relevant for the Dutch Recognition and Rewards Programme – we need to bring more evidence to the table to understand research assessment better. I would even dare to say that countries that are so advanced in this reform have a duty to pilot, experiment with, and evidence new ways of assessing research, so they can study the impacts that these new assessments have. The Recognition and Rewards Programme has already attracted a lot of eyes over the past years, and it has acquired the visibility needed to bring this evidence into view. In doing so, however, we – and you – will need to be brave and humble enough to truly listen – again – to what works and what doesn’t, to accept that we will make mistakes along the way, to adapt our methods, and sometimes to admit we were wrong. We are scientists after all, so we should use our own methods to make this reform a success.
My next point is a little more local and links back to the lagging research culture I discussed earlier. In my opinion, if we want research assessments to be “responsible”, we need to give academics the skills, the mindset, and the time to conduct responsible assessment. First, the culture of responsible assessment needs to start earlier, at the PhD level and even before that. Here and in many places around the world, PhD students are still expected to publish three papers before they can defend their thesis; other outputs don’t count. In some countries or institutions, the cumulative impact factor of these papers even needs to reach a certain score. And also here, PhD students are sometimes penalised if they finish after the expected completion period – for example, they don’t receive the financial help needed to print their theses – which adds to the pressure of productivity. We need to change these dynamics and pressures earlier on if we want cultures to change.

Along the same lines, we need to provide academics with an environment that allows them to rebuild their views of success, here and anywhere in the world. This can mean balancing workloads, providing dedicated time for peer assessment, and especially securing research careers – as Hieke Huistra so beautifully explained earlier today – or moving to longer-term funding schemes, so that the pressures to publish and survive do not predominate over the desire to innovate on views of success. For the Recognition and Rewards Programme, this can mean working together with research institutions to identify how they can make sure their research environments support researchers and academics in this transition period, and afterwards. To echo the ‘SCOPE’ framework, we need to evaluate with the evaluated, and we need to listen to what they need in this reform.
So, to summarise: we need to listen. We need to listen to the evaluated, to the many countries that are still scarcely involved in the discussion, and to the data.
Thank you.