2.8 Recognizing and rewarding scientists: Time for bias-free assessments

  • Mar 2023
  • Claartje Chajes
  • Modified Apr 2023
R&R festival 2023

Thomas Hoogeboom, PhD, Senior researcher at IQ healthcare, Radboudumc; Nellie Konijnendijk, Senior policy officer Diversity & Inclusion at the KNAW and founder of the company Structural Change consultancy; and Soraya Refos, Policy advisor Diversity & Inclusion at the KNAW

The purpose of this interactive session is to rethink the way we recognize and reward scientists in the Netherlands. To do this, it is important that we recognize the biases in the current assessment methods. During this session participants will work in small groups. We will provide participants with insights from the scientific literature on current assessment methods and potential intervention options. We will then promote impartial decision-making using a playful method in which participants are challenged to rethink assessment criteria from different perspectives. At the end of the session, we will collate the insights from each group and draw conclusions from the workshop.

Attachments

Workshop_2-8_RenR_festival_2023.pptx (Apr 2023, Claartje Chajes)

Comments

3 comments, latest: 8 May 2023
  • "How to maximise the academic system for those in the worst positions?"

    That is the question we should continuously ask ourselves when (re)designing our system for recognizing and rewarding staff members, and it is my biggest takeaway from this workshop. Additionally, we should consider whether and how we could take into account the valuable perspective of people who have left academia, since we are currently redesigning the system with the people who have decided to stay. To conclude: enough food for thought from this workshop!

    Daphne Snackers
  • Here you can read a summary of this workshop (with thanks to the reporter for writing it):

    The goal of the session was to reimagine the current scientific system using a thought experiment: the veil of ignorance, a way to overcome your own biases by asking how to maximise the quality of life for those in the worst positions.


    The current system is flawed, has many adverse effects, and isn't fair. Current appraisal of scientists is biased due to, among other things: bias in metrics (e.g., men's and women's rates of self-citation differ by 70%, and citations lead to new citations), differences in funding rates across ethnic groups, invitations to conferences, student evaluations (students rate male instructors higher), and stereotyping. Everybody has stereotyping biases imprinted.

    The thought experiment: imagine that you are allowed to rethink how we recognize the quality of a scientist, but you do not know who you will be when you are assessed (female/non-binary/male/transgender/cis, brown/black/white, young/old, etc.).

    Aspects related to culture, the work environment, etc., are possibly the only things for which a general criterion can (should?) be applied to all individuals, as we all have similar ambitions when it comes to establishing the desired culture.

    What are the main takeaways of this session?

    • It should be quality that counts.
    • We often argue that a trajectory is the same for everyone. By doing so, we ignore the varying circumstances that people are in.
    • Certain behaviours are (only) negatively interpreted for certain groups.
    • Stereotypes are especially present in people’s instincts. This supports the use of quantitative measures.
    • There is no single perfect metric. We (the universities and academics) have so many tasks and responsibilities that a metric always needs to be seen in context, which points towards personalized metrics.
    • Avoid metrics that are biased, but keep quantitative measures. People need to know what they need to do!
    • Let the person being evaluated propose their own metrics.
    • Don’t mix performance and potential evaluations.
    • Not everyone needs the same goals/criteria.
    • Don’t keep performance criteria too vague or too hard to reach.
    • Don’t use student evaluations as a way to measure teaching performance. For alternatives, see the article by Kreitzer & Sweet-Cushman (2021).
    • Formulate team goals with the team. Communicate performance criteria with the team.
    • Start with clear, specific, measurable performance criteria directly related to job requirements that were agreed on together.
    • When someone returns from leave, adjust the requirements immediately (they will not be able to meet their previous requirements in less time).

    Train reviewers to be aware of their own biases.

    Johan van de Worp