Survey results on quality features - faculty deans of Dutch Universities

  • 7 Nov
  • Claartje Chajes

Institutions agree on the core values of the Recognition & Rewards programme, but putting these into practice remains a challenge. This is, in short, the outcome of a survey conducted among the faculty deans of Dutch universities. On Thursday 5 June 2025, researchers from the Centre for Science and Technology Studies (CWTS) presented the results of this survey. Here we summarize the key findings and suggest some discussion questions for further exploration.

For the full survey and the CWTS's insightful slide deck presentation, please see the attachments.

Key Findings

  • The study on quality features serves as a ‘thermometer’, offering an overview of where faculties currently stand.
  • Nearly half of the faculties are in a transitional phase, revising policies and processes, conducting experiments, and actively starting implementation.
  • There is a strong need among academics for clear and transparent assessment criteria; however, some faculties struggle with how to implement these in practice.
  • Faculties often require the use of portfolios; beyond that, there is considerable freedom in how quality features are applied.
  • Faculties aim to broaden the assessment system by including aspects such as leadership, teaching, societal impact, and Open Science.

Introduction

Many academics experience a one-sided emphasis on research performance, frequently leading to the undervaluation of academic work in other domains (teaching, impact, leadership, and patient care) and thereby putting ambitions in these areas under pressure. The partners in the Recognition & Rewards programme expressed their ambition to revise and improve the assessment system in the position paper Room for Everyone’s Talent. The aim is to shift the focus more towards quality and less towards quantity. This requires, first and foremost, a recalibration of research assessment procedures, which traditionally emphasise quantitative output indicators. Additionally, the system should be broadened by developing indicators to assess academic performance in teaching, impact, and leadership. The roadmap Room for Everyone’s Talent in Practice states that institutions must clarify which quality features they apply across the core domains of recruitment, development, appointment, and promotion of academic staff. The term “quality features” refers to the qualitative and quantitative criteria that demonstrate adequate performance in each domain.

To gain insight into the quality features being used, the Recognition & Rewards core team developed a survey, which was distributed to all deans of Dutch universities at the end of March 2025. The primary aim of the questionnaire was to assess the status of faculties in developing quality features. In line with the roadmap, disciplinary differences were considered. The results were analysed by a research team from CWTS. The questionnaire consisted of three parts: academic career path assessment, the use of quality features, and the Recognition & Rewards reform process.

Academic Career Assessment

The first part of the questionnaire included various questions about how faculties assess academic staff. Most respondents indicated that academic work is assessed at the time of appointment and promotion, and it is also a standard part of annual performance reviews.

There is broad support among deans for the goals of the Recognition & Rewards programme. They value the diversity of talents and the focus on quality. They also consider it important that academic work is socially relevant, and they encourage collaboration with societal and business partners. Many deans see the opportunity for diversified career paths as a strength of their assessment systems. They aim to tailor assessments to the individual profiles of academics and the norms of their disciplines. Some deans are satisfied with the criteria developed and the developmental approach to staff evaluation.

However, a majority of deans also identify significant challenges. While there is a strong need for clear and transparent criteria, it remains difficult in practice to assess academic work qualitatively. Often, there is a lack of consistently applicable criteria, leading to uncertainty among academics. Deans also note tensions between individual ambitions and team or organisational goals. Faculties continue to struggle with the practical implementation of Recognition & Rewards.

Quality Features

The survey also asked which quality features faculties apply across different domains. For no fewer than eighty quality features, deans could indicate whether each was mandatory, optional, or discouraged in performance assessments. Generally, faculties place high value on portfolios in which academics present and explain their achievements in narrative form. In most faculties, the use of such portfolios is mandatory. This aligns closely with one of the roadmap’s actions: encouraging institutions to use evidence-based CVs or assessment portfolios in appointments and promotions.

The results also show a mixed picture. Per domain, the following conclusions can be drawn:

  • Research domain: Articles and reviews in academic journals remain by far the most prevalent indicators of quality. In over two-thirds of faculties, these features are mandatory. Securing grants is also highly valued. While there is room for alternative outputs—such as developing digital infrastructure, databases, and designs—the emphasis remains on more traditional outputs. Notably, some science and medical faculties report that they do not wish to use citation counts as a measure of quality.
  • Teaching domain: In addition to the use of a teaching portfolio, the Basic Teaching Qualification (BKO) and the ability to supervise students are the most important quality features. The BKO is almost universally mandatory. Faculties differ in their approach to student evaluations: just under half consider these evaluations an important tool for assessing teaching performance, while some faculties explicitly discourage their use in evaluations.
  • Impact domain: Faculties generally allow academics considerable freedom to choose quality features that align with their individual profiles. Unlike the teaching and research domains, most quality features in the impact domain are optional. Only the impact portfolio is mandatory in nearly half of the faculties. This suggests that there are not yet generic and uniform quality indicators for assessing impact. In other words, individual researchers must select features that suit their own focus.
  • Leadership domain: There is broad consensus on mandatory quality features. Faculties consider supervision of PhD candidates, the ability to perform management tasks, team leadership skills, and collaboration to be key criteria.

Progress of Recognition & Rewards at Faculty Level

According to the deans, faculties are making good progress with the Recognition & Rewards programme. Nearly half report being in a transitional phase, revising policies and processes, conducting experiments, and actively beginning implementation. Around 20% of faculties state that they have fully implemented a new assessment system, particularly in the science domain. Additionally, nearly a quarter have developed plans but have yet to start implementation.

The survey provides a clear picture of the changes faculties are making. Three-quarters report increased appreciation for academics with talents in leadership, teaching, impact, or Open Science. Many faculties are working on implementing career paths, translating faculty strategy into strategic workforce planning, specifying quality features, and using narratives. The roadmap is proving to be a useful guide.

Opinions are divided on qualitative assessment through peer review. For about half of the institutions, this is a significant issue; around 10% of deans explicitly state they do not intend to use it. Notably, half of the faculties no longer wish to allow inappropriate use of metrics such as the H-index and Journal Impact Factor (JIF). At the same time, many faculties either do not discuss this issue or explicitly state they do not wish to change current practices.

Response Rate

A total of 102 deans from the eighteen Dutch universities were contacted. It should be noted that WUR and the ideologically oriented universities do not have a faculty structure; in these cases, the rector was contacted. Over 60% of the deans completed the questionnaire in full. The responses are well distributed across universities and academic disciplines, although the response rate from technical universities is noticeably lower.

Conclusion

The study on quality features serves best as a thermometer: the results provide a clear picture of where faculties currently stand. Most faculties are actively working on implementing a new assessment system. The results also highlight which quality features faculties value most. Faculties use a wide variety of indicators, giving academics considerable freedom to choose those that best match their profile. This is especially true in the impact domain, where most features are optional. In the research and teaching domains, there is broad agreement on the importance of research and teaching portfolios, publishing articles, and the BKO, though many other indicators remain optional.

We can conclude that faculties are committed to broadening the assessment system, aligning with the principles of Recognition & Rewards. However, two caveats must be noted. First, this broadening carries certain risks. Academics expect clear and transparent criteria, yet faculties themselves acknowledge that ambiguity often exists in the criteria used and how assessments are conducted. Is it sufficiently clear to academics what they are being assessed on when many indicators are optional? Second, the responses do not reveal to what extent the quality criteria align with the faculty’s chosen strategy. What does it mean in practice that many indicators are optional? This is precisely the challenge many faculties currently face.

In our view, the ‘solution’ lies in training assessment committees and ensuring transparent communication. Communication is twofold: faculties must clearly communicate which quality features they use, and they must also continuously check whether staff understand the criteria as intended. Maintaining an ongoing dialogue is essential. The national Recognition & Rewards programme is happy to support these conversations, both within faculties and in national disciplinary consultations. The discussion points below offer a starting point for these conversations.