10 October 2023

Experimenting with innovative approaches to responsible research assessment

Research assessment lies at the heart of the daily work of foundations that support science. The practice is rooted in traditional quantitative measurements designed to evaluate research outputs and their impacts, such as journal impact factors and citation counts. Over the past few decades, new evaluation principles have come to the forefront, recognising the consequences that traditional research quality metrics have had on the research community. Research-funding foundations have not shied away from this conversation, transforming their own practices and in some cases leading the way to new approaches.

With the aim of facilitating more effective philanthropic support for research through transnational cooperation and information exchange, Philea’s Research Forum (a funders’ collaborative of research-funding philanthropic organisations) initiated conversations on the importance of philanthropic funders adopting responsible research assessment practices. Leading by example, several foundations that steer the work of the Research Forum have signed the Agreement on Reforming Research Assessment, which sets a shared direction for changes in assessment practices for research, researchers and research-performing organisations, with the overarching goal of maximising the quality and impact of research.

A recent Research Forum publication, “Assessing Research for Philanthropic Funding. Innovative Approaches”, grew out of a workshop hosted by the Foundation for Polish Science on three distinct methodologies that challenge traditional assessment methods and offer innovative alternatives: the use of artificial intelligence (AI), the adoption of narrative curricula vitae (CVs) and the implementation of randomised selection, also known as “the lottery approach”. Although the publication reflects the group’s shared interest in enhancing responsible research assessment, the principles and methodologies hold regardless of the specific areas or fields an evaluation focuses on.

The use of AI, one of the most prominent approaches revolutionising evaluation practices, constitutes a real paradigm shift in evaluation methods, bringing both advantages and challenges. Advantages such as efficiency and time savings, objectivity and consistency, potential for innovation and impact, and the continuous improvement of AI systems should nevertheless be weighed against the ethical, legal and practical challenges of the technology. Two key tools employed by “la Caixa” Foundation to optimise its research proposal evaluation process demonstrate how the rationale for choosing such tools is inevitably linked to a specific need and motivation of the foundation itself: the desire to enhance proposal selection, improve resource allocation, and achieve more efficient and effective research funding outcomes. “la Caixa” uses the two tools in tandem: in one case, an AI-assisted model (applied in the real evaluation process for the first time in the 2023 selection round) removes the need for human evaluation of proposals flagged as having a low probability of being selected, resulting in cost savings; in the other, AI supports the matching of reviewers to research proposals for remote evaluation. Both processes are under continuous scrutiny, and this iterative approach involves refining the algorithm, introducing new variables and incorporating feedback from project leaders and reviewers.
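To make the triage step more concrete, the sketch below shows one way a predicted probability of selection could be used to decide which proposals bypass full human evaluation. It is a minimal illustration under assumed names and thresholds, not a description of “la Caixa” Foundation’s actual model: the classifier, the features behind the probabilities and the cut-off value are all hypothetical.

```python
# Minimal, hypothetical sketch of probability-based proposal triage.
# The model, features and threshold are assumptions for illustration only;
# they do not describe "la Caixa" Foundation's actual system.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Proposal:
    proposal_id: str
    predicted_selection_probability: float  # assumed output of a trained model


def triage(proposals: List[Proposal],
           threshold: float = 0.05) -> Tuple[List[Proposal], List[Proposal]]:
    """Split proposals into those sent to human reviewers and those flagged
    as having a low probability of selection (candidates for cost savings)."""
    to_review, flagged_low = [], []
    for p in proposals:
        if p.predicted_selection_probability < threshold:
            flagged_low.append(p)  # deprioritised for human evaluation
        else:
            to_review.append(p)
    return to_review, flagged_low


if __name__ == "__main__":
    batch = [Proposal("P-001", 0.72), Proposal("P-002", 0.03), Proposal("P-003", 0.41)]
    review, low = triage(batch)
    print([p.proposal_id for p in review])  # ['P-001', 'P-003']
    print([p.proposal_id for p in low])     # ['P-002']
```

In practice, the threshold and the model behind the probabilities would be the objects of the continuous scrutiny described above, adjusted as new variables and reviewer feedback are incorporated.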

The role and benefits of narrative CVs, used alongside traditional methods to foster responsible research assessment, are illustrated by the framework adopted by the University of Oxford. The framework clearly shows how this approach, unlike metric-based ones, allows reviewers to gain a deeper understanding of an applicant’s motivations, background, achievements and skills. Narrative accounts, however, also present several challenges, suggesting that the approach may be more suitable where the number of applications is limited, allowing for a more detailed evaluation of each candidate. “The narrative CV framework is being embraced internationally, but there is still work to be done to ensure coherence and rigour in the evaluation process”: the publication outlines several strategies to address these challenges and orient the choices of interested colleagues.

A third innovative approach to responsible research assessment is illustrated in practice by the Volkswagen Foundation’s partially randomised funding methodology: the Experiment Funding Initiative, a high-risk programme offering small grants for ideas in the life sciences, natural sciences and engineering. “If partial randomisation has the potential to encourage diversity in the pool of funded projects and reduce bias by offering an equal opportunity to all applicants, it should be regarded as a complementary approach to peer review. Open communication, transparency and careful consideration of the fundable applications, along with addressing potential biases, contribute to a robust and fair funding evaluation process”.
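As a rough illustration of how partial randomisation can complement peer review, the sketch below funds the top-rated applications directly and draws the remaining grants by lottery from the pool judged fundable. The scoring scale, the fundable cut-off and the split between direct and randomised awards are assumptions for illustration, not the Volkswagen Foundation’s actual procedure.

```python
# Hypothetical sketch of partially randomised selection on top of peer review.
# Score scale, cut-off and direct/lottery split are illustrative assumptions.
import random
from typing import Dict, List, Optional


def partially_randomised_selection(scores: Dict[str, float],
                                   fundable_cutoff: float,
                                   n_direct: int,
                                   n_lottery: int,
                                   seed: Optional[int] = None) -> List[str]:
    """Fund the top-rated applications outright, then draw further grants at
    random, with equal chances, from the remaining fundable applications."""
    fundable = [a for a, s in scores.items() if s >= fundable_cutoff]
    ranked = sorted(fundable, key=lambda a: scores[a], reverse=True)
    direct = ranked[:n_direct]          # merit-based awards
    lottery_pool = ranked[n_direct:]    # equal-opportunity lottery pool
    rng = random.Random(seed)
    drawn = rng.sample(lottery_pool, min(n_lottery, len(lottery_pool)))
    return direct + drawn


if __name__ == "__main__":
    peer_review_scores = {"A": 9.1, "B": 8.7, "C": 8.6, "D": 8.5, "E": 6.0}
    funded = partially_randomised_selection(peer_review_scores,
                                            fundable_cutoff=7.0,
                                            n_direct=1, n_lottery=2, seed=42)
    print(funded)  # one direct award ("A") plus two random draws from B, C, D
```

Restricting the lottery to applications already judged fundable, and documenting the draw itself, is what keeps the randomised step compatible with the transparency and fairness highlighted above.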

As a collaborative learning community of donors, the Research Forum continues to reflect on how to improve research-funding philanthropic practices and invites interested colleagues to get in touch: the next opportunity to come together will be on 7-8 March 2024 in Milan, at the biannual conference “Breaking Bad (Habits) – How can foundations move from siloes to shaping future innovation ecosystems?” hosted by Fondazione Cariplo.

Authors

Ilaria d’Auria
Head of Programmes – Thematic Collaborations, Philea