10 November 2025
RESSH 2026 – Research Evaluation in Social Sciences and Humanities – “Research Evaluation and Scientific Autonomy under Pressure”

The RESSH Conference is the regular event of the international association ENRESSH – European Network for Research Evaluation in the Social Sciences and Humanities. It brings together specialists in research evaluation and policy, with a particular focus on the social sciences, humanities, and the arts.
The 2026 edition is organized by ENRESSH and IGSG-CNR; it will be held in Florence from 22 to 24 April 2026 and will be dedicated to the theme “Research Evaluation and Scientific Autonomy under Pressure”.
Research evaluation is being reshaped by shifting global powers, rising security concerns, and new technological developments, such as artificial intelligence (AI). At the same time, research is under increasing public scrutiny. Cases of scientific misconduct, plagiarism, data fabrication, and the proliferation of paper mills and predatory journals have raised questions about the credibility and ethics of contemporary science.
Debates on scientific integrity are increasingly intertwined with geopolitical interests and technological change, revealing research evaluation as a highly politicised tool rather than a neutral process. Strategic funding priorities and geopolitical competition shape what counts as valuable or legitimate research. As political pressures limit international collaboration, open science, and academic mobility, evaluation systems become key arenas where a variety of actors, including those who support more bottom-up approaches, negotiate research sovereignty and academic freedom.
In the context of conflicting priorities and shifting power structures in research governance, technological advancements are changing evaluation procedures in ways that both intensify and transform existing tensions. The emergence of AI-driven assessment tools, such as algorithmic indicators, automated peer review, and large-scale data analytics, is one of the most significant developments. These tools offer efficiency and comparability but also raise concerns about reliability, transparency, and fairness. When algorithms define standards of quality, there is a risk of narrowing intellectual diversity, reinforcing dominant paradigms, and marginalizing critical or unconventional scholarship.
The academic community needs to critically reassess how evaluation affects not only excellence and impact, but also individual academic freedom, institutional autonomy, ethical responsibility, inclusivity, and trust in research. Fostering a diverse and reputable scientific ecosystem requires evaluation systems to preserve autonomy while promoting integrity.
Important dates:
Submission deadline: 26 January 2026 (extended from 12 January)
Notification of acceptance: 20 February 2026
Early bird registration: 23 February–23 March 2026
Standard registration: 24 March–13 April 2026
Conference: 22–24 April 2026