Methodology - Charité Compass on Research Quality
Background
Economic, social, and cultural challenges are increasing societal demands for the effective translation of scientific knowledge into tangible benefits. At the same time, extensive meta-research has exposed substantial and widespread deficits in the scientific rigour, transparency, and reproducibility of research studies, which significantly undermine public trust in science. To enhance the translational potential of biomedical research, the BIH QUEST Center for Responsible Research has developed a comprehensive implementation program aimed at improving research quality, promoting open science (including patient and stakeholder engagement), and supporting transparent and effective research data management. Complementing the implementation program, the Center is currently establishing the Compass monitoring and evaluation system to track the program's progress and assess its intended outcomes.
Instrument
The Charité Compass on Research Quality presented here is a modular research assessment tool that translates the intended outcomes of the QUEST Center’s implementation program into measurable research practices.
Instrument Development
Within the scope of the QUEST Center’s monitoring and evaluation activities, the Compass serves as an assessment tool that translates program outcomes, in particular practices that enhance the translational potential of biomedical research, into measurable research practices. The instrument was developed through a rigorous, theory- and practice-informed process. First, QUEST program-related research practices were mapped onto clearly defined dimensions and sub-dimensions, which were then translated into measurable, topic-specific indicators, i.e. scales and sub-scales; the items populating these scales were drawn from the research methodology literature and established reporting guidelines or developed together with experts in the field of biomedical research. Next, iterative expert review ensured content validity and relevance, prompting refinements to wording and structure. Survey instructions and response formats were then defined, followed by a usability pre-test that checked logical flow, respondent friendliness, and digital compatibility. The instrument is now in the pilot phase, in which empirical studies are assessing its validity and reliability.
Utilisation
The Charité Compass on Research Quality is offered in three evaluation modes, namely:
- Individual Practice
- Team Practice
- Research Output
Used either as a whole scale or through single subscales, the Compass offers researchers and teams valuable opportunities to assess and reflect on QUEST program-relevant research practices; it also supports reviewers in assessing the quality of common research outputs, such as a thesis, publication, or grant proposal.
Instructions & Response Specifications: Individual and Team Practice
Instructions
To what extent have you / has your team implemented the following research practices in your work? Please provide an answer for each research practice; if none of the categories seems to fit perfectly, select the most appropriate response to avoid ‘missing values’ in the dataset. Please note that “implementation” refers to the path a person or team takes from first considering a research practice to applying it routinely.
Response Specification
- Not at all = I/We have not considered applying this practice.
- Barely = I/We have briefly explored this practice.
- Partially = I/We have applied this practice in an isolated case or a few isolated cases.
- Moderately = I/We have applied this practice in various situations/settings.
- Largely = I/We have sound experience in applying this practice.
- Fully = I/We use this practice routinely and confidently.
- Additional option: Not applicable
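For teams that administer the Compass digitally and analyse the responses programmatically, the ordinal categories above can be mapped to numeric codes. The following Python sketch shows one possible coding; the 0–5 values and the treatment of “Not applicable” as missing are illustrative assumptions, not part of the instrument.

```python
# Possible numeric coding for the six-level implementation scale.
# The 0-5 assignment is an illustrative assumption, not prescribed by the Compass.
IMPLEMENTATION_SCALE = {
    "Not at all": 0,   # practice not considered
    "Barely": 1,       # briefly explored
    "Partially": 2,    # applied in isolated cases
    "Moderately": 3,   # applied in various situations/settings
    "Largely": 4,      # sound experience in applying the practice
    "Fully": 5,        # routine, confident use
}

def code_response(label: str):
    """Return the numeric code for a response label, or None for 'Not applicable'."""
    if label == "Not applicable":
        return None
    return IMPLEMENTATION_SCALE[label]
```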
Instructions & Response Specifications: Research Output
Instructions
How would you rate the implementation of the following research practices in the work you are evaluating? Please note: This assessment tool can be used to evaluate various forms of research output; the term “work” encompasses outputs such as theses, grant applications, or scholarly publications.
Response Specification
- Poor = The work does not address, or falls significantly short of, responsible research standards.
- Moderate = The work meets basic research standards but exhibits gaps or inconsistencies.
- Excellent = The work fully meets responsible research standards.
- Additional option: Not applicable
Data Analysis & Presentation
Given the unidimensional nature of the items, response values are suitable for summation and averaging. However, the median is the preferred measure of central tendency, as it is more robust to potential outliers and skewed distributions.
To facilitate the interpretation of the assessments performed, the Compass displays the results in the form of a radar chart.
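As a minimal, non-prescriptive sketch of this analysis step, the Python snippet below computes the median per scale while excluding “Not applicable” responses and renders the resulting profile as a radar chart with matplotlib. The scale names, response values, and the 0–5 coding are illustrative placeholders, not outputs of the actual instrument.

```python
# Minimal sketch: median per scale (excluding "Not applicable") and a radar chart.
# Scale names and response values are illustrative placeholders.
import numpy as np
import matplotlib.pyplot as plt
from statistics import median

responses = {                      # coded item responses per scale; None = Not applicable
    "Study Design": [3, 4, 5, None, 4],
    "Research Validity": [2, 3, 3, 2],
    "Research Documentation": [5, 4, 5, 5],
    "Accessibility & Reuse": [1, 2, None, 1],
    "Research Data Management": [3, 3, 4, 2],
}

# The median is preferred over the mean because it is robust to outliers and skew.
scores = {scale: median(v for v in values if v is not None)
          for scale, values in responses.items()}

labels = list(scores)
values = list(scores.values())
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]               # close the polygon
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)                  # assumes the illustrative 0-5 coding sketched above
ax.set_title("Compass profile (illustrative data)")
plt.show()
```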
Scales & Subscales
The following section defines each scale and sub-scale of the instrument, briefly touches on the envisaged improvements in program-specific research practice, and outlines the individual items through which users self-report their current implementation status or evaluate research outputs.
PLEASE NOTE: The scales and subscales incorporated in this instrument are primarily those relevant to QUEST program activities.
Ethics of Science
- Subscales: Equitable Authorship, Timely Reporting
Research Ethics
- Subscales: Privacy Protection, Data Protection, Transparent Reporting, Animal Welfare*)
*) Animal Welfare in Biomedical Research as Part of Research Ethics
- Subscales: Replacement, Reduction, Refinement
Evidence Synthesis & Evaluation in Biomedical Research
- Subscales: Systematic Literature Search, Critical Appraisal, Meta-Analysis, Evidence Synthesis
Study Design in Biomedical Research
- Subscales: Research Question, Theory Framing, Research Design, Pre-Registration, Data Management Plan
Research Validity in Biomedical Research
- Subscales: Internal Validity, External Validity, Statistical Validity, Construct Validity
Research Documentation in Biomedical Research
- Subscales: Accurate, Legible, Original, Contemporaneous, Accessible (ALCOA)
Presentation Quality in Biomedical Research
- Subscales: Clarity, Coherence, Accuracy, Inclusiveness
Scientific Relevance of Biomedical Research
- Subscales: Originality, Theoretical Contribution, Methodological Advancements
Practical Relevance of Biomedical Research
- Subscales: Applicability, Impact on Stakeholders, Policy Influence
Patient and Stakeholder Engagement in Biomedical Research
- Subscales: Allocation of Tasks, Interaction, Epistemology
Accessibility & Reuse in Biomedical Research
- Subscales: Open-Access Publishing, Open Methods, Open Data, Open Code, Reuse
Research Data Management in Biomedical Research
- Subscales: Data Description, Documentation, Storage & Backup, Legal and Ethics, Preservation & Sharing, Responsibilities
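Because the instrument is modular, the scale and sub-scale hierarchy listed above can also be represented as a simple data structure that survey or analysis tooling could iterate over. The excerpt below is a hypothetical Python representation covering only a few of the scales; it is not an official machine-readable version of the Compass.

```python
# Hypothetical representation of part of the Compass scale hierarchy.
# Only a few scales are shown; this is not an official machine-readable form.
COMPASS_SCALES = {
    "Ethics of Science": ["Equitable Authorship", "Timely Reporting"],
    "Study Design in Biomedical Research": [
        "Research Question", "Theory Framing", "Research Design",
        "Pre-Registration", "Data Management Plan",
    ],
    "Accessibility & Reuse in Biomedical Research": [
        "Open-Access Publishing", "Open Methods", "Open Data", "Open Code", "Reuse",
    ],
}

for scale, subscales in COMPASS_SCALES.items():
    print(f"{scale}: {len(subscales)} subscales")
```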
Literature
Al-Shahi Salman, Rustam, Elaine Beller, Jonathan Kagan, Elina Hemminki, Robert S. Phillips, Julian Savulescu, Malcolm Macleod, and others, ‘Increasing Value and Reducing Waste in Biomedical Research Regulation and Management’, The Lancet, 383.9912 (2014), 176–85 <https://doi.org/10.1016/S0140-6736(13)62297-7>
Glasziou, Paul, Douglas G. Altman, Patrick Bossuyt, Isabelle Boutron, Mike Clarke, Steven Julious, and others, ‘Reducing Waste from Incomplete or Unusable Reports of Biomedical Research’, The Lancet, 383.9913 (2014), 267–76 <https://doi.org/10.1016/S0140-6736(13)62228-X>
Barlösius, Eva, ‘Concepts of Originality in the Natural Science, Medical, and Engineering Disciplines: An Analysis of Research Proposals’, Science Technology and Human Values, 44.6 (2019), 915–37 <https://doi.org/10.1177/0162243918808370>
Chan, An Wen, Fujian Song, Andrew Vickers, Tom Jefferson, Kay Dickersin, Peter C. Gøtzsche, and others, ‘Increasing Value and Reducing Waste: Addressing Inaccessible Research’, The Lancet, 383.9913 (2014), 257–66 <https://doi.org/10.1016/S0140-6736(13)62296-5>
Cobey, Kelly D., Stefanie Haustein, Jamie Brehaut, Ulrich Dirnagl, Delwen L. Franzen, Lars G. Hemkens, and others, ‘Community Consensus on Core Open Science Practices to Monitor in Biomedicine’, PLoS Biology, 21.1 (2023), 1–17 <https://doi.org/10.1371/journal.pbio.3001949>
Chalmers, Iain, Michael B. Bracken, Ben Djulbegovic, Silvio Garattini, Jonathan Grant, A. Metin Gülmezoglu, and others, ‘How to Increase Value and Reduce Waste When Research Priorities Are Set’, The Lancet, 383.9912 (2014), 156–65 <https://doi.org/10.1016/S0140-6736(13)62229-1>
Döring, Nicola, ‘Forschungs- Und Wissenschaftsethik’, in Forschungsmethoden Und Evaluation in Den Sozial- Und Humanwissenschaften (Berlin, Heidelberg: Springer Berlin Heidelberg, 2023), pp. 119–43 <https://doi.org/10.1007/978-3-662-64762-2_4>
———, ‘Qualitätskriterien in Der Empirischen Sozialforschung’, in Forschungsmethoden Und Evaluation in Den Sozial- Und Humanwissenschaften (Berlin, Heidelberg: Springer Berlin Heidelberg, 2023), pp. 79–118 <https://doi.org/10.1007/978-3-662-64762-2_3>
———, ‘Untersuchungsdesign’, in Forschungsmethoden Und Evaluation in Den Sozial- Und Humanwissenschaften (Berlin, Heidelberg: Springer Berlin Heidelberg, 2023), pp. 183–221 <https://doi.org/10.1007/978-3-662-64762-2_7>
Farin-Glattacker, Erik, Silke Kirschning, Thorsten Meyer, and Rolf Buschmann-Steinhage, ‘Partizipation an Der Forschung – Eine Matrix Zur Orientierung’, Ausschuss „Reha-Forschung“ Der Deutschen Vereinigung Für Rehabilitation (DVfR) Und Der Deutschen Gesellschaft Für Rehabilitationswissenschaften (DGRW), 2014, 1–24
GO FAIR, ‘FAIR Principles’ <https://www.go-fair.org/fair-principles/>
Hug, Sven E., and Mirjam Aeschbach, ‘Criteria for Assessing Grant Applications: A Systematic Review’, Palgrave Communications, 6.1 (2020), 1–15 <https://doi.org/10.1057/s41599-020-0412-9>
Hug, Sven E., Michael Ochsner, and Hans Dieter Daniel, ‘Criteria for Assessing Research Quality in the Humanities: A Delphi Study among Scholars of English Literature, German Literature and Art History’, Research Evaluation, 22.5 (2013), 369–83 <https://doi.org/10.1093/reseval/rvt008>
Ioannidis, John P.A., Sander Greenland, Mark A. Hlatky, Muin J. Khoury, Malcolm R. Macleod, David Moher, and others, ‘Increasing Value and Reducing Waste in Research Design, Conduct, and Analysis’, The Lancet, 383.9912 (2014), 166–75 <https://doi.org/10.1016/S0140-6736(13)62227-8>
Macleod, Malcolm R, Susan Michie, Ian Roberts, Ulrich Dirnagl, Iain Chalmers, John P A Ioannidis, and others, ‘Biomedical Research: Increasing Value, Reducing Waste’, The Lancet, 383.9912 (2014), 101–4 <https://doi.org/10.1016/S0140-6736(13)62329-6>
McDowall, R. D., ‘Is Traceability the Glue for ALCOA, ALCOA+, or ALCOA++?’, Spectroscopy (Santa Monica), 37.4 (2022), 13–19 <https://doi.org/10.56530/spectroscopy.up8185n1>
Organisation for Economic Co-Operation and Development (OECD), ‘Addressing Societal Challenges Using Transdisciplinary Research’, Policy Papers, 2020, pp. 39–51
Page, Matthew J., Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, and others, ‘The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews’, The BMJ, 372 (2021) <https://doi.org/10.1136/bmj.n71>
National Information Standards Organization (NISO), ‘CRediT, Contributor Roles Taxonomy (ANSI/NISO Z39.104-2022)’, 2022 <https://www.niso.org/publications/z39104-2022-credit>
Science Europe, ‘Guidance for Data Management Plans’ <https://scienceeurope.org/media/411km040/se-rdm-template-3-researcher-guidance-for-data-management-plans.docx>
Percie du Sert, Nathalie, Viki Hurst, Amrita Ahluwalia, Sabina Alam, Marc T. Avey, Monya Baker, and others, ‘The ARRIVE Guidelines 2.0: Updated Guidelines for Reporting Animal Research’, PLoS Biology, 18.7 (2020), 9–10 <https://doi.org/10.1371/journal.pbio.3000410>
United Nations (UN) General Assembly, Transforming Our World: The 2030 Agenda for Sustainable Development, 2015, pp. 1–35 <https://documents.un.org/doc/undoc/gen/n15/291/89/pdf/n1529189.pdf>
Contact
Dr. rer. nat. Christiane Wetzel, M.Sc. Science Management, Head of Monitoring & Evaluation Unit, BIH QUEST Center for Responsible Research, quest-programmevaluation@bih-charite.de
Acknowledgements
I gratefully acknowledge the valuable contributions of our colleagues at the QUEST Center for Responsible Research, especially Silke Kniffert, Natascha Drude, Evgeny Bobrov, Sarah Weschke, Alexandra Bannach-Brown, and Sofija Vojvodić, as well as our colleagues at Charité3R, especially Lisa Grohmann, who provided comprehensive expert input on responsible research practices. Their insights and expertise were instrumental in developing the items describing QUEST program-relevant research practices. I also acknowledge the use of ChatGPT, an AI language model developed by OpenAI, which assisted in streamlining and refining item wording, thereby improving the clarity and coherence of the research practice scales presented here. All final decisions regarding content and structure remain the author's responsibility.
Funding
The development of the Charité Quality Compass received funding from the Stiftung Charité through the Max Rubner Award 2022, project number StC_MRP_2022_10.