The priorities identified in the Recommendations on FAIR metrics for EOSC report and the set-up of the FAIR Metrics and Data Quality Task Force (TF) by the EOSC Association form the backdrop to this work package on metrics, certification, and guidelines. The tasks focus on the extension, updating, and monitoring of FAIR metrics, assessment tools, and Trustworthy Digital Repository (TDR) requirements. The main goals are to translate and adapt these topics to the context of specific communities and thematic domains, as well as to apply them to a wider range of digital research objects, such as software and semantic artefacts.
We aim to support research communities in implementing existing or emerging metrics. Using the FAIR data maturity model, this WP will continue to develop the assessment methods created in the FAIRsFAIR project for assessing FAIR digital objects, iterating on them based on testing. Expanding on this, WP4 will also create new methods and tools to help include more communities and object types.
Together with the Social Sciences community (represented by CESSDA-ERIC) and use case partners from other domains we will develop two use cases around:
- extending and/or adapting the current generic, or disciplinary-agnostic, FAIR digital object metrics and F-UJI practical tests to become more disciplinary-aware and include more disciplinary tests, and
- improving the flow of FAIRness information to repository registries and discovery portals.
Datasets and software can only be kept FAIR over time in FAIR-enabling repositories. Communicating the degree to which a repository achieves this requires seamless integration into the FAIR data ecosystem. To facilitate this, we will support the development of guidelines and mechanisms that improve the connections between repository registries and discovery portals, repository trustworthiness mechanisms, FAIR digital object assessment outcomes, and FAIR maturity certifications.
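As an illustration of what such a connection could look like, a registry record might expose a repository's trustworthiness certification alongside aggregate FAIR assessment outcomes, so that discovery portals can filter on both signals. The sketch below is purely hypothetical: the field names and values are invented for illustration and do not represent a defined EOSC, registry, or FAIR-IMPACT schema.

```python
# Hypothetical sketch of an enriched repository registry record that exposes
# trustworthiness status together with FAIR assessment outcomes.
# All field names and values are illustrative, not a defined schema.
registry_record = {
    "repository_id": "example-repo-001",   # illustrative identifier
    "trustworthiness": {
        "certification": "CoreTrustSeal",  # TDR certification held, if any
        "valid_until": "2026-12-31",
    },
    "fair_assessment": {
        "tool": "F-UJI",                   # automated assessment tool used
        "score_percent": 78.5,             # aggregate FAIRness score (invented)
        "assessed_objects": 1200,          # number of objects sampled (invented)
    },
}

def is_fair_enabling(record, min_score=50.0):
    """Return True if the repository holds a certification and its
    aggregate FAIR score meets a minimum threshold."""
    certified = record["trustworthiness"]["certification"] is not None
    return certified and record["fair_assessment"]["score_percent"] >= min_score

print(is_fair_enabling(registry_record))  # → True
```

A discovery portal could apply a filter like `is_fair_enabling` to surface FAIR-enabling repositories to researchers; the actual mechanisms and schemas are precisely what the guidelines developed in this WP would specify.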
The WP will also engage with new communities to accelerate the uptake of these best practices broadly and across disciplines, e.g. via the cascading grant support tier or in-kind support programmes set up in WP2.
Key tasks
- The F-UJI automated FAIR digital object assessment tool for repositories and e-infrastructures, and the FAIR awareness-raising tool, FAIR-Aware, will be further tested and developed in collaboration with selected communities to identify possible biases and improve their efficiency. Additionally, we will extend and adapt the FAIRsFAIR data object assessment metrics.
- Building upon existing guidelines and metrics for research software, the WP will adapt and enhance the FAIR principles for research software. Furthermore, we will design a quantitative evaluation method and use an automatic evaluation tool to assess the FAIRness of semantic artefacts. These tasks will make use of the recommended metadata properties of WP4.
- We will advance the capabilities of Trustworthy Digital Repositories (TDRs) to curate FAIR data and software, and improve their connections to registries and discovery portals.
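Conceptually, automated tools such as F-UJI run a set of practical tests against the metrics and aggregate the results into an overall FAIRness score. The following minimal sketch shows that aggregation step only; the metric identifiers, weights, and pass/fail results are invented for illustration and are not F-UJI's actual metric set or scoring algorithm.

```python
# Minimal sketch of aggregating per-metric FAIR test results into a score.
# The metric ids, weights, and results below are invented for illustration;
# they do not reproduce F-UJI's actual metrics or scoring rules.

# Hypothetical results: metric id -> (weight, test passed?)
results = {
    "F1-persistent-identifier": (2, True),
    "F2-rich-metadata":         (2, True),
    "A1-standard-protocol":     (1, True),
    "I1-formal-language":       (1, False),
    "R1-community-standard":    (1, False),
}

def fairness_score(results):
    """Weighted percentage of passed metric tests."""
    total = sum(weight for weight, _ in results.values())
    passed = sum(weight for weight, ok in results.values() if ok)
    return 100.0 * passed / total

print(f"{fairness_score(results):.1f}%")  # → 71.4%
```

Making such tests "more disciplinary-aware", as described above, would amount to swapping in or adding discipline-specific metrics and tests to a table like `results`, while the aggregation logic stays generic.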
Key outcomes
The currently generic, or disciplinary-agnostic, FAIR digital object metrics and practical tests are extended and/or adapted to be more disciplinary-aware. They include more discipline-specific tests and enable technical and organisational integration of these assessment tools within the EOSC infrastructure.
Discipline-specific metrics have been developed for the assessment of software. These may be of use to specific communities and can be applied by the different infrastructures in the scholarly ecosystem (aggregators, publishers, scholarly repositories, and Software Heritage, the universal software archive).
A semantic artefact FAIRness assessment methodology has been created. The semantic artefact catalogues of WP4 were used to demonstrate and evaluate its applicability, effectiveness, performance, and adoption, with the aim of demonstrating its impact in making semantic artefacts more FAIR.
A comparison of tools for automated FAIR software assessment was performed. FAIR-IMPACT partners examined the application and potential repurposing of three existing automated assessment tools, originally built to assess compliance with the FAIR data principles, for assessing compliance with the FAIR for Research Software (FAIR4RS) principles.
The final outcome will draw together much of the other work into guidelines and a pilot study for exposing repository trustworthiness status, FAIR maturity, and FAIR data and software assessment outcomes in generic and disciplinary registries and portals. These measures will support the network of FAIR-enabling TDRs, an important component of EOSC interoperability.
Related deliverables and milestones
Partners involved
- KNAW-DANS - leader
- CNRS
- DataCite
- DTU-DeiC
- eSDF
- INRAE
- INRIA
- KIT
- LifeWatch
- OBSPARIS
- UBremen
- UEDIN-DCC/SSI
- UESSEX-UKDS
- UKRI STFC
- UPM