R for Reusability: Certification, metrics & guidelines for FAIR data and software

Datasets and software can only be kept FAIR over time in FAIR-enabling repositories. Communicating the degree to which a repository achieves this requires seamless integration into the FAIR data ecosystem. To facilitate this, we supported the development of guidelines and mechanisms to improve the connections between repository registries and discovery portals, repository trustworthiness mechanisms, FAIR digital object assessment outcomes, and FAIR maturity certifications.

This WP aimed to support research communities in implementing existing or emerging metrics. Using the FAIR data maturity model, the WP further developed and tested the assessment methods created in the FAIRsFAIR project for assessing FAIR digital objects, iterating on them based on the test results. Expanding on this, the WP also created new methods and tools that help serve our goal of including more communities and object types.

 


Key outcomes

Metrics for data

FAIR-IMPACT worked on domain-agnostic metrics for data assessment, as well as metrics that can be tailored to specific communities.

Discover the metrics


Metrics for software

Metrics have also been developed for the assessment of software. Some of these are discipline-specific and therefore of particular use to certain communities, and they can be applied by the different infrastructures in the scholarly ecosystem (aggregators, publishers, scholarly repositories, and Software Heritage, the universal software archive).

A comparison of tools for automated FAIR software assessment was also performed: FAIR-IMPACT partners examined the application and potential repurposing of three existing automated assessment tools, originally built to assess compliance with the FAIR data principles, for assessing compliance with the FAIR for Research Software (FAIR4RS) principles.
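To make concrete what such automated tools typically check, here is a minimal, illustrative sketch (not one of the compared tools): many FAIR4RS-style assessments reduce to practical tests such as whether a source repository ships licence, citation, and documentation metadata. The file names and checks below are common conventions used as examples, not the project's metrics.

```python
# Illustrative sketch of the kind of practical test automated FAIR software
# assessment tools run; file names and checks are common conventions, not
# the FAIR-IMPACT metrics themselves.
from pathlib import Path


def basic_software_metadata_checks(repo_dir: str) -> dict:
    """Report whether common metadata files used by FAIR software
    assessments are present in a local source tree."""
    repo = Path(repo_dir)
    return {
        "has_license": any((repo / name).exists() for name in ("LICENSE", "LICENSE.md", "COPYING")),
        "has_citation_metadata": (repo / "CITATION.cff").exists() or (repo / "codemeta.json").exists(),
        "has_readme": any((repo / name).exists() for name in ("README.md", "README.rst", "README.txt")),
    }


if __name__ == "__main__":
    # Run the checks against the current directory.
    print(basic_software_metadata_checks("."))
```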

Read more about the metrics for software


Metrics-related use cases

Together with the Social Sciences community (represented by CESSDA-ERIC) and use case partners from other domains, two use cases were developed around:

  1. extending and/or adapting the current generic, or disciplinary-agnostic, FAIR digital object metrics and F-UJI practical tests to become more disciplinary-aware and include more disciplinary tests;
  2. improving the flow of FAIRness information to repository registries and discovery portals.

Discover the use cases 


Semantic artefact assessment

One of the main objectives of the FAIR-IMPACT project is to extend FAIR assessment beyond data. Based on existing best practices and recommendations from the community (see Further information), FAIR-IMPACT has addressed semantic artefact assessment from two perspectives. On the one hand, a semantic artefact FAIRness assessment methodology was created; the semantic artefact catalogues of WP4 were used to demonstrate and evaluate its applicability, effectiveness, performance, and adoption, with the aim of showing its impact in making semantic artefacts more FAIR. On the other hand, a methodology was developed to generate FAIR semantic artefacts and their context by design.

Read more about semantic artefact assessment


Transparency guidelines

FAIR-IMPACT developed guidelines and a prototype to improve the transparency of, and trust in, repositories. The guidelines help to expose relevant information as metadata at the organisational and object level to facilitate discovery, provide context, and support interoperability.
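As an illustration of the general pattern the guidelines address (this sketch is not the guidelines themselves), one widely used way for a repository to expose object-level metadata for discovery is to embed schema.org JSON-LD in a dataset landing page. All names, identifiers, and values below are placeholders.

```python
# Illustrative sketch only: exposing object-level and organisational-level
# metadata as schema.org JSON-LD for harvesters and discovery portals.
# All values are placeholders.
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example dataset title",
    "identifier": "https://doi.org/your-dataset-doi",  # persistent identifier (placeholder)
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "publisher": {  # organisational-level context
        "@type": "Organization",
        "name": "Example Repository",
        "url": "https://repository.example.org",
    },
}

# Embed the metadata in the dataset landing page so that harvesters and
# discovery portals can pick it up.
html_snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset_metadata, indent=2)
    + "\n</script>"
)
print(html_snippet)
```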

Discover FAIR-IMPACT transparency guidelines

 


Key tasks

  • The F-UJI automated FAIR digital object assessment tool for repositories and e-infrastructures, and the FAIR awareness-raising tool FAIR-Aware, were further tested and developed in collaboration with selected communities to identify possible biases and improve their efficiency (a minimal sketch of an automated assessment call follows this list). Additionally, we extended and adapted the FAIRsFAIR data object assessment metrics.
  • Build upon existing guidelines and metrics for research software to adapt and enhance the FAIR principles for research software. Furthermore, design a quantitative evaluation method and use an automatic evaluation tool to assess the FAIRness of semantic artefacts. These tasks make use of the recommended metadata properties of WP4.
  • Advance the capabilities of Trustworthy Digital Repositories (TDRs) to curate FAIR data and software, and improve their connections to registries and discovery portals.
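
The sketch below shows how an automated assessment of a single digital object might be requested from a locally running F-UJI service. The endpoint path, request fields, port, and credentials are assumptions based on F-UJI's public documentation and may differ between versions and deployments; the identifier is a placeholder.

```python
# Minimal sketch: requesting a FAIRness assessment of one digital object from a
# locally running F-UJI service. Endpoint path, port, request fields and auth
# are assumptions that may differ per F-UJI version/deployment.
import requests

FUJI_URL = "http://localhost:1071/fuji/api/v1/evaluate"  # assumed local deployment

payload = {
    "object_identifier": "https://doi.org/your-dataset-doi",  # placeholder PID of the object to assess
    "test_debug": True,
    "use_datacite": True,
}

# F-UJI instances are typically protected with HTTP Basic auth; replace the
# credentials with those configured for your deployment.
response = requests.post(FUJI_URL, json=payload, auth=("username", "password"), timeout=300)
response.raise_for_status()
report = response.json()

# The response contains per-metric test results plus an aggregated score summary;
# the exact structure depends on the F-UJI version in use.
print(report.get("summary", {}))
```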

Other related deliverables and milestones


Partners involved

  • KNAW-DANS - leader
  • CNRS
  • DataCite
  • DTU-DeiC
  • eSDF
  • INRAE
  • INRIA
  • KIT
  • LifeWatch
  • OBSPARIS
  • UBremen
  • UEDIN-DCC/SSI
  • UESSEX-UKDS
  • UKRI STFC
  • UPM

 

Get in touch with WP5 colleagues - write us