Description
Overview:
This session aims to showcase the roles and relationships among FAIR Implementation Profiles (FIPs), FAIRsharing, and FAIR², three complementary efforts that help researchers and data stewards reuse existing standards optimally and make research data truly FAIR. The session will provide an overview of key challenges, introduce the key technologies, and offer perspectives on how these tools are evolving to support responsible, reproducible, and increasingly AI-ready data reuse.
The Challenges:
At the core of putting FAIR into practice are the many and often complicated choices that must be made when selecting appropriate standards—terminologies, models, formats, minimal information requirements, identifier schemas—and suitable repositories and knowledge bases. These resources are essential to describe, report, and share research objects such as datasets, code, and workflows. Yet each project, group, or organisation typically follows its own norms. Without visibility into community preferences, it becomes difficult to find and reuse existing solutions. This increases the risk of needless reinvention and divergence in how standards are applied.
Even when datasets are declared FAIR, reuse in practice is often hindered by incomplete documentation, unclear standard usage, and metadata that is not structured for use in computational workflows. These issues are especially pronounced in interdisciplinary research and in AI-driven settings, where data needs to be both machine-actionable and richly contextualized to support automated discovery, integration, and analysis.
Practical Solutions: FAIR Implementation Profiles and FAIRsharing
To map this landscape and encourage convergence, the FAIR Implementation Profile (FIP) was introduced in 2019 by GO FAIR and developed in cooperation with ENVRI-FAIR. A FIP systematically represents the collection of declarations a community makes about its usage of FAIR Enabling Resources (FERs). The FIP Wizard is a tool that supports the creation and publication of community-specific FIPs (now more than 450 FIPs representing over 1,200 accumulated FERs). Once published, FIPs from different communities can be openly searched with semantic precision and compared, providing critical insight into community norms and decision-making about standards.
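To make the notion of a FIP declaration more concrete, the sketch below models, in plain Python, how a community's choices of FAIR Enabling Resources per FAIR principle might be recorded and compared across communities. The field names, example resources, and comparison logic are illustrative assumptions only, not the FIP Wizard's actual schema or its nanopublication format.

# Illustrative sketch only: a minimal, hypothetical representation of FIP
# declarations (one FAIR Enabling Resource chosen per FAIR principle).
# Field names and example values are assumptions, not the FIP Wizard schema.
from dataclasses import dataclass

@dataclass
class FipDeclaration:
    principle: str     # FAIR principle the declaration addresses, e.g. "F1"
    resource: str      # chosen FAIR Enabling Resource (FER)
    resource_uri: str  # identifier of the FER (placeholder values below)

# A toy FIP for a hypothetical community
example_fip = [
    FipDeclaration("F1", "DOI", "https://www.doi.org"),
    FipDeclaration("I1", "RDF/Turtle", "https://www.w3.org/TR/turtle/"),
    FipDeclaration("R1.3", "Community reporting guideline", "https://example.org/fer/guideline"),
]

def compare_fips(fip_a, fip_b):
    """Report principles where two communities declared different FERs."""
    choices_a = {d.principle: d.resource for d in fip_a}
    choices_b = {d.principle: d.resource for d in fip_b}
    return {p: (choices_a.get(p), choices_b.get(p))
            for p in set(choices_a) | set(choices_b)
            if choices_a.get(p) != choices_b.get(p)}

Comparing declarations in this way is the kind of operation that published FIPs enable at scale, since each declaration is openly searchable.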
FAIRsharing complements FIPs by offering an informative and educational service that describes and interrelates standards, databases, and data policies across all disciplines [https://blog.fairsharing.org/?p=971]. FAIRsharing records are curated, tagged by maturity, and continuously updated to reflect the dynamic evolution of the standards ecosystem. Communities can create FAIRsharing Collections to represent the resources they use in their FIPs or recommend to others. FAIRsharing also supports integration with tools such as the Data Stewardship Wizard (DSW), enabling data producers and stewards to generate FAIR assessments and data management plans based on trusted metadata.
FAIRsharing content is machine-actionable and accessible via API, enabling third-party tools to answer key questions about standards and repositories: “Which repositories support controlled access?”, “Which identification schemas are used?”, or “Which standards are suitable for describing software?” Where FIPs describe resources already registered in FAIRsharing, curated metadata is retrieved automatically and incorporated into FER nanopublications, including citations and provenance. If new resources are introduced, users are prompted to create corresponding records. This collaborative ecosystem supports the development of machine-learning approaches that assist communities in optimizing FAIR implementation strategies, improving alignment and interoperability across domains.
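As an illustration of this kind of programmatic access, the hedged sketch below queries a FAIRsharing-style REST API for records matching a keyword. The base URL, endpoint path, query parameters, and filter names are assumptions made for illustration; the actual interface, including the required sign-in and token handling, is defined in the FAIRsharing API documentation.

import requests

# Hedged sketch: endpoint path, parameters, and response shape are
# assumptions for illustration; consult the FAIRsharing API docs for
# the real interface and authentication flow (a token is required).
BASE_URL = "https://api.fairsharing.org"  # assumed base URL
TOKEN = "YOUR_TOKEN"                      # placeholder, obtained after signing in

def search_records(query, record_type=None):
    """Search FAIRsharing-style records matching a keyword."""
    headers = {"Accept": "application/json",
               "Authorization": f"Bearer {TOKEN}"}
    params = {"q": query}
    if record_type:
        params["fairsharing_registry"] = record_type  # assumed filter name
    resp = requests.get(f"{BASE_URL}/search/fairsharing_records",
                        headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example: which repositories mention controlled access?
results = search_records("controlled access", record_type="database")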
Expanding the Ecosystem: FAIR²
As a key use case, this session will also introduce FAIR², a new framework focused on enabling structured, reproducible, and AI-ready reuse of data. FAIR² responds to real-world challenges that remain even when data is technically FAIR—particularly those related to machine usability, provenance clarity, and contextual documentation. It introduces three new publication outputs: the FAIR² Data Article, FAIR² Data Package, and FAIR² Data Portal. These formats support deeply structured metadata (e.g., schema.org, Croissant, PROV-O), transparent provenance, and integration into modern data workflows.
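To give a flavour of the deeply structured metadata these outputs rely on, the sketch below assembles a minimal schema.org-flavoured JSON-LD dataset description with a simple PROV-style provenance statement. It is an illustrative assumption of the kind of record a FAIR² Data Package might carry, not the actual FAIR², Croissant, or PROV-O serialisation, and the example URLs are placeholders.

import json

# Illustrative sketch only: a minimal schema.org-flavoured JSON-LD dataset
# description with a basic provenance hint. Property choices and URLs are
# placeholders, not the actual FAIR² Data Package or Croissant layout.
dataset_metadata = {
    "@context": {
        "@vocab": "https://schema.org/",
        "prov": "http://www.w3.org/ns/prov#",
    },
    "@type": "Dataset",
    "name": "Example imaging dataset",
    "description": "Toy record illustrating machine-actionable metadata.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/images.csv",  # placeholder
    }],
    "prov:wasGeneratedBy": {
        "@type": "prov:Activity",
        "prov:used": "https://example.org/protocols/acquisition-v2",  # placeholder
    },
}

print(json.dumps(dataset_metadata, indent=2))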
As artificial intelligence becomes a common tool for knowledge discovery, synthesis, and prediction, FAIR² is designed to ensure datasets are not only discoverable but usable in AI systems. Its structured outputs provide the metadata and contextual scaffolding that intelligent agents and machine learning models require for interpreting, filtering, and applying data responsibly.
The presentation will explore how FAIR² can benefit from integration with FAIRsharing and FIPs—such as by referencing curated standards or reflecting community practices in structured metadata. These opportunities will be discussed as a pathway toward more coherent, machine-actionable, and ethically grounded data publication and reuse.
Session Format:
This 90-minute session will include three short presentations followed by open discussion and audience Q&A. The proposed agenda is:
Presenter 1 – GO FAIR Foundation (Erik Schultes): An overview of FAIR Implementation Profiles (20 minutes, including a 15-minute presentation)
Presenter 2 – FAIRsharing (Susanna Sansone): FAIRsharing and its role in FAIR assessment and assistance (20 minutes, including a 15-minute presentation)
Presenter 3 – Senscience (Sean Hill): FAIR²: Structured publication for reproducible and AI-ready data reuse (20 minutes, including a 15-minute presentation)
Discussion and Q&A – Integration, use cases, and future directions for FAIR data sharing (30 minutes)