Read on and find out! Here you can view topic-related background literature as well as dissemination activities of the FeatureCloud consortium itself. New publications are listed here as they become available. Open-access entries link directly to the full-text version; all others lead to the respective journal repository or publisher.
Publications of the FeatureCloud Consortium (count: 10)
Relevance for FeatureCloud: In this paper we describe a novel holistic approach to an automated medical decision pipeline that builds on the latest machine learning research, integrating the human-in-the-loop via an innovative, interactive, and exploration-based explainability technique called counterfactual graphs. We outline how multi-modal representations enable joint learning of a single outcome, how embeddings can be learned in a distributed manner securely and efficiently, and how to leverage counterfactual paths for intuitive explainability and causability. This approach could be used as a basis for novel medical Apps in the FeatureCloud AI Store.
Relevance for FeatureCloud: Of direct relevance for FeatureCloud, we demonstrate here the principle of federated machine learning. While the FeatureCloud prototype platform was emerging, we worked on stand-alone solutions for typical medical application scenarios, including “sPLINK”, a federated genome-wide association study (GWAS) tool.
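The principle behind federated learning can be illustrated with a minimal sketch of federated averaging: each site trains on its own data and only model parameters leave the site, which the coordinator aggregates. This is a generic, hypothetical toy example (logistic regression on random data), not the actual sPLINK or FeatureCloud implementation; all function and variable names are illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One participant: a few SGD steps of logistic regression on local data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on local samples only
    return w

def federated_average(local_weights, sizes):
    """Coordinator: aggregate local models, weighted by local sample count."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Toy round-trip: three sites, ten communication rounds.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]
global_w = np.zeros(3)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])
```

Note that only the parameter vectors are exchanged; the raw patient-level data never leaves the participating site, which is the core privacy argument of the approach.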
Relevance for FeatureCloud: This paper investigates attack scenarios and success rates for a malicious node in federated learning settings such as FeatureCloud, considering both sequential and parallel strategies, and thus provides a basis for estimating the risks posed by potential adversaries participating in federated learning.
Relevance for FeatureCloud: This paper estimates how well membership inference attacks perform, i.e. how reliably an attacker can determine whether a data sample was used in training a machine learning model. This also translates to federated learning, for example, whether there is an increased privacy risk if honest-but-curious participants can observe a number of exchanged model parameters. The results of this attack analysis fed into the risk analysis, will contribute to the mitigation strategies in WP2, and will directly influence the implementation of federated learning in WP7 of the FeatureCloud project.
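The basic intuition of membership inference can be sketched in a few lines: an overfit model tends to assign lower loss to samples it was trained on, so an attacker can guess "member" whenever the per-sample loss falls below a threshold. The following is a hypothetical toy illustration with simulated loss distributions, not the attack implementation evaluated in the paper; the loss scales are assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated per-sample losses: training-set members tend to have lower loss
# on an overfit model than non-members (scales are illustrative assumptions).
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

def loss_threshold_attack(losses, threshold=0.5):
    """Guess 'member' whenever the model's loss on the sample is low."""
    return losses < threshold

tp = loss_threshold_attack(member_losses).mean()     # true-positive rate
fp = loss_threshold_attack(nonmember_losses).mean()  # false-positive rate
advantage = tp - fp  # attack advantage over random guessing (0 = no leakage)
```

A positive advantage indicates that the model leaks membership information; mitigation strategies such as those considered in WP2 aim to push this advantage toward zero.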
Relevance for FeatureCloud: This paper provides the first proof of principle for AI-enhanced systems medicine prediction of drug repurposing against COVID-19. It is the first paper to systematically provide network medicine AI, still centralized here, which will eventually be extended into a federated, decentralized approach implemented in the FeatureCloud platform and dedicated to a global anti-COVID-19 network headed by the International Network Medicine Consortium.
Relevance for FeatureCloud: FeatureCloud is about the proof of feasibility and implementation of a new security and privacy technique in the medical domain: federated machine learning. This paper summarizes the state of the art in privacy-enhancing technology for the processing of biomedical data and provides the basis for the techniques that FeatureCloud needs to support in order to ensure privacy-preserving AI in biomedicine.
Relevance for FeatureCloud: In this paper, we introduce the notion of causability, which extends explainability and is of great importance for future human-AI interfaces in WP4. Such interfaces for explainable AI have to map technical explainability (a property of an AI, e.g. the heatmap of a neural network produced by, for instance, layer-wise relevance propagation) to causability (a property of a human, i.e. the extent to which the technical explanation is interpretable by a human), and to answer the question of why we need a ground truth, i.e. a technical framework for understanding. Here counterfactuals P(y_x | x′, y′) are important, with the typical activity of “retrospection” and questions such as “what if?” – this is highly relevant for re-tracing and making the results of FeatureCloud interpretable to experts in the medical domain.
Relevance for FeatureCloud: Advancements in Artificial Intelligence (AI) and Machine Learning (ML) are enabling new diagnostic capabilities. In this paper, we argue that the very first step before introducing AI/ML into diagnostic workflows is a deep understanding of how pathologists work. We developed a visualization concept, including (a) the sequence of the views observed by the pathologist (Observation Path), (b) the sequence of the spoken comments and statements of the pathologist (Dictation Path), (c) the underlying knowledge and experience of the pathologist (Knowledge Path), (d) information about the current phase of the diagnostic process, and (e) the current magnification factor of the microscope chosen by the pathologist. This is highly important for explainable AI in the context of WP4 and hence extremely valuable for the whole FeatureCloud project.
Relevance for FeatureCloud: In this paper, we investigate medical decision processes and the relevance of explainability in decision making. The first step towards implementing decision paths in systems is to retrace an experienced pathologist’s diagnosis-finding process. Recording a route through a landscape composed of human tissue in the form of a roadbook is one possible approach to collecting information on how diagnoses are found. The roadbook metaphor provides a simple schema that holds basic directions enriched with metadata regarding landmarks on a rally – in the context of pathology, such landmarks provide information on the decision-finding process. This is highly relevant for explainable AI in the context of WP4 and hence extremely valuable for the whole FeatureCloud project.