Successful completion of the lighthouse project Ophthalmo-AI

After three years, the Ophthalmo-AI project, which focused on intelligent, cooperative medical decision support in ophthalmology, was concluded in mid-March.

Four demonstrators (including an intelligent learning tool to support image diagnoses and a dashboard to support treatment decisions in therapy) were developed as part of the project and were evaluated very positively in the two participating clinics (Augenklinik Sulzbach, Augenzentrum am St. Franziskus-Hospital Münster).

In addition, two Master’s theses were completed as part of the project; one of the two students has already been hired as an IML employee, and the hiring of the second is planned, at the Oldenburg and Saarbrücken sites. Several publications have been published at or submitted to AI and medical conferences, and a new project on active learning with Google Germany as a partner builds on the content of Ophthalmo-AI.

Hasan Md Tusfiqur Alam and Md Abdul Kadir from IML show project contents from Ophthalmo-AI to an audience. Photo by: Felix Brüggemann, Copyright: Google.

Research Grant from Accenture

  • Duration: January 2024 – December 2024
  • Research topics: Medical Text Analysis, Machine Learning & Deep Learning, LLMs

This research aims to investigate ChatGPT’s natural language inference (NLI) capabilities in healthcare contexts, focusing on tasks like understanding clinical trial information and evidence-based health fact-checking. We will explore various Chain-of-Thought methods to improve ChatGPT’s reasoning abilities and integrate dynamic context analysis techniques for better inference accuracy. Our approach involves integrating a retrieval-augmented generation framework, utilizing mechanisms such as context analysis, multi-hop reasoning, and knowledge retrieval.
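The intended pipeline can be sketched roughly as follows; the term-overlap retriever, corpus, and prompt format are illustrative stand-ins, not the project's actual components (the real system would call an LLM such as ChatGPT on the assembled prompt):

```python
# Minimal sketch of a retrieval-augmented, chain-of-thought pipeline for
# evidence-based health fact-checking. All names below are illustrative.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive term overlap with the query (stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q_terms & set(p.lower().split())), reverse=True)
    return ranked[:k]

def build_cot_prompt(claim: str, evidence: list[str]) -> str:
    """Assemble a chain-of-thought prompt for an NLI-style verdict on the claim."""
    context = "\n".join(f"- {p}" for p in evidence)
    return (
        f"Evidence:\n{context}\n\n"
        f"Claim: {claim}\n"
        "Let's reason step by step, then answer ENTAILED, CONTRADICTED, or NOT ENOUGH INFO."
    )

corpus = [
    "The trial enrolled 120 adult patients with type 2 diabetes.",
    "Participants received metformin or placebo for 24 weeks.",
    "Imaging was performed at baseline and at week 24.",
]
prompt = build_cot_prompt("The trial studied children.",
                          retrieve("trial patients enrolled", corpus))
print(prompt)
```

In the full system, the retrieval step would feed a knowledge base of clinical trial documents, and the multi-hop reasoning would chain several such retrieve-and-prompt rounds.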

Siting Liang from IML presents the Autoprompt Project

Sponsored by

Two Papers on Large Vision Models for Healthcare accepted at NeurIPS 2023

Duy Nguyen, from the Interactive Machine Learning department, and colleagues from the University of Oldenburg, the International Max Planck Research School for Intelligent Systems, the University of Texas at Austin, the University of California San Diego, and other institutions presented a full paper and a workshop paper at NeurIPS 2023. NeurIPS is considered one of the premier global conferences in the field of machine learning. The conference took place in New Orleans, USA, from December 10th to 16th, 2023, with an overall acceptance rate of 26.1%.

The first paper, “LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching”, introduces LVM-Med, a novel family of large vision models trained on approximately 1.3 million medical images drawn from 55 publicly available datasets, encompassing various organs and modalities such as CT, MRI, X-ray, and ultrasound. The authors address the challenge of domain shift between natural and medical images, proposing a self-supervised contrastive learning algorithm for fine-tuning pre-trained models. This algorithm integrates pair-wise image similarity metrics, captures structural constraints through a graph-matching loss function, and allows efficient end-to-end training using modern gradient estimation techniques. LVM-Med is evaluated on 15 medical tasks, demonstrating superior performance compared to state-of-the-art supervised, self-supervised, and foundation models. Notably, for challenging tasks such as brain tumor classification or diabetic retinopathy grading, LVM-Med achieves a 6–7% improvement over previous vision-language models while using only a ResNet-50 backbone. Pre-trained models are made available to the community.

The second paper, “On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation”, accepted at the Workshop on Robustness of Zero/Few-shot Learning in Foundation Models (R0-FoMo), investigates the challenge of constructing robust models in medical imaging that generalize to test samples under distribution shifts. In particular, the authors compare the generalization performance of various pre-trained models after fine-tuning on the same in-distribution dataset, finding that foundation-based models exhibit better robustness than other architectures. The study also introduces a new Bayesian uncertainty estimation for frozen models and uses it as an indicator of the model’s performance on out-of-distribution (OOD) data, which proves beneficial for real-world applications. The experiments highlight the limitations of current indicators such as “accuracy on the line” or “agreement on the line”, commonly used in natural image applications, and underscore the promise of the introduced Bayesian uncertainty, where lower-uncertainty predictions tend to correspond to higher OOD performance.
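As a rough illustration of the underlying idea (not the paper's exact estimator), predictive uncertainty for a frozen model can be approximated by averaging class probabilities over several stochastic forward passes (e.g. test-time augmentation or MC dropout) and taking the entropy of the mean distribution; lower entropy would then be expected to go with better OOD performance:

```python
import numpy as np

def predictive_entropy(probs_T: np.ndarray) -> np.ndarray:
    """probs_T: (T, N, C) class probabilities from T stochastic forward passes.
    Returns the entropy of the mean predictive distribution per sample, shape (N,)."""
    mean_p = probs_T.mean(axis=0)                       # (N, C)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)

# Synthetic stand-ins for model outputs: peaked Dirichlet draws mimic a
# confident model, flat draws mimic an uncertain one.
rng = np.random.default_rng(0)
T, N = 50, 8
peaked = rng.dirichlet([20.0, 1.0, 1.0], size=(T, N))   # confident -> low entropy
flat = rng.dirichlet([1.0, 1.0, 1.0], size=(T, N))      # uncertain -> high entropy
print(predictive_entropy(peaked).mean(), predictive_entropy(flat).mean())
```

In the paper's setting, such a per-sample score would be computed on OOD test images and compared against the model's actual segmentation quality.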

Like many previous editions, NeurIPS 2023 featured a diverse program with several invited speakers, 2,773 accepted posters, 14 tutorials, and 58 workshops. Among these, Duy highlights workshops relevant to IML, for example Foundation Models for Decision Making; Optimal Transport and Machine Learning; XAI in Action: Past, Present, and Future Applications; and Medical Imaging meets NeurIPS. Furthermore, our department connected with leading machine learning and biomedical research groups at Harvard University and Stanford University and established collaborations for upcoming projects.

Duy Nguyen with the poster explaining his paper at NeurIPS 2023

References

Nguyen, Duy MH, et al. “LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching.” arXiv preprint arXiv:2306.11925 (2023).

Nguyen, Duy MH, et al. “On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation.” arXiv preprint arXiv:2311.11096 (2023).

Google Research Grant for End-to-End Active Learning Framework for Medical Image Annotation

  • Duration: January 2024 – December 2024
  • Collaboration Partner: Google
  • Research topics: Medical Image Analysis, Machine Learning & Deep Learning, Human-Machine Interaction

We develop a modularized active learning framework within the Google Cloud Platform, facilitating large-scale medical image annotation in a cost-effective manner while ensuring data sovereignty and privacy. Our work emphasizes a federated learning use case for healthcare data, taking into consideration data protection and security aspects. Our goal is to create an end-to-end platform for efficient annotation that benefits both clinicians and the research community.
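The core selection step of such an annotation framework can be sketched as uncertainty sampling; the margin heuristic and all names below are illustrative assumptions, not the framework's actual components:

```python
import numpy as np

def margin_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Smaller margin between the top-2 class probabilities = more uncertain."""
    part = np.sort(probs, axis=1)
    return 1.0 - (part[:, -1] - part[:, -2])

def select_for_annotation(probs: np.ndarray, unlabeled: list, budget: int) -> list:
    """Pick the `budget` most uncertain unlabeled images for expert annotation."""
    scores = margin_uncertainty(probs[unlabeled])
    order = np.argsort(scores)[::-1][:budget]           # most uncertain first
    return [unlabeled[i] for i in order]

# Mock model predictions for four unlabeled images (3 classes each).
probs = np.array([
    [0.98, 0.01, 0.01],   # confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.55, 0.30, 0.15],
    [0.34, 0.33, 0.33],   # most uncertain
])
picked = select_for_annotation(probs, unlabeled=[0, 1, 2, 3], budget=2)
print(picked)  # → [3, 1]
```

In the deployed system this loop would run inside the cloud platform: the model scores the unlabeled pool, clinicians annotate the selected images, and the model is retrained, repeating until the budget is spent.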

Hasan Md Tusfiqur Alam (left) and Md Abdul Kadir from IML with their architecture for the GCP Project

Sponsored by

Paper accepted at MICCAI 2023

The MICCAI Society is a professional organization dedicated to the fields of Medical Image Computing and Computer Assisted Interventions. It brings together researchers from various scientific disciplines such as computer science, robotics, physics, and medicine. The society is renowned for its annual MICCAI Conference, which allows for the presentation and publication of original research related to medical imaging. It has an acceptance rate of ~30%. Additionally, the society endorses and sponsors several scientific events each year.

This year, a paper titled “EdgeAL: An Edge Estimation Based Active Learning Approach for OCT Segmentation” was presented by Md Abdul Kadir, Hasan Md Tusfiqur Alam, and Daniel Sonntag. The paper focuses on the use of active learning algorithms for training models with limited data. The authors propose EdgeAL, a method that uses the edge information of unseen images as a priori information to measure uncertainty. This uncertainty is quantified by analyzing the divergence and entropy in model predictions across edges. The measure is then used to select superpixels for annotation. The effectiveness of EdgeAL was demonstrated on multi-class Optical Coherence Tomography (OCT) segmentation tasks, where the method achieved a 99% Dice score while reducing the annotation cost to 12%, 2.3%, and 3% of the labels on three publicly available datasets (Duke, AROI, and UMN). The source code for the method is available online.
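The scoring idea can be illustrated with a toy sketch, in which uniform grid "superpixels" and a plain gradient-magnitude edge detector stand in for the paper's actual edge estimation and superpixel pipeline:

```python
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """probs: (H, W, C) softmax output -> (H, W) per-pixel predictive entropy."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def edge_map(image: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edges (stand-in for the paper's edge estimation)."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def score_superpixels(probs: np.ndarray, image: np.ndarray, segments: np.ndarray) -> dict:
    """Mean edge-weighted entropy per superpixel id in `segments` (H, W)."""
    w = entropy(probs) * edge_map(image)
    return {int(s): float(w[segments == s].mean()) for s in np.unique(segments)}

H = W = 8
probs = np.full((H, W, 3), 1 / 3)                # maximally uncertain everywhere
image = np.zeros((H, W)); image[:, 6] = 1.0      # a vertical edge in the right half
segments = np.zeros((H, W), dtype=int); segments[:, 4:] = 1   # two grid "superpixels"
scores = score_superpixels(probs, image, segments)
print(scores[1] > scores[0])  # → True: the edge lies in superpixel 1
```

The highest-scoring superpixels are then handed to the annotator, concentrating labeling effort where the model is uncertain near anatomical boundaries.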

Diagram from the paper “EdgeAL: An Edge Estimation Based Active Learning Approach for OCT Segmentation”

Md Abdul Kadir from IML at the MICCAI 2023 conference in Vancouver, Canada

Poster presenting IML’s work (on the left) at MICCAI 2023

References

Kadir, M.A., Alam, H.M.T., Sonntag, D. (2023). EdgeAL: An Edge Estimation Based Active Learning Approach for OCT Segmentation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14221. Springer, Cham. https://doi.org/10.1007/978-3-031-43895-0_8

Paper accepted for publication in Medical Image Analysis Journal

We are happy to announce that our work “TATL: Task Agnostic Transfer Learning for Skin Attributes Detection” has been accepted by the prestigious journal Medical Image Analysis. It is a collaboration between DFKI, MPI, the University of California, Berkeley, and Oldenburg University, among others.

Existing skin attributes detection methods usually initialize with a pre-trained Imagenet network and then fine-tune on a medical target task. However, we argue that such approaches are suboptimal because medical datasets are largely different from ImageNet and often contain limited training samples. 

In this work, we propose Task Agnostic Transfer Learning (TATL), a novel framework motivated by dermatologists’ behaviors in the skincare context. Our method learns an attribute-agnostic segmenter that detects lesion skin regions and then transfers this knowledge to a set of attribute-specific classifiers to detect each particular attribute. Since TATL’s attribute-agnostic segmenter only detects skin attribute regions, it makes use of ample data from all attributes, allows transferring knowledge among features, and compensates for the lack of training data from rare attributes. The empirical results show that TATL not only works well with multiple architectures but also achieves state-of-the-art performance while enjoying minimal model and computational complexity (30–50 times fewer parameters). We also provide theoretical insights and explanations for why our transfer learning framework performs well in practice.
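The data trick at the heart of TATL can be shown in miniature: the attribute-agnostic stage trains on the union of all attribute masks, so even rare attributes contribute supervision, and each attribute-specific model then starts from a copy of the agnostic weights. The tiny masks and the "weights" dictionary below are illustrative, not the paper's architecture:

```python
import numpy as np

# Toy per-attribute annotation masks (1 = attribute present at that pixel).
attribute_masks = {
    "pigment_network": np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]]),
    "negative_network": np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]]),
}

# Union mask: ample supervision for the attribute-agnostic segmenter.
union_mask = np.clip(sum(attribute_masks.values()), 0, 1)

# Stand-in for pretrained agnostic weights, copied into each attribute head
# before per-attribute fine-tuning.
agnostic_weights = {"encoder": np.ones(3)}
per_attribute = {name: {"encoder": agnostic_weights["encoder"].copy()}
                 for name in attribute_masks}

print(int(union_mask.sum()))  # → 3 labeled pixels, vs. 2 and 1 per single attribute
```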

The figure below demonstrates the usefulness of TATL when predicted lesion skin regions (predicted union) could cover both large regions as in Pigment Network and small disconnected regions as in Negative Network.

Projects: pAItient (BMG), Ophthalmo-AI (BMBF)

Assessing Cognitive Test Performance Using Automatic Digital Pen Features Analysis at UMAP’21

Double or nothing – Alexander Prange will present a paper on “Assessing Cognitive Test Performance Using Automatic Digital Pen Features Analysis” at this year’s ACM UMAP conference on User Modeling, Adaptation and Personalization. In contrast to the paper presented at this year’s CHI, we analyze cognitive assessments solely based on digital pen features, without additional content analysis.

German Standardization Roadmap AI

The German Standardization Roadmap on Artificial Intelligence was published in November 2020. In medicine, secure framework conditions have to be created, taking into account the legal context, the economy, technical aspects, acceptance, privacy, data security, and ethical aspects.

In the pAItient project, where DFKI is responsible for the secure framework conditions of AI systems, we have now published our first paper that explicitly addresses these aspects and the standardisation needs of image preprocessing guidelines, which are naturally subject to the GDPR (DSGVO).

Tomorrow, “The effects of masking in melanoma image classification with CNNs towards international standards for image preprocessing” will be presented at EAI MedAI 2020 – the International Symposium on Medical Artificial Intelligence.

KI-Para-Mi Kick-off

KI-Para-Mi: Kick-off of the KI-Para-Mi project (BMBF) as a webinar in Munich and Saarbrücken. In KI-Para-Mi, we develop an intelligent personnel planning system for flexible shift scheduling in nursing, which above all takes into account the interests of the employees.

Multimodal Multisensor Interface Trilogy published


The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, hand and body gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces. This three-volume handbook is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

Volume 1 

Volume 2 

Volume 3 

Daniel Sonntag becomes member of the German Delegation of Artificial Intelligence of the Canadian German Chamber of Industry and Commerce Inc.

Daniel Sonntag has become a member of the German Delegation of Artificial Intelligence of the Canadian German Chamber of Industry and Commerce Inc.

Main scientific partner institutes include the Future Skills Centre – Ryerson University, the University of Toronto and the Vector Institute, main industrial partners include Element.AI, Zoom.AI, and Roche Canada.

An invited talk was given at The Future of Work & AI Conference, where experts from Germany and Canada discussed the newest developments in this field; the conference took place on September 18th at the Ontario Investment and Trade Centre.

Contact:
Canadian German Chamber of
Industry and Commerce Inc.
480 University Ave, Suite 1500
Toronto, ON, M5G 1V2 Canada

The Handbook of Multimodal-Multisensor Interfaces Vol. 2

The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition. Volume 2. EDITORS: Oviatt, Sharon; Schuller, Bjorn; Cohen, Philip R.; Sonntag, Daniel; Potamianos, Gerasimos; Krueger, Antonio. PUBLISHER: Morgan & Claypool / ACM Books. This is a THREE volume series that presents the definitive state of the art and future directions of the field of multimodal and multi-sensor interfaces.

handbook-cover

KDI closing event

KDI: The KDI project’s closing event took place on September 29, 2017 in the Berlin Museum of Medical History. The ruin of the former Rudolf Virchow Lecture Hall, with its historic charm, provided a unique event location and made for an unforgettable experience.

Programme:

  • 10:00 – Introduction to clinical data intelligence
  • 11:00 – Organisation of clinical data, data security
  • 12:00 – Application scenarios
  • 15:00 – External speakers
  • 16:00 – Official Demo Event / Internal meeting with DLR and BMWi
  • 17:00 – Farewell

The Handbook of Multimodal-Multisensor Interfaces

Interakt: The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations. Volume 1 EDITORS: Oviatt, Sharon; Schuller, Bjorn; Cohen, Philip R; Sonntag, Daniel; Potamianos, Gerasimos; Krueger, Antonio PUBLISHER: Morgan and Claypool/ACM Press. This is a THREE volume series that presents the definitive state of the art and future directions of the field of Multimodal and Multi-Sensor interfaces.

handbook-cover

Kognit Science Events in 2015

KDI: featured in “The science events that shaped 2015” (Nature)

US President Barack Obama announced the Precision Medicine Initiative:
Tailoring treatments to individual patients has long been a goal in biomedicine, but US President Barack Obama gave this effort a big boost with his announcement in January of the Precision Medicine Initiative (PMI). As part of the US$215-million programme, which will award its first grants next year, the NIH and partner organizations will recruit one million people across the country, collecting genetic information, health records and even data from electronic health-monitoring devices. Researchers will use the information to look for links between disease risk and genetic and environmental factors.

Medical CPS architecture

EIT MCPS: Presentation of full Medical CPS architecture at CBMS 2014 in New York, Mount Sinai Hospital
The Medical Cyber-Physical Systems Activity at EIT: A Look under the Hood
Proceedings of the 27th International Symposium on Computer-Based Medical Systems (CBMS), IEEE 2014
Contact: Daniel Sonntag, DFKI

ERmed: Evaluation

ERmed: First evaluation of the ability to accurately focus on virtual icons in each of several focus planes despite having only monocular depth cues, confirming the viability of these methods for interaction.

ERmed Evaluation ELTE

ERmed: Second evaluation at ELTE in Budapest, focusing on self-calibrating eye tracking, robust gaze-guided object recognition, and how “artificial salience” can modulate the gaze (un)consciously.

ERmed Eye Gaze ELTE

ERmed: In Budapest, Takumi and Jason work on a combination of eye gaze and dynamic text management that allows user-centric text to move along a user’s path in real time.

First Demo of THESEUS MEDICO Radspeech at ISI Erlangen


Professor Alexander Cavallaro’s vision of the educated lymphoma patient of the future is very different from today’s reality, in which a patient carries the computed tomography (CT) images of his lungs and abdomen home on a CD or DVD after a routine radiological examination.

How semantic technologies can be applied to medicine was illustrated by the THESEUS MEDICO research project, which brought together radiologists from the University of Erlangen, experts from the German Research Center for Artificial Intelligence (DFKI), as well as researchers from Siemens, the Fraunhofer Society, and TUM (Technische Universität München). MEDICO was one of several use cases of the THESEUS research program, which was initiated by the German Federal Ministry of Economics and Technology in 2007 in order to support technologies for an “Internet of Services”.

Full Text

Radspeech, the radiology workstation on the iPad (in German)