AI-Based Symptom Assessment Applications in Healthcare

The presentation summarizes the status of work within TG-Symptom for discussion during the e-meeting. It covers AI-based symptom assessment mobile/web applications that provide pre-clinical triage and differential diagnosis, updates on the group's benchmarking approach, and the challenges of case encoding and ontology usage.



Presentation Transcript


  1. FGAI4H-N-021-A03
     E-meeting, 15-17 February 2022
     Source: TG-Symptom Topic Driver
     Title: Att.3 Presentation (TG-Symptom)
     Purpose: Discussion
     Contact: Henry Hoffmann
     E-mail: henry.hoffmann@ada.com
     Abstract: This PPT summarizes the status of work within TG-Symptom, for presentation and discussion during the meeting.

  2. Meeting N Update for the Topic Group Symptom Assessment E-meeting, 15-17 February 2022

  3. AI-based Symptom Assessment
     - Mobile/web applications, sometimes called symptom checkers, that:
     - Allow users to enter (INPUT):
       - Patient information (age, sex, ...)
       - Current presenting complaints (symptoms, free text, ...)
     - Then engage in a dialog, similar to a doctor, collecting (INPUT):
       - Additional symptoms, findings, factors, attributes (lab, imaging, genetics, ...)
     - To finally provide the user with (OUTPUT):
       - Pre-clinical triage (emergency, see doctor today, self-care, ...)
       - Differential diagnosis (disease A 93%, disease B 73%; e.g. in ICD-10 or SNOMED CT)
       - Additional diagnostic tests to perform, treatment advice, explanations
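
To make the INPUT/OUTPUT contract above concrete, the following is a minimal Python sketch of how a case and an assessment could be represented. All class and field names (CaseInput, AssessmentOutput, presenting_complaints, ...) are hypothetical illustrations, not the TG-Symptom data model.

```python
# Minimal sketch of a symptom-checker input/output pair.
# All names here are hypothetical; this is not the TG-Symptom schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CaseInput:
    """Patient profile plus presenting and elicited evidence (INPUT)."""
    age: int
    biological_sex: str                       # e.g. "female"
    presenting_complaints: List[str]          # free text or coded symptoms
    additional_evidence: Dict[str, str] = field(default_factory=dict)


@dataclass
class AssessmentOutput:
    """Pre-clinical triage plus a ranked differential diagnosis (OUTPUT)."""
    triage: str                               # e.g. "see doctor today"
    differential: List[Dict[str, float]]      # e.g. [{"ICD-10 K35.8": 0.93}, ...]


example = CaseInput(
    age=34,
    biological_sex="female",
    presenting_complaints=["abdominal pain"],
    additional_evidence={"fever": "present", "nausea": "absent"},
)
```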

  4. Journey
     - Meetings A to C (Geneva, 25-27 September 2018): A-020 "Towards a potential AI4H use case: diagnostic self-assessment apps"; Topic Group created
     - Meeting F (Zanzibar, 2-5 September 2019): First benchmarking with toy AI & toy data; Minimal Minimal Viable Benchmarking (MMVB) 1.0
     - Meetings G to K: Adding technical details to benchmarking and model; MMVB 2.2 work completed
     - Meetings K to N: Work on approach and tools for encoding benchmarking cases using a shared ontology (ongoing)
     - Meetings M to N: Cooperation with audit trials initiative

  5. Benchmarking Approach
     - General approach:
       - Started with toy AI and toy data to build a benchmarking system
       - Stepwise increase of complexity
       - Learn all the relevant details for the real benchmarking
     - TG-Symptom benchmarking platform MMVB 2.2 (finished):
       - Django backend, JS React frontend, annotation tool
       - Toy AIs: Ada, Babylon, Infermedica, Your.MD (all cloud-hosted)
     - Implementation work paused: given the impressive progress by the open-code initiative, we see this as the best option for the TG-Symptom benchmarking
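
As an illustration of this stepwise approach (several toy AIs run over the same toy case set and compared on a simple metric), here is a minimal sketch. The ToyAI interface and the top-1 accuracy metric are assumptions for illustration only; they are not the MMVB 2.2 API or the agreed TG-Symptom metrics.

```python
# Minimal sketch of benchmarking several toy AIs on a shared toy case set.
# The ToyAI interface and the metric are illustrative assumptions.
from typing import Callable, Dict, List

ToyCase = Dict[str, object]                       # {"input": {...}, "expected_condition": "..."}
ToyAI = Callable[[Dict[str, object]], List[str]]  # returns a ranked list of condition ids


def top1_accuracy(ai: ToyAI, cases: List[ToyCase]) -> float:
    """Fraction of cases whose expected condition is the AI's first suggestion."""
    hits = sum(1 for c in cases if ai(c["input"])[:1] == [c["expected_condition"]])
    return hits / len(cases) if cases else 0.0


def benchmark(ais: Dict[str, ToyAI], cases: List[ToyCase]) -> Dict[str, float]:
    """Run every registered toy AI over the same case set and collect the metric."""
    return {name: top1_accuracy(ai, cases) for name, ai in ais.items()}
```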

  6. Case Encoding Challenge
     - Every Topic Group member has its own benchmarking system:
       - Essential for development work
       - Similar metrics, but completely different case encodings of symptoms
     - Core task of the Topic Group: agree on how to encode case data for benchmarking
       - Simple for diseases
       - Possible for symptoms/findings
       - Difficult for attributes
     - Options:
       - Create a new ontology -> several person-years
       - Use an existing one -> incomplete, inconsistent, redundant -> decision to use SNOMED CT anyway
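
For illustration only, a single benchmarking case encoded with SNOMED CT concept identifiers might look roughly like the sketch below. The SNOMED CT codes shown are real concepts, but the surrounding structure is a hypothetical example, not the encoding agreed by the Topic Group.

```python
# Hypothetical case structure using SNOMED CT concept identifiers.
# The codes are real SNOMED CT concepts; the layout is illustrative only.
case = {
    "case_id": "abdominal-pain-001",
    "profile": {"age": 34, "biological_sex": "female"},
    "evidence": [
        {"id": "21522001",  "name": "Abdominal pain", "state": "present"},
        {"id": "386661006", "name": "Fever",          "state": "present"},
        {"id": "422587007", "name": "Nausea",         "state": "absent"},
    ],
    "expected_condition": {"id": "74400008", "name": "Appendicitis"},
    "expected_triage": "see doctor today",
}
```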

  7. Annotation Tool Status
     - Focus on attribute support:
       - For the last meeting: adding SNOMED symptoms
       - Since the last meeting: adding symptom attributes using SNOMED CT attribute relations
       - Some initial testing by TG doctors
     - Other new features:
       - Show usage of symptoms
       - Mark inappropriate symptoms
     - Extending the test case set:
       - Topic Group doctors collected/created more abdominal-pain-related case vignettes
         - For testing the annotation tool
         - For testing AI ontology mapping and benchmarking
       - Research on other sources of unencoded cases
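
A rough sketch of what attaching attributes to a symptom via SNOMED CT attribute relations can look like follows. Finding site (363698007), Severity (246112005) and Severe (24484000) are real SNOMED CT concepts; the data layout itself is an illustrative assumption, not the annotation tool's internal format, and the body-site value is left as a placeholder.

```python
# Hypothetical representation of a symptom with SNOMED CT attribute relations.
# The layout is illustrative only; it is not the annotation tool's format.
symptom = {
    "id": "21522001",                 # Abdominal pain (finding)
    "name": "Abdominal pain",
    "state": "present",
    "attributes": [
        {"relation": "363698007",     # Finding site (attribute)
         "value": "<SNOMED CT body structure id>"},  # placeholder for the site concept
        {"relation": "246112005",     # Severity (attribute)
         "value": "24484000"},        # Severe (severity modifier)
    ],
}
```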

  8. Audit Group Cooperation
     - Audit trial participation:
       - Goal: learn if/how TG-Symptom could use the OCI evaluation platform and process for benchmarking
       - Start with toy AI + data from the MMVB 2.2 system
       - Accept that we cannot follow the proposed timeline
     - Audit team: Carolin Prabhu (Regulatory Expert), Bastiaan Quast (Software Platform Expert / Audit Developer), Eva Weicken (Clinical Expert), Marta Lemanczyk (Ethical Expert), Frank Klawonn (ML Expert)
     - Kickoff workshop: introduction of the audit group to the TG specifics
     - Audit verification checklist work:
       - More difficult than expected due to the non-ML nature of the systems
       - Additional list with >100 questions needed to judge system performance

  9. Audit Group Cooperation
     - TG-Symptom challenge configuration:
       - Step 1: text-file solution submission
       - Implemented TG-Symptom-specific metrics
       - Adjusted some of the static content (not finished)
     - MMVB 2.2 helper scripts:
       - Export a 1000k cases dataset generated by our MMVB 2.2 system
       - Generate from it: audit annotations, AI inputs, AI submissions
       - Status: script published in the aiaudit GitHub repository https://github.com/aiaudit-org/trial-audits-team-a-tg-symptoms
       - Local evaluation works
     - Uploaded first challenge version to aiaudit.org:
       - Approved today; we expect to see numbers this week
     - Annotation package integration:
       - Work with OCI to call our annotation tool as case editor (similar to the MRI annotation tools)
       - Separating the case list and case editor (for annotation package integration)
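
Conceptually, the helper scripts split an exported case set into the three artifacts the audit platform expects (annotations, AI inputs, AI submissions). The sketch below illustrates that split under assumed field and file names; the actual implementation is the script published in the aiaudit GitHub repository linked above.

```python
# Conceptual sketch of splitting exported cases into audit artifacts.
# Field names, file names and the toy_ai callable are assumptions; see the
# published helper script in the aiaudit repository for the real code.
import json
from pathlib import Path
from typing import Callable, Dict, List


def generate_audit_artifacts(cases: List[Dict],
                             toy_ai: Callable[[Dict], List[str]],
                             out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)

    # Ground-truth annotations: expected condition and triage per case.
    annotations = [{"case_id": c["case_id"],
                    "expected_condition": c["expected_condition"],
                    "expected_triage": c["expected_triage"]} for c in cases]

    # AI inputs: only the evidence the AI is allowed to see (no ground truth).
    ai_inputs = [{"case_id": c["case_id"],
                  "profile": c["profile"],
                  "evidence": c["evidence"]} for c in cases]

    # AI submissions: the toy AI's ranked differential for each input.
    submissions = [{"case_id": i["case_id"], "differential": toy_ai(i)}
                   for i in ai_inputs]

    for name, payload in [("annotations", annotations),
                          ("ai_inputs", ai_inputs),
                          ("ai_submissions", submissions)]:
        (out_dir / f"{name}.json").write_text(json.dumps(payload, indent=2))
```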

  10. Non-Technical Work Status
      - TDD-related work:
        - Besides Chapter 8 (regulatory), more or less ready for now
        - No work done since meeting M (apart from the meeting update)
        - Next update once we have a benchmarking system with real data and a real AI
        - Given that collecting a dataset will take time, we expect to be part of the mid-2023 submission

  11. Next Steps
      - Continue annotation tool work:
        - Further testing of the new attribute annotation feature by TG doctors
        - Improve usability (e.g. for finding site) based on feedback
      - Audit/OCI integration:
        - Connection to the annotation package, i.e. use the annotation tool as editor for TG-Symptom cases
        - Test the annotation package workflows with the collected case vignettes
        - Use annotated cases in benchmarking
        - Docker-based parallel evaluation of online AIs
        - Connection to TG-Symptom AIs: mapping to the toy AI, mapping to real AIs
      - Outreach:
        - Meeting on possible cooperation with TG-MSK (Yura Perov) in the next weeks

  12. General Status of the Topic Group
      - Topic Group members:
        - Companies (18+2): 1DOC3, Ada, Babylon, Baidu, Barkibu, Buoy, Deepcare, Flo, Infermedica, Inspired Ideas, Isabel Healthcare, Kahun, mfine, MyDoctor, Nivi, PNP, Symptify, Visiba Care, xund.ai, healthily (Your.MD)
        - Independent contributors (7): Reza Jarral, Thomas Neumark, Muhammad Murhaba, Pritesh Mistry, Alejandro Osornio, Salman Razzaki, Yura Perov (this morning: interest by Mr. Salim Diwani)
        - Audit group: Carolin Prabhu, Bastiaan Quast, Eva Weicken, Marta Lemanczyk, Frank Klawonn
      - Mailing list: fgai4htgsymptom@lists.itu.int (109+8)
      - 10 online meetings since meeting M, plus a weekly dev stand-up (all with minutes/protocols in SharePoint)

  13. Thank you! WHO/ITU FG AI4H TG Symptom Assessment Meeting N Update E-meeting, 15-17 February 2022
