AI-Assisted Acute Myeloid Leukemia Evaluation With the Leukemia End-to-End Analysis Platform (LEAP) Versus Clinician-Only Assessment
LEAP
Prospective Studies on Artificial Intelligence for Cancer Pathology Evaluation
Interventional · 10 participants targeted · 1 country
Brief Summary
This study will test whether artificial intelligence (AI) can help doctors diagnose a rare blood cancer called acute promyelocytic leukemia (APL) more quickly and accurately. Doctors usually examine bone marrow samples under a microscope to make this diagnosis, but it can be challenging and time-consuming. In this study, doctors will review bone marrow samples under three different conditions:
- Unaided Review: without AI assistance.
- AI as Double-Check: AI-generated evaluation shown after the doctor makes an initial decision.
- AI as First Look: AI-generated evaluation shown at the start of the review.

Doctors will be randomly assigned to different orders of these three conditions. This design will allow us to compare how AI support affects diagnostic accuracy, speed, and confidence.
Study Timeline
Key milestones and dates
First Submitted (Completed)
Initial submission to the registry
September 8, 2025
Study Start (Completed)
First participant enrolled
September 9, 2025
First Posted (Completed)
Study publicly available on registry
October 2, 2025
Primary Completion (Completed)
Last participant's last visit for primary outcome
October 10, 2025
Study Completion (Completed)
Last participant's last visit for all outcomes
October 10, 2025
Outcome Measures
Primary Outcomes (1)
Diagnostic performance of APL detection
Performance of clinicians (unaided and AI-assisted) in detecting APL, measured as accuracy, sensitivity, specificity, positive predictive value, and negative predictive value.
Periprocedural (at the time of slide review)
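The primary-outcome metrics can be computed from paired per-slide labels. The sketch below is illustrative only (not the study's actual analysis code); the function name and the convention 1 = APL, 0 = non-APL are assumptions:

```python
# Illustrative sketch of the primary-outcome metrics, computed from a
# reader's per-slide calls against the confirmed diagnoses.
# Convention (assumed): 1 = APL, 0 = non-APL.

def diagnostic_metrics(truth, calls):
    """Accuracy, sensitivity, specificity, PPV, and NPV from paired binary labels."""
    tp = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 1)
    tn = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 0)
    fp = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 1)
    fn = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 0)
    nan = float("nan")
    return {
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": tp / (tp + fn) if tp + fn else nan,  # true positive rate
        "specificity": tn / (tn + fp) if tn + fp else nan,  # true negative rate
        "ppv": tp / (tp + fp) if tp + fp else nan,          # positive predictive value
        "npv": tn / (tn + fn) if tn + fn else nan,          # negative predictive value
    }

# Example: six slides, ground truth vs. one reader's calls
truth = [1, 1, 1, 0, 0, 0]
calls = [1, 1, 0, 0, 0, 1]
m = diagnostic_metrics(truth, calls)
```

Comparing these metrics between the unaided and AI-assisted conditions, slide sets held fixed per reader, is what the crossover design enables.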
Secondary Outcomes (6)
Time to diagnosis
Periprocedural (at the time of slide review)
Inter-observer variability
Periprocedural (at the time of slide review)
Concordance between AI predictions and clinicians' diagnoses
Periprocedural (at the time of slide review)
Decision-change rates
Periprocedural (at the time of slide review)
Net benefit after AI exposure
Periprocedural (at the time of slide review)
One additional secondary outcome is registered but not listed here.
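The registry does not specify which statistics quantify inter-observer variability or AI-clinician concordance; Cohen's kappa is one common choice for paired binary calls and is sketched below purely as an illustration (function name and data are assumptions):

```python
# Hedged sketch: Cohen's kappa as one possible measure of inter-observer
# variability (reader vs. reader) or concordance (AI vs. reader).
# Convention (assumed): 1 = APL, 0 = non-APL.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' binary calls."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    # Expected agreement if both raters called APL independently
    # at their own marginal rates
    pa1, pb1 = sum(a) / n, sum(b) / n
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (observed - expected) / (1 - expected)

# Example: two readers over eight slides
reader1 = [1, 1, 0, 0, 1, 0, 0, 1]
reader2 = [1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(reader1, reader2)
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, so lower values across reader pairs indicate higher inter-observer variability.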
Study Arms (2)
Unaided Review First, Then AI-Assisted Review
ACTIVE COMPARATORReaders first complete Block X (Unaided) on their assigned subset SX (34 slides). They then complete Block Y (AI-Assisted) on two separate subsets: SY1 (34 slides; AI as Double-Check) and SY2 (34 slides; AI as First Look). Within Block Y, the order of Y1 and Y2 is randomized. For each reader, SX, SY1, and SY2 are disjoint and stratified by APL status.
AI-Assisted Review First, Then Unaided Review
ACTIVE COMPARATORReaders first complete Block Y (AI-Assisted) on two assigned subsets: SY1 (34 slides; AI as Double-Check) and SY2 (34 slides; AI as First Look), with the order of Y1 and Y2 randomized. They then complete Block X (Unaided) on subset SX (34 slides). For each reader, SX, SY1, and SY2 are disjoint and stratified by APL status.
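The arm descriptions require each reader to receive three disjoint 34-slide subsets (SX, SY1, SY2) stratified by APL status. A minimal sketch of such a partition follows; the function name, seed, and slide counts are illustrative assumptions, not the study's actual randomization code:

```python
# Hedged sketch: splitting a slide pool into three disjoint subsets
# (SX, SY1, SY2), each with the same number of APL slides, per the
# stratification described in the study arms. Illustrative only.
import random

def partition_slides(apl_ids, non_apl_ids, per_subset=34, seed=0):
    """Return three disjoint subsets of slide IDs, stratified by APL status."""
    rng = random.Random(seed)
    apl, non = list(apl_ids), list(non_apl_ids)
    rng.shuffle(apl)
    rng.shuffle(non)
    a = len(apl) // 3        # APL slides per subset
    b = per_subset - a       # non-APL slides per subset
    names = ("SX", "SY1", "SY2")
    return {
        name: apl[i * a:(i + 1) * a] + non[i * b:(i + 1) * b]
        for i, name in enumerate(names)
    }

# Example pool: 30 APL and 72 non-APL slides -> 10 APL + 24 non-APL per subset
apl = [f"apl_{i}" for i in range(30)]
non = [f"neg_{i}" for i in range(72)]
subsets = partition_slides(apl, non)
```

Because the slices into the shuffled lists do not overlap, the three subsets are disjoint by construction, matching the requirement that no reader sees the same slide in two conditions.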
Interventions
Readers first complete Block Y (AI-Assisted) on two assigned subsets: SY1 (up to 40 slides; AI as Double-Check) and SY2 (up to 40 slides; AI as First Look), with the order of Y1 and Y2 randomized. They then complete Block X (Unaided) on subset SX (up to 40 slides). For each reader, SX, SY1, and SY2 are disjoint and stratified by APL status.
Eligibility Criteria
Slide inclusion criteria:
- Wright-Giemsa-stained bone marrow aspirate smears
- Final diagnosis confirmed through molecular testing in conjunction with expert pathology evaluation

Slide exclusion criteria:
- Poor-quality or unreadable slides
- Cases used in AI training

Reader eligibility criteria:
- Board-certified or board-eligible pathologists, or board-certified/board-eligible hematologists who routinely make hematopathology diagnoses in their clinical practice
- Willingness to complete both unaided and AI-assisted review sessions
Contact the study team to confirm eligibility.
Sponsors & Collaborators
- Harvard Medical School (HMS and HSDM) (lead)
- National Taiwan University Hospital (collaborator)
- Far Eastern Memorial Hospital (collaborator)
- Taipei Veterans General Hospital, Taiwan (collaborator)
- Brigham and Women's Hospital (collaborator)
- Massachusetts General Hospital (collaborator)
Study Sites (1)
Harvard Medical School
Boston, Massachusetts, 02115, United States
Study Design
- Study Type
- INTERVENTIONAL
- Phase
- NOT APPLICABLE
- Allocation
- RANDOMIZED
- Masking
- TRIPLE
- Who Masked
- CARE PROVIDER, INVESTIGATOR, OUTCOMES ASSESSOR
- Purpose
- DIAGNOSTIC
- Intervention Model
- CROSSOVER
- Sponsor Type
- OTHER
- Responsible Party
- PRINCIPAL INVESTIGATOR
- PI Title
- Associate Professor of Biomedical Informatics
Study Record Dates
First Submitted
September 8, 2025
First Posted
October 2, 2025
Study Start
September 9, 2025
Primary Completion
October 10, 2025
Study Completion
October 10, 2025
Last Updated
October 31, 2025
Record last verified: 2025-10
Data Sharing
- IPD Sharing
- Will not share