Research is a team sport.

Below are highlights from more than a decade of research that Vega Health's co-founder and CEO Dr. Mark Sendak conducted with colleagues at the Duke Institute for Health Innovation (DIHI) and Health AI Partnership (HAIP): the research that forms the foundation of Vega Health's platform and process.

AI as an intervention: Improving clinical outcomes relies on a causal approach to AI development and validation

Journal of the American Medical Informatics Association, 2025

How to design healthcare AI to actually improve patient outcomes. Dr. Mark Sendak and leaders from the Machine Learning for Health Care (MLHC) community provide a framework that helps health systems separate genuinely beneficial AI from expensive digital tools that look impressive but don't deliver real clinical value. The framework argues for treating AI like a medical intervention, subject to the same rigorous validation standards applied to new drugs or procedures.

Strengthening the use of artificial intelligence within healthcare delivery organizations: balancing regulatory compliance and patient safety

Journal of the American Medical Informatics Association, 2025

As AI regulation explodes across states and federal agencies, health systems face a critical challenge: how to implement AI innovation to advance patient safety without triggering costly regulatory violations. Dr. Mark Sendak and leaders from Health AI Partnership (HAIP) provide a roadmap that allows health systems to deploy AI safely while staying ahead of evolving regulatory requirements, helping organizations avoid legal and financial risks.

The algorithm journey map: a tangible approach to implementing AI solutions in healthcare

NPJ Digital Medicine, 2024

Dr. Sendak and his colleagues at DIHI developed the Algorithm Journey Map, a systematic, step-by-step account of the multi-year journey to implement the Duke Sepsis Watch AI solution in clinical care. The framework helps health systems navigate the complex path from proof of concept to production: this visual framework identifies key stakeholders, anticipates implementation challenges, and establishes success metrics that turn AI pilots into predictable operational successes.

A framework for healthcare delivery organizations to mitigate the risk of AI solutions worsening health inequities

PLOS Digital Health, 2024

Health systems need proactive protection against algorithmic discrimination. Dr. Sendak and leaders from the Health AI Partnership (HAIP) developed the Health Equity Across the AI Lifecycle (HEAAL) framework to provide practical tools and a systematic approach to identify and eliminate AI bias before deployment, ensuring AI tools promote equitable care across all patient populations while protecting organizations from legal and reputational risks.

Meeting the Moment: Addressing Barriers and Facilitating Clinical Adoption of Artificial Intelligence in Medical Diagnosis

National Academy of Medicine, 2022

Studies show that 80% of healthcare AI projects fail to achieve meaningful adoption, wasting millions in investment and leaving health systems frustrated with broken promises from AI vendors. This landmark report provides an actionable roadmap that successful health systems use to overcome these obstacles. Dr. Sendak and an interdisciplinary team working on behalf of the National Academy of Medicine reveal the five critical factors that separate successful AI deployments from expensive failures.

Real-world integration of a sepsis deep learning technology into routine clinical care: implementation study

JMIR Medical Informatics, 2020

Sepsis Watch represents one of the most successful healthcare AI deployments ever documented, demonstrating that AI can deliver measurable clinical results. Dr. Sendak and his colleagues at DIHI led this groundbreaking project at Duke Health, which continuously monitors patients for sepsis risk and has helped save lives. This paper provides the complete implementation playbook to take AI from the research lab to live clinical deployment with proven patient benefit.

A Path for Translation of Machine Learning Products into Healthcare Delivery

European Medical Journal Innovations, 2020

The biggest challenge in healthcare AI isn't developing algorithms; it's translating research breakthroughs into operational tools that deliver real-world impact in clinical care. Dr. Sendak and his colleagues at DIHI developed a translation framework that bridges this critical gap, providing a structured methodology for transforming promising AI research into practical solutions that integrate with clinical workflows, staff capabilities, and organizational ways of working.

Presenting machine learning model information to clinical end users with model facts labels

NPJ Digital Medicine, 2020

The biggest barrier to AI adoption isn't technical; it's getting doctors and nurses to trust and consistently use AI tools in their daily practice. Dr. Sendak and his colleagues at DIHI developed the first Model Facts Label ever used for an AI solution in healthcare. The Model Facts framework advances front-line clinicians' use of AI with a standardized way to communicate an AI tool's capabilities, limitations, and appropriate use cases to healthcare workers. Like nutrition labels on food, these labels build clinician confidence and drive consistent utilization across care teams, turning AI skeptics into effective users.

Integrating a machine learning system into clinical workflows: qualitative study

Journal of Medical Internet Research, 2020

What really happens when AI meets the messy reality of clinical workflows? Through detailed interviews with healthcare workers during actual AI implementation, Dr. Sendak and his colleagues at DIHI reveal the hidden challenges, unexpected resistance, and critical success factors that determine whether AI becomes a workflow enhancer or an expensive digital disruption.

Do no harm: a roadmap for responsible machine learning for health care

Nature Medicine, 2019

With over 1,000 citations, this is the most influential paper in healthcare AI ethics and the blueprint that leading health systems use to ensure responsible AI deployment. Dr. Sendak and leaders from the Machine Learning for Health Care (MLHC) community developed a framework that addresses the fundamental question every health system asks: how do we implement AI that enhances patient care while protecting our organization from bias, liability, and unintended harm? This paper established the gold-standard principles that guide ethical AI development and help health systems navigate the complex balance between innovation and patient safety.