Bias in Healthcare AI: Why "Smart" Isn't Always Safe
The explosion of artificial intelligence (AI) in healthcare is marketed as a revolution: faster diagnoses, smarter drug discovery, personalized treatments, and better outcomes. From big tech's bold promises to hospital dashboards and direct-to-consumer apps, we're told AI will "fix" medicine.
But as with any technological leap, uncomfortable truths lie beneath the hype - truths that the orthomolecular and integrative medicine communities must confront. One such truth is as old as computing itself: GIGO - Garbage In, Garbage Out.
GIGO Still Rules in the Age of AI
Artificial intelligence is only as "intelligent" as the data it's trained on and the assumptions behind it. If the inputs are flawed - biased, incomplete, or driven by commercial interests - the outputs will reflect and even amplify those flaws.
Let's unpack how this happens.
1. Biased and Incomplete Medical Data
Most healthcare AIs are trained on mainstream, pharmaceutical-centered data - data that often ignores or minimizes the role of nutrition, detoxification, micronutrient therapy, and lifestyle factors. These datasets rarely reflect real-world patient complexity, and almost never include orthomolecular insights into inflammation, oxidative stress, or micronutrient deficiencies.
If an AI has never "seen" high-dose vitamin C reversing infection, or a low-carb diet resolving diabetes, it won't suggest these solutions. The AI's "clinical intelligence" is inherently biased against approaches outside the conventional drug-based paradigm.
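The GIGO point can be made concrete with a minimal sketch. The hypothetical toy "recommender" below simply echoes the most frequent treatment in its training records; the condition names, treatments, and data are all invented for illustration. Because no nutrition-based option ever appears in the training data, no nutrition-based option can ever appear in the output - regardless of merit.

```python
from collections import Counter

# Hypothetical training set: (condition, treatment) pairs drawn only
# from a drug-centric source. Nutrition-based options never appear.
training_records = [
    ("type2_diabetes", "metformin"),
    ("type2_diabetes", "metformin"),
    ("type2_diabetes", "sulfonylurea"),
    ("hypertension", "ace_inhibitor"),
    ("hypertension", "ace_inhibitor"),
]

def recommend(condition, records):
    """Return the most frequent treatment seen for this condition."""
    counts = Counter(t for c, t in records if c == condition)
    return counts.most_common(1)[0][0] if counts else None

# The model can only echo its inputs: "low_carb_diet" was never in the
# data, so it can never be recommended - garbage in, garbage out.
print(recommend("type2_diabetes", training_records))  # metformin
```

Real clinical AI is vastly more complex, but the principle scales: a statistical model cannot recommend an intervention class that is absent from its training distribution.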
2. Systematic Exclusion of Orthomolecular and CAM Knowledge
The absence of orthomolecular and complementary and alternative medicine (CAM) data in AI systems is not accidental - it is the result of systematic marginalization by powerful interest groups.
Peer-reviewed journals often reject nutrition-based research due to reviewer bias or institutional alignment. Algorithms may down-rank CAM content. Social media platforms label orthomolecular insights as "misinformation" when they challenge pharmaceutical norms. Even search engines increasingly bury credible non-pharma-based articles.
Thus, a vast body of clinical evidence, biochemical rationale, and real-world success stories is excluded not for lack of merit, but for threatening entrenched models.
When AI is built on sanitized, commercially curated "evidence," it becomes a tool of consolidation - not innovation.
3. Commercial Bias Disguised as Intelligence
AI tools are often developed by entities prioritizing compliance, billing efficiency, and drug-based protocols - not optimal health. As a result, these tools reinforce the very assumptions behind today's chronic disease epidemic.
Instead of expanding options, AI may narrow them further, ignoring root-cause and nutrient-based interventions in favor of standardized treatment paths that serve institutional convenience.
4. The Illusion of Objectivity
Many AI models function as "black boxes" - with outputs that even developers cannot fully explain. This creates a dangerous illusion of objectivity.
Patients and clinicians may assume that AI-generated recommendations are scientifically robust, when they may in fact be based on missing data, skewed assumptions, or commercial priorities.
5. Loss of Clinical Judgment and Human Connection
AI promises efficiency - but at what cost? Over-reliance on automation may erode clinical judgment, critical thinking, and empathy - the foundations of good medicine.
Orthomolecular and functional medicine emphasize biochemical individuality and whole-person care. These values cannot be reduced to code unless the code is built with them in mind.
We Need a Different Kind of AI - One Rooted in Real Science
The solution is not to reject AI, but to rebuild it from a foundation of scientific integrity and medical freedom. We need decision-support systems that reflect evidence-based but underrepresented knowledge - not just the dominant, drug-centric narrative.
This is not an idealistic dream. It's a scientific and ethical necessity.
Introducing IOM.ai - A New Vision for Medical AI
To meet the growing need for truly integrative, science-based digital tools in healthcare, a group of like-minded colleagues and I are developing IOM.ai - an in-progress initiative committed to:
- Orthomolecular science and systems biology
- Functional and nutritional medicine
- Transparent, explainable decision-making
- Open-access knowledge
- Empowerment for both physicians and patients
IOM.ai is not a commercial product - it is a value-driven, non-profit-aligned platform currently under development. Our goal is to amplify underrepresented science and build an intelligent system that serves real-world health needs through a root-cause, nutrition-centered approach.
We believe this vision is closely aligned with the mission of OMNS - and resonates with the broader movement for integrative, orthomolecular, and patient-centered medicine. As we build this system, we invite the OMNS board, contributors, readers, and the wider community of health professionals, researchers, educators, and advocates to collaborate with us - to co-create a medical AI that truly reflects the principles and practice of root-cause, science-based medicine.
If you are interested in supporting or contributing to IOM.ai - whether through data, funding, research, or advocacy - we warmly welcome your involvement. Please reach out.
Conclusion: GIGO Still Rules
GIGO remains the law of computing - and now, of healthcare AI. If we feed AI systems flawed, biased, or commercially skewed data, we will get flawed, biased, and potentially dangerous outputs.
The antidote isn't abandoning AI. It's building better AI - one grounded in the truth, not the market. One that respects science over spin, and patients over protocols.
To Subscribe at no charge: http://www.orthomolecular.org/subscribe.html
OMNS archive link: http://orthomolecular.org/resources/omns/index.shtml