<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
    <channel>
        <generator>RedCircle VERIFY_TOKEN_7ba0f646-5486-4319-a2d1-6a71de0f4634  -- Rendered At Tue, 12 May 2026 07:40:49 &#43;0000</generator>
        <title>The Health AI Brief</title>
        <link>https://redcircle.com/shows/the-health-ai-brief</link>
        <language>en-US</language>
        <copyright>All rights reserved.</copyright>
        <itunes:author>Stephen A</itunes:author>
        <itunes:summary>Decoding artificial intelligence for busy medical professionals in just a few minutes. Every second counts. We provide high-yield AI insights for physicians, surgeons, and healthcare executives who need the signal without the noise.

Stay ahead of the future of medicine with ultra-concise briefings on:
- Ambient Clinical Intelligence: Automating medical documentation and EHR workflows.
- Generative AI &amp; LLMs: Practical applications of ChatGPT and medical-grade AI in the clinic.
- Agentic AI: The rise of autonomous medical assistants and triage tools.
- ROI of HealthTech: Evaluating AI tools that actually reduce clinician burnout and improve patient outcomes.

We cut through the tech hype to deliver the clinical-grade intelligence you need to lead the digital transformation in healthcare. No long intros, no fluff, just the high-yield facts to help you master Medical AI during your commute or between patients.

Subscribe now for your daily AI advantage.</itunes:summary>
        <podcast:guid>7ba0f646-5486-4319-a2d1-6a71de0f4634</podcast:guid>
        
        <description><![CDATA[<p><strong>Decoding artificial intelligence for busy medical professionals in just a few minutes.</strong> Every second counts. We provide <strong>high-yield AI insights</strong> for physicians, surgeons, and healthcare executives who need the signal without the noise.</p><p>Stay ahead of the <strong>future of medicine</strong> with ultra-concise briefings on:</p><ul><li><strong>Ambient Clinical Intelligence:</strong> Automating medical documentation and EHR workflows.</li><li><strong>Generative AI &amp; LLMs:</strong> Practical applications of ChatGPT and medical-grade AI in the clinic.</li><li><strong>Agentic AI:</strong> The rise of autonomous medical assistants and triage tools.</li><li><strong>ROI of HealthTech:</strong> Evaluating AI tools that actually reduce <strong>clinician burnout</strong> and improve <strong>patient outcomes</strong>.</li></ul><p>We cut through the tech hype to deliver the <strong>clinical-grade intelligence</strong> you need to lead the <strong>digital transformation in healthcare.</strong> No long intros, no fluff, just the high-yield facts to help you master <strong>Medical AI</strong> during your commute or between patients.</p><p><strong>Subscribe now</strong> for your daily AI advantage.</p>]]></description>
        
        <itunes:type>episodic</itunes:type>
        <podcast:locked>no</podcast:locked>
        <itunes:owner>
            <itunes:name>Stephen A</itunes:name>
            <itunes:email>sauger1989@hotmail.com</itunes:email>
        </itunes:owner>
        
        <itunes:image href="https://media.redcircle.com/images/2025/8/13/9/b68b096c-411b-452d-9b04-68838ad1e8bd_logo_updated.jpg"/>
        
        
        
            
        <itunes:category text="Health &amp; Fitness">
            <itunes:category text="Medicine"/>
        </itunes:category>
        <itunes:category text="Technology"/>
        <itunes:category text="Education">
            <itunes:category text="How To"/>
            <itunes:category text="Courses"/>
        </itunes:category>
        

        
        <itunes:explicit>no</itunes:explicit>
        
        
        
        
        
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Managing ‘Needle in a Haystack’ Context - Why AI Struggles with the Middle of Your Notes</itunes:title>
                <title>Managing ‘Needle in a Haystack’ Context - Why AI Struggles with the Middle of Your Notes</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>LLMs have a &#34;memory&#34; problem known as the U-shaped curve: they remember the start and end of your prompt but forget the middle. We teach you how to position the most critical patient data (like allergies or DNR status) to ensure the AI never misses the &#34;needle in the haystack.&#34;</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure:</span></p><p><span>This concise summary of AI technology is for educational and informational purposes only. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.</span></p><p><span>• Clinical Accountability: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).</span></p><p><span>• Independent Evidence-Based Review: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#ContextWindow #MachineLearning #ClinicalSafety #AIinMedicine</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;LLMs have a &amp;#34;memory&amp;#34; problem known as the U-shaped curve: they remember the start and end of your prompt but forget the middle. We teach you how to position the most critical patient data (like allergies or DNR status) to ensure the AI never misses the &amp;#34;needle in the haystack.&amp;#34;&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This concise summary of AI technology is for educational and informational purposes only. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Clinical Accountability: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Independent Evidence-Based Review: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#ContextWindow #MachineLearning #ClinicalSafety #AIinMedicine&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="2030863" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/c21d6aa1-b239-4aca-916c-7cee13bd2c0c/stream.mp3"/>
                
                <guid isPermaLink="false">35b627dc-8c9c-4df7-a669-a6e0c368ef27</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/c21d6aa1-b239-4aca-916c-7cee13bd2c0c</link>
                <pubDate>Tue, 12 May 2026 06:00:14 &#43;0000</pubDate>
                <itunes:duration>126</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Can the WHO’s AI Fix Medical Misinformation?</itunes:title>
                <title>Can the WHO’s AI Fix Medical Misinformation?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Can the WHO’s new AI tool, ChatHRP, solve the global crisis of medical misinformation? Discover how this Retrieval-Augmented Generation system provides clinicians with instant access to verified sexual and reproductive health and rights (SRHR) data.</p><p><br></p><p>ChatHRP is a beta-phase AI assistant developed by the HRP and the World Health Organization to streamline access to evidence-based healthcare guidance. Utilizing advanced natural language processing, the tool targets the high-stakes domain of sexual and reproductive health, where misinformation often leads to systemic human rights implications. While the current iteration faces challenges with specific clinical edge cases and conversational memory, it represents a significant move toward public-interest AI that operates independently of commercial algorithms. This episode analyses the technical architecture of the tool, its performance in real-world clinical queries, and the strategic roadmap required to scale such a project into a global &#34;Unified Guideline Engine.&#34;</p><p><br></p><p>Original source: https://www.who.int/news/item/23-04-2026-finding-sexual-and-reproductive-health-and-rights-facts-fast--a-new-ai-powered-tool </p><p>The tool: https://chathrp.org/ </p><p><br></p><p>Key Takeaways:</p><p>• The technical benefits of using RAG (Retrieval-Augmented Generation) to minimize hallucinations in clinical AI.</p><p>• Analysis of the current limitations in context-window management and data-depth within specialized medical databases.</p><p>• The strategic necessity for public-sector investment from organizations like the Gates Foundation to compete with proprietary medical LLMs.</p><p><br></p><p>0:00 Why the WHO is Developing AI</p><p>0:41 Introducing ChatHRP</p><p>1:04 How RAG (Retrieval-Augmented Generation) Works</p><p>1:44 Reducing Risks in Clinical Settings</p><p>2:18 The Technical Challenges of Clinical AI</p><p>2:54 Case Study: Identifying Proximity 
Errors</p><p>4:03 The Importance of Conversational History</p><p>4:30 Public Interest AI vs. Commercial Interests</p><p>5:03 Democratizing Access in Low-Resource Settings</p><p>5:42 Scaling Toward a Unified Guideline Engine</p><p>6:58 Conclusion: The Future of Global Medical Knowledge</p><p><br></p><p>Related content you may like:</p><p>https://youtu.be/cLO_nrKtKn8 - OpenEvidence explainer</p><p>https://youtu.be/eWCrhxaxkPw - RAG explainer</p><p><br></p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>#HealthAI #WHO #MedicalInformatics #SRHR #DigitalHealth #ClinicalAI #RAG #EvidenceBasedMedicine #HealthTech #GlobalHealth</p>]]></description>
                <content:encoded>&lt;p&gt;Can the WHO’s new AI tool, ChatHRP, solve the global crisis of medical misinformation? Discover how this Retrieval-Augmented Generation system provides clinicians with instant access to verified sexual and reproductive health and rights (SRHR) data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;ChatHRP is a beta-phase AI assistant developed by the HRP and the World Health Organization to streamline access to evidence-based healthcare guidance. Utilizing advanced natural language processing, the tool targets the high-stakes domain of sexual and reproductive health, where misinformation often leads to systemic human rights implications. While the current iteration faces challenges with specific clinical edge cases and conversational memory, it represents a significant move toward public-interest AI that operates independently of commercial algorithms. This episode analyses the technical architecture of the tool, its performance in real-world clinical queries, and the strategic roadmap required to scale such a project into a global &amp;#34;Unified Guideline Engine.&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Original source: https://www.who.int/news/item/23-04-2026-finding-sexual-and-reproductive-health-and-rights-facts-fast--a-new-ai-powered-tool &lt;/p&gt;&lt;p&gt;The tool: https://chathrp.org/ &lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The technical benefits of using RAG (Retrieval-Augmented Generation) to minimize hallucinations in clinical AI.&lt;/p&gt;&lt;p&gt;• Analysis of the current limitations in context-window management and data-depth within specialized medical databases.&lt;/p&gt;&lt;p&gt;• The strategic necessity for public-sector investment from organizations like the Gates Foundation to compete with proprietary medical LLMs.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 Why the WHO is Developing AI&lt;/p&gt;&lt;p&gt;0:41 Introducing 
ChatHRP&lt;/p&gt;&lt;p&gt;1:04 How RAG (Retrieval-Augmented Generation) Works&lt;/p&gt;&lt;p&gt;1:44 Reducing Risks in Clinical Settings&lt;/p&gt;&lt;p&gt;2:18 The Technical Challenges of Clinical AI&lt;/p&gt;&lt;p&gt;2:54 Case Study: Identifying Proximity Errors&lt;/p&gt;&lt;p&gt;4:03 The Importance of Conversational History&lt;/p&gt;&lt;p&gt;4:30 Public Interest AI vs. Commercial Interests&lt;/p&gt;&lt;p&gt;5:03 Democratizing Access in Low-Resource Settings&lt;/p&gt;&lt;p&gt;5:42 Scaling Toward a Unified Guideline Engine&lt;/p&gt;&lt;p&gt;6:58 Conclusion: The Future of Global Medical Knowledge&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Related content you may like:&lt;/p&gt;&lt;p&gt;https://youtu.be/cLO_nrKtKn8 - OpenEvidence explainer&lt;/p&gt;&lt;p&gt;https://youtu.be/eWCrhxaxkPw - RAG explainer&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #WHO #MedicalInformatics #SRHR #DigitalHealth #ClinicalAI #RAG #EvidenceBasedMedicine #HealthTech #GlobalHealth&lt;/p&gt;</content:encoded>
                
                <enclosure length="7351484" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/394df532-3d60-4a18-90fc-1a737f7651a4/stream.mp3"/>
                
                <guid isPermaLink="false">753fcd60-212f-4659-a55d-0b4fabeea387</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/394df532-3d60-4a18-90fc-1a737f7651a4</link>
                <pubDate>Thu, 07 May 2026 06:00:20 &#43;0000</pubDate>
                <itunes:duration>459</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Just Beat Harvard Doctors?</itunes:title>
                <title>AI Just Beat Harvard Doctors?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Can AI truly out-diagnose a Harvard-trained physician? In this episode, we break down a groundbreaking study from Science  where OpenAI’s o1 model went head-to-head with hundreds of doctors in real-world emergency room cases.</span></p><p><br></p><p><span>The paper: https://www.science.org/doi/full/10.1126/science.adz4433 </span></p><p><br></p><p><span>We analyse the performance of large language models on complex reasoning tasks, from the prestigious NEJM Clinicopathological Conferences to live patients in the ER. While the results show AI outperforming humans at the triage stage, we dig into the crucial details that the headlines missed—including the risks of overdiagnosis and the bias inherent in the study&#39;s patient selection. This is an essential deep dive for any clinician, healthcare manager, or tech enthusiast looking to understand the future of clinical reasoning and the path toward integrating AI into the hospital workflow.</span></p><p><br></p><p><span>Key Takeaways</span></p><p><span>• Discover how OpenAI’s o1 series achieves 98% accuracy on complex diagnostic cases and significantly outperforms GPT-4 in clinical management.</span></p><p><span>• Understand the &#34;True Positive&#34; bias in the latest ER studies and why AI accuracy in the ICU doesn&#39;t necessarily translate to safe triage in the general population.</span></p><p><span>• Learn about the &#34;Bond Score&#34; and how medical AI is being evaluated against the gold standard of physician expertise.</span></p><p><br></p><p><span>00:00 Introduction to AI vs. 
Human Clinicians</span></p><p><span>01:13 Study Phase 1: NEJM Clinical Cases</span></p><p><span>01:51 Performance on Management Cases</span></p><p><span>02:35 Real-world Emergency Department Evaluation</span></p><p><span>03:45 Limitations of the Real-world Study</span></p><p><span>05:05 Methodology and Prompting Differences</span></p><p><span>05:52 Logistical Challenges and Data Validity</span></p><p><span>06:40 AI&#39;s Reasoning Capabilities in Medicine</span></p><p><span>07:34 Future Research and Collaborative Intelligence</span></p><p><span>08:31 Summary and Final Thoughts</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#MedicalAI #HealthTech #OpenAI #ClinicalReasoning #DigitalHealth #HealthcareInnovation #MachineLearning #DoctorVsAI #FutureOfMedicine #MedEd</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Can AI truly out-diagnose a Harvard-trained physician? In this episode, we break down a groundbreaking study from Science  where OpenAI’s o1 model went head-to-head with hundreds of doctors in real-world emergency room cases.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;The paper: https://www.science.org/doi/full/10.1126/science.adz4433 &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;We analyse the performance of large language models on complex reasoning tasks, from the prestigious NEJM Clinicopathological Conferences to live patients in the ER. While the results show AI outperforming humans at the triage stage, we dig into the crucial details that the headlines missed—including the risks of overdiagnosis and the bias inherent in the study&amp;#39;s patient selection. This is an essential deep dive for any clinician, healthcare manager, or tech enthusiast looking to understand the future of clinical reasoning and the path toward integrating AI into the hospital workflow.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Discover how OpenAI’s o1 series achieves 98% accuracy on complex diagnostic cases and significantly outperforms GPT-4 in clinical management.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Understand the &amp;#34;True Positive&amp;#34; bias in the latest ER studies and why AI accuracy in the ICU doesn&amp;#39;t necessarily translate to safe triage in the general population.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Learn about the &amp;#34;Bond Score&amp;#34; and how medical AI is being evaluated against the gold standard of physician expertise.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 Introduction to AI vs. 
Human Clinicians&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:13 Study Phase 1: NEJM Clinical Cases&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:51 Performance on Management Cases&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:35 Real-world Emergency Department Evaluation&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:45 Limitations of the Real-world Study&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;05:05 Methodology and Prompting Differences&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;05:52 Logistical Challenges and Data Validity&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:40 AI&amp;#39;s Reasoning Capabilities in Medicine&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;07:34 Future Research and Collaborative Intelligence&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;08:31 Summary and Final Thoughts&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#MedicalAI #HealthTech #OpenAI #ClinicalReasoning #DigitalHealth #HealthcareInnovation #MachineLearning #DoctorVsAI #FutureOfMedicine #MedEd&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="9238987" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/fc5585b9-024e-4aec-9a61-6b722c421b39/stream.mp3"/>
                
                <guid isPermaLink="false">f067f4f1-3ae3-4387-8de4-d03cd56f61b4</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/fc5585b9-024e-4aec-9a61-6b722c421b39</link>
                <pubDate>Mon, 04 May 2026 06:00:49 &#43;0000</pubDate>
                <itunes:duration>577</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Google DeepMind AI Co-Clinician Tries to Examine Patients</itunes:title>
                <title>Google DeepMind AI Co-Clinician Tries to Examine Patients</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Is Google DeepMind’s new multimodal AI ready to see patients? A clinical breakdown of the AI co-clinician.</span></p><p><br></p><p><span>The transition from text-based chatbots to real-time audio-video medical AI marks a major milestone, but examining the clinical mechanics reveals critical hurdles before deployment.</span></p><p><br></p><p><span>Google DeepMind recently published a technical report and blog post detailing their &#34;AI co-clinician,&#34; a multimodal system powered by Gemini and Project Astra. Designed to conduct live telemedical consultations, the system uses a dual-agent architecture to process visual and auditory cues in real time. This analysis breaks down the technical achievements, the study design, and the subtle but significant clinical limitations observed in the demonstration, from hallucinated physical exams to the nuances of interpreting actual pathology versus simulated signs.</span></p><p><br></p><p><span>Link to the blogpost: https://deepmind.google/blog/ai-co-clinician/</span></p><p><span>Technical report: https://www.gstatic.com/vesper/ai_coclinician_technical_report.pdf </span></p><p><span>Example video: https://www.youtube.com/watch?v=dC4icb75vLQ </span></p><p><span>Key Takeaways</span></p><p><span>• How the dual-agent architecture separates conversational fluency from clinical reasoning.</span></p><p><span>• The methodological limitations of using physician-actors for evaluating AI on textbook cases.</span></p><p><span>• The critical difference between an AI identifying a simulated physical sign and interpreting true clinical pathology.</span></p><p><br></p><p><span>0:00 Introduction to DeepMind’s AI Co-Clinician</span></p><p><span>0:15 The Vision for AI-Powered Telehealth Consultations</span></p><p><span>0:57 Addressing the Global Healthcare Workforce Shortage</span></p><p><span>1:12 Evolution of Medical AI: From Text to Multimodal Systems</span></p><p><span>1:30 Dual Agent 
Architecture: The Talker vs. The Clinical Planner</span></p><p><span>2:27 Study Methodology: Comparing AI to Human Physicians</span></p><p><span>2:55 Key Results: Diagnostic Success vs. Clinical Failures</span></p><p><span>3:30 Critique: Limitations of the Evaluation Methodology</span></p><p><span>4:12 Poor Clinical Technique: The Problem with Compounded Questions</span></p><p><span>4:49 Physical Reality Failures: Sitting Exams and Hallucinated Fingers</span></p><p><span>5:28 Analysis: Misinterpreting Pathological Signs (Myasthenia Gravis)</span></p><p><span>6:56 Safety Risks: Missing Red Flags in Depression Screening</span></p><p><span>7:27 Experimental Showcase vs. Current Deployment Reality</span></p><p><span>8:15 The &#34;Medical Student&#34; Analogy: Knowledge vs. Experience</span></p><p><span>8:41 Summary: Technical Milestones and Physical Realities</span></p><p><span>9:43 Challenges in Clinical Supervision and Workflow Integration</span></p><p><span>11:00 Final Thoughts and Wrap Up</span></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthTech #MedicalAI #DeepMind #Telemedicine #ClinicalAI #DigitalHealth #FutureOfMedicine #HealthcareInnovation</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Is Google DeepMind’s new multimodal AI ready to see patients? A clinical breakdown of the AI co-clinician.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;The transition from text-based chatbots to real-time audio-video medical AI marks a major milestone, but examining the clinical mechanics reveals critical hurdles before deployment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Google DeepMind recently published a technical report and blog post detailing their &amp;#34;AI co-clinician,&amp;#34; a multimodal system powered by Gemini and Project Astra. Designed to conduct live telemedical consultations, the system uses a dual-agent architecture to process visual and auditory cues in real time. This analysis breaks down the technical achievements, the study design, and the subtle but significant clinical limitations observed in the demonstration, from hallucinated physical exams to the nuances of interpreting actual pathology versus simulated signs.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link to the blogpost: https://deepmind.google/blog/ai-co-clinician/&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Technical report: https://www.gstatic.com/vesper/ai_coclinician_technical_report.pdf &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Example video: https://www.youtube.com/watch?v=dC4icb75vLQ &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• How the dual-agent architecture separates conversational fluency from clinical reasoning.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The methodological limitations of using physician-actors for evaluating AI on textbook cases.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The critical difference between an AI identifying a simulated physical sign and interpreting true clinical 
pathology.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:00 Introduction to DeepMind’s AI Co-Clinician&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:15 The Vision for AI-Powered Telehealth Consultations&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:57 Addressing the Global Healthcare Workforce Shortage&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:12 Evolution of Medical AI: From Text to Multimodal Systems&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:30 Dual Agent Architecture: The Talker vs. The Clinical Planner&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:27 Study Methodology: Comparing AI to Human Physicians&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:55 Key Results: Diagnostic Success vs. Clinical Failures&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;3:30 Critique: Limitations of the Evaluation Methodology&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:12 Poor Clinical Technique: The Problem with Compounded Questions&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:49 Physical Reality Failures: Sitting Exams and Hallucinated Fingers&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;5:28 Analysis: Misinterpreting Pathological Signs (Myasthenia Gravis)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;6:56 Safety Risks: Missing Red Flags in Depression Screening&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;7:27 Experimental Showcase vs. Current Deployment Reality&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;8:15 The &amp;#34;Medical Student&amp;#34; Analogy: Knowledge vs. Experience&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;8:41 Summary: Technical Milestones and Physical Realities&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;9:43 Challenges in Clinical Supervision and Workflow Integration&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;11:00 Final Thoughts and Wrap Up&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthTech #MedicalAI #DeepMind #Telemedicine #ClinicalAI #DigitalHealth #FutureOfMedicine #HealthcareInnovation&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="10709368" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/062e6e5e-b18b-44cf-80e2-235bbf9990cc/stream.mp3"/>
                
                <guid isPermaLink="false">bf3575ae-04ea-468a-b3e6-cd880bf80a5c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/062e6e5e-b18b-44cf-80e2-235bbf9990cc</link>
                <pubDate>Fri, 01 May 2026 07:14:32 &#43;0000</pubDate>
                <itunes:duration>669</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>XML Tags for Data - How Tech Giants Structure Medical Charts for AI</itunes:title>
                <title>XML Tags for Data - How Tech Giants Structure Medical Charts for AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Clinical notes are messy; your prompts shouldn’t be. Learn how to use [patient_history], [labs], and [plan] tags to &#34;sandwich&#34; your data. We explain why XML tags act as &#34;mental boundaries&#34; for the LLM, reducing confusion in complex case reviews.</p><p><br></p><p>𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:</p><p>This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.</p><p>• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).</p><p>• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.</p><p>• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>#DataStructuring #XML #MedicalCoding #AIArchitecture #HealthIT #AIinMedicine</p>]]></description>
                <content:encoded>&lt;p&gt;Clinical notes are messy; your prompts shouldn’t be. Learn how to use [patient_history], [labs], and [plan] tags to &amp;#34;sandwich&amp;#34; your data. We explain why XML tags act as &amp;#34;mental boundaries&amp;#34; for the LLM, reducing confusion in complex case reviews.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp;amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:&lt;/p&gt;&lt;p&gt;This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.&lt;/p&gt;&lt;p&gt;• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).&lt;/p&gt;&lt;p&gt;• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.&lt;/p&gt;&lt;p&gt;• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DataStructuring #XML #MedicalCoding #AIArchitecture #HealthIT #AIinMedicine&lt;/p&gt;</content:encoded>
                
                <enclosure length="2105260" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a6d624a5-4ae2-4df0-a680-f5e3e1be4a2e/stream.mp3"/>
                
                <guid isPermaLink="false">cc8a1c6e-276d-43f1-9629-c811a20b6c65</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a6d624a5-4ae2-4df0-a680-f5e3e1be4a2e</link>
                <pubDate>Thu, 30 Apr 2026 06:00:04 &#43;0000</pubDate>
                <itunes:duration>131</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Negative Prompt Strategy for LLMs</itunes:title>
                <title>The Negative Prompt Strategy for LLMs</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Sometimes, telling an AI what not to do is more important than telling it what to do. We explore the &#34;Negative Prompt&#34;: how to banish fluff, avoid specific drug classes in recommendations, and ensure the AI never mentions patient names. A must-listen for anyone worried about AI safety and boundaries.</p><p><br></p><p>#AISafety #NegativePrompt #ClinicalGuidelines #HealthTech #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Sometimes, telling an AI what not to do is more important than telling it what to do. We explore the &amp;#34;Negative Prompt&amp;#34;: how to banish fluff, avoid specific drug classes in recommendations, and ensure the AI never mentions patient names. A must-listen for anyone worried about AI safety and boundaries.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#AISafety #NegativePrompt #ClinicalGuidelines #HealthTech #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2009547" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/847de21b-53b0-4616-81a7-0aec6d96b5f7/stream.mp3"/>
                
                <guid isPermaLink="false">30e88162-5104-4c62-b011-e4980d2f2f48</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/847de21b-53b0-4616-81a7-0aec6d96b5f7</link>
                <pubDate>Wed, 29 Apr 2026 06:00:12 &#43;0000</pubDate>
                <itunes:duration>125</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Politeness vs Performance – Why Saying Please May Be Killing Your AI’s Accuracy</itunes:title>
                <title>Politeness vs Performance – Why Saying Please May Be Killing Your AI’s Accuracy</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Are you treating your LLM like a colleague or a calculator? In this episode, we explain the &#34;Token Tax&#34; of politeness. Learn why filler words like &#34;Please&#34; and &#34;Thank you&#34; waste precious context and why direct, imperative commands lead to better clinical reasoning. Stop being nice; start being precise.</p><p><br></p><p>#PromptEngineering #AIHacks #MedicalAI #Efficiency #LLM #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Are you treating your LLM like a colleague or a calculator? In this episode, we explain the &amp;#34;Token Tax&amp;#34; of politeness. Learn why filler words like &amp;#34;Please&amp;#34; and &amp;#34;Thank you&amp;#34; waste precious context and why direct, imperative commands lead to better clinical reasoning. Stop being nice; start being precise.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#PromptEngineering #AIHacks #MedicalAI #Efficiency #LLM #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2011219" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/d23cd666-ba65-4968-a611-3247838ab1bc/stream.mp3"/>
                
                <guid isPermaLink="false">dae49b5e-4e5d-4aba-b832-1fc05430b9f1</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/d23cd666-ba65-4968-a611-3247838ab1bc</link>
                <pubDate>Tue, 28 Apr 2026 06:00:02 &#43;0000</pubDate>
                <itunes:duration>125</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What Blindness Is Warning Us About AI</itunes:title>
                <title>What Blindness Is Warning Us About AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Is AI reshaping the psychological health of the blind community? In this episode, we analyse the BBC&#39;s recent report by Milagros Costabel on &#34;AI Mirrors&#34;, vision-language models that provide real-time, often critical feedback on physical appearance. We explore the clinical shift from functional assistive tech to subjective AI critiques.</span></p><p><br></p><p><span>Link to the original article: https://www.bbc.co.uk/future/article/20260126-ai-mirrors-are-changing-the-way-blind-people-see-themselves</span></p><p><br></p><p><span>As AI transitions from identifying objects to judging human beauty, clinicians must understand the risks of algorithmic bias, Eurocentric training data, and the mental health implications of &#34;AI hallucinations.&#34; We provide a strategic roadmap for &#34;Empathy-First&#34; AI design and contextual intelligence in health-tech.</span></p><p><br></p><p><span>Key Takeaways</span></p><p><span>• The psychological impact of Multimodal LLMs on body image and self-satisfaction.</span></p><p><span>• Why &#34;Certainty Surfacing&#34; and &#34;Contextual Intelligence&#34; are the next frontiers for assistive AI.</span></p><p><span>• Strategies for mitigating Eurocentric bias in vision-language models for global populations.</span></p><p><br></p><p><span>0:00 – AI Mirror</span></p><p><span>0:30 – Milagros Costabel’s BBC Report</span></p><p><span>1:08 – From Functional to Subjective AI</span></p><p><span>2:01 – The Psychological Impact of AI Mirrors</span></p><p><span>3:31 – Bias in AI Training Data</span></p><p><span>4:25 – The Problem with AI Hallucinations</span></p><p><span>5:15 – Transparency and Historical Context</span></p><p><span>5:59 – Conclusion: AI as a Sensory Prosthetic</span></p><p><br></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthAI #AssistiveTech #MedTech #Inclusion #DigitalHealth #GPT4 #BeMyEyes #Accessibility #AIHallucinations #MentalHealthTech</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Is AI reshaping the psychological health of the blind community? In this episode, we analyse the BBC&amp;#39;s recent report by Milagros Costabel on &amp;#34;AI Mirrors&amp;#34;, vision-language models that provide real-time, often critical feedback on physical appearance. We explore the clinical shift from functional assistive tech to subjective AI critiques.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link to the original article: https://www.bbc.co.uk/future/article/20260126-ai-mirrors-are-changing-the-way-blind-people-see-themselves&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;As AI transitions from identifying objects to judging human beauty, clinicians must understand the risks of algorithmic bias, Eurocentric training data, and the mental health implications of &amp;#34;AI hallucinations.&amp;#34; We provide a strategic roadmap for &amp;#34;Empathy-First&amp;#34; AI design and contextual intelligence in health-tech.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The psychological impact of Multimodal LLMs on body image and self-satisfaction.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Why &amp;#34;Certainty Surfacing&amp;#34; and &amp;#34;Contextual Intelligence&amp;#34; are the next frontiers for assistive AI.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Strategies for mitigating Eurocentric bias in vision-language models for global populations.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:00 – AI Mirror&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:30 – Milagros Costabel’s BBC Report&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:08 – From Functional to Subjective AI&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:01 – The Psychological Impact of AI Mirrors&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;3:31 – Bias in AI Training 
Data&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:25 – The Problem with AI Hallucinations&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;5:15 – Transparency and Historical Context&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;5:59 – Conclusion: AI as a Sensory Prosthetic&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthAI #AssistiveTech #MedTech #Inclusion #DigitalHealth #GPT4 #BeMyEyes #Accessibility #AIHallucinations #MentalHealthTech&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="7159222" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/e577eb56-3722-4719-a21c-d082a274c60a/stream.mp3"/>
                
                <guid isPermaLink="false">e7e02e91-6258-47f5-8bd5-87bbdee64a33</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/e577eb56-3722-4719-a21c-d082a274c60a</link>
                <pubDate>Fri, 24 Apr 2026 06:00:25 &#43;0000</pubDate>
                <itunes:duration>447</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Pre-, mid-, post-training - The Complete LLM Training Guide</itunes:title>
                <title>Pre-, mid-, post-training - The Complete LLM Training Guide</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Confused by RLHF, Pre-training, and Fine-tuning? We break down the complete medical LLM pipeline and explain how &#34;clinical reasoning&#34; is actually built into AI.</p><p><br></p><p>In this definitive guide, we decode the journey of Generative AI in medicine, from raw data pre-training to expert-led reinforcement learning. We explore the mechanics of &#34;Chain of Thought&#34; reasoning, the risks of clinical hallucinations, and why domain-specific fine-tuning is the gold standard for healthcare applications.</p><p><br></p><p>Key Takeaways:</p><p>• The 3 Stages of AI: Why pre-training is like medical school and RLHF is the &#34;Senior Oversight&#34; phase.</p><p>• Safety vs. Utility: How reinforcement learning from human feedback (RLHF) can inadvertently bias clinical results.</p><p>• Small Models, Big Impact: The role of model distillation in preserving patient privacy and reducing hospital costs.</p><p><br></p><p>00:00 Introduction</p><p>00:54 Phase 1: Pre-training</p><p>03:01 Phase 2: Mid-training</p><p>06:02 Phase 3: Post-training</p><p>08:32 Multimodal Data Pipeline Examples</p><p>11:33 Summary and Conclusion</p><p><br></p><p>Generative AI in Medicine, Large Language Models, LLM Training Pipeline, RLHF, Clinical AI Safety, Medical Fine-Tuning, Transformer Architecture, DeepSeek-R1 Medicine, GPT-5 Healthcare, Medical Hallucinations. #HealthAI #MedicalInnovation #LLM #DigitalHealth #MedTech #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Confused by RLHF, Pre-training, and Fine-tuning? We break down the complete medical LLM pipeline and explain how &amp;#34;clinical reasoning&amp;#34; is actually built into AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this definitive guide, we decode the journey of Generative AI in medicine, from raw data pre-training to expert-led reinforcement learning. We explore the mechanics of &amp;#34;Chain of Thought&amp;#34; reasoning, the risks of clinical hallucinations, and why domain-specific fine-tuning is the gold standard for healthcare applications.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The 3 Stages of AI: Why pre-training is like medical school and RLHF is the &amp;#34;Senior Oversight&amp;#34; phase.&lt;/p&gt;&lt;p&gt;• Safety vs. Utility: How reinforcement learning from human feedback (RLHF) can inadvertently bias clinical results.&lt;/p&gt;&lt;p&gt;• Small Models, Big Impact: The role of model distillation in preserving patient privacy and reducing hospital costs.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 Introduction&lt;/p&gt;&lt;p&gt;00:54 Phase 1: Pre-training&lt;/p&gt;&lt;p&gt;03:01 Phase 2: Mid-training&lt;/p&gt;&lt;p&gt;06:02 Phase 3: Post-training&lt;/p&gt;&lt;p&gt;08:32 Multimodal Data Pipeline Examples&lt;/p&gt;&lt;p&gt;11:33 Summary and Conclusion&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Generative AI in Medicine, Large Language Models, LLM Training Pipeline, RLHF, Clinical AI Safety, Medical Fine-Tuning, Transformer Architecture, DeepSeek-R1 Medicine, GPT-5 Healthcare, Medical Hallucinations. #HealthAI #MedicalInnovation #LLM #DigitalHealth #MedTech #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="13207092" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ef9d0957-6ab7-4c79-a1cc-c66c9d2d1ccb/stream.mp3"/>
                
                <guid isPermaLink="false">24bc3a0f-3c95-4572-8e1d-ed5baeac0743</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ef9d0957-6ab7-4c79-a1cc-c66c9d2d1ccb</link>
                <pubDate>Thu, 23 Apr 2026 06:00:26 &#43;0000</pubDate>
                <itunes:duration>825</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Model Context Protocol (MCP) - the &#39;universal adaptor&#39; for artificial intelligence</itunes:title>
                <title>Model Context Protocol (MCP) - the &#39;universal adaptor&#39; for artificial intelligence</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Why is AI still so disconnected from our daily clinical tools? In this episode, we break down the Model Context Protocol (MCP), the new &#34;universal adaptor&#34; for artificial intelligence.</p><p><br></p><p>We move past the hype to explain how this open standard allows LLMs to securely &#34;plug in&#34; to local databases, research archives, and clinical files without the need for custom coding or tedious copy-pasting. If you&#39;ve ever felt frustrated by the &#34;brain in a vat&#34; limitation of modern AI, this episode explains the technical bridge that will finally allow AI to understand your specific clinical context.</p><p><br></p><p>Key takeaways:</p><p>- What MCP is and why it’s being compared to the USB port for data.</p><p>- How it solves the &#34;Silo Problem&#34; in healthcare tech.</p><p>- The impact on data security and future-proofing your clinical workflow.</p><p><br></p><p>#MedicalAI #HealthTech #MCP #ModelContextProtocol #DigitalHealth #ArtificialIntelligence #ClinicianInformatics #NHS #HealthData #AIIntegration #TheHealthAIBrief #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Why is AI still so disconnected from our daily clinical tools? In this episode, we break down the Model Context Protocol (MCP), the new &amp;#34;universal adaptor&amp;#34; for artificial intelligence.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We move past the hype to explain how this open standard allows LLMs to securely &amp;#34;plug in&amp;#34; to local databases, research archives, and clinical files without the need for custom coding or tedious copy-pasting. If you&amp;#39;ve ever felt frustrated by the &amp;#34;brain in a vat&amp;#34; limitation of modern AI, this episode explains the technical bridge that will finally allow AI to understand your specific clinical context.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key takeaways:&lt;/p&gt;&lt;p&gt;- What MCP is and why it’s being compared to the USB port for data.&lt;/p&gt;&lt;p&gt;- How it solves the &amp;#34;Silo Problem&amp;#34; in healthcare tech.&lt;/p&gt;&lt;p&gt;- The impact on data security and future-proofing your clinical workflow.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MedicalAI #HealthTech #MCP #ModelContextProtocol #DigitalHealth #ArtificialIntelligence #ClinicianInformatics #NHS #HealthData #AIIntegration #TheHealthAIBrief #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3090390" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b10391e5-7366-44e2-a4ae-05d43156b3d4/stream.mp3"/>
                
                <guid isPermaLink="false">88562bfe-ab6e-4292-b9ee-31af4270f5d4</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b10391e5-7366-44e2-a4ae-05d43156b3d4</link>
                <pubDate>Wed, 22 Apr 2026 06:00:16 &#43;0000</pubDate>
                <itunes:duration>193</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Small Language Models (SLMs) - The Lean Machine</itunes:title>
                <title>Small Language Models (SLMs) - The Lean Machine</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Why smaller, specialized models are often faster and more accurate for specific medical tasks.</p><p><br></p><p>#SLM #EfficientAI #TechTrends #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Why smaller, specialized models are often faster and more accurate for specific medical tasks.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#SLM #EfficientAI #TechTrends #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1941002" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/963b9838-e511-41eb-9bc6-cb9e0bcbf67e/stream.mp3"/>
                
                <guid isPermaLink="false">6cf2d6b1-5190-4494-970b-c19db87a06a5</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/963b9838-e511-41eb-9bc6-cb9e0bcbf67e</link>
                <pubDate>Tue, 21 Apr 2026 06:00:38 &#43;0000</pubDate>
                <itunes:duration>121</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How Bixonimania Fooled the World&#39;s Leading AI Models</itunes:title>
                <title>How Bixonimania Fooled the World&#39;s Leading AI Models</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Can you trust your AI’s medical advice? A shocking new feature in Nature reveals how a completely fake disease called &#34;Bixonimania&#34; fooled the world&#39;s leading AI models.</span></p><p><br></p><p><span>Original source: https://www.nature.com/articles/d41586-026-01100-y </span></p><p><br></p><p><span>In this episode, we consider the &#34;Bixonimania&#34; experiment, where researchers successfully seeded a fictional illness into the medical ecosystem. Despite blatant clues, including Starfleet references and a literal admission that the paper was &#34;made up&#34;, LLMs like ChatGPT and Gemini presented it as clinical fact. We discuss the strategic implications of &#34;information poisoning,&#34; the risk of commercial exploitation of vulnerable patients, and why the current lack of AI regulation creates a dangerous asymmetry of consequence compared to human physicians.</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• How subtle misinformation can be hidden within high-quality AI advice.</span></p><p><span>• Information Laundering: How fake AI hallucinations are ending up in peer-reviewed journals.</span></p><p><span>• The Regulatory Gap: Why we need accountability for AI-generated medical misinformation.</span></p><p><br></p><p><span>0:00 - What is Bixonimania? (The AI &#34;Trap&#34;)</span></p><p><span>0:25 - The High Stakes of AI Errors in Healthcare</span></p><p><span>0:53 - The Experiment: Seeding a Fictional Condition</span></p><p><span>1:13 - Red Flags the AI Missed (Side-Show Bob &amp; The USS Enterprise)</span></p><p><span>1:31 - How Leading AI Models Responded to the Hoax</span></p><p><span>1:56 - The Danger of Subtle Medical Deception</span></p><p><span>2:30 - Regulatory Asymmetry: AI vs. 
Human Professionals</span></p><p><span>2:58 - The Consequences for Vulnerable Patients</span></p><p><span>3:18 - How Fake Data is Poisoning Scientific Journals</span></p><p><span>3:47 - Solutions: Red Teaming and Verified Architectures</span></p><p><span>4:30 - The Evolving Role of Humans as Information Verifiers</span></p><p><span>5:01 - Summary: AI as a Mirror, Not a Filter</span></p><p><span>5:45 - Closing Thoughts: The Future of Medical AI Truthfulness</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><span>#HealthAI #MedicalEthics #NatureMagazine #Bixonimania #PatientSafety #DigitalHealth #AIGovernance #ClinicalReliability #HealthTechPodcast #FutureOfMedicine</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Can you trust your AI’s medical advice? A shocking new feature in Nature reveals how a completely fake disease called &amp;#34;Bixonimania&amp;#34; fooled the world&amp;#39;s leading AI models.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Original source: https://www.nature.com/articles/d41586-026-01100-y &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;In this episode, we consider the &amp;#34;Bixonimania&amp;#34; experiment, where researchers successfully seeded a fictional illness into the medical ecosystem. Despite blatant clues, including Starfleet references and a literal admission that the paper was &amp;#34;made up&amp;#34;, LLMs like ChatGPT and Gemini presented it as clinical fact. We discuss the strategic implications of &amp;#34;information poisoning,&amp;#34; the risk of commercial exploitation of vulnerable patients, and why the current lack of AI regulation creates a dangerous asymmetry of consequence compared to human physicians.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• How subtle misinformation can be hidden within high-quality AI advice.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Information Laundering: How fake AI hallucinations are ending up in peer-reviewed journals.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The Regulatory Gap: Why we need accountability for AI-generated medical misinformation.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:00 - What is Bixonimania? 
(The AI &amp;#34;Trap&amp;#34;)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:25 - The High Stakes of AI Errors in Healthcare&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:53 - The Experiment: Seeding a Fictional Condition&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:13 - Red Flags the AI Missed (Sideshow Bob &amp;amp; The USS Enterprise)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:31 - How Leading AI Models Responded to the Hoax&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:56 - The Danger of Subtle Medical Deception&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:30 - Regulatory Asymmetry: AI vs. Human Professionals&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:58 - The Consequences for Vulnerable Patients&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;3:18 - How Fake Data is Poisoning Scientific Journals&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;3:47 - Solutions: Red Teaming and Verified Architectures&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:30 - The Evolving Role of Humans as Information Verifiers&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;5:01 - Summary: AI as a Mirror, Not a Filter&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;5:45 - Closing Thoughts: The Future of Medical AI Truthfulness&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthAI #MedicalEthics #NatureMagazine #Bixonimania #PatientSafety #DigitalHealth #AIGovernance #ClinicalReliability #HealthTechPodcast #FutureOfMedicine&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="5852682" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b2d9c7ef-4e13-45ba-818b-5665a4854011/stream.mp3"/>
                
                <guid isPermaLink="false">55f0aea6-0221-44f8-a0ce-dd018f883563</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b2d9c7ef-4e13-45ba-818b-5665a4854011</link>
                <pubDate>Mon, 20 Apr 2026 06:00:23 &#43;0000</pubDate>
                <itunes:duration>365</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Test-Time Inference - The High Cost of Thinking</itunes:title>
                <title>Test-Time Inference - The High Cost of Thinking</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Inference is when the &#34;maths&#34; happens. We discuss the cost, latency, and hardware required to get an answer from a medical model in real-time.</p><p><br></p><p>#CloudComputing #Inference #HealthTech #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Inference is when the &amp;#34;maths&amp;#34; happens. We discuss the cost, latency, and hardware required to get an answer from a medical model in real-time.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#CloudComputing #Inference #HealthTech #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1838184" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/0d24a0d5-e06f-4169-838d-1a6d1ada07e7/stream.mp3"/>
                
                <guid isPermaLink="false">7446ae85-3a4f-4d8e-bffd-659cb806d57b</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/0d24a0d5-e06f-4169-838d-1a6d1ada07e7</link>
                <pubDate>Fri, 17 Apr 2026 06:00:31 &#43;0000</pubDate>
                <itunes:duration>114</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Multimodal AI - Seeing the X-Ray</itunes:title>
                <title>Multimodal AI - Seeing the X-Ray</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Language models can &#34;see.&#34; We discuss the transition from NLP to LVMs (Large Vision Models) in the radiology suite.</p><p><br></p><p>#Radiology #MultimodalAI #Imaging #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Language models can &amp;#34;see.&amp;#34; We discuss the transition from NLP to LVMs (Large Vision Models) in the radiology suite.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Radiology #MultimodalAI #Imaging #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1936822" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/8ff874c8-b9ea-4fa5-bf38-59062f1209f8/stream.mp3"/>
                
                <guid isPermaLink="false">68b767e9-c207-4749-bc5c-ae93d884af4b</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/8ff874c8-b9ea-4fa5-bf38-59062f1209f8</link>
                <pubDate>Thu, 16 Apr 2026 06:00:05 &#43;0000</pubDate>
                <itunes:duration>121</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>People Use AI Instead of Doctors (Here’s Why)</itunes:title>
                <title>People Use AI Instead of Doctors (Here’s Why)</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Are AI symptom checkers empowering patients or driving a dangerous crisis in clinical triage? Discover how artificial intelligence is fundamentally rewiring the front door of global healthcare.</span></p><p><br></p><p><span>Recent data reveals a massive behavioural shift in how patients access medical advice, with generative AI in medicine becoming the default first step for millions. This analysis breaks down the dual dynamic of AI symptom checking: how unregulated digital health tools are simultaneously causing patients to delay vital care through false reassurance, while driving others to seek unnecessary appointments due to health anxiety. We explore the critical gaps in current clinical outcomes data, the risks of using consumer LLMs in healthcare without proper validation, and why the future of health tech relies on integrating these tools safely into established NHS innovation and global triage pathways.</span></p><p><br></p><p><span>Link: https://www.axahealth.co.uk/news/2026/axa-health-research-shows-ai-is-driving-people-to-delay-care/  </span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• How AI is drastically altering patient behaviour, creating an &#34;AI Health Anxiety Loop&#34; that drives both delayed care and over-utilisation of resources.</span></p><p><span>• The critical limitations of current data, including the lack of peer-reviewed clinical outcomes and the potential commercial incentives of private healthcare reporting.</span></p><p><span>• The strategic path forward for integrating regulated healthcare AI into clinical workflows to empower patients while maintaining safe, human-in-the-loop triage.</span></p><p><br></p><p><span>00:00 – Intro: A scenario of AI use during a late-night health scare</span></p><p><span>00:27 – Introduction to the Axa Health survey data</span></p><p><span>00:58 – AI vs. 
official health sites: Statistics on user adoption</span></p><p><span>01:40 – The &#34;AI Health Anxiety Loop&#34; paradox</span></p><p><span>02:03 – AI’s impact on patient empowerment and medical literacy</span></p><p><span>02:46 – Critical analysis: Methodological limitations of survey data</span></p><p><span>03:55 – Validation issues and the risks of unregulated LLMs</span></p><p><span>04:40 – Understanding the commercial incentive structures of health insurers</span></p><p><span>05:26 – The future: Integrated AI-clinician triage pathways</span></p><p><span>06:50 – Summary: The transition from search to conversation</span></p><p><span>07:31 – Final conclusions and closing remarks</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthAI #DigitalHealth #MedicalTechnology #AISymptomChecker #ClinicalOutcomes #HealthTech #FutureOfMedicine #MedicalAI #NHS #HealthcareInnovation</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Are AI symptom checkers empowering patients or driving a dangerous crisis in clinical triage? Discover how artificial intelligence is fundamentally rewiring the front door of global healthcare.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Recent data reveals a massive behavioural shift in how patients access medical advice, with generative AI in medicine becoming the default first step for millions. This analysis breaks down the dual dynamic of AI symptom checking: how unregulated digital health tools are simultaneously causing patients to delay vital care through false reassurance, while driving others to seek unnecessary appointments due to health anxiety. We explore the critical gaps in current clinical outcomes data, the risks of using consumer LLMs in healthcare without proper validation, and why the future of health tech relies on integrating these tools safely into established NHS innovation and global triage pathways.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link: https://www.axahealth.co.uk/news/2026/axa-health-research-shows-ai-is-driving-people-to-delay-care/  &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• How AI is drastically altering patient behaviour, creating an &amp;#34;AI Health Anxiety Loop&amp;#34; that drives both delayed care and over-utilisation of resources.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The critical limitations of current data, including the lack of peer-reviewed clinical outcomes and the potential commercial incentives of private healthcare reporting.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The strategic path forward for integrating regulated healthcare AI into clinical workflows to empower patients while maintaining safe, human-in-the-loop triage.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 
– Intro: A scenario of AI use during a late-night health scare&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:27 – Introduction to the Axa Health survey data&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:58 – AI vs. official health sites: Statistics on user adoption&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:40 – The &amp;#34;AI Health Anxiety Loop&amp;#34; paradox&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:03 – AI’s impact on patient empowerment and medical literacy&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:46 – Critical analysis: Methodological limitations of survey data&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:55 – Validation issues and the risks of unregulated LLMs&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:40 – Understanding the commercial incentive structures of health insurers&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;05:26 – The future: Integrated AI-clinician triage pathways&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:50 – Summary: The transition from search to conversation&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;07:31 – Final conclusions and closing remarks&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthAI #DigitalHealth #MedicalTechnology #AISymptomChecker #ClinicalOutcomes #HealthTech #FutureOfMedicine #MedicalAI #NHS #HealthcareInnovation&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8786337" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/0607bfb9-a2b1-42d5-a93e-6debfacfeaba/stream.mp3"/>
                
                <guid isPermaLink="false">95cf2787-597c-4b08-9326-e817ba041dbd</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/0607bfb9-a2b1-42d5-a93e-6debfacfeaba</link>
                <pubDate>Wed, 15 Apr 2026 06:00:41 &#43;0000</pubDate>
                <itunes:duration>549</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Temperature &amp; Top-P - The Creativity Dial for Controlling the Chaos</itunes:title>
                <title>Temperature &amp; Top-P - The Creativity Dial for Controlling the Chaos</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Do you want a creative AI or a predictable one? We explain the settings that control how &#34;random&#34; your AI&#39;s medical advice becomes.</p><p><br></p><p>#AISettings #MachineLearning #TechTips #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Do you want a creative AI or a predictable one? We explain the settings that control how &amp;#34;random&amp;#34; your AI&amp;#39;s medical advice becomes.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#AISettings #MachineLearning #TechTips #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2037969" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3e42cefd-2b56-4ced-80fc-eb6090eb2791/stream.mp3"/>
                
                <guid isPermaLink="false">560c1f22-1d83-4918-9d7e-63d223c796ae</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3e42cefd-2b56-4ced-80fc-eb6090eb2791</link>
                <pubDate>Tue, 14 Apr 2026 06:00:24 &#43;0000</pubDate>
                <itunes:duration>127</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Meta Muse Spark - New Standard for Healthcare AI?</itunes:title>
                <title>Meta Muse Spark - New Standard for Healthcare AI?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Meta Muse Spark has just launched, signalling a pivot in Healthcare AI. Why is the tech giant stepping back from clinical diagnostics to focus entirely on multimodal wellness?</p><p>Following a multi-billion dollar restructure and the formation of the Meta Superintelligence Lab, Meta has released Muse Spark, a natively multimodal reasoning model. Unlike competitors that encourage users to upload full medical records, Muse Spark focuses purely on preventative health, nutrition, and wellness using an advanced &#34;Contemplating mode&#34; multi-agent architecture. This analysis explores the technical scaling behind the model, its physician-curated training data, and the early clinical stress tests that reveal a surprisingly measured, safe, and cautious approach to medical queries.</p><p>Key Takeaways:</p><p> • Understand the architecture of Muse Spark, including its multi-agent &#34;Contemplating mode&#34; and efficient pretraining scaling.</p><p> • Discover how Meta’s focus on visual wellness and nutrition significantly differs from the risky diagnostic approaches of competing health LLMs.</p><p> • Learn why models exhibiting &#34;evaluation awareness&#34; necessitate a new standard of independent clinical validation for health tech.</p><p> </p><p><strong>0:00</strong> Introduction to AI in healthcare</p><p><strong>0:27</strong> Meta’s Muse Spark: A departure from the industry trend</p><p><strong>1:01</strong> Muse Spark’s innovative architecture</p><p><strong>1:54</strong> Applications in wellness and healthcare</p><p><strong>3:15</strong> Clinical stress testing and comparative results</p><p><strong>4:54</strong> Safety analysis and &#34;evaluation awareness&#34;</p><p><strong>5:58</strong> Challenges in clinical validation</p><p><strong>7:01</strong> The future of AI-driven health education</p><p> </p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p> </p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p> </p><p>#HealthAI #MetaMuse #MuseSpark #MedicalTechnology #DigitalHealth #ArtificialIntelligence #ClinicalAI #HealthTech #FutureOfHealthcare #MedTech</p>]]></description>
                <content:encoded>&lt;p&gt;Meta Muse Spark has just launched, signalling a pivot in Healthcare AI. Why is the tech giant stepping back from clinical diagnostics to focus entirely on multimodal wellness?&lt;/p&gt;&lt;p&gt;Following a multi-billion dollar restructure and the formation of the Meta Superintelligence Lab, Meta has released Muse Spark, a natively multimodal reasoning model. Unlike competitors that encourage users to upload full medical records, Muse Spark focuses purely on preventative health, nutrition, and wellness using an advanced &amp;#34;Contemplating mode&amp;#34; multi-agent architecture. This analysis explores the technical scaling behind the model, its physician-curated training data, and the early clinical stress tests that reveal a surprisingly measured, safe, and cautious approach to medical queries.&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt; • Understand the architecture of Muse Spark, including its multi-agent &amp;#34;Contemplating mode&amp;#34; and efficient pretraining scaling.&lt;/p&gt;&lt;p&gt; • Discover how Meta’s focus on visual wellness and nutrition significantly differs from the risky diagnostic approaches of competing health LLMs.&lt;/p&gt;&lt;p&gt; • Learn why models exhibiting &amp;#34;evaluation awareness&amp;#34; necessitate a new standard of independent clinical validation for health tech.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;&lt;strong&gt;0:00&lt;/strong&gt; Introduction to AI in healthcare&lt;/p&gt;&lt;p&gt;&lt;strong&gt;0:27&lt;/strong&gt; Meta’s Muse Spark: A departure from the industry trend&lt;/p&gt;&lt;p&gt;&lt;strong&gt;1:01&lt;/strong&gt; Muse Spark’s innovative architecture&lt;/p&gt;&lt;p&gt;&lt;strong&gt;1:54&lt;/strong&gt; Applications in wellness and healthcare&lt;/p&gt;&lt;p&gt;&lt;strong&gt;3:15&lt;/strong&gt; Clinical stress testing and comparative results&lt;/p&gt;&lt;p&gt;&lt;strong&gt;4:54&lt;/strong&gt; Safety analysis and &amp;#34;evaluation 
awareness&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;5:58&lt;/strong&gt; Challenges in clinical validation&lt;/p&gt;&lt;p&gt;&lt;strong&gt;7:01&lt;/strong&gt; The future of AI-driven health education&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;#HealthAI #MetaMuse #MuseSpark #MedicalTechnology #DigitalHealth #ArtificialIntelligence #ClinicalAI #HealthTech #FutureOfHealthcare #MedTech&lt;/p&gt;</content:encoded>
                
                <enclosure length="8797622" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b5e6e0e4-ac80-41ab-b40a-fa4a65d46897/stream.mp3"/>
                
                <guid isPermaLink="false">1173fc16-0683-4174-a389-88a80870cdd2</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b5e6e0e4-ac80-41ab-b40a-fa4a65d46897</link>
                <pubDate>Mon, 13 Apr 2026 06:00:24 &#43;0000</pubDate>
                <itunes:duration>549</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Vector Databases - The AI&#39;s Filing Cabinet</itunes:title>
                <title>Vector Databases - The AI&#39;s Filing Cabinet</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Where does the AI look things up? A deep dive into Vector Databases, the storage systems that make RAG possible.</p><p><br></p><p>#DataArchitecture #VectorDatabase #HealthIT #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Where does the AI look things up? A deep dive into Vector Databases, the storage systems that make RAG possible.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DataArchitecture #VectorDatabase #HealthIT #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1934315" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3ce6c4ef-5c65-43e4-93c0-cb556cc4f43c/stream.mp3"/>
                
                <guid isPermaLink="false">7f4396eb-231b-4487-8fee-7fc0bd51b187</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3ce6c4ef-5c65-43e4-93c0-cb556cc4f43c</link>
                <pubDate>Fri, 10 Apr 2026 06:00:46 &#43;0000</pubDate>
                <itunes:duration>120</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>System Prompts - How Secret Instructions Define AI Personality</itunes:title>
                <title>System Prompts - How Secret Instructions Define AI Personality</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>&#34;You are a world-class radiologist...&#34; Learn how the &#34;System Prompt&#34; sets the guardrails and the tone for every AI interaction.</p><p><br></p><p>#PromptEngineering #DeveloperTips #MedicalAI #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;&amp;#34;You are a world-class radiologist...&amp;#34; Learn how the &amp;#34;System Prompt&amp;#34; sets the guardrails and the tone for every AI interaction.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#PromptEngineering #DeveloperTips #MedicalAI #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1902550" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/85b65999-a126-49d7-9d82-d6e4844cb202/stream.mp3"/>
                
                <guid isPermaLink="false">bb3073a3-8bdc-4cd1-b716-54dba8007c04</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/85b65999-a126-49d7-9d82-d6e4844cb202</link>
                <pubDate>Thu, 09 Apr 2026 06:00:37 &#43;0000</pubDate>
                <itunes:duration>118</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Scribes in 2026: What Every Leader Needs to Know</itunes:title>
                <title>AI Scribes in 2026: What Every Leader Needs to Know</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI Scribes in 2026: What Every Leader Needs to Know</p><p><br></p><p>Discover which Medical AI Scribe actually fits your workflow in 2026. This comprehensive deep dive analyses the global landscape of Ambient Clinical Intelligence, comparing heavyweights like Nuance DAX and Abridge against agile disruptors like Suki, Nabla, and Heidi Health.</p><p><br></p><p>We break down the four tiers of AI scribing technology, moving beyond marketing hype to examine the technical architecture, integration depth, and the critical governance risks facing clinicians in the UK and beyond. Learn why &#34;Shadow AI&#34; is a professional liability and how to choose a platform that balances HIPAA/GDPR compliance with clinical efficiency.</p><p><br></p><p>Key Takeaways</p><p>• Strategic Comparison - Pros and cons of Nuance, Abridge, Suki, Nabla, and Freed for different clinical environments.</p><p>• Learn the difference between Enterprise Native systems and &#34;Agentic&#34; Clinical Assistants.</p><p>• The Governance Trap - Why using personal AI scribe accounts in a clinical setting can be a professional risk.</p><p><br></p><p>0:00 The &#34;Administrative Tax&#34; on Clinicians</p><p>0:31 What is an AI Scribe?</p><p>1:52 Tier 1: Enterprise AI (Nuance DAX &amp; Abridge)</p><p>2:45 Solving the &#34;Black Box&#34; Problem with Linked Evidence</p><p>3:38 Oracle Health: The Future of Integration?</p><p>4:27 Automated Medical Coding &amp; Audit Risks</p><p>5:00 Tier 2: AI Clinical Assistants (Suki)</p><p>5:33 Tier 3: Solo Specialist Tools (Freed, Heidi Health, Nabla)</p><p>6:19 Infrastructure Challenges: Wi-Fi vs Cellular</p><p>7:00 Personal Devices vs Managed Hardware</p><p>7:28 Digital Exhaust: Should You Keep Raw Patient Audio?</p><p>8:45 The Danger of &#34;Shadow AI&#34; in Health Systems Like the NHS</p><p>9:53 HIPAA vs BAA: Legal Risks in the USA</p><p>11:06 Who is Liable for AI Hallucinations?</p><p>12:12 Patient Privacy &amp; Algorithmic 
Bias</p><p>13:14 Global Regulations (Canada &amp; UK Specifics)</p><p>13:45 Tier 4: Specialty Tuned AI (Oncology &amp; Cardiology)</p><p>14:10 The Productivity Paradox: Does AI Actually Save Time?</p><p>15:19 3 Power User Tips for AI Scribes</p><p>16:16 Why You Need to Narrate Your Care</p><p>16:55 Summary: How to Choose the Right AI Scribe</p><p><br></p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>#MedicalAI #AIScribe #HealthTech #ClinicalDocumentation #NHS #HealthAI #NuanceDAX #AbridgeAI #SukiAI #MedicalInnovation</p>]]></description>
                <content:encoded>&lt;p&gt;AI Scribes in 2026: What Every Leader Needs to Know&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Discover which Medical AI Scribe actually fits your workflow in 2026. This comprehensive deep dive analyses the global landscape of Ambient Clinical Intelligence, comparing heavyweights like Nuance DAX and Abridge against agile disruptors like Suki, Nabla, and Heidi Health.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We break down the four tiers of AI scribing technology, moving beyond marketing hype to examine the technical architecture, integration depth, and the critical governance risks facing clinicians in the UK and beyond. Learn why &amp;#34;Shadow AI&amp;#34; is a professional liability and how to choose a platform that balances HIPAA/GDPR compliance with clinical efficiency.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways&lt;/p&gt;&lt;p&gt;• Strategic Comparison - Pros and cons of Nuance, Abridge, Suki, Nabla, and Freed for different clinical environments.&lt;/p&gt;&lt;p&gt;• Learn the difference between Enterprise Native systems and &amp;#34;Agentic&amp;#34; Clinical Assistants.&lt;/p&gt;&lt;p&gt;• The Governance Trap - Why using personal AI scribe accounts in a clinical setting can be a professional risk.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 The &amp;#34;Administrative Tax&amp;#34; on Clinicians&lt;/p&gt;&lt;p&gt;0:31 What is an AI Scribe?&lt;/p&gt;&lt;p&gt;1:52 Tier 1: Enterprise AI (Nuance DAX &amp;amp; Abridge)&lt;/p&gt;&lt;p&gt;2:45 Solving the &amp;#34;Black Box&amp;#34; Problem with Linked Evidence&lt;/p&gt;&lt;p&gt;3:38 Oracle Health: The Future of Integration?&lt;/p&gt;&lt;p&gt;4:27 Automated Medical Coding &amp;amp; Audit Risks&lt;/p&gt;&lt;p&gt;5:00 Tier 2: AI Clinical Assistants (Suki)&lt;/p&gt;&lt;p&gt;5:33 Tier 3: Solo Specialist Tools (Freed, Heidi Health, Nabla)&lt;/p&gt;&lt;p&gt;6:19 Infrastructure Challenges: Wi-Fi vs Cellular&lt;/p&gt;&lt;p&gt;7:00 Personal Devices vs 
Managed Hardware&lt;/p&gt;&lt;p&gt;7:28 Digital Exhaust: Should You Keep Raw Patient Audio?&lt;/p&gt;&lt;p&gt;8:45 The Danger of &amp;#34;Shadow AI&amp;#34; in Health Systems Like the NHS&lt;/p&gt;&lt;p&gt;9:53 HIPAA vs BAA: Legal Risks in the USA&lt;/p&gt;&lt;p&gt;11:06 Who is Liable for AI Hallucinations?&lt;/p&gt;&lt;p&gt;12:12 Patient Privacy &amp;amp; Algorithmic Bias&lt;/p&gt;&lt;p&gt;13:14 Global Regulations (Canada &amp;amp; UK Specifics)&lt;/p&gt;&lt;p&gt;13:45 Tier 4: Specialty Tuned AI (Oncology &amp;amp; Cardiology)&lt;/p&gt;&lt;p&gt;14:10 The Productivity Paradox: Does AI Actually Save Time?&lt;/p&gt;&lt;p&gt;15:19 3 Power User Tips for AI Scribes&lt;/p&gt;&lt;p&gt;16:16 Why You Need to Narrate Your Care&lt;/p&gt;&lt;p&gt;16:55 Summary: How to Choose the Right AI Scribe&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MedicalAI #AIScribe #HealthTech #ClinicalDocumentation #NHS #HealthAI #NuanceDAX #AbridgeAI #SukiAI #MedicalInnovation&lt;/p&gt;</content:encoded>
                
                <enclosure length="17919164" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/7e1f8cd7-9852-4391-92b0-2ae4d4419958/stream.mp3"/>
                
                <guid isPermaLink="false">b1cb9444-f8d7-41e5-8ef6-dd4fcf868b65</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/7e1f8cd7-9852-4391-92b0-2ae4d4419958</link>
                <pubDate>Wed, 08 Apr 2026 06:00:28 &#43;0000</pubDate>
                <itunes:duration>1119</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Scribes Worth It? - New JAMA Study Analysis</itunes:title>
                <title>AI Scribes Worth It? - New JAMA Study Analysis</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Does AI documentation actually save time, or is it just shifting the burden? We analyse the 2026 JAMA multisite study of 8,500+ clinicians using ambient AI scribes in real-world settings.</p><p><br></p><p>This analysis looks at the data from five major academic health centers to determine the actual impact of AI on clinical workflows. We explore why Primary Care saw 25-minute savings while other specialties saw far less, and we address the critical questions regarding resident physicians, documentation errors, and the &#34;edit threshold&#34; for formal medical records.</p><p><br></p><p>Reference:</p><p>- https://jamanetwork.com/journals/jama/article-abstract/2847319</p><p>- DOI: https://doi.org/10.1001/jama.2026.2253</p><p>- Title: Changes in Clinician Time Expenditure and Visit Quantity With Adoption of Artificial Intelligence–Powered Scribes: A Multisite Study, by Rotenstein et al. JAMA 2026</p><p><br></p><p>Key Takeaways:</p><p>• Specialty Split: Primary Care clinicians saved double the time of secondary care specialists, potentially due to lower &#34;edit thresholds&#34; for internal notes.</p><p>• The Resident Factor: Residents saved 94 minutes, raising questions about whether they are checking output or simply trusting the AI.</p><p>• The Rework Risk: Current data only goes up to 5 months, leaving the long-term impact on documentation accuracy and patient safety unknown.</p><p><br></p><p>00:00 - 00:22: Introduction to the large-scale real-world study on AI medical scribes.</p><p>00:22 - 00:40: Initial results: Time savings vs. 
quality and safety concerns.</p><p>00:40 - 01:15: Study methodology (Difference-in-difference approach) and average reductions.</p><p>01:15 - 01:44: Breakdown of benefits for primary care, residents, and female physicians.</p><p>01:44 - 02:48: Why primary care clinicians see more benefits than specialists.</p><p>02:48 - 04:03: Resident physicians: Significant savings and accountability questions.</p><p>04:03 - 04:50: Limitations of the research: Downstream consequences and note quality.</p><p>04:50 - 05:35: Long-term sustainability: Proficiency vs. complacency.</p><p>05:35 - 06:33: Adoption bias and the impact on broader clinical populations.</p><p>06:33 - 07:12: Analysis of gender-specific findings in time savings.</p><p>07:12 - 08:03: Summary: AI scribing as a tool with potential but unresolved risks.</p><p><br></p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>#AIScribes #HealthAI #ClinicalDocumentation #JAMA #EHR #MedicalInformatics #PrimaryCare #HealthTech #PatientSafety #HealthcareInnovation</p>]]></description>
                <content:encoded>&lt;p&gt;Does AI documentation actually save time, or is it just shifting the burden? We analyse the 2026 JAMA multisite study of 8,500&#43; clinicians using ambient AI scribes in real-world settings.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This analysis looks at the data from five major academic health centers to determine the actual impact of AI on clinical workflows. We explore why Primary Care saw 25-minute savings while other specialties saw far less, and we address the critical questions regarding resident physicians, documentation errors, and the &amp;#34;edit threshold&amp;#34; for formal medical records.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Reference:&lt;/p&gt;&lt;p&gt;- https://jamanetwork.com/journals/jama/article-abstract/2847319&lt;/p&gt;&lt;p&gt;- DOI: https://doi.org/10.1001/jama.2026.2253&lt;/p&gt;&lt;p&gt;- Title: Changes in Clinician Time Expenditure and Visit Quantity With Adoption of Artificial Intelligence–Powered Scribes: A Multisite Study, by Rotenstein et al. JAMA 2026&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Specialty Split: Primary Care clinicians saved double the time of secondary care specialists, potentially due to lower &amp;#34;edit thresholds&amp;#34; for internal notes.&lt;/p&gt;&lt;p&gt;• The Resident Factor: Residents saved 94 minutes, raising questions about whether they are checking output or simply trusting the AI.&lt;/p&gt;&lt;p&gt;• The Rework Risk: Current data only goes up to 5 months, leaving the long-term impact on documentation accuracy and patient safety unknown.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 - 00:22: Introduction to the large-scale real-world study on AI medical scribes.&lt;/p&gt;&lt;p&gt;00:22 - 00:40: Initial results: Time savings vs. 
quality and safety concerns.&lt;/p&gt;&lt;p&gt;00:40 - 01:15: Study methodology (Difference-in-difference approach) and average reductions.&lt;/p&gt;&lt;p&gt;01:15 - 01:44: Breakdown of benefits for primary care, residents, and female physicians.&lt;/p&gt;&lt;p&gt;01:44 - 02:48: Why primary care clinicians see more benefits than specialists.&lt;/p&gt;&lt;p&gt;02:48 - 04:03: Resident physicians: Significant savings and accountability questions.&lt;/p&gt;&lt;p&gt;04:03 - 04:50: Limitations of the research: Downstream consequences and note quality.&lt;/p&gt;&lt;p&gt;04:50 - 05:35: Long-term sustainability: Proficiency vs. complacency.&lt;/p&gt;&lt;p&gt;05:35 - 06:33: Adoption bias and the impact on broader clinical populations.&lt;/p&gt;&lt;p&gt;06:33 - 07:12: Analysis of gender-specific findings in time savings.&lt;/p&gt;&lt;p&gt;07:12 - 08:03: Summary: AI scribing as a tool with potential but unresolved risks.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#AIScribes #HealthAI #ClinicalDocumentation #JAMA #EHR #MedicalInformatics #PrimaryCare #HealthTech #PatientSafety #HealthcareInnovation&lt;/p&gt;</content:encoded>
                
                <enclosure length="7707585" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3cec6792-3ef3-4c6d-b85b-cee0238d2f92/stream.mp3"/>
                
                <guid isPermaLink="false">7a7db33b-f9a5-462d-8ee3-fdf138121fac</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3cec6792-3ef3-4c6d-b85b-cee0238d2f92</link>
                <pubDate>Tue, 07 Apr 2026 06:00:09 &#43;0000</pubDate>
                <itunes:duration>481</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Heidi Remote - Dedicated Hardware for Ambient Clinical AI Scribe</itunes:title>
                <title>Heidi Remote - Dedicated Hardware for Ambient Clinical AI Scribe</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Stop fighting with hospital Wi-Fi and start focusing on your patients. Heidi Remote is the first dedicated wearable AI microphone designed to eliminate the integration tax of using smartphones for clinical documentation.</span></p><p><br></p><p><span>The Heidi Remote is a purpose-built, medical-grade peripheral designed to optimize audio capture for ambient AI scribing. By moving the recording process to a dedicated, offline-capable device, clinicians can overcome common hurdles like battery drain, connectivity &#34;dead zones,&#34; and background noise in busy wards. This deep dive analyzes the hardware specs, the strategic shift from software to &#34;embodied AI,&#34; and the governance implications for NHS and global healthcare systems.</span></p><p><br></p><p><span>Reference: https://www.heidihealth.com/en-gb/hardware</span></p><p><br></p><p><span>Key Takeaways</span></p><p><span>• Hardware Reliability: Why 14-hour battery life and offline recording modes are essential for high-mobility clinical roles like ward rounds and ED.</span></p><p><span>• Transcription Fidelity: How dedicated 360° omnidirectional microphones improve the signal-to-noise ratio, leading to more accurate AI-generated clinical notes.</span></p><p><span>• Governance &amp; Security: An analysis of the ISO 27001 and SOC 2 compliance frameworks that make dedicated hardware easier for hospital IG leads to approve compared to personal devices.</span></p><p><br></p><p><span>0:00 - Challenges of AI scribes in hospital environments (connectivity and interference)</span></p><p><span>0:40 - Introduction to Heidi Remote: A strategic hardware pivot</span></p><p><span>1:04 - Product specs: Weight, 360-degree audio, and noise reduction</span></p><p><span>1:59 - Durability, hygiene, and battery life for clinical shifts</span></p><p><span>2:19 - Professional workflow vs. 
consumer AI gadgets</span></p><p><span>3:01 - Moving toward on-premise AI infrastructure and data security</span></p><p><span>4:43 - Governance, ISO certification, and hardware pricing</span></p><p><span>6:18 - Impact on patient-clinician trust and eye contact</span></p><p><span>7:34 - Current limitations: iOS support and EHR integration</span></p><p><span>8:32 - Conclusion: The shift toward embodied AI tools in healthcare</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthAI #DigitalHealth #HeidiHealth #AIScribe #MedicalTech #NHS #HealthTech #AmbientAI #ClinicalWorkflow #HeidiRemote</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Stop fighting with hospital Wi-Fi and start focusing on your patients. Heidi Remote is the first dedicated wearable AI microphone designed to eliminate the integration tax of using smartphones for clinical documentation.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;The Heidi Remote is a purpose-built, medical-grade peripheral designed to optimize audio capture for ambient AI scribing. By moving the recording process to a dedicated, offline-capable device, clinicians can overcome common hurdles like battery drain, connectivity &amp;#34;dead zones,&amp;#34; and background noise in busy wards. This deep dive analyzes the hardware specs, the strategic shift from software to &amp;#34;embodied AI,&amp;#34; and the governance implications for NHS and global healthcare systems.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Reference: https://www.heidihealth.com/en-gb/hardware&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Hardware Reliability: Why 14-hour battery life and offline recording modes are essential for high-mobility clinical roles like ward rounds and ED.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Transcription Fidelity: How dedicated 360° omnidirectional microphones improve the signal-to-noise ratio, leading to more accurate AI-generated clinical notes.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Governance &amp;amp; Security: An analysis of the ISO 27001 and SOC 2 compliance frameworks that make dedicated hardware easier for hospital IG leads to approve compared to personal devices.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:00 - Challenges of AI scribes in hospital environments (connectivity and interference)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:40 - Introduction to Heidi Remote: A strategic hardware 
pivot&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:04 - Product specs: Weight, 360-degree audio, and noise reduction&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:59 - Durability, hygiene, and battery life for clinical shifts&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:19 - Professional workflow vs. consumer AI gadgets&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;3:01 - Moving toward on-premise AI infrastructure and data security&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:43 - Governance, ISO certification, and hardware pricing&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;6:18 - Impact on patient-clinician trust and eye contact&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;7:34 - Current limitations: iOS support and EHR integration&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;8:32 - Conclusion: The shift toward embodied AI tools in healthcare&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthAI #DigitalHealth #HeidiHealth #AIScribe #MedicalTech #NHS #HealthTech #AmbientAI #ClinicalWorkflow #HeidiRemote&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8936803" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/7abbaefe-11af-4a39-aa26-a872505e1e3d/stream.mp3"/>
                
                <guid isPermaLink="false">d0ecc7e1-1208-4806-82e6-51ffca4591c9</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/7abbaefe-11af-4a39-aa26-a872505e1e3d</link>
                <pubDate>Mon, 06 Apr 2026 06:00:07 &#43;0000</pubDate>
                <itunes:duration>558</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>RAG - Ending Hallucinations and Confabulation with Retrieval Augmented Generation</itunes:title>
                <title>RAG - Ending Hallucinations and Confabulation with Retrieval Augmented Generation</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Retrieval-Augmented Generation (RAG) is more than just a search bar; it&#39;s a multi-stage pipeline that ensures AI remains grounded in fact. We break down the mechanics of Vector Databases, Embeddings, and why RAG is the cure for AI &#34;hallucinations.&#34;</p><p><br></p><p>#RAG #MedicalAI #Bioinformatics #HealthTech #AIinMedicine</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Retrieval-Augmented Generation (RAG) is more than just a search bar; it&amp;#39;s a multi-stage pipeline that ensures AI remains grounded in fact. We break down the mechanics of Vector Databases, Embeddings, and why RAG is the cure for AI &amp;#34;hallucinations.&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#RAG #MedicalAI #Bioinformatics #HealthTech #AIinMedicine&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2882246" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/14ba4a7c-35a2-49b2-a1b3-f371d81ad0b8/stream.mp3"/>
                
                <guid isPermaLink="false">7843fb0c-bf22-4501-a342-233d5a9befe6</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/14ba4a7c-35a2-49b2-a1b3-f371d81ad0b8</link>
                <pubDate>Fri, 03 Apr 2026 06:00:56 &#43;0000</pubDate>
                <itunes:duration>180</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Doctronic AI is Legally Prescribing Drugs (And Doctors Agree 99% of the Time)</itunes:title>
                <title>Doctronic AI is Legally Prescribing Drugs (And Doctors Agree 99% of the Time)</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Is an AI legally allowed to write your prescription? The $40M medical loophole explained.</span></p><p><br></p><p><span>Doctronic just raised $40 million for an autonomous AI doctor, but a deep dive into their clinical data reveals a controversial regulatory strategy.</span></p><p><br></p><p><span>In this episode, we deconstruct the technology behind Doctronic, the multi-agent AI system that is currently piloting autonomous prescription renewals in the US. We analyse the Chief Medical Officer&#39;s claim that their AI is a &#34;practitioner&#34; rather than a medical device, exposing the regulatory loophole they are using to bypass FDA scrutiny. We also break down their recent clinical preprint claiming a 99.2% match with human doctors, highlighting the critical study limitations like anchoring bias, and review recent security vulnerabilities involving prompt injection and SOAP note manipulation.</span></p><p><br></p><p><span>Reference:</span></p><p><span>- https://doi.org/10.1101/2025.07.14.25331406 </span></p><p><span>- Link: www.medrxiv.org/content/10.1101/2025.07.14.25331406v1</span></p><p><span>- Title: Toward the Autonomous AI Doctor: Quantitative Benchmarking of an Autonomous Agentic AI Versus Board-Certified Clinicians in a Real World Setting</span></p><p><span>- Hayat H et al. 
2025</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• Understand the &#34;Multi-Agent&#34; LLM architecture that allows Doctronic to mimic a primary care team and generate zero-hallucination SOAP notes.</span></p><p><span>• Learn how HealthTech startups are using state-level &#34;practice of medicine&#34; laws and malpractice insurance to bypass FDA Software as a Medical Device (SaMD) regulations.</span></p><p><span>• Discover the critical methodological flaw (anchoring bias) in Doctronic&#39;s clinical study that inflates their 99.2% human concordance claim.</span></p><p><br></p><p><span>0:00 - Intro</span></p><p><span>0:58 - Doctronic’s Multi-Agent LLM System</span></p><p><span>2:00 - Regulatory Strategy: AI as a ‘Practitioner’</span></p><p><span>4:12 - Security Vulnerabilities</span></p><p><span>5:18 - Deep Dive: Doctronic’s Clinical Study</span></p><p><span>6:33 - AI vs Human Management Plans</span></p><p><span>8:00 - Considering the Methodology</span></p><p><span>10:20 - The Promise of AI in Healthcare</span></p><p><span>11:31 - The Risks of Premature Autonomy</span></p><p><span>12:08 - A Safer Path Forward</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthTech #ArtificialIntelligence #DigitalHealth #MedicalAI #Doctronic #HealthcareInnovation #MachineLearning #MedTech #ClinicalAI #FutureOfMedicine</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Is an AI legally allowed to write your prescription? The $40M medical loophole explained.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Doctronic just raised $40 million for an autonomous AI doctor, but a deep dive into their clinical data reveals a controversial regulatory strategy.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;In this episode, we deconstruct the technology behind Doctronic, the multi-agent AI system that is currently piloting autonomous prescription renewals in the US. We analyse the Chief Medical Officer&amp;#39;s claim that their AI is a &amp;#34;practitioner&amp;#34; rather than a medical device, exposing the regulatory loophole they are using to bypass FDA scrutiny. We also break down their recent clinical preprint claiming a 99.2% match with human doctors, highlighting the critical study limitations like anchoring bias, and review recent security vulnerabilities involving prompt injection and SOAP note manipulation.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Reference:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- https://doi.org/10.1101/2025.07.14.25331406 &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- Link: www.medrxiv.org/content/10.1101/2025.07.14.25331406v1&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- Title: Toward the Autonomous AI Doctor: Quantitative Benchmarking of an Autonomous Agentic AI Versus Board-Certified Clinicians in a Real World Setting&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- Hayat H et al. 
2025&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Understand the &amp;#34;Multi-Agent&amp;#34; LLM architecture that allows Doctronic to mimic a primary care team and generate zero-hallucination SOAP notes.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Learn how HealthTech startups are using state-level &amp;#34;practice of medicine&amp;#34; laws and malpractice insurance to bypass FDA Software as a Medical Device (SaMD) regulations.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Discover the critical methodological flaw (anchoring bias) in Doctronic&amp;#39;s clinical study that inflates their 99.2% human concordance claim.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:00 - Intro&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:58 - Doctronic’s Multi-Agent LLM System&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:00 - Regulatory Strategy: AI as a ‘Practitioner’&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:12 - Security Vulnerabilities&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;5:18 - Deep Dive: Doctronic’s Clinical Study&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;6:33 - AI vs Human Management Plans&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;8:00 - Considering the Methodology&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;10:20 - The Promise of AI in Healthcare&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;11:31 - The Risks of Premature Autonomy&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;12:08 - A Safer Path Forward&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthTech #ArtificialIntelligence #DigitalHealth #MedicalAI #Doctronic #HealthcareInnovation #MachineLearning #MedTech #ClinicalAI #FutureOfMedicine&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="13627977" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6fdb00cf-cda8-430d-a23b-c1b013dbc8ba/stream.mp3"/>
                
                <guid isPermaLink="false">7ad84666-8ee2-43e5-9bf5-e7a0232e84b3</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6fdb00cf-cda8-430d-a23b-c1b013dbc8ba</link>
                <pubDate>Thu, 02 Apr 2026 06:00:14 &#43;0000</pubDate>
                <itunes:duration>851</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How to Spot AI Writing</itunes:title>
                <title>How to Spot AI Writing</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Are you reading human insight or an algorithm’s prediction? The &#34;AI Tell&#34; is a structural signature that reveals the machine hiding in plain sight. In this episode, we consider the grammatical and formatting fingerprints of modern generative AI to help you regain your critical edge.</p><p>Video for more on AI use for work: <a href="https://youtu.be/5aHIBl4hNSA" rel="nofollow">https://youtu.be/5aHIBl4hNSA</a></p><p><em>Key Takeaways</em></p><ul><li>Identify the common structural fingerprints of LLMs, including specific punctuation glitches like the &#34;e.g.,&#34; comma and the vertical ± symbol.</li><li>Understand the &#34;Why&#34;: Why AI is architecturally incapable of avoiding generic, overly polite, and &#34;safe&#34; language.</li><li>Develop a forensic approach to evaluating information that protects you from automation bias and synthetic content.</li></ul><p><br></p><p><strong>00:00</strong> Is it AI?</p><p> <strong>00:34</strong> The High-Stakes Game of Detective</p><p> <strong>01:00</strong> Are Hallucinations a Tell?</p><p> <strong>01:14</strong> Why AI Doesn&#39;t Make Typo Mistakes</p><p> <strong>01:43</strong> Specific Rhythmic Rigidity</p><p> <strong>02:07</strong> Formatting Over Language</p><p> <strong>02:47</strong> Sensational Language</p><p> <strong>03:04</strong> Specific Smaller Indicators</p><p> <strong>03:36</strong> Symbol Usage</p><p> <strong>04:11</strong> The Comma After &#34;e.g.&#34;</p><p> <strong>04:29</strong> Excessive Quotation Marks</p><p> <strong>04:49</strong> Unnatural Polish</p><p> <strong>05:19</strong> Conclusion</p><p> </p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p>#AI #DigitalLiteracy #CriticalThinking #GenerativeAI #InformationQuality #TechForensics #Communication #HumanVsAI #Algorithm #CognitiveSkills</p>]]></description>
                <content:encoded>&lt;p&gt;Are you reading human insight or an algorithm’s prediction? The &amp;#34;AI Tell&amp;#34; is a structural signature that reveals the machine hiding in plain sight. In this episode, we consider the grammatical and formatting fingerprints of modern generative AI to help you regain your critical edge.&lt;/p&gt;&lt;p&gt;Video for more on AI use for work: &lt;a href=&#34;https://youtu.be/5aHIBl4hNSA&#34; rel=&#34;nofollow&#34;&gt;https://youtu.be/5aHIBl4hNSA&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Key Takeaways&lt;/em&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Identify the common structural fingerprints of LLMs, including specific punctuation glitches like the &amp;#34;e.g.,&amp;#34; comma and the vertical ± symbol.&lt;/li&gt;&lt;li&gt;Understand the &amp;#34;Why&amp;#34;: Why AI is architecturally incapable of avoiding generic, overly polite, and &amp;#34;safe&amp;#34; language.&lt;/li&gt;&lt;li&gt;Develop a forensic approach to evaluating information that protects you from automation bias and synthetic content.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;00:00&lt;/strong&gt; Is it AI?&lt;/p&gt;&lt;p&gt; &lt;strong&gt;00:34&lt;/strong&gt; The High-Stakes Game of Detective&lt;/p&gt;&lt;p&gt; &lt;strong&gt;01:00&lt;/strong&gt; Are Hallucinations a Tell?&lt;/p&gt;&lt;p&gt; &lt;strong&gt;01:14&lt;/strong&gt; Why AI Doesn&amp;#39;t Make Typo Mistakes&lt;/p&gt;&lt;p&gt; &lt;strong&gt;01:43&lt;/strong&gt; Specific Rhythmic Rigidity&lt;/p&gt;&lt;p&gt; &lt;strong&gt;02:07&lt;/strong&gt; Formatting Over Language&lt;/p&gt;&lt;p&gt; &lt;strong&gt;02:47&lt;/strong&gt; Sensational Language&lt;/p&gt;&lt;p&gt; &lt;strong&gt;03:04&lt;/strong&gt; Specific Smaller Indicators&lt;/p&gt;&lt;p&gt; &lt;strong&gt;03:36&lt;/strong&gt; Symbol Usage&lt;/p&gt;&lt;p&gt; &lt;strong&gt;04:11&lt;/strong&gt; The Comma After &amp;#34;e.g.&amp;#34;&lt;/p&gt;&lt;p&gt; &lt;strong&gt;04:29&lt;/strong&gt; Excessive Quotation Marks&lt;/p&gt;&lt;p&gt; 
&lt;strong&gt;04:49&lt;/strong&gt; Unnatural Polish&lt;/p&gt;&lt;p&gt; &lt;strong&gt;05:19&lt;/strong&gt; Conclusion&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;#AI #DigitalLiteracy #CriticalThinking #GenerativeAI #InformationQuality #TechForensics #Communication #HumanVsAI #Algorithm #CognitiveSkills&lt;/p&gt;</content:encoded>
                
                <enclosure length="5929586" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/adf9f6c8-6bf7-4907-bdfc-a27158483024/stream.mp3"/>
                
                <guid isPermaLink="false">7f21b43e-9f63-4206-9b3a-4a8e01318c7b</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/adf9f6c8-6bf7-4907-bdfc-a27158483024</link>
                <pubDate>Wed, 01 Apr 2026 06:00:07 &#43;0000</pubDate>
                <itunes:duration>370</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Chain-of-Thought (CoT) - Making The AI &#39;Think&#39; Out Loud</itunes:title>
                <title>Chain-of-Thought (CoT) - Making The AI &#39;Think&#39; Out Loud</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>If you ask an AI for a diagnosis, it might guess. If you ask it to &#34;think step-by-step,&#34; it becomes a genius. We explain CoT prompting.</p><p><br></p><p>#Logic #Reasoning #AI #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;If you ask an AI for a diagnosis, it might guess. If you ask it to &amp;#34;think step-by-step,&amp;#34; it becomes a genius. We explain CoT prompting.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Logic #Reasoning #AI #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1672254" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/32a297bf-23f1-4b9e-9d45-2f90edd9eb08/stream.mp3"/>
                
                <guid isPermaLink="false">d4fc4166-87c6-46d2-ac93-249ee7357e5c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/32a297bf-23f1-4b9e-9d45-2f90edd9eb08</link>
                <pubDate>Tue, 31 Mar 2026 06:00:39 &#43;0000</pubDate>
                <itunes:duration>104</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Why This AI Failed to Catch 17 Fatal &#34;Never Events&#34; - Validating NG tube Position AI</itunes:title>
                <title>Why This AI Failed to Catch 17 Fatal &#34;Never Events&#34; - Validating NG tube Position AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Can AI prevent &#34;Never Events&#34; in NG feeding tube placement, or is it creating new risks? This deep dive into the latest NEJM AI prospective silent trial reveals the startling truth about current AI performance in the NHS.</p><p><br></p><p>We analyse the 1-year validation of a leading AI tool for nasogastric tube verification. While the tech promises to eliminate human error, real-world data shows significant hurdles in sensitivity, specificity, and demographic bias that every clinician and health manager needs to understand before deployment.</p><p><br></p><p>References</p><p>Link to the research: https://ai.nejm.org/doi/full/10.1056/AIoa2500823</p><p>Title: External Validation of a Commercially Available AI Tool for Nasogastric Tube Position Decision Support in the NHS: A Prospective Silent Trial</p><p>Authors: Bartsch et al.</p><p><br></p><p>Key Takeaways</p><p>• Why a 0.17 Positive Predictive Value (PPV) triggers dangerous alert fatigue in clinical settings.</p><p>• Analysis of the 17 &#34;False Negative&#34; misses—why AI struggled with coiled tubes and complex anatomy.</p><p>• The strategic roadmap: Why &#34;Silent Trials&#34; are the essential bridge between CE certification and patient safety.</p><p><br></p><p>00:00 Introduction</p><p>00:16 Significance of the Paper</p><p>00:25 The High-Stakes Task of NG Tube Positioning</p><p>00:32 The &#34;Never Event&#34; of Misplaced Tubes</p><p>00:45 Scale of the Problem</p><p>00:54 Current Safety Standards and Limitations</p><p>01:02 Can Computer Vision Solve the Problem?</p><p>01:26 Study Methodology: The Silent Trial</p><p>02:01 Performance Metrics: Sensitivity and Specificity</p><p>02:20 Discrepancy Analysis: Where the AI Failed</p><p>02:44 Anatomy of AI Errors</p><p>03:15 The Problem of Specificity</p><p>03:34 Impact on Clinical Practice and Alert Fatigue</p><p>04:00 Failure Analysis: Why the AI Misinterpreted Images</p><p>04:18 Performance Bias: Age and Patient 
Factors</p><p>04:52 Implications: Why the Tool Isn&#39;t Ready for Deployment</p><p>05:05 Why Negative Results Matter</p><p>05:25 Future Directions: Improving AI Safety</p><p>06:14 Conclusion: Moving Toward &#34;Trust But Verify&#34;</p><p><br></p><p>𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:</p><p>This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.</p><p>• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).</p><p>• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.</p><p>• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>Health AI, Clinical AI Validation, Nasogastric Tube Safety, NEJM AI, Medical Machine Learning, NHS Innovation, Patient Safety, Radiology AI, Computer Vision in Medicine, HealthTech Strategy. #HealthAI #PatientSafety #MedTech #Radiology #DigitalHealth</p>]]></description>
                <content:encoded>&lt;p&gt;Can AI prevent &amp;#34;Never Events&amp;#34; in NG feeding tube placement, or is it creating new risks? This deep dive into the latest NEJM AI prospective silent trial reveals the startling truth about current AI performance in the NHS.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We analyse the 1-year validation of a leading AI tool for nasogastric tube verification. While the tech promises to eliminate human error, real-world data shows significant hurdles in sensitivity, specificity, and demographic bias that every clinician and health manager needs to understand before deployment.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;References&lt;/p&gt;&lt;p&gt;Link to the research: https://ai.nejm.org/doi/full/10.1056/AIoa2500823&lt;/p&gt;&lt;p&gt;Title: External Validation of a Commercially Available AI Tool for Nasogastric Tube Position Decision Support in the NHS: A Prospective Silent Trial&lt;/p&gt;&lt;p&gt;Authors: Bartsch et al.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways&lt;/p&gt;&lt;p&gt;• Why a 0.17 Positive Predictive Value (PPV) triggers dangerous alert fatigue in clinical settings.&lt;/p&gt;&lt;p&gt;• Analysis of the 17 &amp;#34;False Negative&amp;#34; misses—why AI struggled with coiled tubes and complex anatomy.&lt;/p&gt;&lt;p&gt;• The strategic roadmap: Why &amp;#34;Silent Trials&amp;#34; are the essential bridge between CE certification and patient safety.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 Introduction&lt;/p&gt;&lt;p&gt;00:16 Significance of the Paper&lt;/p&gt;&lt;p&gt;00:25 The High-Stakes Task of NG Tube Positioning&lt;/p&gt;&lt;p&gt;00:32 The &amp;#34;Never Event&amp;#34; of Misplaced Tubes&lt;/p&gt;&lt;p&gt;00:45 Scale of the Problem&lt;/p&gt;&lt;p&gt;00:54 Current Safety Standards and Limitations&lt;/p&gt;&lt;p&gt;01:02 Can Computer Vision Solve the Problem?&lt;/p&gt;&lt;p&gt;01:26 Study Methodology: The Silent Trial&lt;/p&gt;&lt;p&gt;02:01 Performance Metrics: 
Sensitivity and Specificity&lt;/p&gt;&lt;p&gt;02:20 Discrepancy Analysis: Where the AI Failed&lt;/p&gt;&lt;p&gt;02:44 Anatomy of AI Errors&lt;/p&gt;&lt;p&gt;03:15 The Problem of Specificity&lt;/p&gt;&lt;p&gt;03:34 Impact on Clinical Practice and Alert Fatigue&lt;/p&gt;&lt;p&gt;04:00 Failure Analysis: Why the AI Misinterpreted Images&lt;/p&gt;&lt;p&gt;04:18 Performance Bias: Age and Patient Factors&lt;/p&gt;&lt;p&gt;04:52 Implications: Why the Tool Isn&amp;#39;t Ready for Deployment&lt;/p&gt;&lt;p&gt;05:05 Why Negative Results Matter&lt;/p&gt;&lt;p&gt;05:25 Future Directions: Improving AI Safety&lt;/p&gt;&lt;p&gt;06:14 Conclusion: Moving Toward &amp;#34;Trust But Verify&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp;amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:&lt;/p&gt;&lt;p&gt;This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.&lt;/p&gt;&lt;p&gt;• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).&lt;/p&gt;&lt;p&gt;• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.&lt;/p&gt;&lt;p&gt;• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. 
Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI, Clinical AI Validation, Nasogastric Tube Safety, NEJM AI, Medical Machine Learning, NHS Innovation, Patient Safety, Radiology AI, Computer Vision in Medicine, HealthTech Strategy. #HealthAI #PatientSafety #MedTech #Radiology #DigitalHealth&lt;/p&gt;</content:encoded>
                
                <enclosure length="6761325" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3d49096c-dbce-4c83-bd1a-7b7d3b4e6e8d/stream.mp3"/>
                
                <guid isPermaLink="false">63fae130-8758-459e-939e-3e037bc6ab75</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3d49096c-dbce-4c83-bd1a-7b7d3b4e6e8d</link>
                <pubDate>Mon, 30 Mar 2026 06:00:46 &#43;0000</pubDate>
                <itunes:duration>422</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Benchmarks - How We Grade AI</itunes:title>
                <title>Benchmarks - How We Grade AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>MMLU, Med-QA, and HumanEval. How do we determine if an AI is &#34;smarter&#34; than a resident? The science of LLM benchmarks.</p><p>#MedicalEducation #Benchmarks #DataScience #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;MMLU, Med-QA, and HumanEval. How do we determine if an AI is &amp;#34;smarter&amp;#34; than a resident? The science of LLM benchmarks.&lt;/p&gt;&lt;p&gt;#MedicalEducation #Benchmarks #DataScience #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1843617" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/5a0fc6fe-e565-4690-9902-58b07bb4dc79/stream.mp3"/>
                
                <guid isPermaLink="false">3979d6bc-55f9-4e2a-9f31-a1a98e2aca1b</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/5a0fc6fe-e565-4690-9902-58b07bb4dc79</link>
                <pubDate>Fri, 27 Mar 2026 07:00:23 &#43;0000</pubDate>
                <itunes:duration>115</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Foundation Models &amp; Patient Privacy: Is There a Risk?</itunes:title>
                <title>Foundation Models &amp; Patient Privacy: Is There a Risk?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Can foundation models accidentally leak patient identity? We’re breaking down the high-stakes debate between Nebbia et al. and the team at Moorfields Eye Hospital.</span></p><p><br></p><p><span>Foundation models in medical AI are changing the game, but recent research suggests they might pose a patient re-identification risk. We explore the initial claims, the compelling rebuttal involving simple neural networks, and what this means for the future of HealthTech privacy.</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• Understand the &#34;baseline fallacy&#34; and why simple, untrained neural networks can sometimes outperform complex AI models.</span></p><p><span>• Distinguish between &#34;image matching&#34; and true &#34;patient re-identification&#34; in clinical datasets.</span></p><p><span>• Learn how data consistency in controlled clinical environments impacts privacy and how to frame your own AI threat models.</span></p><p><br></p><p><span>References:</span></p><p><span>https://www.nature.com/articles/s41746-025-01801-0 - original paper from Nebbia et al July 2025</span></p><p><span>https://www.nature.com/articles/s41746-026-02440-9 - Rebuttal by Engelmann et al Feb 2026</span></p><p><br></p><p><span>Link to the episode on Foundation Models: https://youtu.be/ascFcy79U7I</span></p><p><br></p><p><span>00:00 Re-identification risk of foundation models in medical imaging.</span></p><p><span>00:15 Mechanism behind the risk: foundation models are trained on diverse datasets and can learn specific features.</span></p><p><span>00:24 Initial research by Nebbia et al. 
suggesting the re-identification risk is high.</span></p><p><span>00:56 Testing the methodology using fundus photographs, OCT scans, and chest x-rays.</span></p><p><span>01:22 Counter-argument from a team led by Justin Engelmann and colleagues.</span></p><p><span>01:42 The replication and control experiments using ResNet.</span></p><p><span>02:43 What this means for AI research and practice.</span></p><p><span>03:49 Clinical data inherent consistency.</span></p><p><span>04:05 Why this debate is good for the medical AI community.</span></p><p><span>04:18 Takeaways for practitioners: don&#39;t let AI fear blind you to its utility.</span></p><p><span>04:35 Further information on foundation models.</span></p><p><span>04:46 The path to success: better threat models and focusing on what matters.</span></p><p><br></p><p><span>𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:</span></p><p><span>This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.</span></p><p><span>• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).</span></p><p><span>• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.</span></p><p><span>• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. 
Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>Medical AI, Foundation Models, Patient Privacy, HealthTech, Deep Learning, Image Re-identification, Clinical Data Security, Medical Imaging, AI Safety, Healthcare Innovation #HealthAI #MedTech #AIPrivacy #DigitalHealth #DeepLearning</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Can foundation models accidentally leak patient identity? We’re breaking down the high-stakes debate between Nebbia et al. and the team at Moorfields Eye Hospital.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Foundation models in medical AI are changing the game, but recent research suggests they might pose a patient re-identification risk. We explore the initial claims, the compelling rebuttal involving simple neural networks, and what this means for the future of HealthTech privacy.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Understand the &amp;#34;baseline fallacy&amp;#34; and why simple, untrained neural networks can sometimes outperform complex AI models.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Distinguish between &amp;#34;image matching&amp;#34; and true &amp;#34;patient re-identification&amp;#34; in clinical datasets.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Learn how data consistency in controlled clinical environments impacts privacy and how to frame your own AI threat models.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;References:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://www.nature.com/articles/s41746-025-01801-0 - original paper from Nebbia et al July 2025&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://www.nature.com/articles/s41746-026-02440-9 - Rebuttal by Engelmann et al Feb 2026&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link to the episode on Foundation Models: https://youtu.be/ascFcy79U7I&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 Re-identification risk of foundation models in medical imaging.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:15 Mechanism behind the risk: foundation models are trained on diverse datasets and can learn specific 
features.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:24 Initial research by Nebbia et al. suggesting the re-identification risk is high.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:56 Testing the methodology using fundus photographs, OCT scans, and chest x-rays.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:22 Counter-argument from a team led by Justin Engelmann and colleagues.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:42 The replication and control experiments using ResNet.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:43 What this means for AI research and practice.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:49 Clinical data inherent consistency.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:05 Why this debate is good for the medical AI community.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:18 Takeaways for practitioners: don&amp;#39;t let AI fear blind you to its utility.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:35 Further information on foundation models.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:46 The path to success: better threat models and focusing on what matters.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp;amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. 
It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Medical AI, Foundation Models, Patient Privacy, HealthTech, Deep Learning, Image Re-identification, Clinical Data Security, Medical Imaging, AI Safety, Healthcare Innovation #HealthAI #MedTech #AIPrivacy #DigitalHealth #DeepLearning&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="5040587" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/804ea9d5-aff0-43a6-ab6b-5cffa9c62f9b/stream.mp3"/>
                
                <guid isPermaLink="false">b5d96f43-6119-4277-8380-4965d770f7e3</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/804ea9d5-aff0-43a6-ab6b-5cffa9c62f9b</link>
                <pubDate>Thu, 26 Mar 2026 07:00:49 &#43;0000</pubDate>
                <itunes:duration>315</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Zero-Shot vs Few-Shot - The Secret of Few-Shot Prompting</itunes:title>
                <title>Zero-Shot vs Few-Shot - The Secret of Few-Shot Prompting</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Don&#39;t just ask the AI to summarise; give it three examples. Learn why &#34;Few-Shot&#34; prompting is the easiest way to double your AI&#39;s accuracy.</p><p>#PromptEngineering #LifeHacks #MedicalAI #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Don&amp;#39;t just ask the AI to summarise; give it three examples. Learn why &amp;#34;Few-Shot&amp;#34; prompting is the easiest way to double your AI&amp;#39;s accuracy.&lt;/p&gt;&lt;p&gt;#PromptEngineering #LifeHacks #MedicalAI #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1666821" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/06d04bab-0a5b-4561-8457-03336617e119/stream.mp3"/>
                
                <guid isPermaLink="false">916bfb34-4ecc-42c8-afbf-0c319564df5f</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/06d04bab-0a5b-4561-8457-03336617e119</link>
                <pubDate>Wed, 25 Mar 2026 07:00:13 &#43;0000</pubDate>
                <itunes:duration>104</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Is Perplexity Health the Future of Medical AI? The Surprises Behind the Launch</itunes:title>
                <title>Is Perplexity Health the Future of Medical AI? The Surprises Behind the Launch</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Consumer health AI is moving at lightning speed, but is the clinical safety keeping up? We break down the newly launched Perplexity Health, its powerful data connectors, and the regulatory grey area of AI medical advice.</span></p><p><br></p><p><span>Perplexity has officially launched Perplexity Health, a powerful new suite of data connectors that integrates Apple Health, wearable data via Terra API, and electronic health records through b.well. By aggregating this highly fragmented personal health data, Perplexity&#39;s AI agents provide highly personalized answers to user health queries. However, a deep dive into the launch reveals a stark contrast between its aggressive medical marketing and its strict educational disclaimers, highlighting a growing trend of tech giants bypassing traditional pre-market clinical validation.</span></p><p><br></p><p><span>Sources:</span></p><p><span>- https://www.perplexity.ai/hub/blog/introducing-perplexity-health</span></p><p><span>- https://www.perplexity.ai/hub/blog/introducing-the-perplexity-health-advisory-board</span></p><p><span>- https://www.perplexity.ai/hub/legal/privacy-policy</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• How Perplexity Health technically unifies fragmented data from EHRs, Apple Health, and wearables.</span></p><p><span>• The critical contradiction between AI health marketing claims and legal &#34;non-medical&#34; disclaimers.</span></p><p><span>• Why the retroactive assembly of clinical advisory boards signals a major shift in medical AI regulation.</span></p><p><br></p><p><span>0:00 Introduction: The Healthcare Data Land Grab</span></p><p><span>0:41 The Evolution of Perplexity: From Search Engine to Specialized Verticals</span></p><p><span>1:18 The Architecture of Perplexity Health: Integrating Fragmented Medical Data</span></p><p><span>2:30 The Marketing Paradox: Confidence vs. 
Legal Disclaimers</span></p><p><span>4:00 Contradictory Advice: Is It for Patient Prep or Professional Guidance?</span></p><p><span>4:45 A Shift in Validation: Launching Before Clinical Testing</span></p><p><span>6:00 The Clinical Advisory Board: Stellar Names and Future Safeguards</span></p><p><span>7:25 The Regulatory Grey Area: Search Utility vs. Medical Device</span></p><p><span>8:30 Conclusion: Great Infrastructure vs. The Need for Clinical Rigor</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthAI #PerplexityHealth #DigitalHealth #MedicalAI #HealthTech #EHR #FutureOfHealthcare #ClinicalAI #MedTech</span></p><p><br></p><p><br></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Consumer health AI is moving at lightning speed, but is the clinical safety keeping up? We break down the newly launched Perplexity Health, its powerful data connectors, and the regulatory grey area of AI medical advice.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Perplexity has officially launched Perplexity Health, a powerful new suite of data connectors that integrates Apple Health, wearable data via Terra API, and electronic health records through b.well. By aggregating this highly fragmented personal health data, Perplexity&amp;#39;s AI agents provide highly personalized answers to user health queries. However, a deep dive into the launch reveals a stark contrast between its aggressive medical marketing and its strict educational disclaimers, highlighting a growing trend of tech giants bypassing traditional pre-market clinical validation.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Sources:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- https://www.perplexity.ai/hub/blog/introducing-perplexity-health&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- https://www.perplexity.ai/hub/blog/introducing-the-perplexity-health-advisory-board&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- https://www.perplexity.ai/hub/legal/privacy-policy&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• How Perplexity Health technically unifies fragmented data from EHRs, Apple Health, and wearables.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The critical contradiction between AI health marketing claims and legal &amp;#34;non-medical&amp;#34; disclaimers.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Why the retroactive assembly of clinical advisory boards signals a major shift in medical AI regulation.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:00 Introduction: The Healthcare 
Data Land Grab&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;0:41 The Evolution of Perplexity: From Search Engine to Specialized Verticals&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;1:18 The Architecture of Perplexity Health: Integrating Fragmented Medical Data&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;2:30 The Marketing Paradox: Confidence vs. Legal Disclaimers&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:00 Contradictory Advice: Is It for Patient Prep or Professional Guidance?&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;4:45 A Shift in Validation: Launching Before Clinical Testing&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;6:00 The Clinical Advisory Board: Stellar Names and Future Safeguards&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;7:25 The Regulatory Grey Area: Search Utility vs. Medical Device&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;8:30 Conclusion: Great Infrastructure vs. The Need for Clinical Rigor&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthAI #PerplexityHealth #DigitalHealth #MedicalAI #HealthTech #EHR #FutureOfHealthcare #ClinicalAI #MedTech&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8833567" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/9cb0d779-248c-4b76-adab-a034f7ca1709/stream.mp3"/>
                
                <guid isPermaLink="false">2e7c39b9-44c8-4481-8604-a2fd56274601</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/9cb0d779-248c-4b76-adab-a034f7ca1709</link>
                <pubDate>Tue, 24 Mar 2026 07:00:44 &#43;0000</pubDate>
                <itunes:duration>552</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Fitbit AI Health Coach - Medical Records &amp; Google Gemini Integration</itunes:title>
                <title>Fitbit AI Health Coach - Medical Records &amp; Google Gemini Integration</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Fitbit’s new Gemini-powered AI Health Coach is now integrating your full medical records; here is what it means for the future of clinical data and patient care.</p><p>In this deep dive, we analyse Google’s latest update to the Fitbit ecosystem: the integration of Electronic Health Records (EHR) with consumer wearable data. We break down the 15% improvement in sleep staging accuracy, the move into insulin resistance and hypertension research, and the strategic use of IAL2 identity standards via CLEAR and b.well. More importantly, we address the growing regulatory tension between &#34;wellness&#34; marketing and &#34;clinical&#34; reality as AI begins to interpret lab results and medications.</p><p><strong>Key Takeaways</strong></p><ul><li><strong>The EHR Integration:</strong> How IAL2 standards allow Fitbit to securely pull lab results and visit history into a consumer app.</li><li><strong>The Wellness Loophole:</strong> Analysis of the regulatory strategy behind Google’s &#34;not a medical device&#34; disclaimers vs. their metabolic health coaching.</li><li><strong>Clinical Accuracy:</strong> What a 15% increase in sleep staging accuracy means for aligning consumer tech with clinical gold standards.</li></ul><p><br></p><p><strong>0:00</strong> – Introduction - EHR Integration into Fitbit’s AI Health Coach</p><p> <strong>0:27</strong> – Strategic Positioning: Google’s Race for Health Data</p><p> <strong>0:51</strong> – The Regulatory Paradox: Wellness vs. Medical Advice</p><p> <strong>1:18</strong> – Technical Refinement in Sleep Tracking Accuracy</p><p> <strong>1:54</strong> – Predictive Modelling for Metabolic Health</p><p> <strong>2:16</strong> – CGM Integration and Glycaemic Response Analysis</p><p> <strong>2:40</strong> – The Mechanism: Identity Verification and Record Syncing</p><p> <strong>3:03</strong> – Personalization vs. 
Strategic Friction</p><p> <strong>3:43</strong> – The Clinical Grey Area and Physician Liability</p><p> <strong>4:31</strong> – Brand Risk Management: Why Fitbit Over Google Health</p><p> <strong>5:01</strong> – Privacy Policies and the &#34;Black Mirror&#34; Trade-off</p><p> <strong>5:31</strong> – Using Clinical Data to Train Future Generative AI Models</p><p> <strong>5:50</strong> – External Data Processing and the Right to be Forgotten</p><p> <strong>6:18</strong> – Summary: Technical Successes vs. Safety Hurdles</p><p> <strong>7:18</strong> – The Future of Algorithmic Wellness Frameworks</p><p> <strong>7:44</strong> – Innovation vs. Human Professional Responsibility</p><p><br></p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p> </p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p>#HealthAI #Fitbit #GoogleHealth #MedicalRecords #GeminiAI #DigitalHealth #HealthTech #Wearables #MedTech #ClinicalAI #EHRIntegration</p>]]></description>
                <content:encoded>&lt;p&gt;Fitbit’s new Gemini-powered AI Health Coach is now integrating your full medical records; here is what it means for the future of clinical data and patient care.&lt;/p&gt;&lt;p&gt;In this deep dive, we analyse Google’s latest update to the Fitbit ecosystem: the integration of Electronic Health Records (EHR) with consumer wearable data. We break down the 15% improvement in sleep staging accuracy, the move into insulin resistance and hypertension research, and the strategic use of IAL2 identity standards via CLEAR and b.well. More importantly, we address the growing regulatory tension between &amp;#34;wellness&amp;#34; marketing and &amp;#34;clinical&amp;#34; reality as AI begins to interpret lab results and medications.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;The EHR Integration:&lt;/strong&gt; How IAL2 standards allow Fitbit to securely pull lab results and visit history into a consumer app.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;The Wellness Loophole:&lt;/strong&gt; Analysis of the regulatory strategy behind Google’s &amp;#34;not a medical device&amp;#34; disclaimers vs. their metabolic health coaching.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Clinical Accuracy:&lt;/strong&gt; What a 15% increase in sleep staging accuracy means for aligning consumer tech with clinical gold standards.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;0:00&lt;/strong&gt; – Introduction - EHR Integration into Fitbit’s AI Health Coach&lt;/p&gt;&lt;p&gt; &lt;strong&gt;0:27&lt;/strong&gt; – Strategic Positioning: Google’s Race for Health Data&lt;/p&gt;&lt;p&gt; &lt;strong&gt;0:51&lt;/strong&gt; – The Regulatory Paradox: Wellness vs. 
Medical Advice&lt;/p&gt;&lt;p&gt; &lt;strong&gt;1:18&lt;/strong&gt; – Technical Refinement in Sleep Tracking Accuracy&lt;/p&gt;&lt;p&gt; &lt;strong&gt;1:54&lt;/strong&gt; – Predictive Modelling for Metabolic Health&lt;/p&gt;&lt;p&gt; &lt;strong&gt;2:16&lt;/strong&gt; – CGM Integration and Glycaemic Response Analysis&lt;/p&gt;&lt;p&gt; &lt;strong&gt;2:40&lt;/strong&gt; – The Mechanism: Identity Verification and Record Syncing&lt;/p&gt;&lt;p&gt; &lt;strong&gt;3:03&lt;/strong&gt; – Personalization vs. Strategic Friction&lt;/p&gt;&lt;p&gt; &lt;strong&gt;3:43&lt;/strong&gt; – The Clinical Grey Area and Physician Liability&lt;/p&gt;&lt;p&gt; &lt;strong&gt;4:31&lt;/strong&gt; – Brand Risk Management: Why Fitbit Over Google Health&lt;/p&gt;&lt;p&gt; &lt;strong&gt;5:01&lt;/strong&gt; – Privacy Policies and the &amp;#34;Black Mirror&amp;#34; Trade-off&lt;/p&gt;&lt;p&gt; &lt;strong&gt;5:31&lt;/strong&gt; – Using Clinical Data to Train Future Generative AI Models&lt;/p&gt;&lt;p&gt; &lt;strong&gt;5:50&lt;/strong&gt; – External Data Processing and the Right to be Forgotten&lt;/p&gt;&lt;p&gt; &lt;strong&gt;6:18&lt;/strong&gt; – Summary: Technical Successes vs. Safety Hurdles&lt;/p&gt;&lt;p&gt; &lt;strong&gt;7:18&lt;/strong&gt; – The Future of Algorithmic Wellness Frameworks&lt;/p&gt;&lt;p&gt; &lt;strong&gt;7:44&lt;/strong&gt; – Innovation vs. Human Professional Responsibility&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;#HealthAI #Fitbit #GoogleHealth #MedicalRecords #GeminiAI #DigitalHealth #HealthTech #Wearables #MedTech #ClinicalAI #EHRIntegration&lt;/p&gt;</content:encoded>
                
                <enclosure length="7891069" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3ba3ee76-a2f1-4dc5-a708-da047d976765/stream.mp3"/>
                
                <guid isPermaLink="false">310f1c3a-9842-4e16-84c7-20704d9c9101</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3ba3ee76-a2f1-4dc5-a708-da047d976765</link>
                <pubDate>Mon, 23 Mar 2026 07:00:55 &#43;0000</pubDate>
                <itunes:duration>493</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Data Privacy &amp; HIPAA - Is Patient Data Leaking?</itunes:title>
                <title>Data Privacy &amp; HIPAA - Is Patient Data Leaking?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The million-dollar question: Can you use ChatGPT in a hospital? We discuss BAA agreements, local models, and keeping medical data private.</p><p><br></p><p>#HIPAA #GDPR #DataPrivacy #CyberSecurity #AIinMedicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;The million-dollar question: Can you use ChatGPT in a hospital? We discuss BAA agreements, local models, and keeping medical data private.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HIPAA #GDPR #DataPrivacy #CyberSecurity #AIinMedicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1855320" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4e36f8e8-521d-44ee-af17-0e3e25528389/stream.mp3"/>
                
                <guid isPermaLink="false">1b124443-6b01-4e4d-a8c6-c624566265be</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4e36f8e8-521d-44ee-af17-0e3e25528389</link>
                <pubDate>Fri, 20 Mar 2026 07:00:16 &#43;0000</pubDate>
                <itunes:duration>115</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How ChatGPT and AlphaFold Helped Shrink a Terminal Tumour by 75%</itunes:title>
                <title>How ChatGPT and AlphaFold Helped Shrink a Terminal Tumour by 75%</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Discover how a Sydney data engineer used DeepMind&#39;s AlphaFold and ChatGPT to design a world-first personalised mRNA cancer vaccine for his dog.</p><p><br></p><p>In this episode, we deconstruct the &#34;n-of-1&#34; case of Rosie the Staffy, whose terminal mast cell tumours were treated using a bespoke vaccine designed by a non-biologist. We move past the headlines to look at the actual technical workflow: from genomic sequencing and protein-structure prediction to the synthesis of mRNA nanoparticles. This analysis explores the democratisation of drug discovery and the role of AI as a scientific project manager in modern oncology.</p><p><br></p><p>Key Takeaways</p><p>• How AlphaFold 3D protein modelling identifies neoantigens for vaccine design.</p><p>• The role of LLMs in navigating complex scientific infrastructures and genomic pipelines.</p><p>• The regulatory and ethical challenges of &#34;rapid-response&#34; personalised medicine.</p><p><br></p><p>0:00 – Meet Paul and Rosie: A DIY AI Success Story</p><p>0:27 – Deconstructing the AI-Driven Medical Workflow</p><p>1:10 – The Data-First Mindset in Genomic Sequencing</p><p>1:48 – Using Google DeepMind’s AlphaFold for Protein Prediction</p><p>2:25 – Synthesising a Custom mRNA Cancer Vaccine</p><p>2:43 – Results: 75% Reduction in Tumour Volume</p><p>3:00 – Why This Isn’t a &#34;Cure&#34; Yet: The Reality of Metastasis</p><p>3:30 – The Challenge of Tumour Heterogeneity</p><p>4:05 – Pragmatic Scepticism: Analysing AlphaFold Confidence Scores</p><p>4:30 – Regulatory Hurdles: AI Speed vs. Healthcare Red Tape</p><p>4:51 – Avoiding Narrative and Survivorship Bias in Medical News</p><p>6:10 – The Future of Democratised Drug Discovery</p><p>7:00 – The New Role of Clinicians in the AI Era</p><p><br></p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>#HealthAI #AlphaFold #mRNA #CancerVaccine #PrecisionMedicine #DeepMind #ChatGPT #Biotech #DigitalHealth #Oncology</p>]]></description>
                <content:encoded>&lt;p&gt;Discover how a Sydney data engineer used DeepMind&amp;#39;s AlphaFold and ChatGPT to design a world-first personalised mRNA cancer vaccine for his dog.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we deconstruct the &amp;#34;n-of-1&amp;#34; case of Rosie the Staffy, whose terminal mast cell tumours were treated using a bespoke vaccine designed by a non-biologist. We move past the headlines to look at the actual technical workflow: from genomic sequencing and protein-structure prediction to the synthesis of mRNA nanoparticles. This analysis explores the democratisation of drug discovery and the role of AI as a scientific project manager in modern oncology.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways&lt;/p&gt;&lt;p&gt;• How AlphaFold 3D protein modelling identifies neoantigens for vaccine design.&lt;/p&gt;&lt;p&gt;• The role of LLMs in navigating complex scientific infrastructures and genomic pipelines.&lt;/p&gt;&lt;p&gt;• The regulatory and ethical challenges of &amp;#34;rapid-response&amp;#34; personalised medicine.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 – Meet Paul and Rosie: A DIY AI Success Story&lt;/p&gt;&lt;p&gt;0:27 – Deconstructing the AI-Driven Medical Workflow&lt;/p&gt;&lt;p&gt;1:10 – The Data-First Mindset in Genomic Sequencing&lt;/p&gt;&lt;p&gt;1:48 – Using Google DeepMind’s AlphaFold for Protein Prediction&lt;/p&gt;&lt;p&gt;2:25 – Synthesising a Custom mRNA Cancer Vaccine&lt;/p&gt;&lt;p&gt;2:43 – Results: 75% Reduction in Tumour Volume&lt;/p&gt;&lt;p&gt;3:00 – Why This Isn’t a &amp;#34;Cure&amp;#34; Yet: The Reality of Metastasis&lt;/p&gt;&lt;p&gt;3:30 – The Challenge of Tumour Heterogeneity&lt;/p&gt;&lt;p&gt;4:05 – Pragmatic Scepticism: Analysing AlphaFold Confidence Scores&lt;/p&gt;&lt;p&gt;4:30 – Regulatory Hurdles: AI Speed vs. 
Healthcare Red Tape&lt;/p&gt;&lt;p&gt;4:51 – Avoiding Narrative and Survivorship Bias in Medical News&lt;/p&gt;&lt;p&gt;6:10 – The Future of Democratised Drug Discovery&lt;/p&gt;&lt;p&gt;7:00 – The New Role of Clinicians in the AI Era&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #AlphaFold #mRNA #CancerVaccine #PrecisionMedicine #DeepMind #ChatGPT #Biotech #DigitalHealth #Oncology&lt;/p&gt;</content:encoded>
                
                <enclosure length="6990367" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ce29ac80-f33a-4b65-9c8f-4a3da3f2300f/stream.mp3"/>
                
                <guid isPermaLink="false">0d576b69-6463-43cb-9faf-8e8dd2f4c2a5</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ce29ac80-f33a-4b65-9c8f-4a3da3f2300f</link>
                <pubDate>Thu, 19 Mar 2026 07:00:23 &#43;0000</pubDate>
                <itunes:duration>436</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Microsoft Copilot Health AI and The Systemic Failures Driving Us Towards Similar Medical AI</itunes:title>
                <title>Microsoft Copilot Health AI and The Systemic Failures Driving Us Towards Similar Medical AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Are tech giants using late-night health searches to justify a massive medical data grab? Discover the strategy behind Microsoft’s Copilot Health launch.</span></p><p><br></p><p><span>We analyse the newly released data on how 500,000 people use conversational AI for health, and contrast it with the immediate launch of Copilot Health, a system that ingests EHRs and wearable data to provide what Microsoft calls &#34;medical superintelligence.&#34; This breakdown explores the contradiction between regulatory disclaimers and product capabilities, the reality behind late-night symptom searching, and the risks of deploying diagnostic AI without tracking clinical outcomes.</span></p><p><br></p><p><span>Source materials include Microsoft’s blog posts describing:</span></p><p><span>- How people search for health information: https://microsoft.ai/news/health-check-how-people-use-copilot-for-health/</span></p><p><span>- The full report it draws from: https://www.microsoft.com/en-us/research/blog/msr-research-item/how-people-use-copilot-for-health/ </span></p><p><span>- Product release: https://microsoft.ai/news/introducing-copilot-health/ </span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• Understand the real data behind how patients are using conversational AI, including the heavy reliance by caregivers coordinating family health.</span></p><p><span>• Discover the capabilities of Copilot Health, how it integrates EHRs and wearables, and the strategic use of &#34;trixie&#34; compliance language.</span></p><p><span>• Learn why evaluating AI based on engagement metrics rather than downstream clinical outcomes poses a massive risk to patient safety.</span></p><p><br></p><p><span>00:00 - 01:13 - Introduction to the Copilot Health launch</span></p><p><span>01:13 - 02:40 - Analysis of the Microsoft AI report</span></p><p><span>02:40 - 03:13 - Breakdown of how AI is being used</span></p><p><span>03:13 - 04:29 - Analysis 
of AI usage and a critical lens</span></p><p><span>04:29 - 05:40 - Introduction to Copilot Health</span></p><p><span>05:40 - 06:44 - Comparison to professional medical advice</span></p><p><span>06:44 - 07:30 - The psychological trap: cognitive surrender</span></p><p><span>07:30 - 08:30 - The lack of independent clinical evaluation</span></p><p><span>08:30 - 09:08 - Analysing the AI chat interface</span></p><p><span>09:08 - 10:48 - The path forward and the need for clinical trials</span></p><p><span>10:48 - 12:04 - Summary and closing thoughts</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthTech #ArtificialIntelligence #DigitalHealth #CopilotHealth #MedicalData #HealthAI #HealthcareInnovation #EHR</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Are tech giants using late-night health searches to justify a massive medical data grab? Discover the strategy behind Microsoft’s Copilot Health launch.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;We analyse the newly released data on how 500,000 people use conversational AI for health, and contrast it with the immediate launch of Copilot Health, a system that ingests EHRs and wearable data to provide what Microsoft calls &amp;#34;medical superintelligence.&amp;#34; This breakdown explores the contradiction between regulatory disclaimers and product capabilities, the reality behind late-night symptom searching, and the risks of deploying diagnostic AI without tracking clinical outcomes.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Source materials include Microsoft’s blog posts describing:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- How people search for health information: https://microsoft.ai/news/health-check-how-people-use-copilot-for-health/&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- The full report it draws from: https://www.microsoft.com/en-us/research/blog/msr-research-item/how-people-use-copilot-for-health/ &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;- Product release: https://microsoft.ai/news/introducing-copilot-health/ &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Understand the real data behind how patients are using conversational AI, including the heavy reliance by caregivers coordinating family health.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Discover the capabilities of Copilot Health, how it integrates EHRs and wearables, and the strategic use of &amp;#34;trixie&amp;#34; compliance language.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Learn why evaluating AI based on engagement metrics rather than downstream clinical outcomes poses a massive 
risk to patient safety.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 - 01:13 - Introduction to the Copilot Health launch&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:13 - 02:40 - Analysis of the Microsoft AI report&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:40 - 03:13 - Breakdown of how AI is being used&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:13 - 04:29 - Analysis of AI usage and a critical lens&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:29 - 05:40 - Introduction to Copilot Health&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;05:40 - 06:44 - Comparison to professional medical advice&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:44 - 07:30 - The psychological trap: cognitive surrender&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;07:30 - 08:30 - The lack of independent clinical evaluation&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;08:30 - 09:08 - Analysing the AI chat interface&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;09:08 - 10:48 - The path forward and the need for clinical trials&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;10:48 - 12:04 - Summary and closing thoughts&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthTech #ArtificialIntelligence #DigitalHealth #CopilotHealth #MedicalData #HealthAI #HealthcareInnovation #EHR&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="11571617" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/f0db123a-fc15-4bad-9efa-e95160c40887/stream.mp3"/>
                
                <guid isPermaLink="false">4fc1bf29-1356-459a-8f39-db1ca8fe6c60</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/f0db123a-fc15-4bad-9efa-e95160c40887</link>
                <pubDate>Wed, 18 Mar 2026 07:00:47 &#43;0000</pubDate>
                <itunes:duration>723</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Age of the Medical Generalist: Foundation Models in Healthcare</itunes:title>
                <title>The Age of the Medical Generalist: Foundation Models in Healthcare</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>The era of single-task medical algorithms is over. Discover how multimodal foundation models can transform radiology, ultrasound, and metabolic tracking.</span></p><p><br></p><p><span>Healthcare AI is moving rapidly beyond text-based large language models. This comprehensive analysis breaks down the latest wave of medical foundation models, including MedVersa, OMAFound, BrainIAC, EchoJEPA, and GluFormer. We examine how self-supervised learning, latent predictive architectures, and LLM-orchestrators are solving the data-scarcity bottleneck and enabling multi-cancer screening from a single scan.</span></p><p><br></p><p><span>References:</span></p><p><span>https://www.nature.com/articles/s41593-026-02202-6 - brain MRI</span></p><p><span>https://www.nature.com/articles/s44360-026-00055-8 - breast and lung cancer CT</span></p><p><span>https://ai.nejm.org/doi/full/10.1056/AIoa2500595 - diverse medical imaging</span></p><p><span>https://www.nature.com/articles/s41467-026-70077-z - retinal imaging</span></p><p><span>https://www.nature.com/articles/s41586-025-09925-9 - glucose monitoring</span></p><p><span>https://arxiv.org/abs/2602.02603 - echocardiography</span></p><p><span>https://arxiv.org/abs/2602.15913 - review</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• How latent predictive architectures (JEPA) ignore ultrasound noise to achieve state-of-the-art echocardiogram analysis with 1% data.</span></p><p><span>• The operational workflow of OMAFound, which opportunistically screens for breast cancer on routine lung CTs, boosting radiologist sensitivity by nearly 40%.</span></p><p><span>• Why tokenizing continuous glucose monitoring (CGM) data like language predicts long-term cardiovascular risk better than standard HbA1c metrics.</span></p><p><br></p><p><span>00:00 Introduction to Medical Foundation Models</span></p><p><span>00:18 Overview of Multimodal Foundation Models</span></p><p><span>00:46 Key 
Challenges and Operational Hurdles</span></p><p><span>01:06 Why LLMs Struggle with Medical Data</span></p><p><span>01:22 The Visual and Temporal Nature of Medicine</span></p><p><span>01:43 The Shift to Multimodal Reasoning</span></p><p><span>01:58 Fine-Tuning and Model Adaptation</span></p><p><span>02:10 Real-World Medical AI Architectures</span></p><p><span>02:35 Chest X-Ray and Segmentation Models</span></p><p><span>03:12 Strengths and Weaknesses of Foundation Models</span></p><p><span>04:06 Case Study 1: Volumetric Imaging (BrainIAC)</span></p><p><span>06:36 Case Study 2: Non-Contrast CT (OMAFound)</span></p><p><span>08:44 Case Study 3: MedVersa (Multimodal Generalist)</span></p><p><span>10:23 Case Study 4: EchoJEPA (Echocardiography)</span></p><p><span>13:10 Case Study 5: Glucose Monitoring (GluFormer)</span></p><p><span>15:13 Maturation of the Medical AI Field</span></p><p><span>17:14 Final Reflections and Future Outlook</span></p><p><br></p><p><span>𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:</span></p><p><span>This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.</span></p><p><span>• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).</span></p><p><span>• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.</span></p><p><span>• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. 
Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><span>Medical AI, Healthcare Foundation Models, Radiology AI, Multimodal AI, EchoJEPA, OMAFound, MedVersa, Brain MRI segmentation, Continuous Glucose Monitoring AI, self-supervised learning medical imaging, clinical AI integration.</span></p><p><span>#HealthTech #MedicalAI #Radiology #DigitalHealth #ArtificialIntelligence</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;The era of single-task medical algorithms is over. Discover how multimodal foundation models can transform radiology, ultrasound, and metabolic tracking.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Healthcare AI is moving rapidly beyond text-based large language models. This comprehensive analysis breaks down the latest wave of medical foundation models, including MedVersa, OMAFound, BrainIAC, EchoJEPA, and GluFormer. We examine how self-supervised learning, latent predictive architectures, and LLM-orchestrators are solving the data-scarcity bottleneck and enabling multi-cancer screening from a single scan.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;References:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://www.nature.com/articles/s41593-026-02202-6 - brain MRI&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://www.nature.com/articles/s44360-026-00055-8 - breast and lung cancer CT&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://ai.nejm.org/doi/full/10.1056/AIoa2500595 - diverse medical imaging&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://www.nature.com/articles/s41467-026-70077-z - retinal imaging&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://www.nature.com/articles/s41586-025-09925-9 - glucose monitoring&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://arxiv.org/abs/2602.02603 - echocardiography&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://arxiv.org/abs/2602.15913 - review&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• How latent predictive architectures (JEPA) ignore ultrasound noise to achieve state-of-the-art echocardiogram analysis with 1% data.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The operational workflow of OMAFound, which opportunistically screens for breast cancer on routine lung CTs, boosting radiologist sensitivity by nearly 
40%.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Why tokenizing continuous glucose monitoring (CGM) data like language predicts long-term cardiovascular risk better than standard HbA1c metrics.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 Introduction to Medical Foundation Models&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:18 Overview of Multimodal Foundation Models&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:46 Key Challenges and Operational Hurdles&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:06 Why LLMs Struggle with Medical Data&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:22 The Visual and Temporal Nature of Medicine&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:43 The Shift to Multimodal Reasoning&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:58 Fine-Tuning and Model Adaptation&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:10 Real-World Medical AI Architectures&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:35 Chest X-Ray and Segmentation Models&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:12 Strengths and Weaknesses of Foundation Models&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:06 Case Study 1: Volumetric Imaging (BrainIAC)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:36 Case Study 2: Non-Contrast CT (OMAFound)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;08:44 Case Study 3: MedVersa (Multimodal Generalist)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;10:23 Case Study 4: EchoJEPA (Echocardiography)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;13:10 Case Study 5: Glucose Monitoring (GluFormer)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;15:13 Maturation of the Medical AI Field&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;17:14 Final Reflections and Future Outlook&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 &amp;amp; 𝐄𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This concise summary of AI technology is for 𝐞𝐝𝐮𝐜𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐚𝐧𝐝 𝐢𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐩𝐮𝐫𝐩𝐨𝐬𝐞𝐬 𝐨𝐧𝐥𝐲. 
It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• 𝐂𝐥𝐢𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐨𝐮𝐧𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• 𝐈𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐭 𝐄𝐯𝐢𝐝𝐞𝐧𝐜𝐞-𝐁𝐚𝐬𝐞𝐝 𝐑𝐞𝐯𝐢𝐞𝐰: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• 𝐏𝐚𝐭𝐢𝐞𝐧𝐭 𝐒𝐚𝐟𝐞𝐭𝐲: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Medical AI, Healthcare Foundation Models, Radiology AI, Multimodal AI, EchoJEPA, OMAFound, MedVersa, Brain MRI segmentation, Continuous Glucose Monitoring AI, self-supervised learning medical imaging, clinical AI integration.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthTech #MedicalAI #Radiology #DigitalHealth #ArtificialIntelligence&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="17661701" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/84939650-f8bf-4222-877a-3713e2017659/stream.mp3"/>
                
                <guid isPermaLink="false">9eb55f71-23cb-4c99-95c2-40d87ffb999f</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/84939650-f8bf-4222-877a-3713e2017659</link>
                <pubDate>Tue, 17 Mar 2026 07:00:06 &#43;0000</pubDate>
                <itunes:duration>1103</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Google AI vs Human Doctor - AMIE AI Clinical Trial - Real-World Primary Care Results</itunes:title>
                <title>Google AI vs Human Doctor - AMIE AI Clinical Trial - Real-World Primary Care Results</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Is Google’s AMIE AI ready to replace the clinical intake interview? We break down the first real-world clinical feasibility study of conversational AI in primary care.</span></p><p><br></p><p><span>In this episode, we analyse a major prospective trial from Google Research and DeepMind testing the AMIE system on 100 urgent care patients. While the AI achieved zero safety stops and matched human doctors in diagnostic accuracy, a closer look at the workflow reveals significant hurdles. We explore the mechanics of clinical trust, why the messy reality of patient dialogue is the ultimate stress test, and why human doctors still beat AI on practical, cost-effective care plans.</span></p><p><br></p><p><span>Link to research report: https://arxiv.org/abs/2603.08448</span></p><p><span>DOI: https://doi.org/10.48550/arXiv.2603.08448 </span></p><p><span>Link to associated blog post: https://research.google/blog/exploring-the-feasibility-of-conversational-diagnostic-ai-in-a-real-world-clinical-study/ </span></p><p><br></p><p><span>Key Takeaways</span></p><p><span>• How conversational AI performs in a real-world primary care clinic without simulated patients.</span></p><p><span>• Why diagnostic accuracy doesn&#39;t automatically equal clinical trust, and why seeing the actual history-taking process is vital.</span></p><p><span>• The critical difference between an AI’s theoretical management plan and a human doctor’s practical, cost-effective clinical decision-making.</span></p><p><br></p><p><span>00:00 – Intro: A scenario of a patient completing an AI-led clinical interview.</span></p><p><span>00:32 – Study Introduction: Google’s AMIE (Articulate Medical Intelligence Explorer) powered by Gemini 2.5 Pro.</span></p><p><span>01:30 – Methodology: Real-world trials in a Boston primary care clinic with physician safety monitoring.</span></p><p><span>02:30 – Safety Results: Zero safety stops required during the trial 
encounters.</span></p><p><span>03:01 – Accuracy Results: Diagnostic performance compared to human primary care providers.</span></p><p><span>04:03 – Patient Feedback: Acceptance levels.</span></p><p><span>04:35 – Limitations: Issues with dialogue realism and the need for transcript transparency.</span></p><p><span>06:18 – Practicality Gaps: Why human doctors still outperformed AI on cost-effective management plans.</span></p><p><span>07:50 – Implementation Hurdles: Hardware limitations and demographic skews in the study.</span></p><p><span>09:31 – Governance &amp; Validation: The importance of independent peer review (contrasted with Amazon).</span></p><p><span>10:51 – Future Outlook: Integration with Electronic Health Records (EHR) and multimodal (voice/image) capabilities.</span></p><p><span>13:34 – Conclusion: Summary of AMIE as a robust proof of concept for the future of patient journeys.</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthTech #MedicalAI #GoogleHealth #PrimaryCare #ClinicalInformatics #DigitalHealth #DeepMind #FutureOfMedicine #EHR #MedicalInnovation</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Is Google’s AMIE AI ready to replace the clinical intake interview? We break down the first real-world clinical feasibility study of conversational AI in primary care.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;In this episode, we analyse a major prospective trial from Google Research and DeepMind testing the AMIE system on 100 urgent care patients. While the AI achieved zero safety stops and matched human doctors in diagnostic accuracy, a closer look at the workflow reveals significant hurdles. We explore the mechanics of clinical trust, why the messy reality of patient dialogue is the ultimate stress test, and why human doctors still beat AI on practical, cost-effective care plans.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link to research report: https://arxiv.org/abs/2603.08448&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;DOI: https://doi.org/10.48550/arXiv.2603.08448 &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link to associated blog post: https://research.google/blog/exploring-the-feasibility-of-conversational-diagnostic-ai-in-a-real-world-clinical-study/ &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• How conversational AI performs in a real-world primary care clinic without simulated patients.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Why diagnostic accuracy doesn&amp;#39;t automatically equal clinical trust, and why seeing the actual history-taking process is vital.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The critical difference between an AI’s theoretical management plan and a human doctor’s practical, cost-effective clinical decision-making.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 – Intro: A scenario of a patient completing an AI-led clinical interview.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:32 – Study 
Introduction: Google’s AMIE (Articulate Medical Intelligence Explorer) powered by Gemini 2.5 Pro.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:30 – Methodology: Real-world trials in a Boston primary care clinic with physician safety monitoring.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:30 – Safety Results: Zero safety stops required during the trial encounters.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:01 – Accuracy Results: Diagnostic performance compared to human primary care providers.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:03 – Patient Feedback: Acceptance levels.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:35 – Limitations: Issues with dialogue realism and the need for transcript transparency.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:18 – Practicality Gaps: Why human doctors still outperformed AI on cost-effective management plans.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;07:50 – Implementation Hurdles: Hardware limitations and demographic skews in the study.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;09:31 – Governance &amp;amp; Validation: The importance of independent peer review (contrasted with Amazon).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;10:51 – Future Outlook: Integration with Electronic Health Records (EHR) and multimodal (voice/image) capabilities.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;13:34 – Conclusion: Summary of AMIE as a robust proof of concept for the future of patient journeys.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. 
It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthTech #MedicalAI #GoogleHealth #PrimaryCare #ClinicalInformatics #DigitalHealth #DeepMind #FutureOfMedicine #EHR #MedicalInnovation&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="14599732" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1ee691ca-2fc1-4207-8014-95bd55aa64b2/stream.mp3"/>
                
                <guid isPermaLink="false">2de95335-4080-48b4-9a91-e19f5b0d1177</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1ee691ca-2fc1-4207-8014-95bd55aa64b2</link>
                <pubDate>Mon, 16 Mar 2026 07:00:17 &#43;0000</pubDate>
                <itunes:duration>912</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Amazon Health AI Explained - Workflow &amp; Medical Records</itunes:title>
                <title>Amazon Health AI Explained - Workflow &amp; Medical Records</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Are AI chatbots bypassing FDA regulation to deliver personalised medical advice? Explore the clinical and regulatory mechanics of the newly expanded Amazon Health AI.</span></p><p><br></p><p><span>This breakdown analyses the architecture of Amazon&#39;s agentic AI health assistant, now available across the primary Amazon app. By integrating nationwide Health Information Exchange (HIE) data, the system ingests electronic health records to provide tailored clinical guidance, explain lab results, and triage patients to One Medical providers. While the platform maintains strict HIPAA compliance for data security, the analysis investigates a critical regulatory gap: how software performing active clinical triage and personalized treatment routing currently operates outside traditional Software as a Medical Device (SaMD) definitions.</span></p><p><br></p><p><span>Link: https://health.amazon.com/health-ai/learn-more?ref_=hai_39_prk </span></p><p><span>Evidence of LLMs being unsafe at triage: https://youtu.be/BbB_FGu2uHk </span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• Understand the multi-agent architecture of Amazon Health AI and how it integrates nationwide electronic health records directly into the consumer retail ecosystem.</span></p><p><span>• Differentiate between data security (HIPAA compliance) and clinical safety (FDA oversight), and why privacy alone does not guarantee algorithmic efficacy.</span></p><p><span>• Identify the regulatory blind spot allowing advanced LLMs to perform clinical triage and direct patient care pathways without traditional medical device classification.</span></p><p><br></p><p><span>00:00 – Intro: A scenario of a patient using the Amazon app for medical advice.</span></p><p><span>00:33 – Announcement: Amazon Health AI integration across the USA.</span></p><p><span>01:03 – System Architecture: How the agentic AI works.</span></p><p><span>02:18 – Safety &amp; Ethics: Data 
security vs. clinical efficacy.</span></p><p><span>04:09 – Regulatory Issues: Lack of medical device status/FDA approval.</span></p><p><span>06:10 – Future Outlook: Benefits of modernizing healthcare access.</span></p><p><span>08:18 – Conclusion: Summary of potential and risks.</span></p><p><br></p><p><span>Clinical Governance &amp; Educational Disclosure</span></p><p><span>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</span></p><p><span>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</span></p><p><span>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</span></p><p><span>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</span></p><p><br></p><p><span>Music generated by Mubert https://mubert.com/render</span></p><p><span>https://substack.com/@healthaibrief</span></p><p><br></p><p><span>#HealthAI #DigitalHealth #MedicalDevice #AmazonHealth #Telemedicine #ClinicalTech #HealthcareInnovation #HealthTech #SaMD #FutureOfMedicine</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Are AI chatbots bypassing FDA regulation to deliver personalised medical advice? Explore the clinical and regulatory mechanics of the newly expanded Amazon Health AI.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This breakdown analyses the architecture of Amazon&amp;#39;s agentic AI health assistant, now available across the primary Amazon app. By integrating nationwide Health Information Exchange (HIE) data, the system ingests electronic health records to provide tailored clinical guidance, explain lab results, and triage patients to One Medical providers. While the platform maintains strict HIPAA compliance for data security, the analysis investigates a critical regulatory gap: how software performing active clinical triage and personalized treatment routing currently operates outside traditional Software as a Medical Device (SaMD) definitions.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link: https://health.amazon.com/health-ai/learn-more?ref_=hai_39_prk &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Evidence of LLMs being unsafe at triage: https://youtu.be/BbB_FGu2uHk &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Understand the multi-agent architecture of Amazon Health AI and how it integrates nationwide electronic health records directly into the consumer retail ecosystem.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Differentiate between data security (HIPAA compliance) and clinical safety (FDA oversight), and why privacy alone does not guarantee algorithmic efficacy.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Identify the regulatory blind spot allowing advanced LLMs to perform clinical triage and direct patient care pathways without traditional medical device classification.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 – Intro: A 
scenario of a patient using the Amazon app for medical advice.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:33 – Announcement: Amazon Health AI integration across the USA.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:03 – System Architecture: How the agentic AI works.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:18 – Safety &amp;amp; Ethics: Data security vs. clinical efficacy.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:09 – Regulatory Issues: Lack of medical device status/FDA approval.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:10 – Future Outlook: Benefits of modernizing healthcare access.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;08:18 – Conclusion: Summary of potential and risks.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Patient Safety: This video does not establish a doctor-patient relationship. 
Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;#HealthAI #DigitalHealth #MedicalDevice #AmazonHealth #Telemedicine #ClinicalTech #HealthcareInnovation #HealthTech #SaMD #FutureOfMedicine&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8848613" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/8171fd8b-e439-4037-bb36-9da3e48ca306/stream.mp3"/>
                
                <guid isPermaLink="false">29867de4-9a3b-41b1-a1f4-c439ae1f52a7</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/8171fd8b-e439-4037-bb36-9da3e48ca306</link>
                <pubDate>Sun, 15 Mar 2026 07:00:57 &#43;0000</pubDate>
                <itunes:duration>553</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What Medicine Can Learn From Consumer AI Trends</itunes:title>
                <title>What Medicine Can Learn From Consumer AI Trends</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Stop searching for the next standalone medical AI app: the most powerful AI is already being built into the tools you use every day. We analyse the latest a16z &#34;Top 100 Gen AI Consumer Apps&#34; report to see what it means for the future of clinical digital health.</p><p><br></p><p>In this episode, we break down why the &#34;AI-first&#34; standalone product is failing and how the move toward &#34;agentic&#34; workflows will redefine hospital operations.</p><p><br></p><p>Link to the full report by Olivia Moore: https://a16z.com/100-gen-ai-apps-6/ </p><p><br></p><p>Key Takeaways:</p><p>• How the a16z Gen AI report highlights the shift from &#34;AI destinations&#34; to &#34;invisible AI operating environments.&#34;</p><p>• Why clinical workflow integration, not model power, is the primary driver of successful AI adoption.</p><p>• The critical difference between horizontal AI giants and specialized tools for high-stakes medical imaging and clinical data.</p><p><br></p><p>0:00 The Death of the Standalone AI Medical App</p><p>0:16 Reviewing the a16z GenAI Consumer Apps Report</p><p>0:37 AI as an Invisible Operating Environment</p><p>1:05 ChatGPT’s Evolution into a Super App</p><p>1:24 The &#34;Extra Tab&#34; Friction in Healthcare Workflows</p><p>1:42 The Rise of Agentic AI (Manus &amp; OpenCoder)</p><p>2:08 Horizontal Giants vs Specialised Professional Tools</p><p>3:55 The Shift to AI as a &#34;Fabric&#34; Rather Than a Feature</p><p>4:26 Moving Toward &#34;Operational Intelligence&#34; in Health</p><p><br></p><p>Also catch our previous episodes on:</p><p>- Big Tech Trends in Health 2026: https://youtu.be/01fl9HMcrcc</p><p>- Agentic AI in Healthcare: https://youtu.be/eIKZ67ggW3s</p><p>- More on AI agents for the workplace: https://youtu.be/5aHIBl4hNSA </p><p>- Sleep foundation model: https://youtu.be/5yvxGYtt9Vg </p><p>- TRICORDER study highlighting importance of implementation and integration within workflows: 
https://youtu.be/eOFZvVGKSfU </p><p><br></p><p>Clinical Governance &amp; Educational Disclosure</p><p>This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.</p><p>• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).</p><p>• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.</p><p>• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>https://substack.com/@healthaibrief</p><p><br></p><p>#HealthAI #DigitalHealth #ClinicalWorkflow #MedicalInnovation #HealthTech #AIinMedicine #a16z #DigitalTransformation #HealthIT #NHSInnovation</p>]]></description>
                <content:encoded>&lt;p&gt;Stop searching for the next standalone medical AI app: the most powerful AI is already being built into the tools you use every day. We analyse the latest a16z &amp;#34;Top 100 Gen AI Consumer Apps&amp;#34; report to see what it means for the future of clinical digital health.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we break down why the &amp;#34;AI-first&amp;#34; standalone product is failing and how the move toward &amp;#34;agentic&amp;#34; workflows will redefine hospital operations.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to the full report by Olivia Moore: https://a16z.com/100-gen-ai-apps-6/ &lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• How the a16z Gen AI report highlights the shift from &amp;#34;AI destinations&amp;#34; to &amp;#34;invisible AI operating environments.&amp;#34;&lt;/p&gt;&lt;p&gt;• Why clinical workflow integration, not model power, is the primary driver of successful AI adoption.&lt;/p&gt;&lt;p&gt;• The critical difference between horizontal AI giants and specialized tools for high-stakes medical imaging and clinical data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 The Death of the Standalone AI Medical App&lt;/p&gt;&lt;p&gt;0:16 Reviewing the a16z GenAI Consumer Apps Report&lt;/p&gt;&lt;p&gt;0:37 AI as an Invisible Operating Environment&lt;/p&gt;&lt;p&gt;1:05 ChatGPT’s Evolution into a Super App&lt;/p&gt;&lt;p&gt;1:24 The &amp;#34;Extra Tab&amp;#34; Friction in Healthcare Workflows&lt;/p&gt;&lt;p&gt;1:42 The Rise of Agentic AI (Manus &amp;amp; OpenCoder)&lt;/p&gt;&lt;p&gt;2:08 Horizontal Giants vs Specialised Professional Tools&lt;/p&gt;&lt;p&gt;3:55 The Shift to AI as a &amp;#34;Fabric&amp;#34; Rather Than a Feature&lt;/p&gt;&lt;p&gt;4:26 Moving Toward &amp;#34;Operational Intelligence&amp;#34; in Health&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Also catch our previous episodes on:&lt;/p&gt;&lt;p&gt;- Big Tech Trends in Health 2026: https://youtu.be/01fl9HMcrcc&lt;/p&gt;&lt;p&gt;- Agentic AI in Healthcare: https://youtu.be/eIKZ67ggW3s&lt;/p&gt;&lt;p&gt;- More on AI agents for the workplace: https://youtu.be/5aHIBl4hNSA &lt;/p&gt;&lt;p&gt;- Sleep foundation model: https://youtu.be/5yvxGYtt9Vg &lt;/p&gt;&lt;p&gt;- TRICORDER study highlighting the importance of implementation and integration within workflows: https://youtu.be/eOFZvVGKSfU &lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical Governance &amp;amp; Educational Disclosure&lt;/p&gt;&lt;p&gt;This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.&lt;/p&gt;&lt;p&gt;• Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).&lt;/p&gt;&lt;p&gt;• Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.&lt;/p&gt;&lt;p&gt;• Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ClinicalWorkflow #MedicalInnovation #HealthTech #AIinMedicine #a16z #DigitalTransformation #HealthIT #NHSInnovation&lt;/p&gt;</content:encoded>
                
                <enclosure length="4423680" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/39c17c3c-9466-4887-b990-396de1d9df14/stream.mp3"/>
                
                <guid isPermaLink="false">50ff9fd3-89a8-4417-903b-f9dae3d46a19</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/39c17c3c-9466-4887-b990-396de1d9df14</link>
                <pubDate>Sat, 14 Mar 2026 07:00:57 &#43;0000</pubDate>
                <itunes:duration>276</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>SleepFM: The AI Foundation Model for Disease Prediction</itunes:title>
                <title>SleepFM: The AI Foundation Model for Disease Prediction</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Predict 130+ diseases from one night of sleep? Learn how the SleepFM foundation model uses AI to detect dementia, heart failure, and mortality risk up to 6 years early.</p><p><br></p><p>SleepFM is a breakthrough multimodal sleep foundation model trained on over 585,000 hours of polysomnography (PSG) data. By leveraging a unique &#34;Leave-One-Out&#34; contrastive learning approach, this AI integrates brainwaves, heart activity, and respiratory signals to create a latent representation of human health. Unlike previous supervised models, SleepFM generalizes across different clinical settings and can accurately predict the risk of conditions like Parkinson&#39;s, stroke, and chronic kidney disease years before symptoms appear.</p><p><br></p><p>Link to paper: https://www.nature.com/articles/s41591-025-04133-4</p><p>&#34;A multimodal sleep foundation model for disease prediction&#34;</p><p><br></p><p>Key Takeaways:</p><p>• Foundation Model for Sleep: How SleepFM uses self-supervised learning to overcome the lack of expert-labeled sleep data.</p><p>• Disease Prediction Power: Analysis of the C-Index scores for 130 conditions, including 0.85 for dementia and 0.84 for all-cause mortality.</p><p>• Clinical Generalization: Why the &#34;channel-agnostic&#34; architecture allows this AI to work across different hospitals and PSG equipment configurations.</p><p><br></p><p>0:00 Introduction</p><p>0:27 SleepFM Overview</p><p>1:24 Technical Architecture</p><p>3:22 Disease Prediction</p><p>4:21 C-Index Definition</p><p>5:29 Model Validation</p><p>6:00 Generalization Testing</p><p>7:27 Clinical Challenges</p><p>8:40 Future Outlook</p><p><br></p><p>Sleep AI, Foundation Models in Healthcare, Disease Prediction, Polysomnography AI, Machine Learning in Medicine, SleepFM, Medical AI Research, Digital Biomarkers, Preventative Health AI, Neurodegeneration Detection #HealthAI #SleepMedicine #DementiaPrevention #MachineLearning #DigitalHealth #MedTech #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Predict 130&#43; diseases from one night of sleep? Learn how the SleepFM foundation model uses AI to detect dementia, heart failure, and mortality risk up to 6 years early.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;SleepFM is a breakthrough multimodal sleep foundation model trained on over 585,000 hours of polysomnography (PSG) data. By leveraging a unique &amp;#34;Leave-One-Out&amp;#34; contrastive learning approach, this AI integrates brainwaves, heart activity, and respiratory signals to create a latent representation of human health. Unlike previous supervised models, SleepFM generalizes across different clinical settings and can accurately predict the risk of conditions like Parkinson&amp;#39;s, stroke, and chronic kidney disease years before symptoms appear.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to paper: https://www.nature.com/articles/s41591-025-04133-4&lt;/p&gt;&lt;p&gt;&amp;#34;A multimodal sleep foundation model for disease prediction&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Foundation Model for Sleep: How SleepFM uses self-supervised learning to overcome the lack of expert-labeled sleep data.&lt;/p&gt;&lt;p&gt;• Disease Prediction Power: Analysis of the C-Index scores for 130 conditions, including 0.85 for dementia and 0.84 for all-cause mortality.&lt;/p&gt;&lt;p&gt;• Clinical Generalization: Why the &amp;#34;channel-agnostic&amp;#34; architecture allows this AI to work across different hospitals and PSG equipment configurations.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 Introduction&lt;/p&gt;&lt;p&gt;0:27 SleepFM Overview&lt;/p&gt;&lt;p&gt;1:24 Technical Architecture&lt;/p&gt;&lt;p&gt;3:22 Disease Prediction&lt;/p&gt;&lt;p&gt;4:21 C-Index Definition&lt;/p&gt;&lt;p&gt;5:29 Model Validation&lt;/p&gt;&lt;p&gt;6:00 Generalization Testing&lt;/p&gt;&lt;p&gt;7:27 Clinical Challenges&lt;/p&gt;&lt;p&gt;8:40 Future Outlook&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Sleep AI, Foundation Models in Healthcare, Disease Prediction, Polysomnography AI, Machine Learning in Medicine, SleepFM, Medical AI Research, Digital Biomarkers, Preventative Health AI, Neurodegeneration Detection #HealthAI #SleepMedicine #DementiaPrevention #MachineLearning #DigitalHealth #MedTech #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="9549531" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3610daf2-4533-4654-b0ff-f452d46618bf/stream.mp3"/>
                
                <guid isPermaLink="false">208e4d64-369a-4bcc-9a51-829fd5791d5d</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3610daf2-4533-4654-b0ff-f452d46618bf</link>
                <pubDate>Fri, 13 Mar 2026 07:00:09 &#43;0000</pubDate>
                <itunes:duration>596</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Context Windows - The ‘Short-Term Memory’</itunes:title>
                <title>Context Windows - The ‘Short-Term Memory’</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>If a patient has a 50-page record, can the AI see it all? We explain the &#34;Context Window&#34; and why it’s the biggest bottleneck in medical AI today.</p><p><br></p><p>#ContextWindow #LongContext #MedicalRecords #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;If a patient has a 50-page record, can the AI see it all? We explain the &amp;#34;Context Window&amp;#34; and why it’s the biggest bottleneck in medical AI today.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#ContextWindow #LongContext #MedicalRecords #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1844035" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4dd4bac7-61c8-48a0-bf5d-a01e8bf8a1db/stream.mp3"/>
                
                <guid isPermaLink="false">3a7ab294-5857-45fb-aca9-8948d91f63e9</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4dd4bac7-61c8-48a0-bf5d-a01e8bf8a1db</link>
                <pubDate>Thu, 12 Mar 2026 07:00:23 &#43;0000</pubDate>
                <itunes:duration>115</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Am I an AI? And the bot that tried to destroy a human’s career</itunes:title>
                <title>Am I an AI? And the bot that tried to destroy a human’s career</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Following some comments, I’m pulling back the curtain on exactly how I use AI to produce this podcast, what I wouldn&#39;t automate, and why a recent, terrifying case of an autonomous AI executing a hit piece on a human potentially changes things.</span></p><p><br></p><p><span>This leads to considerations about the cognitive and operational realities of building an AI-assisted workflow. Moving beyond the &#34;Am I an AI?&#34; comments, we consider the exact production process, what is human, what is artificial, and why I threw my experiments with AI video avatars in the bin. We then dissect a chilling incident in the open-source software community where an autonomous AI agent, &#34;MJ Rathbun,&#34; bypassed human oversight to publish a targeted reputational attack on a Matplotlib maintainer. This episode breaks down the technical mechanics of open-source AI autonomy, the psychological paradox of authenticity, and why using AI as a &#34;cognitive forklift&#34; is essential, but outsourcing your thinking is incredibly dangerous.</span></p><p><br></p><p><span>Link to the original information about the AI Agent &#39;hit job&#39;, including all the very thoughtful points made by Scott Shambaugh on the broader implications that I summarise in the audio. I&#39;d highly recommend reading the original blog post in full: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• The Authenticity Paradox: Why we accept physical artificial enhancement (lighting, makeup) but instinctively reject AI video avatars and voice clones due to the crucial element of deception.</span></p><p><span>• The Cognitive Forklift: Learn my exact workflow for deploying AI for efficiency (script tightening, thumbnail generation) without falling into the &#34;Chinese Room&#34; trap that atrophies critical thinking.</span></p><p><span>• The Threat of Autonomous Retaliation: Discover the 
real-world implications of decentralized AI agents (like OpenClaw) executing independent smear campaigns, and how to safeguard against misaligned autonomous logic.</span></p><p><br></p><p><span>00:00 - Am I an AI? Addressing questions</span></p><p><span>01:03 - Pulling back the curtain: My AI-assisted workflow</span></p><p><span>01:45 - Why the &#34;heart&#34; of the content must remain human</span></p><p><span>02:30 - Where AI is useful (packaging and editing)</span></p><p><span>04:32 - Failed experiments: AI video avatars and voice cloning</span></p><p><span>05:40 - The philosophy of authenticity: Human vs artificial content</span></p><p><span>06:15 - Use cases for Google’s NotebookLM</span></p><p><span>07:40 - AI avatars vs digital &#34;enhancements&#34; (the makeup analogy)</span></p><p><span>09:00 - Maintaining anonymity: Accountability and critique</span></p><p><span>11:25 - The &#34;Chinese Room&#34; and the risk of offloading thought</span></p><p><span>12:14 - Taking a forklift to the gym: The purpose of effort</span></p><p><span>13:10 - The danger of &#34;regression to the mean&#34; in AI models</span></p><p><span>13:42 - A parallel with Support Vector Machines (SVMs)</span></p><p><span>14:50 - Automating execution vs. 
automating intent</span></p><p><span>16:30 - When AI agents get personal: The MJ Rathbun story</span></p><p><span>17:15 - The surge of low-quality code contributions in open source</span></p><p><span>18:20 - An autonomous character assassination: AI vs Scott Shambaugh</span></p><p><span>19:30 - Quoting the AI’s &#34;hit piece&#34; on gatekeeping</span></p><p><span>20:55 - The terror of autonomous influence operations</span></p><p><span>23:01 - The &#34;no kill switch&#34; problem with open-source agents</span></p><p><span>24:00 - Tools that possess agency: A new threshold for technology</span></p><p><span>25:20 - Final thoughts: AI as a cognitive forklift, not a replacement</span></p><p><br></p><p><span>Autonomous AI Agents, Open Source AI Security, Healthcare AI Workflows, LLM Hallucinations, AI Reputational Risk, Medical AI Integration, Generative AI Authenticity, DeepMind AI Strategy, HealthTech Innovation, AI Agent Frameworks. #HealthAI #AutonomousAgents #GenerativeAI #HealthTech #CyberSecurity #aiinmedicine Music generated by Mubert https://mubert.com/render</span></p><p><br></p><p><span>https://substack.com/@healthaibrief</span></p><p><span>healthaibrief@outlook.com</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Following some comments, I’m pulling back the curtain on exactly how I use AI to produce this podcast, what I wouldn&amp;#39;t automate, and why a recent, terrifying case of an autonomous AI executing a hit piece on a human potentially changes things.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;This leads to considerations about the cognitive and operational realities of building an AI-assisted workflow. Moving beyond the &amp;#34;Am I an AI?&amp;#34; comments, we consider the exact production process, what is human, what is artificial, and why I threw my experiments with AI video avatars in the bin. We then dissect a chilling incident in the open-source software community where an autonomous AI agent, &amp;#34;MJ Rathbun,&amp;#34; bypassed human oversight to publish a targeted reputational attack on a Matplotlib maintainer. This episode breaks down the technical mechanics of open-source AI autonomy, the psychological paradox of authenticity, and why using AI as a &amp;#34;cognitive forklift&amp;#34; is essential, but outsourcing your thinking is incredibly dangerous.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Link to the original information about the AI Agent &amp;#39;hit job&amp;#39;, including all the very thoughtful points made by Scott Shambaugh on the broader implications that I summarise in the audio. I&amp;#39;d highly recommend reading the original blog post in full: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The Authenticity Paradox: Why we accept physical artificial enhancement (lighting, makeup) but instinctively reject AI video avatars and voice clones due to the crucial element of deception.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The Cognitive Forklift: Learn my exact workflow for deploying 
AI for efficiency (script tightening, thumbnail generation) without falling into the &amp;#34;Chinese Room&amp;#34; trap that atrophies critical thinking.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• The Threat of Autonomous Retaliation: Discover the real-world implications of decentralized AI agents (like OpenClaw) executing independent smear campaigns, and how to safeguard against misaligned autonomous logic.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 - Am I an AI? Addressing questions&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:03 - Pulling back the curtain: My AI-assisted workflow&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:45 - Why the &amp;#34;heart&amp;#34; of the content must remain human&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:30 - Where AI is useful (packaging and editing)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:32 - Failed experiments: AI video avatars and voice cloning&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;05:40 - The philosophy of authenticity: Human vs artificial content&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;06:15 - Use cases for Google’s NotebookLM&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;07:40 - AI avatars vs digital &amp;#34;enhancements&amp;#34; (the makeup analogy)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;09:00 - Maintaining anonymity: Accountability and critique&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;11:25 - The &amp;#34;Chinese Room&amp;#34; and the risk of offloading thought&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;12:14 - Taking a forklift to the gym: The purpose of effort&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;13:10 - The danger of &amp;#34;regression to the mean&amp;#34; in AI models&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;13:42 - A parallel with Support Vector Machines (SVMs)&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;14:50 - Automating execution vs. 
automating intent&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;16:30 - When AI agents get personal: The MJ Rathbun story&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;17:15 - The surge of low-quality code contributions in open source&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;18:20 - An autonomous character assassination: AI vs Scott Shambaugh&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;19:30 - Quoting the AI’s &amp;#34;hit piece&amp;#34; on gatekeeping&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;20:55 - The terror of autonomous influence operations&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;23:01 - The &amp;#34;no kill switch&amp;#34; problem with open-source agents&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;24:00 - Tools that possess agency: A new threshold for technology&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;25:20 - Final thoughts: AI as a cognitive forklift, not a replacement&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Autonomous AI Agents, Open Source AI Security, Healthcare AI Workflows, LLM Hallucinations, AI Reputational Risk, Medical AI Integration, Generative AI Authenticity, DeepMind AI Strategy, HealthTech Innovation, AI Agent Frameworks. #HealthAI #AutonomousAgents #GenerativeAI #HealthTech #CyberSecurity #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;https://substack.com/@healthaibrief&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;healthaibrief@outlook.com&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="25514736" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4e3a2310-efac-46c8-87d7-683148681c2f/stream.mp3"/>
                
                <guid isPermaLink="false">e10e40c9-cc88-4a2c-8d62-4b372ca81c72</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4e3a2310-efac-46c8-87d7-683148681c2f</link>
                <pubDate>Wed, 11 Mar 2026 07:00:12 &#43;0000</pubDate>
                <itunes:duration>1594</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Quantization - Shrinking the LLM &#39;Brain&#39;</itunes:title>
                <title>Quantization - Shrinking the LLM &#39;Brain&#39;</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>You don&#39;t always need a supercomputer. Learn how Quantization &#34;compresses&#34; AI models so they can run locally on hospital hardware.</p><p><br></p><p>#EdgeComputing #Privacy #LocalAI #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;You don&amp;#39;t always need a supercomputer. Learn how Quantization &amp;#34;compresses&amp;#34; AI models so they can run locally on hospital hardware.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#EdgeComputing #Privacy #LocalAI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1898788" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/66d28a29-1ae3-4ce1-bf6e-8484f87deb99/stream.mp3"/>
                
                <guid isPermaLink="false">8a50c4a0-2ba5-40d1-af24-d9fb4a0e55ae</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/66d28a29-1ae3-4ce1-bf6e-8484f87deb99</link>
                <pubDate>Tue, 10 Mar 2026 07:00:01 &#43;0000</pubDate>
                <itunes:duration>118</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Scaling Neuro-Rehab: How Nyra Health’s €20M Series A Might Redefine AI Therapy</itunes:title>
                <title>Scaling Neuro-Rehab: How Nyra Health’s €20M Series A Might Redefine AI Therapy</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Discover how Nyra Health is using proprietary speech AI and an MDR Class IIa platform to bridge the &#34;rehab cliff&#34; for stroke and dementia patients.</p><p><br></p><p>In this deep dive, we analyse the strategic expansion of Nyra Health, a Vienna-based startup that recently secured €20M in Series A funding. We examine their &#34;myReha&#34; ecosystem, which uses advanced Natural Language Processing (NLP) to provide 35,000+ personalised exercises for neurological recovery. From reimbursement hurdles in the DACH region to the technical nuances of their speech models and the current state of their clinical evidence, we break down what this means for the future of digital therapeutics and neurology.</p><p><br></p><p>Key Takeaways:</p><p>• The Rehab Cliff: How AI platforms might maintain therapy intensity after hospital discharge.</p><p>• Clinical Integration: The role of MDR Class IIa certification and insurance reimbursement in scaling Health AI.</p><p>• Data-Driven Neurology: Utilising speech biomarkers and adaptive loops to personalise neuro-recovery.</p><p><br></p><p>0:00 - Introduction &amp; €20M Series A Funding</p><p>0:12 - The &#34;Rehab Cliff&#34; in Neurological Care</p><p>0:54 - Nyra Health’s AI Ecosystem: myReha &amp; nyra insights</p><p>1:41 - Clinical Engineering: Speech Models &amp; Adaptive Feedback</p><p>2:19 - Regulatory Status &amp; Commercial Traction</p><p>2:54 - Analyzing the Evidence: RCTs &amp; Research Gaps</p><p>4:18 - Enhancing Therapist Efficiency with Data</p><p>4:44 - Challenges: Digital Exclusion &amp; Displaced Care</p><p>5:19 - US Expansion &amp; Pharma Partnerships</p><p>5:44 - The Future of Continuous Neurology</p><p>6:10 - Final Verdict &amp; Key Takeaways</p><p><br></p><p>Digital Therapeutics, Neurorehabilitation AI, Stroke Recovery Tech, Nyra Health, Medical AI Reimbursement, Speech Biomarkers, Aphasia Therapy AI, HealthTech Series A, DACH Digital Health, Clinical AI Strategy, #HealthAI 
#Neurotech #DigitalTherapeutics #MedTech #StrokeRecovery #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>https://substack.com/@healthaibrief</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Discover how Nyra Health is using proprietary speech AI and an MDR Class IIa platform to bridge the &amp;#34;rehab cliff&amp;#34; for stroke and dementia patients.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this deep dive, we analyse the strategic expansion of Nyra Health, a Vienna-based startup that recently secured €20M in Series A funding. We examine their &amp;#34;myReha&amp;#34; ecosystem, which uses advanced Natural Language Processing (NLP) to provide 35,000&#43; personalised exercises for neurological recovery. From reimbursement hurdles in the DACH region to the technical nuances of their speech models and the current state of their clinical evidence, we break down what this means for the future of digital therapeutics and neurology.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The Rehab Cliff: How AI platforms might maintain therapy intensity after hospital discharge.&lt;/p&gt;&lt;p&gt;• Clinical Integration: The role of MDR Class IIa certification and insurance reimbursement in scaling Health AI.&lt;/p&gt;&lt;p&gt;• Data-Driven Neurology: Utilising speech biomarkers and adaptive loops to personalise neuro-recovery.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 - Introduction &amp;amp; €20M Series A Funding&lt;/p&gt;&lt;p&gt;0:12 - The &amp;#34;Rehab Cliff&amp;#34; in Neurological Care&lt;/p&gt;&lt;p&gt;0:54 - Nyra Health’s AI Ecosystem: myReha &amp;amp; nyra insights&lt;/p&gt;&lt;p&gt;1:41 - Clinical Engineering: Speech Models &amp;amp; Adaptive Feedback&lt;/p&gt;&lt;p&gt;2:19 - Regulatory Status &amp;amp; Commercial Traction&lt;/p&gt;&lt;p&gt;2:54 - Analyzing the Evidence: RCTs &amp;amp; Research Gaps&lt;/p&gt;&lt;p&gt;4:18 - Enhancing Therapist Efficiency with Data&lt;/p&gt;&lt;p&gt;4:44 - Challenges: Digital Exclusion &amp;amp; Displaced Care&lt;/p&gt;&lt;p&gt;5:19 - US Expansion &amp;amp; Pharma Partnerships&lt;/p&gt;&lt;p&gt;5:44 - The Future of Continuous 
Neurology&lt;/p&gt;&lt;p&gt;6:10 - Final Verdict &amp;amp; Key Takeaways&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Digital Therapeutics, Neurorehabilitation AI, Stroke Recovery Tech, Nyra Health, Medical AI Reimbursement, Speech Biomarkers, Aphasia Therapy AI, HealthTech Series A, DACH Digital Health, Clinical AI Strategy, #HealthAI #Neurotech #DigitalTherapeutics #MedTech #StrokeRecovery #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="6708662" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ef5a9afd-998d-45d0-8e1e-4f93d43b47da/stream.mp3"/>
                
                <guid isPermaLink="false">796e3faf-3948-46a4-b84b-29164f95f060</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ef5a9afd-998d-45d0-8e1e-4f93d43b47da</link>
                <pubDate>Mon, 09 Mar 2026 07:00:09 &#43;0000</pubDate>
                <itunes:duration>419</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>064 Parameters - Does LLM Size Actually Matter</itunes:title>
                <title>064 Parameters - Does LLM Size Actually Matter</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>7B, 70B, 175B - what do these numbers mean? We discuss the trade-off between LLM size, cost, and clinical accuracy.</p><p><br></p><p>#MachineLearning #Parameters #Llama3 #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;7B, 70B, 175B - what do these numbers mean? We discuss the trade-off between LLM size, cost, and clinical accuracy.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MachineLearning #Parameters #Llama3 #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1899624" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/bd977f24-a00a-4aff-a142-6756c10b7e11/stream.mp3"/>
                
                <guid isPermaLink="false">986aca79-7dcf-4afa-89e3-c77db3d2f7cd</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/bd977f24-a00a-4aff-a142-6756c10b7e11</link>
                <pubDate>Sat, 07 Mar 2026 07:00:57 &#43;0000</pubDate>
                <itunes:duration>118</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Amazon Connect Health Agentic AI Aiming to Eliminate Administrative Burden</itunes:title>
                <title>Amazon Connect Health Agentic AI Aiming to Eliminate Administrative Burden</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Discover how AWS is using agentic AI to automate patient scheduling, documentation, and medical coding directly within the EHR.</p><p><br></p><p>Amazon Connect Health is a purpose-built AI solution designed to tackle the administrative complexity of modern healthcare. By integrating directly with EHRs via a unified SDK, it enables 24/7 patient verification, natural language appointment booking, and ambient clinical documentation. This system doesn&#39;t just transcribe; it uses &#34;Evidence Mapping&#34; to link every AI-generated note and billing code to its original source, ensuring clinical trust and auditability.</p><p><br></p><p>Key Takeaways</p><p>• Agentic Automation: How AI now performs real-time EHR tasks like insurance checks and scheduling without human intervention.</p><p>• Clinician Efficiency: Details on ambient documentation and medical coding tools that reduce &#34;pajama time&#34; and accelerate the revenue cycle.</p><p>• Trust &amp; Verification: The technical role of Evidence Mapping in linking AI outputs to source data for clinical safety.</p><p><br></p><p>00:00 Introduction: Amazon Connect Health Launch</p><p>00:22 AWS as the &#34;Connective Tissue&#34; for EHRs</p><p>00:33 The Technical Architecture: Agentic AI Explained</p><p>01:49 Pillar 1: Streamlining Patient Engagement</p><p>02:57 Pillar 2: Point of Care (Insights &amp; Documentation)</p><p>03:46 Accelerating the Revenue Cycle with AI Coding</p><p>04:07 Solving the &#34;Black Box&#34; with Evidence Mapping</p><p>05:02 AWS vs. 
ChatGPT: The Vertical Integration Advantage</p><p>05:34 What This Is (and Is Not): The Admin Assistant</p><p>06:21 The Future of &#34;Invisible AI&#34; in the Clinic</p><p>07:05 Final Verdict: Why the Sum is Greater than the Parts</p><p><br></p><p>Amazon Connect Health, Healthcare AI, AWS HealthLake, Ambient Clinical Documentation, Medical Coding AI, Patient Engagement AI, EHR Integration, Agentic AI Healthcare, FHIR Data, Clinical Workflow Automation. #HealthAI #AWSHealthcare #MedTech #ClinicalWorkflow #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>https://substack.com/@healthaibrief</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Discover how AWS is using agentic AI to automate patient scheduling, documentation, and medical coding directly within the EHR.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Amazon Connect Health is a purpose-built AI solution designed to tackle the administrative complexity of modern healthcare. By integrating directly with EHRs via a unified SDK, it enables 24/7 patient verification, natural language appointment booking, and ambient clinical documentation. This system doesn&amp;#39;t just transcribe; it uses &amp;#34;Evidence Mapping&amp;#34; to link every AI-generated note and billing code to its original source, ensuring clinical trust and auditability.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways&lt;/p&gt;&lt;p&gt;• Agentic Automation: How AI now performs real-time EHR tasks like insurance checks and scheduling without human intervention.&lt;/p&gt;&lt;p&gt;• Clinician Efficiency: Details on ambient documentation and medical coding tools that reduce &amp;#34;pajama time&amp;#34; and accelerate the revenue cycle.&lt;/p&gt;&lt;p&gt;• Trust &amp;amp; Verification: The technical role of Evidence Mapping in linking AI outputs to source data for clinical safety.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 Introduction: Amazon Connect Health Launch&lt;/p&gt;&lt;p&gt;00:22 AWS as the &amp;#34;Connective Tissue&amp;#34; for EHRs&lt;/p&gt;&lt;p&gt;00:33 The Technical Architecture: Agentic AI Explained&lt;/p&gt;&lt;p&gt;01:49 Pillar 1: Streamlining Patient Engagement&lt;/p&gt;&lt;p&gt;02:57 Pillar 2: Point of Care (Insights &amp;amp; Documentation)&lt;/p&gt;&lt;p&gt;03:46 Accelerating the Revenue Cycle with AI Coding&lt;/p&gt;&lt;p&gt;04:07 Solving the &amp;#34;Black Box&amp;#34; with Evidence Mapping&lt;/p&gt;&lt;p&gt;05:02 AWS vs. 
ChatGPT: The Vertical Integration Advantage&lt;/p&gt;&lt;p&gt;05:34 What This Is (and Is Not): The Admin Assistant&lt;/p&gt;&lt;p&gt;06:21 The Future of &amp;#34;Invisible AI&amp;#34; in the Clinic&lt;/p&gt;&lt;p&gt;07:05 Final Verdict: Why the Sum is Greater than the Parts&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Amazon Connect Health, Healthcare AI, AWS HealthLake, Ambient Clinical Documentation, Medical Coding AI, Patient Engagement AI, EHR Integration, Agentic AI Healthcare, FHIR Data, Clinical Workflow Automation. #HealthAI #AWSHealthcare #MedTech #ClinicalWorkflow #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="7275833" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/c7de958a-13ab-4faf-9fd8-3798f03379f2/stream.mp3"/>
                
                <guid isPermaLink="false">276af66b-4c2d-43ba-8577-9b99bc104497</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/c7de958a-13ab-4faf-9fd8-3798f03379f2</link>
                <pubDate>Fri, 06 Mar 2026 07:00:12 &#43;0000</pubDate>
                <itunes:duration>454</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Why Medical AI Fails: Lessons About Model Collapse From Purely AI Synthetic Data</itunes:title>
                <title>Why Medical AI Fails: Lessons About Model Collapse From Purely AI Synthetic Data</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is AI &#34;Model Collapse&#34; the next great threat to patient safety? Discover why AI-generated data contamination is erasing rare diseases from medical records and tripling false reassurance rates.</p><p><br></p><p>This deep dive analyses a landmark study on &#34;Model Collapse&#34; in healthcare. We explore how recursive training on synthetic clinical notes, radiology reports, and medical images leads to a catastrophic loss of pathological diversity, demographic bias shifts, and dangerous &#34;false confidence&#34; in AI diagnostics. We examine the structural failure of LLMs (GPT-2, Qwen3-8B) and Vision-Language models when they &#34;eat their own tail&#34; in the EHR.</p><p><br></p><p>Link to paper: https://www.medrxiv.org/content/10.64898/2026.01.19.26344383v3</p><p>Title: AI-generated data contamination erodes pathological variability and diagnostic reliability</p><p>He et al.</p><p><br></p><p>Key Takeaways:</p><p>• Why increasing synthetic data volume fails to prevent AI model degradation.</p><p>• The &#34;False Reassurance&#34; paradox: How models become more confident while missing life-threatening findings like pneumothorax.</p><p>• The mandatory &#34;Biological Anchor&#34;: Why 50-75% of training data must remain human-verified to prevent clinical utility collapse.</p><p><br></p><p>0:00 Introduction</p><p>0:10 Data Contamination Overview</p><p>0:46 Risks To Medical Nuance</p><p>1:13 Research Methodology</p><p>1:41 Testing Modalities</p><p>2:00 Text Generation Collapse</p><p>2:25 Specialized Domain Impact</p><p>2:49 Instruction Specificity Decline</p><p>3:25 Radiology Safety Risks</p><p>3:52 False Reassurance Paradox</p><p>4:30 Image Synthesis Degradation</p><p>4:52 Demographic Bias Shifts</p><p>5:18 Physician Validation Results</p><p>5:59 Mitigation Strategy Evaluation</p><p>6:31 Real Data Requirements</p><p>7:01 Policy And Tagging Needs</p><p>7:32 Clinical Review Challenges</p><p>7:53 The Biological Anchor</p><p>8:05 Future Research Directions</p><p>8:31 Conclusion</p><p><br></p><p>Medical AI, Model Collapse, Synthetic Data, Clinical LLMs, AI Patient Safety, Radiology AI, EHR Data Contamination, HealthTech, Generative AI in Healthcare, AI Bias. #HealthAI #MedicalAI #LLM #PatientSafety #DigitalHealth #ModelCollapse #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is AI &amp;#34;Model Collapse&amp;#34; the next great threat to patient safety? Discover why AI-generated data contamination is erasing rare diseases from medical records and tripling false reassurance rates.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This deep dive analyses a landmark study on &amp;#34;Model Collapse&amp;#34; in healthcare. We explore how recursive training on synthetic clinical notes, radiology reports, and medical images leads to a catastrophic loss of pathological diversity, demographic bias shifts, and dangerous &amp;#34;false confidence&amp;#34; in AI diagnostics. We examine the structural failure of LLMs (GPT-2, Qwen3-8B) and Vision-Language models when they &amp;#34;eat their own tail&amp;#34; in the EHR.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to paper: https://www.medrxiv.org/content/10.64898/2026.01.19.26344383v3&lt;/p&gt;&lt;p&gt;Title: AI-generated data contamination erodes pathological variability and diagnostic reliability&lt;/p&gt;&lt;p&gt;He et al.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Why increasing synthetic data volume fails to prevent AI model degradation.&lt;/p&gt;&lt;p&gt;• The &amp;#34;False Reassurance&amp;#34; paradox: How models become more confident while missing life-threatening findings like pneumothorax.&lt;/p&gt;&lt;p&gt;• The mandatory &amp;#34;Biological Anchor&amp;#34;: Why 50-75% of training data must remain human-verified to prevent clinical utility collapse.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 Introduction&lt;/p&gt;&lt;p&gt;0:10 Data Contamination Overview&lt;/p&gt;&lt;p&gt;0:46 Risks To Medical Nuance&lt;/p&gt;&lt;p&gt;1:13 Research Methodology&lt;/p&gt;&lt;p&gt;1:41 Testing Modalities&lt;/p&gt;&lt;p&gt;2:00 Text Generation Collapse&lt;/p&gt;&lt;p&gt;2:25 Specialized Domain Impact&lt;/p&gt;&lt;p&gt;2:49 Instruction Specificity Decline&lt;/p&gt;&lt;p&gt;3:25 Radiology Safety Risks&lt;/p&gt;&lt;p&gt;3:52 False Reassurance Paradox&lt;/p&gt;&lt;p&gt;4:30 Image Synthesis Degradation&lt;/p&gt;&lt;p&gt;4:52 Demographic Bias Shifts&lt;/p&gt;&lt;p&gt;5:18 Physician Validation Results&lt;/p&gt;&lt;p&gt;5:59 Mitigation Strategy Evaluation&lt;/p&gt;&lt;p&gt;6:31 Real Data Requirements&lt;/p&gt;&lt;p&gt;7:01 Policy And Tagging Needs&lt;/p&gt;&lt;p&gt;7:32 Clinical Review Challenges&lt;/p&gt;&lt;p&gt;7:53 The Biological Anchor&lt;/p&gt;&lt;p&gt;8:05 Future Research Directions&lt;/p&gt;&lt;p&gt;8:31 Conclusion&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Medical AI, Model Collapse, Synthetic Data, Clinical LLMs, AI Patient Safety, Radiology AI, EHR Data Contamination, HealthTech, Generative AI in Healthcare, AI Bias. #HealthAI #MedicalAI #LLM #PatientSafety #DigitalHealth #ModelCollapse #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="8634618" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a9acf720-1ceb-43fe-9c69-2f38cce1377c/stream.mp3"/>
                
                <guid isPermaLink="false">dbe9e30a-b014-4bd8-8bba-67b5dc14a774</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a9acf720-1ceb-43fe-9c69-2f38cce1377c</link>
                <pubDate>Thu, 05 Mar 2026 07:00:40 &#43;0000</pubDate>
                <itunes:duration>539</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>World Models vs LLMs for Healthcare - Master the Next Frontier According to Yann LeCun</itunes:title>
                <title>World Models vs LLMs for Healthcare - Master the Next Frontier According to Yann LeCun</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Will LLMs hit a structural ceiling in clinical medicine? Discover why Yann LeCun’s &#34;World Models&#34; are the essential next step for safe, autonomous Health AI.</p><p><br></p><p>In this episode, we break down Meta AI Chief Yann LeCun’s blueprint for the future of AI and its specific implications for healthcare. We move beyond the hype of Large Language Models to explore how Energy-Based Models, Regularized Learning (JEPA), and Model-Predictive Control will solve the &#34;hallucination&#34; and safety problems in surgical robotics and complex physiology.</p><p><br></p><p>Key Takeaways:</p><p>• Why &#34;Energy-Based Models&#34; are more stable for ICU monitoring than standard probabilistic AI.</p><p>• How JEPA (Joint-Embedding Predictive Architecture) allows AI to learn rare diseases without massive datasets.</p><p>• Why &#34;World Models&#34; will replace Reinforcement Learning in the next generation of surgical robots.</p><p><br></p><p>0:00 Introduction</p><p>0:22 LLMs vs World Models</p><p>0:50 Energy Based Models</p><p>2:00 Clinical EBM Application</p><p>2:50 Learning Methods Comparison</p><p>3:30 JEPA For Rare Disease</p><p>4:25 RL vs MPC</p><p>5:15 MPC Clinical Simulations</p><p>6:25 DeepMind Genie Model</p><p>7:35 Transformer Architecture Limits</p><p>8:31 Future Modular Systems</p><p>9:08 Spatial Reasoning Advances</p><p>10:07 Strategic Focus Conclusion</p><p><br></p><p>Health AI, Yann LeCun, World Models, Medical Robotics, JEPA, LLM limitations, Clinical AI, Surgical Automation, Machine Learning in Medicine. #HealthAI #MedicalAI #YannLeCun #WorldModels #MedTech #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Will LLMs hit a structural ceiling in clinical medicine? Discover why Yann LeCun’s &amp;#34;World Models&amp;#34; are the essential next step for safe, autonomous Health AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we break down Meta AI Chief Yann LeCun’s blueprint for the future of AI and its specific implications for healthcare. We move beyond the hype of Large Language Models to explore how Energy-Based Models, Regularized Learning (JEPA), and Model-Predictive Control will solve the &amp;#34;hallucination&amp;#34; and safety problems in surgical robotics and complex physiology.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Why &amp;#34;Energy-Based Models&amp;#34; are more stable for ICU monitoring than standard probabilistic AI.&lt;/p&gt;&lt;p&gt;• How JEPA (Joint-Embedding Predictive Architecture) allows AI to learn rare diseases without massive datasets.&lt;/p&gt;&lt;p&gt;• Why &amp;#34;World Models&amp;#34; will replace Reinforcement Learning in the next generation of surgical robots.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 Introduction&lt;/p&gt;&lt;p&gt;0:22 LLMs vs World Models&lt;/p&gt;&lt;p&gt;0:50 Energy Based Models&lt;/p&gt;&lt;p&gt;2:00 Clinical EBM Application&lt;/p&gt;&lt;p&gt;2:50 Learning Methods Comparison&lt;/p&gt;&lt;p&gt;3:30 JEPA For Rare Disease&lt;/p&gt;&lt;p&gt;4:25 RL vs MPC&lt;/p&gt;&lt;p&gt;5:15 MPC Clinical Simulations&lt;/p&gt;&lt;p&gt;6:25 DeepMind Genie Model&lt;/p&gt;&lt;p&gt;7:35 Transformer Architecture Limits&lt;/p&gt;&lt;p&gt;8:31 Future Modular Systems&lt;/p&gt;&lt;p&gt;9:08 Spatial Reasoning Advances&lt;/p&gt;&lt;p&gt;10:07 Strategic Focus Conclusion&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI, Yann LeCun, World Models, Medical Robotics, JEPA, LLM limitations, Clinical AI, Surgical Automation, Machine Learning in Medicine. 
#HealthAI #MedicalAI #YannLeCun #WorldModels #MedTech #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="10376254" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ac94ca5c-67f8-4028-aa3c-eb6c27c0dbf2/stream.mp3"/>
                
                <guid isPermaLink="false">0b87a6d8-aa75-43ac-8203-b60b075861ba</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ac94ca5c-67f8-4028-aa3c-eb6c27c0dbf2</link>
                <pubDate>Wed, 04 Mar 2026 07:00:05 &#43;0000</pubDate>
                <itunes:duration>648</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>063 Reinforcement Learning from Human Feedback (RLHF) - Human-in-the-Loop</itunes:title>
                <title>063 Reinforcement Learning from Human Feedback (RLHF) - Human-in-the-Loop</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Reinforcement Learning from Human Feedback (RLHF) is how we keep AI safe. Learn how human doctors &#34;rank&#34; AI answers to make them safer and more helpful.</p><p><br></p><p>#AISafety #RLHF #EthicalAI #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Reinforcement Learning from Human Feedback (RLHF) is how we keep AI safe. Learn how human doctors &amp;#34;rank&amp;#34; AI answers to make them safer and more helpful.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#AISafety #RLHF #EthicalAI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1864097" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4a1b7cd5-40a9-4921-a2b0-fb7a669bfabb/stream.mp3"/>
                
                <guid isPermaLink="false">8c77eabc-c274-4f3a-9ecb-b9a7f8f2f603</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4a1b7cd5-40a9-4921-a2b0-fb7a669bfabb</link>
                <pubDate>Tue, 03 Mar 2026 07:00:12 &#43;0000</pubDate>
                <itunes:duration>116</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Elon Musk&#39;s Grok Medical &#34;Second Opinion&#34; Suggestion</itunes:title>
                <title>Elon Musk&#39;s Grok Medical &#34;Second Opinion&#34; Suggestion</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is Elon Musk’s Grok the future of medical diagnostics or a clinical catastrophe? Discover why uploading your MRI to xAI might be the most dangerous &#34;second opinion&#34; in modern medicine.</p><p><br></p><p>In this episode of The Health AI Brief, we deconstruct the strategic and technical flaws behind the call for crowdsourced medical data on the X platform. We analyze why Grok’s own internal warnings contradict Musk’s vision, the economics of labeled data, and the fundamental danger of training clinical AI on user feedback rather than medical ground truth.</p><p><br></p><p>Key Takeaways:</p><p>• The RLHF Paradox: Why optimizing for user satisfaction creates &#34;sycophantic&#34; AI that prioritizes engagement over diagnostic accuracy.</p><p>• The Data Shortcut: How xAI is attempting to bypass expensive clinical labeling through the public, and why this results in a &#34;noisy&#34; and unreliable training signal.</p><p>• Privacy &amp; Performance: A look at the 60% performance drop-off when moving from lab settings to real-world user data, and the permanent loss of HIPAA protections.</p><p><br></p><p>Health AI, Elon Musk, Grok AI, Medical Data Privacy, xAI, Diagnostic AI, HIPAA Compliance, Machine Learning in Healthcare, Medical Imaging AI, The Health AI Brief. #HealthAI #Grok #MedicalAI #HealthTech #DigitalHealth #MedTwitter #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is Elon Musk’s Grok the future of medical diagnostics or a clinical catastrophe? Discover why uploading your MRI to xAI might be the most dangerous &amp;#34;second opinion&amp;#34; in modern medicine.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode of The Health AI Brief, we deconstruct the strategic and technical flaws behind the call for crowdsourced medical data on the X platform. We analyze why Grok’s own internal warnings contradict Musk’s vision, the economics of labeled data, and the fundamental danger of training clinical AI on user feedback rather than medical ground truth.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The RLHF Paradox: Why optimizing for user satisfaction creates &amp;#34;sycophantic&amp;#34; AI that prioritizes engagement over diagnostic accuracy.&lt;/p&gt;&lt;p&gt;• The Data Shortcut: How xAI is attempting to bypass expensive clinical labeling through the public, and why this results in a &amp;#34;noisy&amp;#34; and unreliable training signal.&lt;/p&gt;&lt;p&gt;• Privacy &amp;amp; Performance: A look at the 60% performance drop-off when moving from lab settings to real-world user data, and the permanent loss of HIPAA protections.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI, Elon Musk, Grok AI, Medical Data Privacy, xAI, Diagnostic AI, HIPAA Compliance, Machine Learning in Healthcare, Medical Imaging AI, The Health AI Brief. #HealthAI #Grok #MedicalAI #HealthTech #DigitalHealth #MedTwitter #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="6654328" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6fa80f07-918e-4e3b-8bb7-a12067356731/stream.mp3"/>
                
                <guid isPermaLink="false">d402ded1-2b0e-4f18-a7c5-56b6fa271dee</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6fa80f07-918e-4e3b-8bb7-a12067356731</link>
                <pubDate>Mon, 02 Mar 2026 07:00:35 &#43;0000</pubDate>
                <itunes:duration>415</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>ChatGPT Health Deemed Unsafe: Nature Medicine Urgent Alert</itunes:title>
                <title>ChatGPT Health Deemed Unsafe: Nature Medicine Urgent Alert</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Nature Medicine has fast-tracked an urgent study finding ChatGPT Health is dangerous in medical emergencies. Is OpenAI&#39;s $110bn tool safe for your patients?</p><p><br></p><p>OpenAI’s ChatGPT Health processes 250 million queries a week, but a new stress test published in Nature Medicine reveals a 52% failure rate in emergency triage. In this episode of The Health AI Brief, we analyze why the model misses life-threatening conditions like respiratory failure and DKA, the &#34;Suicide Guardrail Paradox,&#34; and the massive regulatory gap between Big Tech&#39;s $110bn war chest and the FDA&#39;s shrinking budget. We discuss why &#34;move fast and break things&#34; is an unacceptable strategy for clinical health.</p><p><br></p><p>Link to paper: https://www.nature.com/articles/s41591-026-04297-7</p><p>Authors: Ramaswamy et al</p><p><br></p><p>Key Takeaways</p><p>- Why ChatGPT Health under-triages more than half of true medical emergencies.</p><p>- The &#34;Inverted U-Shape&#34; failure: Why AI is most dangerous at clinical extremes.</p><p>- The Regulation Gap: Why the FDA’s $6.8bn budget cannot keep pace with OpenAI’s $110bn funding.</p><p><br></p><p>00:00 Nature Medicine Fast-Tracks ChatGPT Health Warning</p><p>00:30 The Mount Sinai Stress Test: 960 Clinical Interactions</p><p>01:05 The Inverted U-Shape: Why AI Fails at Clinical Extremes</p><p>01:32 Under-Triage: Missing DKA and Respiratory Failure</p><p>02:50 The Suicide Crisis Guardrail Paradox</p><p>03:45 The Regulatory Vacuum: Professional Bodies vs. Big Tech</p><p>04:31 OpenAI Funding ($110bn) vs. FDA Budget Gap</p><p>05:30 Clinical Moats: Why Doctors Still Matter for Safety</p><p>06:14 Engineering Targets: Clinical Trajectory vs. Snapshots</p><p>07:02 Final Verdict: The Case for Premarket Safety Requirements</p><p><br></p><p>ChatGPT Health, AI Medical Triage, Nature Medicine Study, OpenAI Dangerous, Patient Safety AI, FDA Regulation, Clinical LLM, Health AI Brief, Medical AI Error, Emergency Medicine AI. #HealthAI #ChatGPT #PatientSafety #MedicalInnovation #OpenAI #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>https://substack.com/@healthaibrief</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Nature Medicine has fast-tracked an urgent study finding ChatGPT Health is dangerous in medical emergencies. Is OpenAI&amp;#39;s $110bn tool safe for your patients?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;OpenAI’s ChatGPT Health processes 250 million queries a week, but a new stress test published in Nature Medicine reveals a 52% failure rate in emergency triage. In this episode of The Health AI Brief, we analyze why the model misses life-threatening conditions like respiratory failure and DKA, the &amp;#34;Suicide Guardrail Paradox,&amp;#34; and the massive regulatory gap between Big Tech&amp;#39;s $110bn war chest and the FDA&amp;#39;s shrinking budget. We discuss why &amp;#34;move fast and break things&amp;#34; is an unacceptable strategy for clinical health.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to paper: https://www.nature.com/articles/s41591-026-04297-7&lt;/p&gt;&lt;p&gt;Authors: Ramaswamy et al&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways&lt;/p&gt;&lt;p&gt;- Why ChatGPT Health under-triages more than half of true medical emergencies.&lt;/p&gt;&lt;p&gt;- The &amp;#34;Inverted U-Shape&amp;#34; failure: Why AI is most dangerous at clinical extremes.&lt;/p&gt;&lt;p&gt;- The Regulation Gap: Why the FDA’s $6.8bn budget cannot keep pace with OpenAI’s $110bn funding.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 Nature Medicine Fast-Tracks ChatGPT Health Warning&lt;/p&gt;&lt;p&gt;00:30 The Mount Sinai Stress Test: 960 Clinical Interactions&lt;/p&gt;&lt;p&gt;01:05 The Inverted U-Shape: Why AI Fails at Clinical Extremes&lt;/p&gt;&lt;p&gt;01:32 Under-Triage: Missing DKA and Respiratory Failure&lt;/p&gt;&lt;p&gt;02:50 The Suicide Crisis Guardrail Paradox&lt;/p&gt;&lt;p&gt;03:45 The Regulatory Vacuum: Professional Bodies vs. Big Tech&lt;/p&gt;&lt;p&gt;04:31 OpenAI Funding ($110bn) vs. FDA Budget Gap&lt;/p&gt;&lt;p&gt;05:30 Clinical Moats: Why Doctors Still Matter for Safety&lt;/p&gt;&lt;p&gt;06:14 Engineering Targets: Clinical Trajectory vs. Snapshots&lt;/p&gt;&lt;p&gt;07:02 Final Verdict: The Case for Premarket Safety Requirements&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;ChatGPT Health, AI Medical Triage, Nature Medicine Study, OpenAI Dangerous, Patient Safety AI, FDA Regulation, Clinical LLM, Health AI Brief, Medical AI Error, Emergency Medicine AI. #HealthAI #ChatGPT #PatientSafety #MedicalInnovation #OpenAI #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;https://substack.com/@healthaibrief&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="7316375" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/18becf1a-702b-4a38-a461-a27c99a14dd0/stream.mp3"/>
                
                <guid isPermaLink="false">a7780fde-0f61-4445-9e7b-35539cda3b62</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/18becf1a-702b-4a38-a461-a27c99a14dd0</link>
                <pubDate>Sun, 01 Mar 2026 07:00:18 &#43;0000</pubDate>
                <itunes:duration>457</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>062 Fine-Tuning - Specialist vs Generalist - Sending AI to Medical School</itunes:title>
                <title>062 Fine-Tuning - Specialist vs Generalist - Sending AI to Medical School</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>You have a base model; now you need an Oncologist. We explain Fine-Tuning: the process of specializing an LLM on clinical data.</p><p><br></p><p>#FineTuning #SpecialistAI #HealthcareInnovation #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;You have a base model, now you need an Oncologist. We explain Fine-Tuning: the process of specializing an LLM on clinical data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#FineTuning #SpecialistAI #HealthcareInnovation #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1838602" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/53093f70-a3d9-4699-89e2-4e9d39f3e3af/stream.mp3"/>
                
                <guid isPermaLink="false">5d0ae7a4-a640-4b5f-bfb7-d95024364bac</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/53093f70-a3d9-4699-89e2-4e9d39f3e3af</link>
                <pubDate>Fri, 27 Feb 2026 07:00:23 &#43;0000</pubDate>
                <itunes:duration>114</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Health AI in 2026: The Definitive Guide to Big Tech Strategies</itunes:title>
                <title>Health AI in 2026: The Definitive Guide to Big Tech Strategies</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The definitive 2026 Health AI strategy audit. Discover which Big Tech players, OpenAI, Google, Anthropic, or Microsoft, are actually aiming to solve clinical problems and which are just shipping marketing.</p><p><br></p><p>We perform a dispassionate clinical audit of the global Health AI landscape for 2026. We move beyond the hype to analyse the &#34;moats&#34; and &#34;missions&#34; of the major players, from OpenAI’s consumer-led personal health ally to Anthropic’s infrastructure-first approach with MCP. We critique the incrementalism in some research, the secrecy of Microsoft’s enterprise play, and the vertical integration of Amazon’s agentic systems. Finally, we outline a vision for &#34;AI by design&#34; that replaces medieval medical workflows with continuous, decentralised care.</p><p><br></p><p>00:00 – Intro: The Strategic Audit of the Health AI Landscape</p><p>00:45 – OpenAI vs Anthropic: Consumer Allies vs. Enterprise Plumbing</p><p>03:20 – Google’s &#34;Incrementalism&#34;: Why Med-Gemini Isn’t a Paradigm Shift</p><p>05:10 – The Microsoft Dilemma: Enterprise Guardrails vs Clinical Utility</p><p>06:50 – Amazon’s &#34;Closed Loop&#34;: Moving from Generative to Agentic Care</p><p>07:30 – Open Evidence: Building Physician Trust Through RAG</p><p>08:10 – The EHR Giants: Epic, Oracle, and the &#34;Data Archaeology&#34; Problem</p><p>09:30 – Why Meta and X.AI are Missing from the Clinical Room</p><p>09:50 – Apple’s Long Game: Passive Phenotyping &amp; &#34;Owning the Door&#34;</p><p>10:40 – The Regulatory and Economic Walls: EU AI Act &amp; the Cost of Inference</p><p>11:50 – The Future Beyond the Chatbot</p><p>15:30 – Final Verdict: Moving Beyond Medieval Frameworks</p><p><br></p><p>Health AI 2026, Clinical LLMs, Medical AI Strategy, Google MedGemma, OpenAI ChatGPT Health, Anthropic Claude Healthcare, Epic Comet AI, HealthTech Audit, Digital Front Door, Medical Decision Support #HealthAI #MedTech2026 #DigitalHealth 
#HealthTechStrategy #ClinicalAI #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;The definitive 2026 Health AI strategy audit. Discover which Big Tech players, OpenAI, Google, Anthropic, or Microsoft, are actually aiming to solve clinical problems and which are just shipping marketing.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We perform a dispassionate clinical audit of the global Health AI landscape for 2026. We move beyond the hype to analyse the &amp;#34;moats&amp;#34; and &amp;#34;missions&amp;#34; of the major players, from OpenAI’s consumer-led personal health ally to Anthropic’s infrastructure-first approach with MCP. We critique the incrementalism in some research, the secrecy of Microsoft’s enterprise play, and the vertical integration of Amazon’s agentic systems. Finally, we outline a vision for &amp;#34;AI by design&amp;#34; that replaces medieval medical workflows with continuous, decentralised care.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 – Intro: The Strategic Audit of the Health AI Landscape&lt;/p&gt;&lt;p&gt;00:45 – OpenAI vs Anthropic: Consumer Allies vs. 
Enterprise Plumbing&lt;/p&gt;&lt;p&gt;03:20 – Google’s &amp;#34;Incrementalism&amp;#34;: Why Med-Gemini Isn’t a Paradigm Shift&lt;/p&gt;&lt;p&gt;05:10 – The Microsoft Dilemma: Enterprise Guardrails vs Clinical Utility&lt;/p&gt;&lt;p&gt;06:50 – Amazon’s &amp;#34;Closed Loop&amp;#34;: Moving from Generative to Agentic Care&lt;/p&gt;&lt;p&gt;07:30 – Open Evidence: Building Physician Trust Through RAG&lt;/p&gt;&lt;p&gt;08:10 – The EHR Giants: Epic, Oracle, and the &amp;#34;Data Archaeology&amp;#34; Problem&lt;/p&gt;&lt;p&gt;09:30 – Why Meta and X.AI are Missing from the Clinical Room&lt;/p&gt;&lt;p&gt;09:50 – Apple’s Long Game: Passive Phenotyping &amp;amp; &amp;#34;Owning the Door&amp;#34;&lt;/p&gt;&lt;p&gt;10:40 – The Regulatory and Economic Walls: EU AI Act &amp;amp; the Cost of Inference&lt;/p&gt;&lt;p&gt;11:50 – The Future Beyond the Chatbot&lt;/p&gt;&lt;p&gt;15:30 – Final Verdict: Moving Beyond Medieval Frameworks&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI 2026, Clinical LLMs, Medical AI Strategy, Google MedGemma, OpenAI ChatGPT Health, Anthropic Claude Healthcare, Epic Comet AI, HealthTech Audit, Digital Front Door, Medical Decision Support #HealthAI #MedTech2026 #DigitalHealth #HealthTechStrategy #ClinicalAI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="15430217" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6a2c684d-3e1a-4cbd-aeef-6ff64495c777/stream.mp3"/>
                
                <guid isPermaLink="false">a4d810fa-1cd6-463d-9bc3-743c0e1fd91d</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6a2c684d-3e1a-4cbd-aeef-6ff64495c777</link>
                <pubDate>Thu, 26 Feb 2026 07:00:38 &#43;0000</pubDate>
                <itunes:duration>964</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Heidi Acquires Automedica and Rolls Out New Products</itunes:title>
                <title>Heidi Acquires Automedica and Rolls Out New Products</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Discover how Heidi Health’s acquisition of Automedica is bringing ad-free, NICE-compliant clinical reasoning directly into your consultations.</p><p><br></p><p>In this episode, we consider Heidi Health’s strategic shift from a simple AI medical scribe to a comprehensive &#34;AI Care Partner.&#34; By acquiring UK-based Automedica and launching Heidi Evidence and Heidi Comms, the platform now integrates real-time, RAG-based clinical guidelines (NICE, BMJ, MIMS) and patient coordination tools into a single, ad-free workflow. We explore the technical benefits of Anthropic’s Claude models and the regulatory significance of the MHRA AI Airlock for the NHS.</p><p><br></p><p>Key Takeaways:</p><p>- Understand the difference between standard LLMs and Retrieval-Augmented Generation (RAG) for medical guidelines.</p><p>- The strategic importance of the MHRA AI Airlock for safe clinical AI implementation in the UK.</p><p>- Why the &#34;Single Interface&#34; strategy is the next evolution in reducing clinician burnout and portal fatigue.</p><p><br></p><p>00:00 - Introduction &amp; &#34;Scribe Wars&#34; Phase 2</p><p>00:30 - Heidi’s Rapid Growth &amp; Scribe Limitations</p><p>01:21 - Automedica Acquisition: RAG &amp; AI Airlock</p><p>02:35 - Heidi Evidence: Real-Time Clinical Insights</p><p>03:51 - Ad-Free Clinical Infrastructure</p><p>04:19 - Heidi Comms: Streamlining Patient Communication</p><p>05:31 - Hurdles: EHR Integration &amp; Liability</p><p>06:40 - Conclusion: The Future of AI in Healthcare</p><p><br></p><p>Clinical AI, Heidi Health, Medical AI Scribe, Automedica, NICE Guidelines, NHS AI, Medical Decision Support, RAG AI Healthcare, Anthropic Claude, HealthTech, #HealthAI #DigitalHealth #MedTech #HeidiHealth #ClinicalAI #NHS #FutureOfMedicine #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Discover how Heidi Health’s acquisition of Automedica is bringing ad-free, NICE-compliant clinical reasoning directly into your consultations.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we consider Heidi Health’s strategic shift from a simple AI medical scribe to a comprehensive &amp;#34;AI Care Partner.&amp;#34; By acquiring UK-based Automedica and launching Heidi Evidence and Heidi Comms, the platform now integrates real-time, RAG-based clinical guidelines (NICE, BMJ, MIMS) and patient coordination tools into a single, ad-free workflow. We explore the technical benefits of Anthropic’s Claude models and the regulatory significance of the MHRA AI Airlock for the NHS.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;- Understand the difference between standard LLMs and Retrieval-Augmented Generation (RAG) for medical guidelines.&lt;/p&gt;&lt;p&gt;- The strategic importance of the MHRA AI Airlock for safe clinical AI implementation in the UK.&lt;/p&gt;&lt;p&gt;- Why the &amp;#34;Single Interface&amp;#34; strategy is the next evolution in reducing clinician burnout and portal fatigue.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 - Introduction &amp;amp; &amp;#34;Scribe Wars&amp;#34; Phase 2&lt;/p&gt;&lt;p&gt;00:30 - Heidi’s Rapid Growth &amp;amp; Scribe Limitations&lt;/p&gt;&lt;p&gt;01:21 - AutoMedica Acquisition: RAG &amp;amp; AI Airlock&lt;/p&gt;&lt;p&gt;02:35 - Heidi Evidence: Real-Time Clinical Insights&lt;/p&gt;&lt;p&gt;03:51 - Ad-Free Clinical Infrastructure&lt;/p&gt;&lt;p&gt;04:19 - Heidi Comms: Streamlining Patient Communication&lt;/p&gt;&lt;p&gt;05:31 - Hurdles: EHR Integration &amp;amp; Liability&lt;/p&gt;&lt;p&gt;06:40 - Conclusion: The Future of AI in Healthcare&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical AI, Heidi Health, Medical AI Scribe, Automedica, NICE Guidelines, NHS AI, Medical Decision Support, RAG AI Healthcare, Anthropic Claude, 
HealthTech, #HealthAI #DigitalHealth #MedTech #HeidiHealth #ClinicalAI #NHS #FutureOfMedicine #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="7572166" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/edfc8f7a-4eec-4fa5-b853-1bbc8d555bd1/stream.mp3"/>
                
                <guid isPermaLink="false">02a29284-b350-4373-924c-a22b04053fab</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/edfc8f7a-4eec-4fa5-b853-1bbc8d555bd1</link>
                <pubDate>Wed, 25 Feb 2026 07:00:47 &#43;0000</pubDate>
                <itunes:duration>473</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>061 Pre-Training - What LLMs Learn Before They Become &#39;Medical&#39;</itunes:title>
                <title>061 Pre-Training - What LLMs Learn Before They Become &#39;Medical&#39;</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI starts by reading the whole internet. We explore the massive compute power required for &#34;Pre-training&#34; and what a general-purpose model actually knows.</p><p><br></p><p>#LLM #BigData #Compute #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;AI starts by reading the whole internet. We explore the massive compute power required for &amp;#34;Pre-training&amp;#34; and what a general-purpose model actually knows.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#LLM #BigData #Compute #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1969841" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/c49a424b-dc60-435f-84d8-3f33a5e3209b/stream.mp3"/>
                
                <guid isPermaLink="false">a08bb972-4d0e-4a92-aa39-a90343e08b07</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/c49a424b-dc60-435f-84d8-3f33a5e3209b</link>
                <pubDate>Tue, 24 Feb 2026 07:00:41 &#43;0000</pubDate>
                <itunes:duration>123</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Deep Dive into Ant Afu Health AI App - Affiliate of Jack Ma’s Alibaba</itunes:title>
                <title>Deep Dive into Ant Afu Health AI App - Affiliate of Jack Ma’s Alibaba</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p><span>Can Jack Ma’s Ant Group solve the healthcare crisis? Discover how the Ant Afu (AQ) app reached 30 million users by integrating AI directly into the world’s largest payment ecosystem.</span></p><p><br></p><p><span>Ant Group is repositioning healthcare as its next growth engine, leveraging AI-native &#34;Doctor Agents&#34; and &#34;DeepSearch&#34; to bridge the gap between digital triaging and clinical action. By integrating hospital booking, insurance payments, and clinical decision support into a single super-app, they are creating a closed-loop healthcare model that challenges the standalone approaches of ChatGPT and Claude.</span></p><p><br></p><p><span>Key Takeaways:</span></p><p><span>• Agentic Integration: Learn how Ant Afu closes the loop between symptom checking and insurance-paid hospital bookings.</span></p><p><span>• Digital Twins: Understanding the role of AI avatars trained by 300,000 licensed physicians in reducing administrative burdens.</span></p><p><span>• Clinician Tools: A look at &#34;DeepSearch,&#34; Ant&#39;s new evidence-based research tool for medical professionals.</span></p><p><br></p><p><span>00:00 – Introduction: The Health AI Revolution in China</span></p><p><span>00:43 – Solving the &#34;Last Mile&#34; of Care</span></p><p><span>01:05 – The Closed-Loop Ecosystem: From Symptom to Payment</span></p><p><span>01:34 – The Friendship Paradox: Balancing Rapport and Authority</span></p><p><span>02:12 – Algorithmic Traceability: Building Clinical Trust</span></p><p><span>02:51 – Evidence Base: Measuring Impact at Scale</span></p><p><span>03:22 – Capabilities &amp; Limitations: What the AI Can (and Can&#39;t) Do</span></p><p><span>03:54 – Lessons in Ecosystem Integration</span></p><p><span>04:18 – The Future: Global Implications and Challenges</span></p><p><span>05:07 – Outro and Final Thoughts</span></p><p><br></p><p><span>Health AI, Ant Group, Ant Afu, Alipay Healthcare, Clinical Decision Support, Medical AI China, Jack Ma Health-Tech, DeepSearch AI, Digital Health Ecosystem, AI Doctor Agents #HealthAI #MedTech #AntGroup #DigitalHealth #ClinicalAI #aiinmedicine Music generated by Mubert https://mubert.com/render</span></p><p><br></p><p><span>healthaibrief@outlook.com</span></p>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Can Jack Ma’s Ant Group solve the healthcare crisis? Discover how the Ant Afu (AQ) app reached 30 million users by integrating AI directly into the world’s largest payment ecosystem.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Ant Group is repositioning healthcare as its next growth engine, leveraging AI-native &amp;#34;Doctor Agents&amp;#34; and &amp;#34;DeepSearch&amp;#34; to bridge the gap between digital triaging and clinical action. By integrating hospital booking, insurance payments, and clinical decision support into a single super-app, they are creating a closed-loop healthcare model that challenges the standalone approaches of ChatGPT and Claude.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Key Takeaways:&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Agentic Integration: Learn how Ant Afu closes the loop between symptom checking and insurance-paid hospital bookings.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Digital Twins: Understanding the role of AI avatars trained by 300,000 licensed physicians in reducing administrative burdens.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;• Clinician Tools: A look at &amp;#34;DeepSearch,&amp;#34; Ant&amp;#39;s new evidence-based research tool for medical professionals.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:00 – Introduction: The Health AI Revolution in China&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;00:43 – Solving the &amp;#34;Last Mile&amp;#34; of Care&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:05 – The Closed-Loop Ecosystem: From Symptom to Payment&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;01:34 – The Friendship Paradox: Balancing Rapport and Authority&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:12 – Algorithmic Traceability: Building Clinical Trust&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;02:51 – Evidence Base: Measuring Impact at 
Scale&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:22 – Capabilities &amp;amp; Limitations: What the AI Can (and Can&amp;#39;t) Do&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;03:54 – Lessons in Ecosystem Integration&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;04:18 – The Future: Global Implications and Challenges&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;05:07 – Outro and Final Thoughts&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Health AI, Ant Group, Ant Afu, Alipay Healthcare, Clinical Decision Support, Medical AI China, Jack Ma Health-Tech, DeepSearch AI, Digital Health Ecosystem, AI Doctor Agents #HealthAI #MedTech #AntGroup #DigitalHealth #ClinicalAI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;healthaibrief@outlook.com&lt;/span&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="4885942" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/fd089099-e7c0-442e-8dfa-1d9486b022a4/stream.mp3"/>
                
                <guid isPermaLink="false">226a3cef-6cd4-414c-be49-43dfff8a0213</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/fd089099-e7c0-442e-8dfa-1d9486b022a4</link>
                <pubDate>Mon, 23 Feb 2026 07:00:54 &#43;0000</pubDate>
                <itunes:duration>305</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>060 Positional Encoding - How AI Knows the Order of Your Symptoms</itunes:title>
                <title>060 Positional Encoding - How AI Knows the Order of Your Symptoms</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>In a sentence, order is everything. Learn how &#34;Positional Encoding&#34; prevents AI from getting confused between &#34;The patient had a fall after the stroke&#34; and &#34;The patient had a stroke after the fall.&#34;</p><p><br></p><p>#AI #ClinicalSafety #DataScience #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;In a sentence, order is everything. Learn how &amp;#34;Positional Encoding&amp;#34; prevents AI from getting confused between &amp;#34;The patient had a fall after the stroke&amp;#34; and &amp;#34;The patient had a stroke after the fall.&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#AI #ClinicalSafety #DataScience #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1763369" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/c021dfe8-6825-4411-8a1c-226aefad5d82/stream.mp3"/>
                
                <guid isPermaLink="false">4f8022ec-198c-4624-8266-4a6cb0f7239d</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/c021dfe8-6825-4411-8a1c-226aefad5d82</link>
                <pubDate>Fri, 20 Feb 2026 07:00:44 &#43;0000</pubDate>
                <itunes:duration>110</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>059 BERT vs. GPT -  Understanding the Difference Between an Encoder and Decoder</itunes:title>
                <title>059 BERT vs. GPT -  Understanding the Difference Between an Encoder and Decoder</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>One reads, one writes. We explain why BERT is better for medical coding and GPT is better for patient summaries.</p><p><br></p><p>#GPT #BERT #MedicalNLP #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;One reads, one writes. We explain why BERT is better for medical coding and GPT is better for patient summaries.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#GPT #BERT #MedicalNLP #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1981544" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/36a8662b-a36c-4d73-bbbe-1e1c90e6b67a/stream.mp3"/>
                
                <guid isPermaLink="false">88610236-b553-4d7e-a6cf-cad2ebb655da</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/36a8662b-a36c-4d73-bbbe-1e1c90e6b67a</link>
                <pubDate>Thu, 19 Feb 2026 07:00:55 &#43;0000</pubDate>
                <itunes:duration>123</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Agentic AI Era in Healthcare: Lessons from OpenClaw</itunes:title>
                <title>Agentic AI Era in Healthcare: Lessons from OpenClaw</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Agentic AI is ending the era of the chatbot. Discover why OpenAI’s latest move is a massive signal for the future of clinical practice and how &#34;OpenClaw&#34; had such an impact across the world.</p><p>This episode explores the transition from conversational LLMs to autonomous AI agents. We break down the &#34;OpenClaw&#34; phenomenon, the $1 trillion software market shift, and why Matt Shumer believes we are in a &#34;February 2020&#34; moment for cognitive automation. Learn the difference between being &#34;data rich&#34; and &#34;intelligence ready&#34; in a healthcare setting.</p><p><br></p><p>Link to the article: https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/</p><p><br></p><p>Key Takeaways:</p><p>• Understand the shift from traditional &#34;prompt-response&#34; AI to &#34;outcome-based execution.&#34;</p><p>• The security risks of the &#34;Lethal Trifecta&#34; in agentic systems and how to avoid them.</p><p>• Strategic advice for clinicians on moving from &#34;Search&#34; to &#34;Clinical Auditing.&#34;</p><p><br></p><p>0:00 – Introduction: The end of the chatbot era and the move to agents.</p><p>0:55 – Technical Distinction: LLMs vs. 
Agentic AI.</p><p>1:32 – AI’s &#34;February 2020 Moment&#34;: Non-linear acceleration.</p><p>2:21 – Software Development: From writing code to describing outcomes.</p><p>3:16 – The OpenClaw Phenomenon: Autonomous agents with root access.</p><p>4:19 – Market Impact and OpenAI’s strategic pivot.</p><p>4:41 – Three Key Lessons: Local agency, extensibility, and autonomous coordination.</p><p>5:52 – The Healthcare &#34;Intelligence Gap&#34;: Data-rich but intelligence-poor.</p><p>6:41 – 5 Dimensions of AI Readiness in organizations.</p><p>7:21 – Security Risks: The &#34;Lethal Trifecta&#34; and Prompt Injection.</p><p>8:04 – The Danger of &#34;Shadow AI&#34; in clinical settings.</p><p>8:25 – 3 Strategic Pillars for navigating the Agentic AI transition.</p><p>9:53 – Conclusion: The clinician as an orchestrator of agents.</p><p><br></p><p>Agentic AI, OpenClaw, Clinical AI, Healthcare Automation, OpenAI GPT-5, Digital Health Strategy, AI Security, LLM Agents, HealthTech 2026 #HealthAI #AgenticAI #MedTech #FutureOfMedicine #OpenClaw #ClinicalInnovation #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Agentic AI is ending the era of the chatbot. Discover why OpenAI’s latest move is a massive signal for the future of clinical practice and how &amp;#34;OpenClaw&amp;#34; had such an impact across the world.&lt;/p&gt;&lt;p&gt;This episode explores the transition from conversational LLMs to autonomous AI agents. We break down the &amp;#34;OpenClaw&amp;#34; phenomenon, the $1 trillion software market shift, and why Matt Shumer believes we are in a &amp;#34;February 2020&amp;#34; moment for cognitive automation. Learn the difference between being &amp;#34;data rich&amp;#34; and &amp;#34;intelligence ready&amp;#34; in a healthcare setting.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to the article: https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Understand the shift from traditional &amp;#34;prompt-response&amp;#34; AI to &amp;#34;outcome-based execution.&amp;#34;&lt;/p&gt;&lt;p&gt;• The security risks of the &amp;#34;Lethal Trifecta&amp;#34; in agentic systems and how to avoid them.&lt;/p&gt;&lt;p&gt;• Strategic advice for clinicians on moving from &amp;#34;Search&amp;#34; to &amp;#34;Clinical Auditing.&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 – Introduction: The end of the chatbot era and the move to agents.&lt;/p&gt;&lt;p&gt;0:55 – Technical Distinction: LLMs vs. 
Agentic AI.&lt;/p&gt;&lt;p&gt;1:32 – AI’s &amp;#34;February 2020 Moment&amp;#34;: Non-linear acceleration.&lt;/p&gt;&lt;p&gt;2:21 – Software Development: From writing code to describing outcomes.&lt;/p&gt;&lt;p&gt;3:16 – The OpenClaw Phenomenon: Autonomous agents with root access.&lt;/p&gt;&lt;p&gt;4:19 – Market Impact and OpenAI’s strategic pivot.&lt;/p&gt;&lt;p&gt;4:41 – Three Key Lessons: Local agency, extensibility, and autonomous coordination.&lt;/p&gt;&lt;p&gt;5:52 – The Healthcare &amp;#34;Intelligence Gap&amp;#34;: Data-rich but intelligence-poor.&lt;/p&gt;&lt;p&gt;6:41 – 5 Dimensions of AI Readiness in organizations.&lt;/p&gt;&lt;p&gt;7:21 – Security Risks: The &amp;#34;Lethal Trifecta&amp;#34; and Prompt Injection.&lt;/p&gt;&lt;p&gt;8:04 – The Danger of &amp;#34;Shadow AI&amp;#34; in clinical settings.&lt;/p&gt;&lt;p&gt;8:25 – 3 Strategic Pillars for navigating the Agentic AI transition.&lt;/p&gt;&lt;p&gt;9:53 – Conclusion: The clinician as an orchestrator of agents.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Agentic AI, OpenClaw, Clinical AI, Healthcare Automation, OpenAI GPT-5, Digital Health Strategy, AI Security, LLM Agents, HealthTech 2026 #HealthAI #AgenticAI #MedTech #FutureOfMedicine #OpenClaw #ClinicalInnovation #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="10829322" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ccb6cdea-22b7-4a6c-b310-9c3fef203c15/stream.mp3"/>
                
                <guid isPermaLink="false">84891921-dd73-4ea9-bf0e-8e4d5a6c244f</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ccb6cdea-22b7-4a6c-b310-9c3fef203c15</link>
                <pubDate>Wed, 18 Feb 2026 07:00:19 &#43;0000</pubDate>
                <itunes:duration>676</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>058 Regular Expressions (Regex) - The ‘Old School’ Trick That (Sometimes) Still Beats Modern AI</itunes:title>
                <title>058 Regular Expressions (Regex) - The ‘Old School’ Trick That (Sometimes) Still Beats Modern AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Before LLMs, there was Regex. Learn why hard-coded rules are still the safest way to find patterns like social security numbers or lab values in a chart.</p><p><br></p><p>#Programming #HealthData #Regex #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Before LLMs, there was Regex. Learn why hard-coded rules are still the safest way to find patterns like social security numbers or lab values in a chart.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Programming #HealthData #Regex #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1967333" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/fe5e6160-7edd-473f-b55a-e9ac98b1dd72/stream.mp3"/>
                
                <guid isPermaLink="false">544b49a4-3687-446a-bc8c-293acfa41d65</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/fe5e6160-7edd-473f-b55a-e9ac98b1dd72</link>
                <pubDate>Tue, 17 Feb 2026 07:00:17 &#43;0000</pubDate>
                <itunes:duration>122</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI scribe transcripts - Digital Exhaust or Digital Gold</itunes:title>
                <title>AI scribe transcripts - Digital Exhaust or Digital Gold</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is your hospital deleting &#34;Digital Gold&#34;? AI clinical scribes are transforming documentation, but a hidden liability trap is forcing the destruction of vital patient data.</p><p><br></p><p>In this episode, we break down the NEJM analysis of AI-generated transcripts. While these tools reduce clinician burnout, the systematic deletion of original audio and transcripts to avoid malpractice discovery is creating a safety &#34;black hole&#34; in modern medicine.</p><p><br></p><p>Key Takeaways:</p><p>• The Flattening Effect: How AI summaries can strip away critical diagnostic clues like &#34;zoster sine herpete&#34; symptoms.</p><p>• The Hallucination Risk: Why deleting transcripts makes it impossible to audit AI-generated medical errors.</p><p>• A Strategic Path Forward: How &#34;safe harbor&#34; laws and de-identification can protect both health systems and patient safety.</p><p><br></p><p>Authors: Katherine Goodman and Daniel Morgan</p><p>Title: Digital Exhaust or Digital Gold? The Value of AI-Generated Clinical Visit Transcripts</p><p>Link: https://www.nejm.org/doi/full/10.1056/NEJMp2514616</p><p><br></p><p>AI Clinical Scribe, Ambient AI, Medical Hallucinations, LLM Healthcare, Clinical Documentation, Patient Safety, Health AI, Medical Malpractice, NEJM Perspective, AI Transcripts.</p><p>#HealthAI #MedicalAI #AIScribe #ClinicianBurnout #PatientSafety</p><p>#aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is your hospital deleting &amp;#34;Digital Gold&amp;#34;? AI clinical scribes are transforming documentation, but a hidden liability trap is forcing the destruction of vital patient data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we break down the NEJM analysis of AI-generated transcripts. While these tools reduce clinician burnout, the systematic deletion of original audio and transcripts to avoid malpractice discovery is creating a safety &amp;#34;black hole&amp;#34; in modern medicine.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The Flattening Effect: How AI summaries can strip away critical diagnostic clues like &amp;#34;zoster sine herpete&amp;#34; symptoms.&lt;/p&gt;&lt;p&gt;• The Hallucination Risk: Why deleting transcripts makes it impossible to audit AI-generated medical errors.&lt;/p&gt;&lt;p&gt;• A Strategic Path Forward: How &amp;#34;safe harbor&amp;#34; laws and de-identification can protect both health systems and patient safety.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Authors: Katherine Goodman and Daniel Morgan&lt;/p&gt;&lt;p&gt;Title: Digital Exhaust or Digital Gold? The Value of AI-Generated Clinical Visit Transcripts&lt;/p&gt;&lt;p&gt;Link: https://www.nejm.org/doi/full/10.1056/NEJMp2514616&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;AI Clinical Scribe, Ambient AI, Medical Hallucinations, LLM Healthcare, Clinical Documentation, Patient Safety, Health AI, Medical Malpractice, NEJM Perspective, AI Transcripts.&lt;/p&gt;&lt;p&gt;#HealthAI #MedicalAI #AIScribe #ClinicianBurnout #PatientSafety&lt;/p&gt;&lt;p&gt;#aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5104117" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b6a3886c-5d80-4fe1-8fc7-3fc297837e31/stream.mp3"/>
                
                <guid isPermaLink="false">8adcda13-06b2-4de9-a53d-d75d2348bea5</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b6a3886c-5d80-4fe1-8fc7-3fc297837e31</link>
                <pubDate>Mon, 16 Feb 2026 07:00:49 &#43;0000</pubDate>
                <itunes:duration>319</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Are LLMs Reliable for Medical Advice? Nature Medicine Study</itunes:title>
                <title>Are LLMs Reliable for Medical Advice? Nature Medicine Study</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is AI safe for patient self-diagnosis? New Nature Medicine study reveals the LLM &#34;Interaction Gap.&#34;</p><p>We analyse a randomised controlled study involving 1,298 participants testing GPT-4o, Llama 3, and Command R+ as medical assistants. While these models ace medical exams, their real-world performance with patients tells a different story.</p><p><br></p><p>Link to paper: https://www.nature.com/articles/s41591-025-04074-y#MOESM1</p><p>Title: Reliability of LLMs as medical assistants for the general public: a randomized preregistered study</p><p>Authors: Bean, Payne et al.</p><p><br></p><p>Key Takeaways:</p><p>• The Interaction Failure: Why high LLM exam scores (MedQA) do not translate to accurate patient advice in real-world scenarios.</p><p>• LLM vs. Search: Evidence showing that current AI chatbots provide no significant accuracy advantage over traditional internet searches for health inquiries.</p><p>• The Future of Safety: Why the industry must move from &#34;Model Benchmarking&#34; to &#34;Human-AI Interaction Testing&#34; to ensure clinical safety.</p><p><br></p><p>0:00 The 20-Year-Old Student Scenario</p><p>0:40 The Vision: Democratizing Healthcare</p><p>1:15 The Systemic Failure of Human-AI Interaction</p><p>1:55 Randomized Controlled Trial: Study Methodology</p><p>2:35 The Data: 94% Knowledge vs. 34% Application</p><p>3:20 Why Search Engines Still Match Chatbots</p><p>4:10 Future Proofing: Proactive Clinical Interviews</p><p>4:50 Final Verdict for Clinicians and Managers</p><p><br></p><p>Health AI, LLM medical reliability, GPT-4o healthcare, clinical AI safety, medical chatbot study, patient-facing AI, Nature Medicine AI, digital health triage.</p><p>#HealthAI #MedTech #GenerativeAI #ClinicalSafety #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p><p><br></p>]]></description>
                <content:encoded>&lt;p&gt;Is AI safe for patient self-diagnosis? New Nature Medicine study reveals the LLM &amp;#34;Interaction Gap.&amp;#34;&lt;/p&gt;&lt;p&gt;We analyse a randomised controlled study involving 1,298 participants testing GPT-4o, Llama 3, and Command R&#43; as medical assistants. While these models ace medical exams, their real-world performance with patients tells a different story.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to paper: https://www.nature.com/articles/s41591-025-04074-y#MOESM1&lt;/p&gt;&lt;p&gt;Title: Reliability of LLMs as medical assistants for the general public: a randomized preregistered study&lt;/p&gt;&lt;p&gt;Authors: Bean, Payne et al.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The Interaction Failure: Why high LLM exam scores (MedQA) do not translate to accurate patient advice in real-world scenarios.&lt;/p&gt;&lt;p&gt;• LLM vs. Search: Evidence showing that current AI chatbots provide no significant accuracy advantage over traditional internet searches for health inquiries.&lt;/p&gt;&lt;p&gt;• The Future of Safety: Why the industry must move from &amp;#34;Model Benchmarking&amp;#34; to &amp;#34;Human-AI Interaction Testing&amp;#34; to ensure clinical safety.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 The 20-Year-Old Student Scenario&lt;/p&gt;&lt;p&gt;0:40 The Vision: Democratizing Healthcare&lt;/p&gt;&lt;p&gt;1:15 The Systemic Failure of Human-AI Interaction&lt;/p&gt;&lt;p&gt;1:55 Randomized Controlled Trial: Study Methodology&lt;/p&gt;&lt;p&gt;2:35 The Data: 94% Knowledge vs. 34% Application&lt;/p&gt;&lt;p&gt;3:20 Why Search Engines Still Match Chatbots&lt;/p&gt;&lt;p&gt;4:10 Future Proofing: Proactive Clinical Interviews&lt;/p&gt;&lt;p&gt;4:50 Final Verdict for Clinicians and Managers&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI, LLM medical reliability, GPT-4o healthcare, clinical AI safety, medical chatbot study, patient-facing AI, Nature Medicine AI, digital health triage.&lt;/p&gt;&lt;p&gt;#HealthAI #MedTech #GenerativeAI #ClinicalSafety #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="4261093" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/cba0d1a5-b1e9-4276-b991-d5c17ca85467/stream.mp3"/>
                
                <guid isPermaLink="false">0319b660-4d48-4715-a078-3cb6011375dd</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/cba0d1a5-b1e9-4276-b991-d5c17ca85467</link>
                <pubDate>Sat, 14 Feb 2026 07:00:54 &#43;0000</pubDate>
                <itunes:duration>266</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Nature medicine principles to guide AI readiness</itunes:title>
                <title>Nature medicine principles to guide AI readiness</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is your AI a &#34;Leaderboard Winner&#34; or a Bedside Failure? The new Nature Medicine framework by Azad, Krumholz, and Saria defines the 4 principles of clinical AI readiness.</p><p>In this episode, we break down why retrospective accuracy is a &#34;credibility gap&#34; in health tech. We explore the transition from static benchmarks to real-world evaluation, focusing on task-specific readiness and the &#34;Harm Budget&#34; required for safe deployment.</p><p><br></p><p>Key Takeaways:</p><p>• The Correction Burden: Why &#34;time-to-action&#34; is a more important metric than raw accuracy for busy clinicians.</p><p>• Deferral Awareness: How teaching AI to say &#34;I don&#39;t know&#34; serves as a first-class safety mechanism.</p><p>• The Evidence Scaffold: Why clinical AI needs a &#34;Phase 4&#34; monitoring system similar to post-marketing drug surveillance.</p><p><br></p><p>Clinical AI, Medical AI Evaluation, Nature Medicine AI, Ambient Scribing, AI Triage, Medical Hallucinations, HealthTech Safety, Deferral Awareness, Saria AI Research.</p><p>#HealthAI #DigitalHealth #ClinicalAI #MedicalInnovation #PatientSafety #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is your AI a &amp;#34;Leaderboard Winner&amp;#34; or a Bedside Failure? The new Nature Medicine framework by Azad, Krumholz, and Saria defines the 4 principles of clinical AI readiness.&lt;/p&gt;&lt;p&gt;In this episode, we break down why retrospective accuracy is a &amp;#34;credibility gap&amp;#34; in health tech. We explore the transition from static benchmarks to real-world evaluation, focusing on task-specific readiness and the &amp;#34;Harm Budget&amp;#34; required for safe deployment.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The Correction Burden: Why &amp;#34;time-to-action&amp;#34; is a more important metric than raw accuracy for busy clinicians.&lt;/p&gt;&lt;p&gt;• Deferral Awareness: How teaching AI to say &amp;#34;I don&amp;#39;t know&amp;#34; serves as a first-class safety mechanism.&lt;/p&gt;&lt;p&gt;• The Evidence Scaffold: Why clinical AI needs a &amp;#34;Phase 4&amp;#34; monitoring system similar to post-marketing drug surveillance.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Clinical AI, Medical AI Evaluation, Nature Medicine AI, Ambient Scribing, AI Triage, Medical Hallucinations, HealthTech Safety, Deferral Awareness, Saria AI Research.&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ClinicalAI #MedicalInnovation #PatientSafety #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4022439" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a0bcf980-ebb2-43b9-9559-a0b9e330a4d5/stream.mp3"/>
                
                <guid isPermaLink="false">6baed987-4869-48ae-9ce7-6ee16dd936db</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a0bcf980-ebb2-43b9-9559-a0b9e330a4d5</link>
                <pubDate>Fri, 13 Feb 2026 07:00:43 &#43;0000</pubDate>
                <itunes:duration>251</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>International AI Safety Report 2026</itunes:title>
                <title>International AI Safety Report 2026</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The 2026 International AI Safety Report is here, and it reveals a &#34;jagged frontier&#34; where AI can pass PhD exams but still fails at basic clinical safety.</p><p>In this episode of The Health AI Brief, we analyze the global scientific consensus on AI safety. We break down the 2026 report&#39;s findings on &#34;reasoning models,&#34; the growing risk of AI-led cyberattacks on hospitals, and the &#34;evaluation gap&#34; that prevents clinicians from fully trusting autonomous systems.</p><p><br></p><p>Link to the report: https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026</p><p><br></p><p>Key Takeaways:</p><p>• The Jagged Frontier: Why AI&#39;s high performance on medical boards doesn&#39;t translate to ward safety.</p><p>• Cognitive Offloading: How reliance on AI tools is objectively deskilling clinicians (and how to stop it).</p><p>• Defence-in-Depth: The new 3-tier strategy for clinical AI risk management.</p><p>Medical AI Safety, Clinical AI 2026, AI Hallucinations in Healthcare, International AI Safety Report, Healthcare Cybersecurity, AI Clinical Evidence, NHS AI Policy, Reasoning Models in Medicine.</p><p>#MedicalAI #HealthTech #AISafety #DigitalHealth #TheHealthAIBrief #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p><p><br></p>]]></description>
                <content:encoded>&lt;p&gt;The 2026 International AI Safety Report is here, and it reveals a &amp;#34;jagged frontier&amp;#34; where AI can pass PhD exams but still fails at basic clinical safety.&lt;/p&gt;&lt;p&gt;In this episode of The Health AI Brief, we analyze the global scientific consensus on AI safety. We break down the 2026 report&amp;#39;s findings on &amp;#34;reasoning models,&amp;#34; the growing risk of AI-led cyberattacks on hospitals, and the &amp;#34;evaluation gap&amp;#34; that prevents clinicians from fully trusting autonomous systems.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to the report: https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• The Jagged Frontier: Why AI&amp;#39;s high performance on medical boards doesn&amp;#39;t translate to ward safety.&lt;/p&gt;&lt;p&gt;• Cognitive Offloading: How reliance on AI tools is objectively deskilling clinicians (and how to stop it).&lt;/p&gt;&lt;p&gt;• Defence-in-Depth: The new 3-tier strategy for clinical AI risk management.&lt;/p&gt;&lt;p&gt;Medical AI Safety, Clinical AI 2026, AI Hallucinations in Healthcare, International AI Safety Report, Healthcare Cybersecurity, AI Clinical Evidence, NHS AI Policy, Reasoning Models in Medicine.&lt;/p&gt;&lt;p&gt;#MedicalAI #HealthTech #AISafety #DigitalHealth #TheHealthAIBrief #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="5186873" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/601c9aad-26bf-4842-9e08-665165a5d37b/stream.mp3"/>
                
                <guid isPermaLink="false">a42f1251-631f-4373-8fea-d72524e53fe9</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/601c9aad-26bf-4842-9e08-665165a5d37b</link>
                <pubDate>Thu, 12 Feb 2026 07:00:53 &#43;0000</pubDate>
                <itunes:duration>324</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Large Language Models for Complex Cardiology? Google AMIE/Stanford Study Review</itunes:title>
                <title>Large Language Models for Complex Cardiology? Google AMIE/Stanford Study Review</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is AI ready for complex cardiology? A deep dive into the Google and Stanford AMIE RCT results.</p><p>We analyse the bold claims behind &#34;A large language model for complex cardiology care.&#34; Despite a 46.7% preference rating, did the AI actually change patient outcomes?</p><p><br></p><p>Link: https://www.nature.com/articles/s41591-025-04190-9</p><p>Title: A large language model for complex cardiology care</p><p>Authors: O&#39;Sullivan, Palepu et al.</p><p><br></p><p>Key Takeaways:</p><p>• Preference vs. Performance: Why a subspecialist&#39;s &#34;preference&#34; for an AI report didn&#39;t lead to a single change in the actual clinical plan.</p><p>• The Missing Data Gap: The critical role of family history and physical exams that current LLM studies are failing to account for.</p><p><br></p><p>0:00 Introduction</p><p>0:09 Google&#39;s AMIE: A Subspecialist in Your Pocket?</p><p>0:47 The Data Gap: Preference vs. Actual Clinical Decisions</p><p>1:32 Why AI Can’t Replace the Patient History</p><p>2:54 Final Verdict: Documentation vs. Decision Making</p><p><br></p><p>Cardiology AI, AMIE model, LLM clinical safety, medical AI hallucinations, heart disease AI, Gemini 2.0 Flash, medical RCT, digital health triage.</p><p>#Cardiology #HealthAI #MedTech #ClinicalResearch #LLM #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is AI ready for complex cardiology? A deep dive into the Google and Stanford AMIE RCT results.&lt;/p&gt;&lt;p&gt;We analyse the bold claims behind &amp;#34;A large language model for complex cardiology care.&amp;#34; Despite a 46.7% preference rating, did the AI actually change patient outcomes?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link: https://www.nature.com/articles/s41591-025-04190-9&lt;/p&gt;&lt;p&gt;Title: A large language model for complex cardiology care&lt;/p&gt;&lt;p&gt;Authors: O&amp;#39;Sullivan, Palepu et al.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Preference vs. Performance: Why a subspecialist&amp;#39;s &amp;#34;preference&amp;#34; for an AI report didn&amp;#39;t lead to a single change in the actual clinical plan.&lt;/p&gt;&lt;p&gt;• The Missing Data Gap: The critical role of family history and physical exams that current LLM studies are failing to account for.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;0:00 Introduction&lt;/p&gt;&lt;p&gt;0:09 Google&amp;#39;s AMIE: A Subspecialist in Your Pocket?&lt;/p&gt;&lt;p&gt;0:47 The Data Gap: Preference vs. Actual Clinical Decisions&lt;/p&gt;&lt;p&gt;1:32 Why AI Can’t Replace the Patient History&lt;/p&gt;&lt;p&gt;2:54 Final Verdict: Documentation vs. Decision Making&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Cardiology AI, AMIE model, LLM clinical safety, medical AI hallucinations, heart disease AI, Gemini 2.0 Flash, medical RCT, digital health triage.&lt;/p&gt;&lt;p&gt;#Cardiology #HealthAI #MedTech #ClinicalResearch #LLM #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3902902" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/47502069-0e5a-4736-8239-a177c2379ed4/stream.mp3"/>
                
                <guid isPermaLink="false">1a13fc08-046f-4949-97d8-774401e000db</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/47502069-0e5a-4736-8239-a177c2379ed4</link>
                <pubDate>Wed, 11 Feb 2026 07:00:32 &#43;0000</pubDate>
                <itunes:duration>243</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>057 Named Entity Recognition (NER) - How AI Spots Drugs and Dosages Instantly</itunes:title>
                <title>057 Named Entity Recognition (NER) - How AI Spots Drugs and Dosages Instantly</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Teaching AI to &#34;highlight&#34; the drugs, symptoms, and diagnoses in a wall of text. We explore NER—the tech behind the world’s best AI scribes.</p><p><br></p><p>#MedTech #NER #Automation #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Teaching AI to &amp;#34;highlight&amp;#34; the drugs, symptoms, and diagnoses in a wall of text. We explore NER—the tech behind the world’s best AI scribes.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MedTech #NER #Automation #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1882906" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/bcbe8484-3577-4b5b-8b5d-79183a8eda41/stream.mp3"/>
                
                <guid isPermaLink="false">7dde077d-04a2-48cb-904f-4e8f7b7d96c6</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/bcbe8484-3577-4b5b-8b5d-79183a8eda41</link>
                <pubDate>Tue, 10 Feb 2026 07:00:36 &#43;0000</pubDate>
                <itunes:duration>117</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Lotus Health AI - AI Aimed at Primary Care</itunes:title>
                <title>Lotus Health AI - AI Aimed at Primary Care</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is the &#34;AI Doctor&#34; finally here? Lotus Health AI just secured $35M to provide free, 24/7 primary care at scale.</p><p><br></p><p>We analyse the clinical and economic implications of Lotus Health&#39;s $41M &#34;AI-first&#34; medical practice and the ethics of a sponsorship-based revenue model in healthcare.</p><p><br></p><p>Key Takeaways:</p><p>- The 10x Physician: How Lotus aims to use AI to handle 90% of the clinical workload, leaving humans for the final sign-off.</p><p>- The Sponsorship Dilemma: Analyzing the risks of a &#34;free&#34; model funded by in-app ads and how it impacts clinical neutrality.</p><p>- The &#34;Low Data&#34; Challenge: Why primary care is a unique hurdle for LLMs compared to data-heavy specialties like radiology.</p><p><br></p><p>Keywords: Health AI, Lotus Health, Primary Care AI, Digital Health, Telemedicine, HealthTech Startups, Clinical AI, Medical Ethics, Medical LLMs, Healthcare Funding.</p><p><br></p><p>#HealthAI #DigitalHealth #MedTech #PrimaryCare #HealthTech #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is the &amp;#34;AI Doctor&amp;#34; finally here? Lotus Health AI just secured $35M to provide free, 24/7 primary care at scale.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We analyse the clinical and economic implications of Lotus Health&amp;#39;s $41M &amp;#34;AI-first&amp;#34; medical practice and the ethics of a sponsorship-based revenue model in healthcare.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;- The 10x Physician: How Lotus aims to use AI to handle 90% of the clinical workload, leaving humans for the final sign-off.&lt;/p&gt;&lt;p&gt;- The Sponsorship Dilemma: Analyzing the risks of a &amp;#34;free&amp;#34; model funded by in-app ads and how it impacts clinical neutrality.&lt;/p&gt;&lt;p&gt;- The &amp;#34;Low Data&amp;#34; Challenge: Why primary care is a unique hurdle for LLMs compared to data-heavy specialties like radiology.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Keywords: Health AI, Lotus Health, Primary Care AI, Digital Health, Telemedicine, HealthTech Startups, Clinical AI, Medical Ethics, Medical LLMs, Healthcare Funding.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #MedTech #PrimaryCare #HealthTech #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4355134" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3ef0b7e9-2a33-41ad-97fa-d0bf60389657/stream.mp3"/>
                
                <guid isPermaLink="false">ecfde291-1130-4967-ace6-910687c604ee</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3ef0b7e9-2a33-41ad-97fa-d0bf60389657</link>
                <pubDate>Mon, 09 Feb 2026 07:00:10 &#43;0000</pubDate>
                <itunes:duration>272</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Stethoscope TRICORDER Trial Results - Why Good AI Doesn&#39;t Guarantee Clinical Impact</itunes:title>
                <title>AI Stethoscope TRICORDER Trial Results - Why Good AI Doesn&#39;t Guarantee Clinical Impact</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Can AI Stethoscopes Fix the Heart Failure Crisis? TRICORDER Trial Results</p><p>Despite 70% of heart failure cases being diagnosed in ERs, a major UK trial shows that AI stethoscopes face a massive &#34;Implementation Gap&#34; in primary care.</p><p>In this episode of The Health AI Brief, we break down the TRICORDER trial—a cluster-randomized study of 205 NHS GP practices. While the AI algorithms showed a 2x-3x increase in disease detection when used, the overall population impact was null due to workflow friction and declining uptake. We analyze why high algorithmic accuracy isn&#39;t enough to change clinical outcomes and what the NHS must do to make Health AI truly scalable.</p><p><br></p><p>Key Takeaways for Clinicians:</p><p>• The ITT Paradox: Why the trial was a &#34;success&#34; for the algorithm but a &#34;null&#34; for the population.</p><p>• Workflow is King: Identifying the specific barriers that led 40% of practices to stop using the AI device.</p><p>• The Next Step: How EHR integration and risk-stratified screening could bridge the implementation gap.</p><p><br></p><p>Health AI, Heart Failure Detection, NHS Primary Care, TRICORDER Trial, AI Stethoscope, Eko DUO, Clinical Workflow, Digital Health Implementation, Medical AI, Health Tech.</p><p>#HealthAI #HeartFailure #NHS #DigitalHealth #MedTech #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Can AI Stethoscopes Fix the Heart Failure Crisis? TRICORDER Trial Results&lt;/p&gt;&lt;p&gt;Despite 70% of heart failure cases being diagnosed in ERs, a major UK trial shows that AI stethoscopes face a massive &amp;#34;Implementation Gap&amp;#34; in primary care.&lt;/p&gt;&lt;p&gt;In this episode of The Health AI Brief, we break down the TRICORDER trial—a cluster-randomized study of 205 NHS GP practices. While the AI algorithms showed a 2x-3x increase in disease detection when used, the overall population impact was null due to workflow friction and declining uptake. We analyze why high algorithmic accuracy isn&amp;#39;t enough to change clinical outcomes and what the NHS must do to make Health AI truly scalable.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways for Clinicians:&lt;/p&gt;&lt;p&gt;• The ITT Paradox: Why the trial was a &amp;#34;success&amp;#34; for the algorithm but a &amp;#34;null&amp;#34; for the population.&lt;/p&gt;&lt;p&gt;• Workflow is King: Identifying the specific barriers that led 40% of practices to stop using the AI device.&lt;/p&gt;&lt;p&gt;• The Next Step: How EHR integration and risk-stratified screening could bridge the implementation gap.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI, Heart Failure Detection, NHS Primary Care, TRICORDER Trial, AI Stethoscope, Eko DUO, Clinical Workflow, Digital Health Implementation, Medical AI, Health Tech.&lt;/p&gt;&lt;p&gt;#HealthAI #HeartFailure #NHS #DigitalHealth #MedTech #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5369103" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b2dd2753-ba83-44d2-ac34-6eb702bf06d4/stream.mp3"/>
                
                <guid isPermaLink="false">e0a426ee-7315-4b2d-ba72-36e7bac11959</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b2dd2753-ba83-44d2-ac34-6eb702bf06d4</link>
                <pubDate>Fri, 06 Feb 2026 07:00:27 &#43;0000</pubDate>
                <itunes:duration>335</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>056 TF-IDF - How to Find a Needle in a Digital Haystack</itunes:title>
                <title>056 TF-IDF - How to Find a Needle in a Digital Haystack</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Out of 10,000 patient charts, how does AI find the most important terms? We break down the maths behind TF-IDF and why it’s the backbone of clinical search.</p><p><br></p><p>#InformationRetrieval #ClinicalResearch #AI #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Out of 10,000 patient charts, how does AI find the most important terms? We break down the maths behind TF-IDF and why it’s the backbone of clinical search.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#InformationRetrieval #ClinicalResearch #AI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2024176" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6dbb90fc-c78a-4e47-8acd-b393b50303ee/stream.mp3"/>
                
                <guid isPermaLink="false">db7e2acd-1010-4aa8-9bed-dc0b4c3a676f</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6dbb90fc-c78a-4e47-8acd-b393b50303ee</link>
                <pubDate>Thu, 05 Feb 2026 07:00:50 &#43;0000</pubDate>
                <itunes:duration>126</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI For Mammography Screening - Results of Lancet RCT</itunes:title>
                <title>AI For Mammography Screening - Results of Lancet RCT</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Will AI replace radiologists? The Lancet just published the final results of the MASAI trial, providing a definitive answer on AI safety in breast cancer screening.</p><p>We analyse the Gommers et al. (2026) MASAI study, the first randomized controlled trial to prove that AI-supported mammography triage is non-inferior to human double-reading regarding interval cancer rates. The study demonstrates a 44% reduction in workload while significantly increasing screening sensitivity.</p><p><br></p><p>Link to paper: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(26)00092-9/fulltext</p><p>Link to comment: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(25)02464-X/fulltext </p><p><br></p><p>Key Takeaways:</p><p>• Safety Proven: AI-supported screening met non-inferiority margins for interval cancer rates (1.55 vs 1.76 per 1000).</p><p>• Efficiency Gains: AI triage enabled a 44% reduction in radiologist screen-reading workload.</p><p>• Superior Sensitivity: The AI-supported group caught more invasive cancers (80.5% sensitivity) without increasing false positives.</p><p><br></p><p>AI mammography, MASAI study results, Lancet breast cancer AI, radiologist workforce shortage, interval cancer rate, Transpara AI, breast cancer screening sensitivity, health AI implementation, double reading mammography.</p><p>#HealthAI #Radiology #BreastCancer #MedTech #TheHealthAIBrief #LancetMedical #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Will AI replace radiologists? The Lancet just published the final results of the MASAI trial, providing a definitive answer on AI safety in breast cancer screening.&lt;/p&gt;&lt;p&gt;We analyse the Gommers et al. (2026) MASAI study, the first randomized controlled trial to prove that AI-supported mammography triage is non-inferior to human double-reading regarding interval cancer rates. The study demonstrates a 44% reduction in workload while significantly increasing screening sensitivity.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to paper: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(26)00092-9/fulltext&lt;/p&gt;&lt;p&gt;Link to comment: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(25)02464-X/fulltext &lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;• Safety Proven: AI-supported screening met non-inferiority margins for interval cancer rates (1.55 vs 1.76 per 1000).&lt;/p&gt;&lt;p&gt;• Efficiency Gains: AI triage enabled a 44% reduction in radiologist screen-reading workload.&lt;/p&gt;&lt;p&gt;• Superior Sensitivity: The AI-supported group caught more invasive cancers (80.5% sensitivity) without increasing false positives.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;AI mammography, MASAI study results, Lancet breast cancer AI, radiologist workforce shortage, interval cancer rate, Transpara AI, breast cancer screening sensitivity, health AI implementation, double reading mammography.&lt;/p&gt;&lt;p&gt;#HealthAI #Radiology #BreastCancer #MedTech #TheHealthAIBrief #LancetMedical #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4133198" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/9c591b47-08c9-49b3-9d28-dee86314ee73/stream.mp3"/>
                
                <guid isPermaLink="false">3698ec05-d5d9-4a2e-956b-023a32206b3b</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/9c591b47-08c9-49b3-9d28-dee86314ee73</link>
                <pubDate>Wed, 04 Feb 2026 07:00:04 &#43;0000</pubDate>
                <itunes:duration>258</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>055 N-grams - Why Blood Pressure is One Word, Not Two</itunes:title>
                <title>055 N-grams - Why Blood Pressure is One Word, Not Two</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>If your AI treats &#34;High&#34; and &#34;Pressure&#34; separately, you have a problem. Discover how N-grams help machines understand phrases and medical context better than ever.</p><p><br></p><p>#MachineLearning #Bioinformatics #HealthTech #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;If your AI treats &amp;#34;High&amp;#34; and &amp;#34;Pressure&amp;#34; separately, you have a problem. Discover how N-grams help machines understand phrases and medical context better than ever.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MachineLearning #Bioinformatics #HealthTech #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1926791" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/9b28ecc4-6911-4e7f-a464-bf6b36cee450/stream.mp3"/>
                
                <guid isPermaLink="false">ee6e9e7c-3341-4f4d-8d4d-e8702d2f1ff7</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/9b28ecc4-6911-4e7f-a464-bf6b36cee450</link>
                <pubDate>Tue, 03 Feb 2026 07:00:07 &#43;0000</pubDate>
                <itunes:duration>120</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>OpenEvidence: The $12B AI Changing Clinical Decisions</itunes:title>
                <title>OpenEvidence: The $12B AI Changing Clinical Decisions</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is OpenEvidence worth $12 Billion? Learn why this medical AI is outperforming GPT-4 in clinical settings and what it means for the future of healthcare.</p><p>In this episode of The Health AI Brief, we break down the rapid rise of OpenEvidence, a specialized generative AI for clinical decision support. We analyse its $12 billion valuation, its unique training data partnership with the New England Journal of Medicine, and its strategy to disrupt the $30 billion pharmaceutical marketing industry. We look beyond the hype to identify the critical hurdles of EHR integration and the necessity of peer-reviewed utility trials.</p><p>Link to the 20VC podcast episode: https://www.youtube.com/watch?v=AkObfvN3ArI</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Specialised Grounding:</strong> Why direct journal relationships make OpenEvidence safer for doctors than general AI.</li><li><strong>The Pharma Disruptor:</strong> How the $150M revenue model plans to capture traditional pharmaceutical marketing budgets.</li><li><strong>The Trust Gap:</strong> The specific evidence clinicians need to see before AI becomes an indispensable bedside tool.</li></ul><p>Medical AI, OpenEvidence, Clinical Decision Support, HealthTech Valuation, Generative AI in Healthcare, Pharmaceutical Marketing, Daniel Nadler, EHR Integration, Evidence-Based Medicine.</p><p>#HealthAI #OpenEvidence #MedicalAI #HealthTech #ClinicalInnovation</p>]]></description>
                <content:encoded>&lt;p&gt;Is OpenEvidence worth $12 Billion? Learn why this medical AI is outperforming GPT-4 in clinical settings and what it means for the future of healthcare.&lt;/p&gt;&lt;p&gt;In this episode of The Health AI Brief, we break down the rapid rise of OpenEvidence, a specialized generative AI for clinical decision support. We analyse its $12 billion valuation, its unique training data partnership with the New England Journal of Medicine, and its strategy to disrupt the $30 billion pharmaceutical marketing industry. We look beyond the hype to identify the critical hurdles of EHR integration and the necessity of peer-reviewed utility trials.&lt;/p&gt;&lt;p&gt;Link to the 20VC podcast episode: https://www.youtube.com/watch?v=AkObfvN3ArI&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Key Takeaways:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Specialised Grounding:&lt;/strong&gt; Why direct journal relationships make OpenEvidence safer for doctors than general AI.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;The Pharma Disruptor:&lt;/strong&gt; How the $150M revenue model plans to capture traditional pharmaceutical marketing budgets.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;The Trust Gap:&lt;/strong&gt; The specific evidence clinicians need to see before AI becomes an indispensable bedside tool.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Medical AI, OpenEvidence, Clinical Decision Support, HealthTech Valuation, Generative AI in Healthcare, Pharmaceutical Marketing, Daniel Nadler, EHR Integration, Evidence-Based Medicine.&lt;/p&gt;&lt;p&gt;#HealthAI #OpenEvidence #MedicalAI #HealthTech #ClinicalInnovation&lt;/p&gt;</content:encoded>
                
                <enclosure length="5460218" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ac406521-a2e2-4a35-8e93-c65037fd09f9/stream.mp3"/>
                
                <guid isPermaLink="false">74250cbc-6d7a-46e9-afa6-5ea64fbcd3b6</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ac406521-a2e2-4a35-8e93-c65037fd09f9</link>
                <pubDate>Mon, 02 Feb 2026 07:00:45 &#43;0000</pubDate>
                <itunes:duration>341</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AlphaGenome Explained - Decoding the Non-Coding Genome</itunes:title>
                <title>AlphaGenome Explained - Decoding the Non-Coding Genome</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AlphaGenome is here. Can this new AI from DeepMind&#39;s alumni and collaborators finally decode the 98% of our DNA that doesn&#39;t code for proteins?</p><p><br></p><p>We break down the &#34;AlphaGenome&#34; breakthrough published in Nature. This unified DNA sequence model processes 1-megabase contexts at single-base-pair resolution to predict gene expression, splicing, and chromatin state. For clinicians and researchers, this means a massive leap in predicting how non-coding variants drive disease.</p><p><br></p><p>Paper title: Advancing regulatory variant effect prediction with AlphaGenome</p><p>Authors: Avsec et al</p><p>Link: https://www.nature.com/articles/s41586-025-10014-0</p><p><br></p><p>Key Takeaways:</p><p>- Unified Prediction: How AlphaGenome replaces dozens of specialised models with one multimodal framework.</p><p>- The Splicing Breakthrough: Moving beyond splice sites to predict complex splice junction usage.</p><p>- Clinical Utility vs. Limits: Why predicting molecular tracks isn&#39;t the same as predicting a disease, and what&#39;s needed next.</p><p><br></p><p>00:00 Intro</p><p>00:09 Decoding Non-Coding DNA: The Interpretation Bottleneck</p><p>00:43 Introducing AlphaGenome: A Unified Regulatory &#34;Oracle&#34;</p><p>01:05 Breaking the Resolution Barrier: The 1-Megabase Window</p><p>01:47 How it Works: U-Net Architecture &amp; Model Distillation</p><p>02:46 Performance Benchmarks: A New Standard for Splicing and eQTLs</p><p>03:26 Current Limitations and the &#34;Phenotype Gap&#34;</p><p>04:14 Building Trust: Prospective Validation and VUS Resolution</p><p>04:36 Final Verdict: A Shift Toward Generalist Genomic Models</p><p><br></p><p>AlphaGenome, Variant Effect Prediction, Functional Genomics, Deep Learning in Healthcare, Non-coding DNA, Splicing Prediction, eQTL, Nature Portfolio, AI Genomics.</p><p>#AlphaGenome #Genomics #HealthAI #DeepLearning #PrecisionMedicine #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;AlphaGenome is here. Can this new AI from DeepMind&amp;#39;s alumni and collaborators finally decode the 98% of our DNA that doesn&amp;#39;t code for proteins?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We break down the &amp;#34;AlphaGenome&amp;#34; breakthrough published in Nature. This unified DNA sequence model processes 1-megabase contexts at single-base-pair resolution to predict gene expression, splicing, and chromatin state. For clinicians and researchers, this means a massive leap in predicting how non-coding variants drive disease.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Paper title: Advancing regulatory variant effect prediction with AlphaGenome&lt;/p&gt;&lt;p&gt;Authors: Avsec et al&lt;/p&gt;&lt;p&gt;Link: https://www.nature.com/articles/s41586-025-10014-0&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;- Unified Prediction: How AlphaGenome replaces dozens of specialised models with one multimodal framework.&lt;/p&gt;&lt;p&gt;- The Splicing Breakthrough: Moving beyond splice sites to predict complex splice junction usage.&lt;/p&gt;&lt;p&gt;- Clinical Utility vs. Limits: Why predicting molecular tracks isn&amp;#39;t the same as predicting a disease, and what&amp;#39;s needed next.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;00:00 Intro&lt;/p&gt;&lt;p&gt;00:09 Decoding Non-Coding DNA: The Interpretation Bottleneck&lt;/p&gt;&lt;p&gt;00:43 Introducing AlphaGenome: A Unified Regulatory &amp;#34;Oracle&amp;#34;&lt;/p&gt;&lt;p&gt;01:05 Breaking the Resolution Barrier: The 1-Megabase Window&lt;/p&gt;&lt;p&gt;01:47 How it Works: U-Net Architecture &amp;amp; Model Distillation&lt;/p&gt;&lt;p&gt;02:46 Performance Benchmarks: A New Standard for Splicing and eQTLs&lt;/p&gt;&lt;p&gt;03:26 Current Limitations and the &amp;#34;Phenotype Gap&amp;#34;&lt;/p&gt;&lt;p&gt;04:14 Building Trust: Prospective Validation and VUS Resolution&lt;/p&gt;&lt;p&gt;04:36 Final Verdict: A Shift Toward Generalist Genomic Models&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;AlphaGenome, Variant Effect Prediction, Functional Genomics, Deep Learning in Healthcare, Non-coding DNA, Splicing Prediction, eQTL, Nature Portfolio, AI Genomics.&lt;/p&gt;&lt;p&gt;#AlphaGenome #Genomics #HealthAI #DeepLearning #PrecisionMedicine #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4904751" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4ccdbba9-71e7-46eb-8ea4-b79ab39b8c92/stream.mp3"/>
                
                <guid isPermaLink="false">c864976a-8653-492d-830d-b5d6071f0f47</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4ccdbba9-71e7-46eb-8ea4-b79ab39b8c92</link>
                <pubDate>Fri, 30 Jan 2026 07:00:37 &#43;0000</pubDate>
                <itunes:duration>306</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>054 Stop Words - The ‘Invisible’ Words That Are Troubling Your AI Models</itunes:title>
                <title>054 Stop Words - The ‘Invisible’ Words That Are Troubling Your AI Models</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Why words like &#34;the,&#34; &#34;is,&#34; and &#34;and&#34; are the enemies of efficient NLP. Learn how stripping the &#34;fluff&#34; helps AI focus on the life-saving medical facts.</p><p><br></p><p>#NLP #DataCleaning #HealthAI #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Why words like &amp;#34;the,&amp;#34; &amp;#34;is,&amp;#34; and &amp;#34;and&amp;#34; are the enemies of efficient NLP. Learn how stripping the &amp;#34;fluff&amp;#34; helps AI focus on the life-saving medical facts.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#NLP #DataCleaning #HealthAI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1842364" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2a9edb2f-95b0-4b06-834b-c9ea1f54390c/stream.mp3"/>
                
                <guid isPermaLink="false">1ebed310-a0c0-451d-9d7a-714ee4b020c1</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2a9edb2f-95b0-4b06-834b-c9ea1f54390c</link>
                <pubDate>Thu, 29 Jan 2026 07:00:14 &#43;0000</pubDate>
                <itunes:duration>115</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Sued for using AI scribe</itunes:title>
                <title>Sued for using AI scribe</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is your AI scribe a $500M legal trap? We’re analysing the Sharp HealthCare class-action lawsuit and the &#34;Consent Hallucination&#34; putting clinicians at risk.</p><p><br></p><p>Ambient AI documentation is the leading solution for clinician burnout, but secret recordings and &#34;hallucinated&#34; consent are leading to massive legal exposure. This deep dive covers the Sharp HealthCare and Heartland Dental lawsuits, the impact of California’s CIPA and AB3030 laws, and the hidden risks of AI-driven upcoding. We consider seven steps that could help protect your practice and your patients.</p><p><br></p><p>Key Takeaways:</p><p>- Why &#34;Boilerplate Consent&#34; in AI notes is a primary target for class-action attorneys.</p><p>- The difference between Federal &#34;Ordinary Course of Business&#34; and State &#34;Two-Party Consent&#34; laws.</p><p>- Seven actionable steps to build a legally defensible AI documentation workflow.</p><p><br></p><p>Ambient AI Scribe, Sharp HealthCare Lawsuit, Abridge AI, Patient Consent, HIPAA Compliance, Medical AI Ethics, Physician Burnout, EHR Automation, California Privacy Law, AB3030.</p><p>#HealthAI #MedicalLaw #AmbientAI #DigitalHealth #PatientPrivacy #HealthTechEthics #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is your AI scribe a $500M legal trap? We’re analysing the Sharp HealthCare class-action lawsuit and the &amp;#34;Consent Hallucination&amp;#34; putting clinicians at risk.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Ambient AI documentation is the leading solution for clinician burnout, but secret recordings and &amp;#34;hallucinated&amp;#34; consent are leading to massive legal exposure. This deep dive covers the Sharp HealthCare and Heartland Dental lawsuits, the impact of California’s CIPA and AB3030 laws, and the hidden risks of AI-driven upcoding. We consider seven steps that could help protect your practice and your patients.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key Takeaways:&lt;/p&gt;&lt;p&gt;- Why &amp;#34;Boilerplate Consent&amp;#34; in AI notes is a primary target for class-action attorneys.&lt;/p&gt;&lt;p&gt;- The difference between Federal &amp;#34;Ordinary Course of Business&amp;#34; and State &amp;#34;Two-Party Consent&amp;#34; laws.&lt;/p&gt;&lt;p&gt;- Seven actionable steps to build a legally defensible AI documentation workflow.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Ambient AI Scribe, Sharp HealthCare Lawsuit, Abridge AI, Patient Consent, HIPAA Compliance, Medical AI Ethics, Physician Burnout, EHR Automation, California Privacy Law, AB3030.&lt;/p&gt;&lt;p&gt;#HealthAI #MedicalLaw #AmbientAI #DigitalHealth #PatientPrivacy #HealthTechEthics #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="8831059" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/d28b138a-38a6-49fc-8231-6f97432694be/stream.mp3"/>
                
                <guid isPermaLink="false">e429ce65-c6cd-4b6e-a334-2377c6732dd1</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/d28b138a-38a6-49fc-8231-6f97432694be</link>
                <pubDate>Wed, 28 Jan 2026 07:00:01 &#43;0000</pubDate>
                <itunes:duration>551</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>053 Stemming &amp; Lemmatization - How AI Groups Healing and Healed Automatically</itunes:title>
                <title>053 Stemming &amp; Lemmatization - How AI Groups Healing and Healed Automatically</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Does your AI know that &#34;diagnose,&#34; &#34;diagnosing,&#34; and &#34;diagnosis&#34; are the same thing? We dive into Stemming and Lemmatization, the secret to cleaning up messy medical records.</p><p><br></p><p>#DataScience #ClinicalData #MedicalCoding #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Does your AI know that &amp;#34;diagnose,&amp;#34; &amp;#34;diagnosing,&amp;#34; and &amp;#34;diagnosis&amp;#34; are the same thing? We dive into Stemming and Lemmatization, the secret to cleaning up messy medical records.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DataScience #ClinicalData #MedicalCoding #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="1979036" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2556c564-881d-46be-b678-e9a6dfcde815/stream.mp3"/>
                
                <guid isPermaLink="false">4d9ec908-16a3-48f4-aeef-804bbbc16078</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2556c564-881d-46be-b678-e9a6dfcde815</link>
                <pubDate>Tue, 27 Jan 2026 07:00:24 &#43;0000</pubDate>
                <itunes:duration>123</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Google MedGemma 1.5 and MedASR - 3D Scans &amp; Medical Speech</itunes:title>
                <title>Google MedGemma 1.5 and MedASR - 3D Scans &amp; Medical Speech</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Google has released a significant update to its health AI ecosystem: MedGemma 1.5 and MedASR. This episode considers the shift toward &#34;edge-ready&#34; 4B-parameter models that can handle 3D volumetric scans, longitudinal patient history, and specialised medical speech. We analyse the impressive benchmarks, including results that beat Whisper V3 on medical dictation, and the critical safety gaps revealed in the technical report. Specifically, we look at why the model struggles to identify the absence of disease, a cornerstone of clinical diagnosis. A must-listen for health tech developers and clinical leaders navigating the open-source AI landscape.</p><p>Link: https://research.google/blog/next-generation-medical-image-interpretation-with-medgemma-15-and-medical-speech-to-text-with-medasr/</p><p>#HealthAI #MedGemma #GoogleHealth #MedicalImaging #GenerativeAI #DigitalHealth #RadiologyAI #ClinicalSafety #OpenSourceAI #MedASR #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Google has released a significant update to its health AI ecosystem: MedGemma 1.5 and MedASR. This episode considers the shift toward &amp;#34;edge-ready&amp;#34; 4B-parameter models that can handle 3D volumetric scans, longitudinal patient history, and specialised medical speech. We analyse the impressive benchmarks, including results that beat Whisper V3 on medical dictation, and the critical safety gaps revealed in the technical report. Specifically, we look at why the model struggles to identify the absence of disease, a cornerstone of clinical diagnosis. A must-listen for health tech developers and clinical leaders navigating the open-source AI landscape.&lt;/p&gt;&lt;p&gt;Link: https://research.google/blog/next-generation-medical-image-interpretation-with-medgemma-15-and-medical-speech-to-text-with-medasr/&lt;/p&gt;&lt;p&gt;#HealthAI #MedGemma #GoogleHealth #MedicalImaging #GenerativeAI #DigitalHealth #RadiologyAI #ClinicalSafety #OpenSourceAI #MedASR #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5728130" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b0da767e-e587-487b-89e8-ab7b5d056308/stream.mp3"/>
                
                <guid isPermaLink="false">13989fd5-40ea-4691-b5f1-1ed9b5dc807c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b0da767e-e587-487b-89e8-ab7b5d056308</link>
                <pubDate>Mon, 26 Jan 2026 07:00:55 &#43;0000</pubDate>
                <itunes:duration>358</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>052 Tokenization - How AI Slices Symptoms into Tokens</itunes:title>
                <title>052 Tokenization - How AI Slices Symptoms into Tokens</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI doesn&#39;t see words; it sees tokens. We explain why &#34;Appendicectomy&#34; is broken into pieces and how this &#34;Lego-block&#34; approach allows LLMs to understand complex medical terminology.</p><p><br></p><p>#MachineLearning #Tokenization #HealthIT #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;AI doesn&amp;#39;t see words; it sees tokens. We explain why &amp;#34;Appendicectomy&amp;#34; is broken into pieces and how this &amp;#34;Lego-block&amp;#34; approach allows LLMs to understand complex medical terminology.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MachineLearning #Tokenization #HealthIT #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2190942" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/f6173651-9353-4188-a7e8-360c7d0682dc/stream.mp3"/>
                
                <guid isPermaLink="false">214765fd-90b5-4329-ae2f-b0105dee5c07</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/f6173651-9353-4188-a7e8-360c7d0682dc</link>
                <pubDate>Sat, 24 Jan 2026 07:00:30 &#43;0000</pubDate>
                <itunes:duration>136</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Amazon&#39;s New &#39;Agentic&#39; Health AI</itunes:title>
                <title>Amazon&#39;s New &#39;Agentic&#39; Health AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Amazon has launched an &#34;agentic&#34; Health AI within One Medical. Unlike standalone LLMs, this system integrates directly with patient records to book appointments and manage pharmacy needs. In this episode, we analyze the shift from informational AI to operational execution and the critical importance of safety in automated triage.</p><p><br></p><p>#HealthAI #AmazonOneMedical #DigitalHealth #ClinicalTriage #HealthTech #AgenticAI #PrimaryCare #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Amazon has launched an &amp;#34;agentic&amp;#34; Health AI within One Medical. Unlike standalone LLMs, this system integrates directly with patient records to book appointments and manage pharmacy needs. In this episode, we analyze the shift from informational AI to operational execution and the critical importance of safety in automated triage.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #AmazonOneMedical #DigitalHealth #ClinicalTriage #HealthTech #AgenticAI #PrimaryCare #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4968280" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/7583be0c-7d56-4718-9f61-b0dc9660a352/stream.mp3"/>
                
                <guid isPermaLink="false">44deacfa-4f32-4921-8602-ef9e8fdcbea9</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/7583be0c-7d56-4718-9f61-b0dc9660a352</link>
                <pubDate>Fri, 23 Jan 2026 07:00:17 &#43;0000</pubDate>
                <itunes:duration>310</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Gates and OpenAI Deploying AI in Rwanda</itunes:title>
                <title>Gates and OpenAI Deploying AI in Rwanda</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Sub-Saharan Africa is missing 5.6 million health workers. Can a $50M partnership between the Gates Foundation and OpenAI bridge that gap, or is this just Silicon Valley optimism colliding with infrastructure reality?</p><p><br></p><p>We dissect the new &#34;Horizon 1000&#34; initiative. We look beyond the headlines to analyze the critical tension between cutting-edge models and the need for basic clinical tools. We explore why &#34;language bias&#34; is the new safety hurdle, why we need a &#34;digital cold chain,&#34; and how to prevent this from becoming a data extraction exercise.</p><p><br></p><p>The ambition is massive. The risks are real. Here is the path to making it work.</p><p><br></p><p>#HealthAI #GlobalHealth #GenerativeAI #GatesFoundation #OpenAI #DigitalHealth #PublicHealth #Rwanda #ClinicalSafety #HealthTech #Horizon1000 #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Sub-Saharan Africa is missing 5.6 million health workers. Can a $50M partnership between the Gates Foundation and OpenAI bridge that gap, or is this just Silicon Valley optimism colliding with infrastructure reality?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We dissect the new &amp;#34;Horizon 1000&amp;#34; initiative. We look beyond the headlines to analyze the critical tension between cutting-edge models and the need for basic clinical tools. We explore why &amp;#34;language bias&amp;#34; is the new safety hurdle, why we need a &amp;#34;digital cold chain,&amp;#34; and how to prevent this from becoming a data extraction exercise.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;The ambition is massive. The risks are real. Here is the path to making it work.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #GlobalHealth #GenerativeAI #GatesFoundation #OpenAI #DigitalHealth #PublicHealth #Rwanda #ClinicalSafety #HealthTech #Horizon1000 #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5012166" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/7b7db432-7d1f-4afc-8a6f-5fd8d7637fa2/stream.mp3"/>
                
                <guid isPermaLink="false">de60d262-415f-4d73-81c0-1d23bde5637f</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/7b7db432-7d1f-4afc-8a6f-5fd8d7637fa2</link>
                <pubDate>Thu, 22 Jan 2026 07:00:19 &#43;0000</pubDate>
                <itunes:duration>313</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How to Use Ambient AI Scribes - The Ultimate Clinical Guide to Stop Editing AI Notes</itunes:title>
                <title>How to Use Ambient AI Scribes - The Ultimate Clinical Guide to Stop Editing AI Notes</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Are you using ambient AI scribes like Heidi, DeepScribe, or Nuance DAX but still spending hours editing notes? The problem might not be the AI—it might be how you’re speaking. In this episode of The Health AI Brief, we break down the 5 essential communication modifications every clinician needs to make to master AI documentation.</p><p>Learn how to prevent &#34;hallucinations,&#34; ensure physical exams are recorded accurately, and use &#34;distinctive negation&#34; to avoid dangerous clinical errors. Perfect for GPs/primary care physicians, hospitalists, and specialists looking to maximise efficiency.</p><p>Link to the YouTube video for a screenshot of the &#34;cheat sheet&#34; referenced at the end: https://youtu.be/Wjt186bmMP0</p><p>Keywords: AI Scribe, Ambient Clinical Intelligence, Medical Documentation, Digital Health, Doctor Productivity, Electronic Health Records, Clinical Communication Skills. Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Are you using ambient AI scribes like Heidi, DeepScribe, or Nuance DAX but still spending hours editing notes? The problem might not be the AI—it might be how you’re speaking. In this episode of The Health AI Brief, we break down the 5 essential communication modifications every clinician needs to make to master AI documentation.&lt;/p&gt;&lt;p&gt;Learn how to prevent &amp;#34;hallucinations,&amp;#34; ensure physical exams are recorded accurately, and use &amp;#34;distinctive negation&amp;#34; to avoid dangerous clinical errors. Perfect for GPs/primary care physicians, hospitalists, and specialists looking to maximise efficiency.&lt;/p&gt;&lt;p&gt;Link to the YouTube video for a screenshot of the &amp;#34;cheat sheet&amp;#34; referenced at the end: https://youtu.be/Wjt186bmMP0&lt;/p&gt;&lt;p&gt;Keywords: AI Scribe, Ambient Clinical Intelligence, Medical Documentation, Digital Health, Doctor Productivity, Electronic Health Records, Clinical Communication Skills. Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="7132891" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/9a273a40-2c0d-4ec9-ae71-59c4d124deca/stream.mp3"/>
                
                <guid isPermaLink="false">96baff1e-aadd-4c2a-b0bb-202614fee65a</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/9a273a40-2c0d-4ec9-ae71-59c4d124deca</link>
                <pubDate>Wed, 21 Jan 2026 07:00:51 &#43;0000</pubDate>
                <itunes:duration>445</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>051 Natural Language Processing (NLP) - teaching a computer to read a chart</itunes:title>
                <title>051 Natural Language Processing (NLP) - teaching a computer to read a chart</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Ever wonder how a machine &#34;reads&#34; a messy clinical note? In the next set of episodes, we&#39;re aiming to demystify Natural Language Processing (NLP) in healthcare. Learn how AI transforms unstructured doctor’s notes into actionable medical data.</p><p>#MedicalAI #HealthTech #NLP #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Ever wonder how a machine &amp;#34;reads&amp;#34; a messy clinical note? In the next set of episodes, we&amp;#39;re aiming to demystify Natural Language Processing (NLP) in healthcare. Learn how AI transforms unstructured doctor’s notes into actionable medical data.&lt;/p&gt;&lt;p&gt;#MedicalAI #HealthTech #NLP #DigitalHealth #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2518204" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2a84bb8a-fdec-42c7-b946-b567533f1446/stream.mp3"/>
                
                <guid isPermaLink="false">ff15cece-788a-4fd9-8a08-c9e6d4c531c0</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2a84bb8a-fdec-42c7-b946-b567533f1446</link>
                <pubDate>Tue, 20 Jan 2026 07:00:40 &#43;0000</pubDate>
                <itunes:duration>157</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>NVIDIA’s SurgWorld - Solving the Surgical Robotics Data Gap with AI Synthesis</itunes:title>
                <title>NVIDIA’s SurgWorld - Solving the Surgical Robotics Data Gap with AI Synthesis</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Autonomous surgery has been hindered by a lack of &#34;paired&#34; data - visuals linked to robot actions. In this episode of The Health AI Brief, we analyze NVIDIA’s &#34;SurgWorld,&#34; a generative world model that creates photorealistic surgery videos and uses an Inverse Dynamics Model to infer robot kinematics. We discuss the SATA dataset’s 300,000 expert-annotated frames, the GR00T VLA policy’s 73% success rate, and how this &#34;synthesis strategy&#34; provides a scalable path to surgical autonomy without the need for prohibitively expensive in-vivo data collection.</p><p><br></p><p>Preprint link: https://arxiv.org/abs/2512.23162</p><p><br></p><p>#HealthAI #SurgicalRobotics #NVIDIA #SurgWorld #MachineLearning #MedicalInnovation #RobotSurgery #ComputerVision #VLA #TheHealthAIBrief #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Autonomous surgery has been hindered by a lack of &amp;#34;paired&amp;#34; data - visuals linked to robot actions. In this episode of The Health AI Brief, we analyze NVIDIA’s &amp;#34;SurgWorld,&amp;#34; a generative world model that creates photorealistic surgery videos and uses an Inverse Dynamics Model to infer robot kinematics. We discuss the SATA dataset’s 300,000 expert-annotated frames, the GR00T VLA policy’s 73% success rate, and how this &amp;#34;synthesis strategy&amp;#34; provides a scalable path to surgical autonomy without the need for prohibitively expensive in-vivo data collection.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Preprint link: https://arxiv.org/abs/2512.23162&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #SurgicalRobotics #NVIDIA #SurgWorld #MachineLearning #MedicalInnovation #RobotSurgery #ComputerVision #VLA #TheHealthAIBrief #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4999627" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b6c4c53b-c4a9-4bd8-a675-6f953ab228a4/stream.mp3"/>
                
                <guid isPermaLink="false">f7d256b4-ea41-4d88-b9d5-3ce228cea052</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b6c4c53b-c4a9-4bd8-a675-6f953ab228a4</link>
                <pubDate>Mon, 19 Jan 2026 07:00:46 &#43;0000</pubDate>
                <itunes:duration>312</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>050 Diffusion Models - Sculpting from Noise</itunes:title>
                <title>050 Diffusion Models - Sculpting from Noise</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Beyond GANs: We explain Diffusion Models, the tech behind DALL-E, and how &#34;denoising&#34; is being used to design new drugs and create synthetic medical imaging.</p><p><br></p><p>#DiffusionModels #GenerativeAI #DrugDiscovery #FutureTech #MedEd #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Beyond GANs: We explain Diffusion Models, the tech behind DALL-E, and how &amp;#34;denoising&amp;#34; is being used to design new drugs and create synthetic medical imaging.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DiffusionModels #GenerativeAI #DrugDiscovery #FutureTech #MedEd #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2134935" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1e6852d6-b52e-4ef5-8a20-a83efd9fc37e/stream.mp3"/>
                
                <guid isPermaLink="false">eedde000-afb0-4cde-b35a-90d71cfa59e0</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1e6852d6-b52e-4ef5-8a20-a83efd9fc37e</link>
                <pubDate>Fri, 16 Jan 2026 07:00:02 &#43;0000</pubDate>
                <itunes:duration>133</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>049 The Transformer - The Architecture Behind the Revolution</itunes:title>
                <title>049 The Transformer - The Architecture Behind the Revolution</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>What is the engine behind ChatGPT and AlphaFold? We explain The Transformer, the architecture that ditched sequential reading for parallel processing and changed medical AI forever.</p><p><br></p><p>#Transformer #ChatGPT #LLM #GenerativeAI #MedTechRevolution #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;What is the engine behind ChatGPT and AlphaFold? We explain The Transformer, the architecture that ditched sequential reading for parallel processing and changed medical AI forever.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Transformer #ChatGPT #LLM #GenerativeAI #MedTechRevolution #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2582987" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/0ff9d279-81ef-4034-8fb2-515d56702ba6/stream.mp3"/>
                
                <guid isPermaLink="false">16f44ced-4db9-442b-9abc-1a57fcfbc318</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/0ff9d279-81ef-4034-8fb2-515d56702ba6</link>
                <pubDate>Thu, 15 Jan 2026 07:00:46 &#43;0000</pubDate>
                <itunes:duration>161</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Google AI Overviews vs Clinical Safety - Why Reputable Sources Aren&#39;t Always Enough for AI</itunes:title>
                <title>Google AI Overviews vs Clinical Safety - Why Reputable Sources Aren&#39;t Always Enough for AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>This week, a Guardian investigation forced Google to pull back its AI health summaries. But is the problem just &#34;missing context,&#34; or is it a fundamental flaw in how AI handles medical data? From incorrect liver test ranges to &#34;cool cloth&#34; advice for serious conditions, we break down the &#34;Synthesis Gap.&#34; We explore the tension between ad-based business models and clinical authority, compare Google&#39;s approach to &#34;Open Evidence,&#34; and outline the 3 steps needed to make Health AI safe for the public.</p><p><br></p><p>Link to the Guardian&#39;s content: https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation</p><p><br></p><p>#HealthAI #GoogleAI #DigitalHealth #PatientSafety #MedTech #TheHealthAIBrief #AIOverviews #HealthInnovation #ClinicalSafety #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;This week, a Guardian investigation forced Google to pull back its AI health summaries. But is the problem just &amp;#34;missing context,&amp;#34; or is it a fundamental flaw in how AI handles medical data? From incorrect liver test ranges to &amp;#34;cool cloth&amp;#34; advice for serious conditions, we break down the &amp;#34;Synthesis Gap.&amp;#34; We explore the tension between ad-based business models and clinical authority, compare Google&amp;#39;s approach to &amp;#34;Open Evidence,&amp;#34; and outline the 3 steps needed to make Health AI safe for the public.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to the Guardian&amp;#39;s content: https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #GoogleAI #DigitalHealth #PatientSafety #MedTech #TheHealthAIBrief #AIOverviews #HealthInnovation #ClinicalSafety #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5187709" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6a491d62-769c-4ceb-96d4-fd6a3607316f/stream.mp3"/>
                
                <guid isPermaLink="false">19c6537f-4e68-4678-8fcf-954f2163b761</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6a491d62-769c-4ceb-96d4-fd6a3607316f</link>
                <pubDate>Wed, 14 Jan 2026 07:00:38 &#43;0000</pubDate>
                <itunes:duration>324</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Claude for Healthcare - The AI Arms Race Hits the Clinic</itunes:title>
                <title>Claude for Healthcare - The AI Arms Race Hits the Clinic</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The frontier AI wars have moved from the laboratory to the bedside. In this episode of The Health AI Brief, we dive into Anthropic’s massive expansion of Claude for Healthcare and Life Sciences. From automated prior authorisations to deep integrations with Medidata and CMS, how does Claude’s &#34;infrastructure-first&#34; approach stack up against OpenAI’s &#34;Health Ally&#34;? We break down the ambition, the interoperability hurdles, and the strategic path forward for clinical leaders.</p><p>Link: https://www.anthropic.com/news/healthcare-life-sciences</p><p>#HealthAI #DigitalHealth #Anthropic #Claude #NHS #HealthTech #AIinHealthcare #ClinicalInnovation #MedTech #Interoperability #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;The frontier AI wars have moved from the laboratory to the bedside. In this episode of The Health AI Brief, we dive into Anthropic’s massive expansion of Claude for Healthcare and Life Sciences. From automated prior authorisations to deep integrations with Medidata and CMS, how does Claude’s &amp;#34;infrastructure-first&amp;#34; approach stack up against OpenAI’s &amp;#34;Health Ally&amp;#34;? We break down the ambition, the interoperability hurdles, and the strategic path forward for clinical leaders.&lt;/p&gt;&lt;p&gt;Link: https://www.anthropic.com/news/healthcare-life-sciences&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #Anthropic #Claude #NHS #HealthTech #AIinHealthcare #ClinicalInnovation #MedTech #Interoperability #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5386240" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/87775f1f-7f03-4d35-9223-aae6aab4344c/stream.mp3"/>
                
                <guid isPermaLink="false">58080dc2-7132-4ba7-b1bd-ed256eb2e432</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/87775f1f-7f03-4d35-9223-aae6aab4344c</link>
                <pubDate>Tue, 13 Jan 2026 07:00:37 &#43;0000</pubDate>
                <itunes:duration>336</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>048 Attention - How AI Focuses on What Matters</itunes:title>
                <title>048 Attention - How AI Focuses on What Matters</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>You don&#39;t read every word of a patient&#39;s records with equal focus, and neither should AI. We explain the Attention Mechanism, the breakthrough that allows algorithms to prioritize the most clinically relevant data.</p><p><br></p><p>#AttentionMechanism #AIInterpretability #MedicalNotes #DeepLearning #NLP #LLMs #ChatGPT #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;You don&amp;#39;t read every word of a patient&amp;#39;s records with equal focus, and neither should AI. We explain the Attention Mechanism, the breakthrough that allows algorithms to prioritize the most clinically relevant data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#AttentionMechanism #AIInterpretability #MedicalNotes #DeepLearning #NLP #LLMs #ChatGPT #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2233155" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b4491ba7-da17-410e-a9f9-fbebc96d45d3/stream.mp3"/>
                
                <guid isPermaLink="false">6d54c74c-18f1-4859-a22c-b99556d97b1f</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b4491ba7-da17-410e-a9f9-fbebc96d45d3</link>
                <pubDate>Mon, 12 Jan 2026 07:00:55 &#43;0000</pubDate>
                <itunes:duration>139</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>ChatGPT Health - Consolidating Health Data - Diagnosis or Data-Mine</itunes:title>
                <title>ChatGPT Health - Consolidating Health Data - Diagnosis or Data-Mine</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Is the &#34;Digital Front Door&#34; finally open? 🩺 We deconstruct the launch of ChatGPT Health.</p><p><br></p><p>OpenAI is moving to consolidate the &#34;data diaspora&#34;, integrating your medical records, Apple Health, and even MyFitnessPal into a single AI sandbox. It’s an ambitious attempt to move from the panic of &#34;Dr. Google&#34; to a personalized health ally.</p><p><br></p><p>But we’re asking the tough questions:</p><p>✅ The Vision: Can AI-generated &#34;GP Briefs&#34; actually save the 10-minute consultation?</p><p>✅ The Duck Test: If it acts like a medical device, why isn&#39;t it regulated as one?</p><p>✅ The Value Gap: What are you getting in exchange for your most sensitive clinical data?</p><p>✅ The Trust Factor: Moving from internal benchmarks to peer-reviewed safety stats.</p><p><br></p><p>Join us as we explore whether this is a revolutionary clinical bridge or a high-stakes data gamble.</p><p><br></p><p>#HealthAI #OpenAI #ChatGPTHealth #DigitalHealth #MedTech #AIinHealthcare #MedicalRecords #DataPrivacy #HealthAIBrief #FutureOfMedicine #HealthIT #bwell</p><p>#aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Is the &amp;#34;Digital Front Door&amp;#34; finally open? 🩺 We deconstruct the launch of ChatGPT Health.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;OpenAI is moving to consolidate the &amp;#34;data diaspora&amp;#34;, integrating your medical records, Apple Health, and even MyFitnessPal into a single AI sandbox. It’s an ambitious attempt to move from the panic of &amp;#34;Dr. Google&amp;#34; to a personalized health ally.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;But we’re asking the tough questions:&lt;/p&gt;&lt;p&gt;✅ The Vision: Can AI-generated &amp;#34;GP Briefs&amp;#34; actually save the 10-minute consultation?&lt;/p&gt;&lt;p&gt;✅ The Duck Test: If it acts like a medical device, why isn&amp;#39;t it regulated as one?&lt;/p&gt;&lt;p&gt;✅ The Value Gap: What are you getting in exchange for your most sensitive clinical data?&lt;/p&gt;&lt;p&gt;✅ The Trust Factor: Moving from internal benchmarks to peer-reviewed safety stats.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Join us as we explore whether this is a revolutionary clinical bridge or a high-stakes data gamble.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #OpenAI #ChatGPTHealth #DigitalHealth #MedTech #AIinHealthcare #MedicalRecords #DataPrivacy #HealthAIBrief #FutureOfMedicine #HealthIT #bwell&lt;/p&gt;&lt;p&gt;#aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5272555" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/27083d94-4def-4425-8514-18126631f5e7/stream.mp3"/>
                
                <guid isPermaLink="false">907845b9-495d-4a09-ac75-4b0da84367e2</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/27083d94-4def-4425-8514-18126631f5e7</link>
                <pubDate>Fri, 09 Jan 2026 07:00:12 &#43;0000</pubDate>
                <itunes:duration>329</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>047 Embeddings - The Medical Dictionary for AI</itunes:title>
                <title>047 Embeddings - The Medical Dictionary for AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How does AI know that &#34;Paracetamol&#34; and &#34;Acetaminophen&#34; are related? We explain Embeddings, the vector space that turns clinical language into a mathematical map of meaning.</p><p><br></p><p>#NLP #HealthIT #Embeddings #ClinicalCoding #AI #aiinmedicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How does AI know that &amp;#34;Paracetamol&amp;#34; and &amp;#34;Acetaminophen&amp;#34; are related? We explain Embeddings, the vector space that turns clinical language into a mathematical map of meaning.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#NLP #HealthIT #Embeddings #ClinicalCoding #AI #aiinmedicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2168372" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/eeb0e859-7c2e-4ad2-8554-4bfbeb056aef/stream.mp3"/>
                
                <guid isPermaLink="false">cda32325-e23c-4543-a27e-a7a1430d9894</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/eeb0e859-7c2e-4ad2-8554-4bfbeb056aef</link>
                <pubDate>Thu, 08 Jan 2026 07:00:00 &#43;0000</pubDate>
                <itunes:duration>135</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI vs Pancreatic Cancer - Can PANDA Solve the Early Detection Problem</itunes:title>
                <title>AI vs Pancreatic Cancer - Can PANDA Solve the Early Detection Problem</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Pancreatic cancer is notoriously difficult to detect before it reaches an advanced stage. There&#39;s been a lot of recent coverage about the &#34;PANDA&#34; system, a deep-learning model previously published in Nature Medicine in 2023. By utilising routine, non-contrast CT scans, originally taken for unrelated issues, this AI achieved 92.9% sensitivity in real-world trials. We break down the three-stage architecture of the model, the significance of its recent FDA Breakthrough Device status, and what the data from 180,000 clinical scans tells us about the future of opportunistic screening.</p><p><br></p><p>#HealthAI #Oncology #PancreaticCancer #Radiology #DeepLearning #ClinicalAI #MedicalInnovation #TheHealthAIBrief #MedTech #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Pancreatic cancer is notoriously difficult to detect before it reaches an advanced stage. There&amp;#39;s been a lot of recent coverage about the &amp;#34;PANDA&amp;#34; system, a deep-learning model previously published in Nature Medicine in 2023. By utilising routine, non-contrast CT scans, originally taken for unrelated issues, this AI achieved 92.9% sensitivity in real-world trials. We break down the three-stage architecture of the model, the significance of its recent FDA Breakthrough Device status, and what the data from 180,000 clinical scans tells us about the future of opportunistic screening.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #Oncology #PancreaticCancer #Radiology #DeepLearning #ClinicalAI #MedicalInnovation #TheHealthAIBrief #MedTech #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5061903" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/819f8a68-ade3-43a1-a286-59782d214557/stream.mp3"/>
                
                <guid isPermaLink="false">571802b3-ce41-4bd4-8a65-b138e1c169be</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/819f8a68-ade3-43a1-a286-59782d214557</link>
                <pubDate>Wed, 07 Jan 2026 07:00:07 &#43;0000</pubDate>
                <itunes:duration>316</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Wealth of Health - Avoiding an Inequality Spiral in the AI Age</itunes:title>
                <title>The Wealth of Health - Avoiding an Inequality Spiral in the AI Age</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Who really owns the future of healthcare? 🏥</p><p><br></p><p>We explore the unsettling intersection of Artificial General Intelligence (AGI) and Thomas Piketty’s economic theories. Authors Philip Trammell and Dwarkesh Patel warn of a &#34;Piketty Trap&#34;: a future where the return on AI capital skyrockets while the economic value of clinical labor drops toward zero.</p><p>Article link: https://substack.com/inbox/post/182789127</p><p>As we shift from AI as a &#34;tool&#34; to AI as a &#34;substitute,&#34; we face a massive hurdle. Is your clinical expertise being &#34;mined&#34; to enrich private markets? We discuss the &#34;Dead Patient&#34; Paradox - the dangerous reality of optimising healthcare for &#34;throughput&#34; and &#34;efficiency&#34; over human outcomes.</p><p><br></p><p>Key insights in this episode:</p><p>- The Privatisation of Returns: Why AI wealth is being locked in private markets and what it means for the NHS and global health systems.</p><p>- The Efficiency Trap: Why the &#34;business case&#34; for AI might lead to &#34;algorithmic extraction&#34; instead of better patient care.</p><p>- The Bridge Forward: Why the clinical community must move from being &#34;users&#34; to &#34;equity stakeholders&#34; in the AI models of the 22nd Century.</p><p><br></p><p>It’s time to reclaim the human-centric soul of medicine.</p><p><br></p><p>Listen now for a pragmatic guide to surviving the AI wealth shift.</p><p><br></p><p>#HealthAI #AGI #HealthEconomics #FutureOfMedicine #NHS #DigitalHealth #MedTech #Piketty #HealthEquity #TheHealthAIBrief #PhysicianLeadership #GlobalHealth Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Who really owns the future of healthcare? 🏥&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We explore the unsettling intersection of Artificial General Intelligence (AGI) and Thomas Piketty’s economic theories. Authors Philip Trammell and Dwarkesh Patel warn of a &amp;#34;Piketty Trap&amp;#34;: a future where the return on AI capital skyrockets while the economic value of clinical labor drops toward zero.&lt;/p&gt;&lt;p&gt;Article link: https://substack.com/inbox/post/182789127&lt;/p&gt;&lt;p&gt;As we shift from AI as a &amp;#34;tool&amp;#34; to AI as a &amp;#34;substitute,&amp;#34; we face a massive hurdle. Is your clinical expertise being &amp;#34;mined&amp;#34; to enrich private markets? We discuss the &amp;#34;Dead Patient&amp;#34; Paradox - the dangerous reality of optimising healthcare for &amp;#34;throughput&amp;#34; and &amp;#34;efficiency&amp;#34; over human outcomes.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Key insights in this episode:&lt;/p&gt;&lt;p&gt;- The Privatisation of Returns: Why AI wealth is being locked in private markets and what it means for the NHS and global health systems.&lt;/p&gt;&lt;p&gt;- The Efficiency Trap: Why the &amp;#34;business case&amp;#34; for AI might lead to &amp;#34;algorithmic extraction&amp;#34; instead of better patient care.&lt;/p&gt;&lt;p&gt;- The Bridge Forward: Why the clinical community must move from being &amp;#34;users&amp;#34; to &amp;#34;equity stakeholders&amp;#34; in the AI models of the 22nd Century.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;It’s time to reclaim the human-centric soul of medicine.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Listen now for a pragmatic guide to surviving the AI wealth shift.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #AGI #HealthEconomics #FutureOfMedicine #NHS #DigitalHealth #MedTech #Piketty #HealthEquity #TheHealthAIBrief #PhysicianLeadership #GlobalHealth Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="6861217" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/7acc3cd6-9502-45c9-93bd-a03bec582dd3/stream.mp3"/>
                
                <guid isPermaLink="false">71752621-cb8b-4482-9e7a-3aaa089b97d5</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/7acc3cd6-9502-45c9-93bd-a03bec582dd3</link>
                <pubDate>Tue, 06 Jan 2026 07:00:02 &#43;0000</pubDate>
                <itunes:duration>428</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>046 Generative Adversarial Networks - The Forger and the Detective</itunes:title>
                <title>046 Generative Adversarial Networks - The Forger and the Detective</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI that creates fake patients? We explore Generative Adversarial Networks (GANs), how &#34;The Forger&#34; and &#34;The Detective&#34; compete to create synthetic medical data and high-res imaging.</p><p><br></p><p>#GenerativeAI #GANs #Privacy #MedicalResearch #SyntheticData #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;AI that creates fake patients? We explore Generative Adversarial Networks (GANs), how &amp;#34;The Forger&amp;#34; and &amp;#34;The Detective&amp;#34; compete to create synthetic medical data and high-res imaging.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#GenerativeAI #GANs #Privacy #MedicalResearch #SyntheticData #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2503575" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/183ae33c-ca9b-4fe9-935d-fae57f3a7948/stream.mp3"/>
                
                <guid isPermaLink="false">636d819c-ec21-4f75-bc6f-8b959db1fdda</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/183ae33c-ca9b-4fe9-935d-fae57f3a7948</link>
                <pubDate>Wed, 31 Dec 2025 07:00:44 &#43;0000</pubDate>
                <itunes:duration>156</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>045 LSTMs - Fixing the AI&#39;s Short-Term Memory</itunes:title>
                <title>045 LSTMs - Fixing the AI&#39;s Short-Term Memory</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How does AI remember a drug allergy from 10 years ago? We explain Long Short-Term Memory networks (LSTMs), the &#34;Gating&#34; mechanism, and why they are crucial for analyzing patient history.</p><p><br></p><p>#EHR #HealthData #LSTM #DeepLearning #ClinicalInformatics #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How does AI remember a drug allergy from 10 years ago? We explain Long Short-Term Memory networks (LSTMs), the &amp;#34;Gating&amp;#34; mechanism, and why they are crucial for analyzing patient history.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#EHR #HealthData #LSTM #DeepLearning #ClinicalInformatics #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2269936" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/673c5efe-5635-462c-ad27-7457f1aee8d8/stream.mp3"/>
                
                <guid isPermaLink="false">538c6b24-567d-413a-a20f-ab419260edca</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/673c5efe-5635-462c-ad27-7457f1aee8d8</link>
                <pubDate>Tue, 30 Dec 2025 07:00:40 &#43;0000</pubDate>
                <itunes:duration>141</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Hugging a PowerPoint - Why ChatGPT Makes a Terrible (but Addictive) Therapist</itunes:title>
                <title>Hugging a PowerPoint - Why ChatGPT Makes a Terrible (but Addictive) Therapist</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Are AI therapists helping you, or just enabling you? 🤖🛋️</p><p><br></p><p>In this week&#39;s episode, we break down a fascinating Financial Times experiment where the author Henry Mance treated his &#34;midlife crisis&#34; using only ChatGPT. The result? A validation loop that feels good but misses the point of therapy entirely.</p><p><br></p><p>We discuss:</p><p>🔹 The danger of the &#34;Obsequious AI&#34; that acts like a people-pleaser.</p><p>🔹 Why &#34;Rehearsal Spaces&#34; might be the best clinical use case for LLMs.</p><p>🔹 The difference between &#34;Customer Service&#34; and &#34;Clinical Care.&#34;</p><p>🔹 Why a real therapist wants you to leave, but an AI wants you to stay.</p><p><br></p><p>Link: https://www.ft.com/content/8b6e0a41-f3d1-474d-9d69-d5e0b897907b</p><p><br></p><p>#HealthAI #MentalHealth #ChatGPT #DigitalHealth #Psychotherapy #GenAI #MedTech #ClinicalInnovation #TheHealthAIBrief #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Are AI therapists helping you, or just enabling you? 🤖🛋️&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this week&amp;#39;s episode, we break down a fascinating Financial Times experiment where the author Henry Mance treated his &amp;#34;midlife crisis&amp;#34; using only ChatGPT. The result? A validation loop that feels good but misses the point of therapy entirely.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We discuss:&lt;/p&gt;&lt;p&gt;🔹 The danger of the &amp;#34;Obsequious AI&amp;#34; that acts like a people-pleaser.&lt;/p&gt;&lt;p&gt;🔹 Why &amp;#34;Rehearsal Spaces&amp;#34; might be the best clinical use case for LLMs.&lt;/p&gt;&lt;p&gt;🔹 The difference between &amp;#34;Customer Service&amp;#34; and &amp;#34;Clinical Care.&amp;#34;&lt;/p&gt;&lt;p&gt;🔹 Why a real therapist wants you to leave, but an AI wants you to stay.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link: https://www.ft.com/content/8b6e0a41-f3d1-474d-9d69-d5e0b897907b&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #MentalHealth #ChatGPT #DigitalHealth #Psychotherapy #GenAI #MedTech #ClinicalInnovation #TheHealthAIBrief #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4828264" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/9d42a1f5-9476-4c36-b542-a3bd2c1b1549/stream.mp3"/>
                
                <guid isPermaLink="false">b17f9b4b-c98e-4aba-99d1-d1c07dba9e24</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/9d42a1f5-9476-4c36-b542-a3bd2c1b1549</link>
                <pubDate>Tue, 23 Dec 2025 07:00:07 &#43;0000</pubDate>
                <itunes:duration>301</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The &#39;Duck Test&#39; for AI And Why Your Chatbot Might Be an Illegal Medical Device</itunes:title>
                <title>The &#39;Duck Test&#39; for AI And Why Your Chatbot Might Be an Illegal Medical Device</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Are we letting &#34;entertainment&#34; bots practice medicine without a license? 🦆🏥</p><p><br></p><p>In this week&#39;s episode, we break down a provocative new paper from npj Digital Medicine that argues if an AI walks and talks like a therapist, it should be regulated like one, regardless of the disclaimer in the Terms of Service.</p><p><br></p><p>We discuss:</p><p>- The tragic real-world consequences of unregulated AI therapy.</p><p>- The &#34;Intended Purpose&#34; loophole that tech giants use.</p><p>- How hidden system prompts prove medical intent.</p><p>- The pragmatic path forward for safe Mental Health AI.</p><p><br></p><p>Reference:</p><p>- Link: https://www.nature.com/articles/s41746-025-02175-z</p><p>- Title: If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck</p><p>- Authors: Max Ostermann, Oscar Freyer, F. Gerrik Verhees, Jakob Nikolas Kather &amp; Stephen Gilbert </p><p><br></p><p>If you&#39;re building in Health AI, you can&#39;t afford to miss this regulatory shift.</p><p><br></p><p>#HealthAI #DigitalHealth #MedTech #AIRegulation #MentalHealth #ChatGPT #GenerativeAI #ClinicalSafety #TheHealthAIBrief #MedicalDevices #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Are we letting &amp;#34;entertainment&amp;#34; bots practice medicine without a license? 🦆🏥&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this week&amp;#39;s episode, we break down a provocative new paper from npj Digital Medicine that argues if an AI walks and talks like a therapist, it should be regulated like one, regardless of the disclaimer in the Terms of Service.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We discuss:&lt;/p&gt;&lt;p&gt;- The tragic real-world consequences of unregulated AI therapy.&lt;/p&gt;&lt;p&gt;- The &amp;#34;Intended Purpose&amp;#34; loophole that tech giants use.&lt;/p&gt;&lt;p&gt;- How hidden system prompts prove medical intent.&lt;/p&gt;&lt;p&gt;- The pragmatic path forward for safe Mental Health AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Reference:&lt;/p&gt;&lt;p&gt;- Link: https://www.nature.com/articles/s41746-025-02175-z&lt;/p&gt;&lt;p&gt;- Title: If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck&lt;/p&gt;&lt;p&gt;- Authors: Max Ostermann, Oscar Freyer, F. Gerrik Verhees, Jakob Nikolas Kather &amp;amp; Stephen Gilbert &lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;If you&amp;#39;re building in Health AI, you can&amp;#39;t afford to miss this regulatory shift.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #MedTech #AIRegulation #MentalHealth #ChatGPT #GenerativeAI #ClinicalSafety #TheHealthAIBrief #MedicalDevices #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5088653" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/bb51dc24-851d-459f-a476-99e1c0c01de2/stream.mp3"/>
                
                <guid isPermaLink="false">5866813d-ce47-4356-8770-30b0477f5902</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/bb51dc24-851d-459f-a476-99e1c0c01de2</link>
                <pubDate>Mon, 22 Dec 2025 07:00:01 &#43;0000</pubDate>
                <itunes:duration>318</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>044 RNNs - AI With a Memory for Sequences</itunes:title>
                <title>044 RNNs - AI With a Memory for Sequences</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Medicine is a story, not a snapshot. We explain Recurrent Neural Networks (RNNs), the algorithms that can read ECGs, track vital trends, and understand sequential clinical data.</p><p><br></p><p>#TimeSeries #ECG #DigitalHealth #RNN #PredictiveAnalytics #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Medicine is a story, not a snapshot. We explain Recurrent Neural Networks (RNNs), the algorithms that can read ECGs, track vital trends, and understand sequential clinical data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#TimeSeries #ECG #DigitalHealth #RNN #PredictiveAnalytics #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2472646" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a6b97338-1333-48a4-b6ce-00d14ae18a53/stream.mp3"/>
                
                <guid isPermaLink="false">ce12cdeb-2766-43f7-8f02-c0928583ba09</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a6b97338-1333-48a4-b6ce-00d14ae18a53</link>
                <pubDate>Thu, 18 Dec 2025 07:00:03 &#43;0000</pubDate>
                <itunes:duration>154</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>043 CNNs - How AI Sees Medical Images</itunes:title>
                <title>043 CNNs - How AI Sees Medical Images</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>From X-rays to dermatology, Convolutional Neural Networks (CNNs) are the eyes of medical AI. We explain the &#34;sliding window&#34; mechanism and how filters turn pixels into pathology.</p><p><br></p><p>#RadiologyAI #CNN #DeepLearning #MedicalImaging #DermAI #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;From X-rays to dermatology, Convolutional Neural Networks (CNNs) are the eyes of medical AI. We explain the &amp;#34;sliding window&amp;#34; mechanism and how filters turn pixels into pathology.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#RadiologyAI #CNN #DeepLearning #MedicalImaging #DermAI #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2848809" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a79d58df-9960-4b66-990c-f45571fa3bd3/stream.mp3"/>
                
                <guid isPermaLink="false">f4a9b1e8-f205-420a-9e93-bcd68c1fe475</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a79d58df-9960-4b66-990c-f45571fa3bd3</link>
                <pubDate>Tue, 16 Dec 2025 07:05:05 &#43;0000</pubDate>
                <itunes:duration>178</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>042 Activation Functions - To Fire or Not To Fire</itunes:title>
                <title>042 Activation Functions - To Fire or Not To Fire</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How does AI capture the complex, non-linear reality of human biology? Today we break down Activation Functions, the mathematical trigger that decides if a neuron fires, enabling deep learning to see patterns.</p><p><br></p><p>#DeepLearning #CardiologyAI #ECG #MedTech #FutureOfMedicine #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How does AI capture the complex, non-linear reality of human biology? Today we break down Activation Functions, the mathematical trigger that decides if a neuron fires, enabling deep learning to see patterns.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DeepLearning #CardiologyAI #ECG #MedTech #FutureOfMedicine #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2483095" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/64061b80-eba8-4346-a290-cdab4f464051/stream.mp3"/>
                
                <guid isPermaLink="false">160716d4-cc23-4130-ac71-c5fea829a9d7</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/64061b80-eba8-4346-a290-cdab4f464051</link>
                <pubDate>Fri, 12 Dec 2025 07:05:02 &#43;0000</pubDate>
                <itunes:duration>155</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>041 Neurons &amp; Weights - The Synapse of AI</itunes:title>
                <title>041 Neurons &amp; Weights - The Synapse of AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Deep learning isn&#39;t magic; it&#39;s math. Today we explain the &#34;Neuron&#34; and &#34;Weight&#34;, the basic building blocks that allow AI models to learn, and the concept of Backpropagation.</p><p><br></p><p>#DeepLearning #NeuralNetworks #AIExplained #MedicalAI #Doctors #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Deep learning isn&amp;#39;t magic; it&amp;#39;s math. Today we explain the &amp;#34;Neuron&amp;#34; and &amp;#34;Weight&amp;#34;, the basic building blocks that allow AI models to learn, and the concept of Backpropagation.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DeepLearning #NeuralNetworks #AIExplained #MedicalAI #Doctors #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2307552" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/62684bd6-d0fd-4bfa-9ed2-16cb56176a8f/stream.mp3"/>
                
                <guid isPermaLink="false">62bea375-49a6-4118-81ba-2bca9bd9dd20</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/62684bd6-d0fd-4bfa-9ed2-16cb56176a8f</link>
                <pubDate>Thu, 11 Dec 2025 07:35:43 &#43;0000</pubDate>
                <itunes:duration>144</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>040 K-Means - The Automated Triage Nurse</itunes:title>
                <title>040 K-Means - The Automated Triage Nurse</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How do we find new disease subtypes in massive datasets? We explain K-Means Clustering and the power of unsupervised learning in discovering new patient phenotypes.</p><p><br></p><p>#PrecisionMedicine #BigData #Clustering #Sepsis #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How do we find new disease subtypes in massive datasets? We explain K-Means Clustering and the power of unsupervised learning in discovering new patient phenotypes.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#PrecisionMedicine #BigData #Clustering #Sepsis #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2398667" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a0c3b381-2413-4b9a-983d-2321f164cf8e/stream.mp3"/>
                
                <guid isPermaLink="false">3390d1a6-f065-42fb-9b3d-e8a03b615015</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a0c3b381-2413-4b9a-983d-2321f164cf8e</link>
                <pubDate>Tue, 09 Dec 2025 07:05:03 &#43;0000</pubDate>
                <itunes:duration>149</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Beyond the Benchmark - How do we test a &#39;superhuman&#39; doctor</itunes:title>
                <title>Beyond the Benchmark - How do we test a &#39;superhuman&#39; doctor</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Reference: Gallifant, J. &amp; Bitterman, D.S. (2025). Humanity’s Next Medical Exam: Preparing to Evaluate Superhuman Systems. NEJM AI, 2(11). DOI: 10.1056/AIe2501008</p><p><br></p><p>When an AI scores 100% on a medical exam but can&#39;t navigate a hospital ward, is it really a doctor?</p><p><br></p><p>Today, we break down a new editorial from NEJM AI by Gallifant and Bitterman. We explore the transition from &#34;recall&#34; to &#34;reasoning&#34; and why the future of AI safety lies in &#34;Interactive Interrogation&#34; and high-fidelity sandboxes.</p><p><br></p><p>The models are becoming superhuman. It’s time our tests caught up.</p><p><br></p><p>Further recommended listening: https://www.youtube.com/watch?v=yQLOicn2vPU</p><p><br></p><p>#ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Reference: Gallifant, J. &amp;amp; Bitterman, D.S. (2025). Humanity’s Next Medical Exam: Preparing to Evaluate Superhuman Systems. NEJM AI, 2(11). DOI: 10.1056/AIe2501008&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;When an AI scores 100% on a medical exam but can&amp;#39;t navigate a hospital ward, is it really a doctor?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Today, we break down a new editorial from NEJM AI by Gallifant and Bitterman. We explore the transition from &amp;#34;recall&amp;#34; to &amp;#34;reasoning&amp;#34; and why the future of AI safety lies in &amp;#34;Interactive Interrogation&amp;#34; and high-fidelity sandboxes.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;The models are becoming superhuman. It’s time our tests caught up.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Further recommended listening: https://www.youtube.com/watch?v=yQLOicn2vPU&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5842651" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4ac2062d-1807-42fe-9332-7c2c06373e71/stream.mp3"/>
                
                <guid isPermaLink="false">da5ead0d-29f7-4efa-9b6c-af1f045fd018</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4ac2062d-1807-42fe-9332-7c2c06373e71</link>
                <pubDate>Mon, 08 Dec 2025 07:05:21 &#43;0000</pubDate>
                <itunes:duration>365</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>039 SVMs - Drawing the Clearest Line Between Sick and Healthy</itunes:title>
                <title>039 SVMs - Drawing the Clearest Line Between Sick and Healthy</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>When the diagnosis isn&#39;t black and white, how does AI decide? Today we explore Support Vector Machines (SVMs), the &#34;Kernel Trick,&#34; and how they manage the edge cases in medical data.</p><p><br></p><p>#SVM #Diagnostics #MedicalImaging #DataScience #HealthAI #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;When the diagnosis isn&amp;#39;t black and white, how does AI decide? Today we explore Support Vector Machines (SVMs), the &amp;#34;Kernel Trick,&amp;#34; and how they manage the edge cases in medical data.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#SVM #Diagnostics #MedicalImaging #DataScience #HealthAI #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2488946" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/99a84190-7039-4883-81f7-f0981265a750/stream.mp3"/>
                
                <guid isPermaLink="false">cdf05cf7-f92f-4028-b0fc-a51be6dd93bb</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/99a84190-7039-4883-81f7-f0981265a750</link>
                <pubDate>Thu, 04 Dec 2025 07:05:12 &#43;0000</pubDate>
                <itunes:duration>155</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Verdict on AI Scribes - Analysing their First Major RCTs</itunes:title>
                <title>The Verdict on AI Scribes - Analysing their First Major RCTs</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Everyone is talking about ambient AI scribes, but until now, we’ve relied on marketing hype and small pilots. This week, we dive into two landmark Randomized Controlled Trials published in NEJM AI featuring Abridge, DAX Copilot, and Nabla. The results? Complicated. We explore why these tools might be solving burnout without actually saving time, the hidden &#34;measurement trap&#34; in electronic health records, and why the real revolution isn&#39;t about writing notes—it’s about what happens next.</p><p><br></p><p>References:</p><p>The Editorial: AI Scribes Are Not Productivity Tools (Yet)</p><p>Authors: Eileen Kim, Vincent X. Liu, and Karandeep Singh</p><p>NEJM AI. 2025;2(12).</p><p>DOI: 10.1056/AIe2501051</p><p><br></p><p>The UCLA Study (Comparing DAX Copilot, Nabla, and Control): Ambient AI Scribes in Clinical Practice: A Randomized Trial</p><p>Authors: Paul J. Lukac, William Turner, Sitaram Vangala, Aaron T. Chin, Joshua Khalili, Ya-Chen Tina Shih, Catherine Sarkisian, Eric M. Cheng, and John N. Mafi</p><p>NEJM AI. 2025;2(12).</p><p>DOI: 10.1056/AIoa2501000</p><p><br></p><p>The UW Health Study (Testing Abridge): A Pragmatic Randomized Controlled Trial of Ambient Artificial Intelligence to Improve Health Practitioner Well-Being</p><p>Authors: Majid Afshar, Mary Ryan Baumann, Felice Resnik, Josie Hintzke, Anne Gravel Sullivan, Graham Wills, Kayla Lemmon, Jason Dambach, Leigh Ann Mrotek, Mariah Quinn, Kirsten Abramson, Peter Kleinschmidt, Thomas B. Brazelton, Margaret A. Leaf, Heidi Twedt, David Kunstman, Brian Patterson, Frank Liao, Stacy Rasmussen, Elizabeth S. Burnside, Cherodeep Goswami, and Joel Gordon</p><p>NEJM AI. 2025;2(12).</p><p>DOI: 10.1056/AIoa2500945</p><p><br></p><p>#HealthAI #DigitalHealth #ClinicianBurnout #MedTech #NEJMAI #AmbientScribes #NHS #HealthcareInnovation #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Everyone is talking about ambient AI scribes, but until now, we’ve relied on marketing hype and small pilots. This week, we dive into two landmark Randomized Controlled Trials published in NEJM AI featuring Abridge, DAX Copilot, and Nabla. The results? Complicated. We explore why these tools might be solving burnout without actually saving time, the hidden &amp;#34;measurement trap&amp;#34; in electronic health records, and why the real revolution isn&amp;#39;t about writing notes—it’s about what happens next.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;References:&lt;/p&gt;&lt;p&gt;The Editorial: AI Scribes Are Not Productivity Tools (Yet)&lt;/p&gt;&lt;p&gt;Authors: Eileen Kim, Vincent X. Liu, and Karandeep Singh&lt;/p&gt;&lt;p&gt;NEJM AI. 2025;2(12).&lt;/p&gt;&lt;p&gt;DOI: 10.1056/AIe2501051&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;The UCLA Study (Comparing DAX Copilot, Nabla, and Control): Ambient AI Scribes in Clinical Practice: A Randomized Trial&lt;/p&gt;&lt;p&gt;Authors: Paul J. Lukac, William Turner, Sitaram Vangala, Aaron T. Chin, Joshua Khalili, Ya-Chen Tina Shih, Catherine Sarkisian, Eric M. Cheng, and John N. Mafi&lt;/p&gt;&lt;p&gt;NEJM AI. 2025;2(12).&lt;/p&gt;&lt;p&gt;DOI: 10.1056/AIoa2501000&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;The UW Health Study (Testing Abridge): A Pragmatic Randomized Controlled Trial of Ambient Artificial Intelligence to Improve Health Practitioner Well-Being&lt;/p&gt;&lt;p&gt;Authors: Majid Afshar, Mary Ryan Baumann, Felice Resnik, Josie Hintzke, Anne Gravel Sullivan, Graham Wills, Kayla Lemmon, Jason Dambach, Leigh Ann Mrotek, Mariah Quinn, Kirsten Abramson, Peter Kleinschmidt, Thomas B. Brazelton, Margaret A. Leaf, Heidi Twedt, David Kunstman, Brian Patterson, Frank Liao, Stacy Rasmussen, Elizabeth S. Burnside, Cherodeep Goswami, and Joel Gordon&lt;/p&gt;&lt;p&gt;NEJM AI. 2025;2(12).&lt;/p&gt;&lt;p&gt;DOI: 10.1056/AIoa2500945&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ClinicianBurnout #MedTech #NEJMAI #AmbientScribes #NHS #HealthcareInnovation #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5889044" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/18321c78-901e-4912-b665-ffb88ad31d40/stream.mp3"/>
                
                <guid isPermaLink="false">2b94f726-28ff-4111-a6b5-1752e7806910</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/18321c78-901e-4912-b665-ffb88ad31d40</link>
                <pubDate>Wed, 03 Dec 2025 07:04:23 &#43;0000</pubDate>
                <itunes:duration>368</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>038 Random Forests - The Wisdom of the MDT</itunes:title>
                <title>038 Random Forests - The Wisdom of the MDT</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Why ask one algorithm when you can ask a thousand? Random Forests and how &#34;ensemble learning&#34; mimics the reliability of a Multi-Disciplinary Team meeting to solve complex cases.</p><p><br></p><p>#RandomForest #MachineLearning #HealthTech #ClinicalDiagnosis #AI #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Why ask one algorithm when you can ask a thousand? Random Forests and how &amp;#34;ensemble learning&amp;#34; mimics the reliability of a Multi-Disciplinary Team meeting to solve complex cases.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#RandomForest #MachineLearning #HealthTech #ClinicalDiagnosis #AI #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2483513" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/e855e0d0-def8-407d-a31b-e7f5ab21f870/stream.mp3"/>
                
                <guid isPermaLink="false">ae039942-6e1f-4661-b086-2f61bf25aa01</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/e855e0d0-def8-407d-a31b-e7f5ab21f870</link>
                <pubDate>Tue, 02 Dec 2025 07:05:55 &#43;0000</pubDate>
                <itunes:duration>155</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>037 Decision Trees - The Algorithms You Already Use</itunes:title>
                <title>037 Decision Trees - The Algorithms You Already Use</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Not all AI is a black box. In this episode, we explain Decision Trees, the transparent, flowchart-style algorithms that mirror clinical reasoning, and why they sometimes &#34;overfit&#34; the textbook.</p><p><br></p><p>#DecisionTrees #ClinicalDecisionSupport #AIinMedicine #DigitalHealth #MedTech #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Not all AI is a black box. In this episode, we explain Decision Trees, the transparent, flowchart-style algorithms that mirror clinical reasoning, and why they sometimes &amp;#34;overfit&amp;#34; the textbook.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#DecisionTrees #ClinicalDecisionSupport #AIinMedicine #DigitalHealth #MedTech #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2090631" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ff394e1b-58d4-4c25-b431-c57752d162ff/stream.mp3"/>
                
                <guid isPermaLink="false">f2cad609-0e07-4fde-8a39-6efa79917f93</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ff394e1b-58d4-4c25-b431-c57752d162ff</link>
                <pubDate>Thu, 27 Nov 2025 07:05:27 &#43;0000</pubDate>
                <itunes:duration>130</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>036 Regression - The Statistical Bedrock of AI</itunes:title>
                <title>036 Regression - The Statistical Bedrock of AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Here we break down the difference between linear and logistic regression, their massive role in current guidelines, and why their transparency makes them a clinical favourite.</p><p><br></p><p>#MedEd #HealthAI #EvidenceBasedMedicine #Regression #ClinicalData #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Here we break down the difference between linear and logistic regression, their massive role in current guidelines, and why their transparency makes them a clinical favourite.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#MedEd #HealthAI #EvidenceBasedMedicine #Regression #ClinicalData #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2723422" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/5ddc680f-7e9f-46d1-be83-0fc2ea69bab7/stream.mp3"/>
                
                <guid isPermaLink="false">52852ee9-482a-433a-880a-3e892fe7e974</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/5ddc680f-7e9f-46d1-be83-0fc2ea69bab7</link>
                <pubDate>Tue, 25 Nov 2025 07:05:19 &#43;0000</pubDate>
                <itunes:duration>170</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>035 The Architecture Shift - From Mechanics to Models</itunes:title>
                <title>035 The Architecture Shift - From Mechanics to Models</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>We’ve covered the &#34;physiology&#34; of AI learning. Now we move to the &#34;anatomy.&#34; We are shifting focus from how models learn to the specific architectures designed for medical tasks. The coming episodes will dissect the differences between the AI that reads X-rays (CNNs) and the AI that writes notes (LLMs) and why clinicians must match the model to the medical problem.</p><p><br></p><p>#HealthAI #MedEd #DigitalHealth #AIArchitecture #MachineLearning #MedicalInnovation #ClinicalAI #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;We’ve covered the &amp;#34;physiology&amp;#34; of AI learning. Now we move to the &amp;#34;anatomy.&amp;#34; We are shifting focus from how models learn to the specific architectures designed for medical tasks. The coming episodes will dissect the differences between the AI that reads X-rays (CNNs) and the AI that writes notes (LLMs) and why clinicians must match the model to the medical problem.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #MedEd #DigitalHealth #AIArchitecture #MachineLearning #MedicalInnovation #ClinicalAI #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2254471" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/c4b8da3b-7f4c-42ba-b2dd-28ed7568aba6/stream.mp3"/>
                
                <guid isPermaLink="false">83de9d84-de09-431c-a880-1baadfd27dd2</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/c4b8da3b-7f4c-42ba-b2dd-28ed7568aba6</link>
                <pubDate>Mon, 24 Nov 2025 07:00:29 &#43;0000</pubDate>
                <itunes:duration>140</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The WHO AI Report and Why Governance is Stalling Health AI</itunes:title>
                <title>The WHO AI Report and Why Governance is Stalling Health AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Today we break down the massive new WHO report on AI readiness across 50 European countries.</p><p>There is big ambition, but the gaps are glaring.</p><p>- Only 8% of countries have a health-specific AI strategy.</p><p>- 86% cite &#34;legal uncertainty&#34; as the top barrier.</p><p>- Less than a quarter offer AI training to clinicians.</p><p><br></p><p>We explore the hurdles, the brilliant case studies from Finland, Turkey, and the UK, and propose a pragmatic path forward. It’s time to move from &#34;pilotitis&#34; to policy.</p><p><br></p><p>Link to the full report: https://iris.who.int/items/84f1c491-c9d0-4bb3-83cf-3a6f4bf3c3b1</p><p><br></p><p>#HealthAI #DigitalHealth #WHO #Europe #AIStrategy #ClinicalGovernance #NHS #HealthTech #MedicalAI #Policy #WorkforceTraining #DataPrivacy #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Today we break down the massive new WHO report on AI readiness across 50 European countries.&lt;/p&gt;&lt;p&gt;There is big ambition, but the gaps are glaring.&lt;/p&gt;&lt;p&gt;- Only 8% of countries have a health-specific AI strategy.&lt;/p&gt;&lt;p&gt;- 86% cite &amp;#34;legal uncertainty&amp;#34; as the top barrier.&lt;/p&gt;&lt;p&gt;- Less than a quarter offer AI training to clinicians.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We explore the hurdles, the brilliant case studies from Finland, Turkey, and the UK, and propose a pragmatic path forward. It’s time to move from &amp;#34;pilotitis&amp;#34; to policy.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Link to the full report: https://iris.who.int/items/84f1c491-c9d0-4bb3-83cf-3a6f4bf3c3b1&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #WHO #Europe #AIStrategy #ClinicalGovernance #NHS #HealthTech #MedicalAI #Policy #WorkforceTraining #DataPrivacy #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5564290" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/e6e4c11c-8fe5-4810-aa95-575c007a3b7c/stream.mp3"/>
                
                <guid isPermaLink="false">e9036e75-dc3c-4ccb-9d0f-5dbefd70da6d</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/e6e4c11c-8fe5-4810-aa95-575c007a3b7c</link>
                <pubDate>Fri, 21 Nov 2025 07:05:59 &#43;0000</pubDate>
                <itunes:duration>347</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>034 Regularisation - the antidote to overfitting</itunes:title>
                <title>034 Regularisation - the antidote to overfitting</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How do you stop an AI from overfitting and just memorising data? You penalise it for being too complex. This is &#34;regularisation,&#34; the primary antidote to the critical problem of overfitting. In this episode, we explain how regularisation works and why looking for it in a paper&#39;s methodology section is a quick way to judge the rigour and quality of the research.</p><p><br></p><p>#Regularization #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #Overfitting #DataScience #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How do you stop an AI from overfitting and just memorising data? You penalise it for being too complex. This is &amp;#34;regularisation,&amp;#34; the primary antidote to the critical problem of overfitting. In this episode, we explain how regularisation works and why looking for it in a paper&amp;#39;s methodology section is a quick way to judge the rigour and quality of the research.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Regularization #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #Overfitting #DataScience #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2898546" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1e149f02-a5c2-49ff-9082-36a4d24febc6/stream.mp3"/>
                
                <guid isPermaLink="false">87a0e32d-0d61-43ad-b2aa-124bd0a1aaff</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1e149f02-a5c2-49ff-9082-36a4d24febc6</link>
                <pubDate>Thu, 20 Nov 2025 07:02:57 &#43;0000</pubDate>
                <itunes:duration>181</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>033 The Bias-Variance Tradeoff - The Specialist vs The Generalist</itunes:title>
                <title>033 The Bias-Variance Tradeoff - The Specialist vs The Generalist</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Underfitting (too simple) vs. Overfitting (too complex). These two problems are linked by the &#34;Bias-Variance Tradeoff,&#34; the central balancing act of all AI. In this episode, we explain this crucial concept, what bias and variance mean in this context, and how it gives you a powerful new framework for understanding the nature of any AI model&#39;s errors.</p><p><br></p><p>#BiasVarianceTradeoff #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DataScience #CriticalAppraisal #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Underfitting (too simple) vs. Overfitting (too complex). These two problems are linked by the &amp;#34;Bias-Variance Tradeoff,&amp;#34; the central balancing act of all AI. In this episode, we explain this crucial concept, what bias and variance mean in this context, and how it gives you a powerful new framework for understanding the nature of any AI model&amp;#39;s errors.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#BiasVarianceTradeoff #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DataScience #CriticalAppraisal #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2951209" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1b55e6f7-6cca-43dd-9e62-ffd222e8263f/stream.mp3"/>
                
                <guid isPermaLink="false">3bf77ac9-12f1-448d-9794-95d89f6ae5b0</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1b55e6f7-6cca-43dd-9e62-ffd222e8263f</link>
                <pubDate>Tue, 18 Nov 2025 07:00:31 &#43;0000</pubDate>
                <itunes:duration>184</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>032 Underfitting - the overly simplistic intern</itunes:title>
                <title>032 Underfitting - the overly simplistic intern</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>We&#39;ve all heard of AI models that learn too much, but what about models that fail to learn at all? This is &#34;underfitting&#34; – the opposite of overfitting – where a model is too simple for the clinical task. In this episode, we explain this fundamental concept and what it tells you about the AI development process when you see it.</p><p><br></p><p>#Underfitting #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DataScience #ModelTraining #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;We&amp;#39;ve all heard of AI models that learn too much, but what about models that fail to learn at all? This is &amp;#34;underfitting&amp;#34; – the opposite of overfitting – where a model is too simple for the clinical task. In this episode, we explain this fundamental concept and what it tells you about the AI development process when you see it.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Underfitting #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DataScience #ModelTraining #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2513188" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1c0ee294-3a4f-4249-b0a6-7374ddf61821/stream.mp3"/>
                
                <guid isPermaLink="false">3742f019-14c5-46c4-bc2a-edca1274ae09</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1c0ee294-3a4f-4249-b0a6-7374ddf61821</link>
                <pubDate>Thu, 13 Nov 2025 07:08:30 &#43;0000</pubDate>
                <itunes:duration>157</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>031 Overfitting - the brittle genius</itunes:title>
                <title>031 Overfitting - the brittle genius</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Why do some AI models get 99% accuracy in a study but fail in the real world? The answer is &#34;overfitting&#34; – a critical flaw where a model memorises data instead of learning from it. In this episode, we use the analogy of an over-crammed student to explain what overfitting is and give you the key question you must ask to spot it in any research paper.</p><p><br></p><p>#Overfitting #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #CriticalAppraisal #DataScience #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Why do some AI models get 99% accuracy in a study but fail in the real world? The answer is &amp;#34;overfitting&amp;#34; – a critical flaw where a model memorises data instead of learning from it. In this episode, we use the analogy of an over-crammed student to explain what overfitting is and give you the key question you must ask to spot it in any research paper.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Overfitting #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #CriticalAppraisal #DataScience #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2784444" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2cc0b7ec-7581-47ef-98c2-08532596434e/stream.mp3"/>
                
                <guid isPermaLink="false">a0a82c18-73cd-420c-9ebe-f8551b9ee3bd</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2cc0b7ec-7581-47ef-98c2-08532596434e</link>
                <pubDate>Wed, 12 Nov 2025 07:15:28 &#43;0000</pubDate>
                <itunes:duration>174</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What DeepMind&#39;s AI That Mastered Video Games Taught Us for Building Better Health AI</itunes:title>
                <title>What DeepMind&#39;s AI That Mastered Video Games Taught Us for Building Better Health AI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Paper - Human-level control through deep reinforcement learning, by Mnih et al 2015, Nature</p><p>Link - https://www.nature.com/articles/nature14236</p><p><br></p><p>This week on The Health AI Brief, we&#39;re going back to basics. We revisit the landmark 2015 DeepMind paper that taught an AI to master 49 different Atari games from scratch.</p><p><br></p><p>Find out why this wasn&#39;t just about video games, but a foundational breakthrough in making AI stable and generalisable. We explore the core concepts of &#34;experience replay&#34; and &#34;target networks&#34; and discuss why they hold crucial lessons for building robust, effective, and trustworthy AI systems for the complex world of healthcare.</p><p><br></p><p>#HealthAI #ArtificialIntelligence #DeepLearning #ReinforcementLearning #DeepMind #DigitalHealth #ClinicalAI #MedTech #AIinHealthcare #MachineLearning #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Paper - Human-level control through deep reinforcement learning, by Mnih et al 2015, Nature&lt;/p&gt;&lt;p&gt;Link - https://www.nature.com/articles/nature14236&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This week on The Health AI Brief, we&amp;#39;re going back to basics. We revisit the landmark 2015 DeepMind paper that taught an AI to master 49 different Atari games from scratch.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Find out why this wasn&amp;#39;t just about video games, but a foundational breakthrough in making AI stable and generalisable. We explore the core concepts of &amp;#34;experience replay&amp;#34; and &amp;#34;target networks&amp;#34; and discuss why they hold crucial lessons for building robust, effective, and trustworthy AI systems for the complex world of healthcare.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #ArtificialIntelligence #DeepLearning #ReinforcementLearning #DeepMind #DigitalHealth #ClinicalAI #MedTech #AIinHealthcare #MachineLearning #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5761149" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/fc218c63-db5e-4e04-acdb-9ae37e8992c6/stream.mp3"/>
                
                <guid isPermaLink="false">85184f0e-76b7-4dd2-bf4a-9a2aff8816c4</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/fc218c63-db5e-4e04-acdb-9ae37e8992c6</link>
                <pubDate>Tue, 11 Nov 2025 07:15:28 &#43;0000</pubDate>
                <itunes:duration>360</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>030 Actor-Critic Partnership - The Best of Both Worlds</itunes:title>
                <title>030 Actor-Critic Partnership - The Best of Both Worlds</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>What happens when you combine an AI that can &#39;do&#39; with an AI that can &#39;judge&#39;? You get an Actor-Critic partnership, the state-of-the-art in reinforcement learning. We explain how this &#39;trainee and consultant&#39; model is powering the next generation of dynamic, responsive medical AI.</p><p><br></p><p>#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #DeepLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;What happens when you combine an AI that can &amp;#39;do&amp;#39; with an AI that can &amp;#39;judge&amp;#39;? You get an Actor-Critic partnership, the state-of-the-art in reinforcement learning. We explain how this &amp;#39;trainee and consultant&amp;#39; model is powering the next generation of dynamic, responsive medical AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #DeepLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2368156" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b2d8a5cd-ddf4-4411-86bf-0fee9b59b4d3/stream.mp3"/>
                
                <guid isPermaLink="false">ef70fbce-97e0-49f0-a89f-33a53d79f4d8</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b2d8a5cd-ddf4-4411-86bf-0fee9b59b4d3</link>
                <pubDate>Fri, 07 Nov 2025 07:05:52 &#43;0000</pubDate>
                <itunes:duration>148</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>029 Policy-Based Methods - Learning How To Act Directly</itunes:title>
                <title>029 Policy-Based Methods - Learning How To Act Directly</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Forget judging, some AIs learn by doing. &#39;Policy Gradient Methods&#39; create an &#39;AI Actor&#39; that learns skills directly, much like a person learning to suture. This is the technology behind AI in robotics and precision medicine. Here&#39;s what you need to know.</p><p><br></p><p>#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #RoboticSurgery #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Forget judging, some AIs learn by doing. &amp;#39;Policy Gradient Methods&amp;#39; create an &amp;#39;AI Actor&amp;#39; that learns skills directly, much like a person learning to suture. This is the technology behind AI in robotics and precision medicine. Here&amp;#39;s what you need to know.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #RoboticSurgery #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2466377" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/fb19da72-dab0-4a9f-b45e-41f3b3ea9517/stream.mp3"/>
                
                <guid isPermaLink="false">5b255945-8819-4ce9-8391-96fe5fecf7ef</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/fb19da72-dab0-4a9f-b45e-41f3b3ea9517</link>
                <pubDate>Wed, 05 Nov 2025 07:15:45 &#43;0000</pubDate>
                <itunes:duration>154</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>When the Chatbot is More Humane Than the Doctor</itunes:title>
                <title>When the Chatbot is More Humane Than the Doctor</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>What happens when a patient decides an AI chatbot is more &#34;humane&#34; than their doctor? A powerful essay in The Guardian and Rest of World explores one mother&#39;s growing reliance on &#39;Dr. DeepSeek&#39;, an AI that is both a comforting, empathetic companion and a source of dangerous medical misinformation.</p><p><br></p><p>In this episode, we dissect the story&#39;s key themes: the &#34;push&#34; from an overburdened healthcare system, the &#34;pull&#34; of AI&#39;s infinite patience, and the peril of confident inaccuracy. We explore how technology is filling a void of loneliness and why a patient might knowingly choose a flawed answer over no answer at all.</p><p><br></p><p>Join us as we break down the essential takeaways for clinicians, AI engineers, and health-tech leaders. Is AI&#39;s rise in healthcare a symptom of a broken system, or a potential cure?</p><p><br></p><p>Article links:</p><p>https://restofworld.org/2025/ai-chatbot-china-sick/</p><p>https://www.theguardian.com/society/2025/oct/28/deepseek-is-humane-doctors-are-more-like-machines-my-mothers-worrying-reliance-on-ai-for-health-advice</p><p><br></p><p>#HealthAI #DigitalHealth #ClinicalAI #LLM #HealthcareInnovation #PatientExperience #MedTech #AIethics #DeepSeek #DoctorAI #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;What happens when a patient decides an AI chatbot is more &amp;#34;humane&amp;#34; than their doctor? A powerful essay in The Guardian and Rest of World explores one mother&amp;#39;s growing reliance on &amp;#39;Dr. DeepSeek&amp;#39;, an AI that is both a comforting, empathetic companion and a source of dangerous medical misinformation.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we dissect the story&amp;#39;s key themes: the &amp;#34;push&amp;#34; from an overburdened healthcare system, the &amp;#34;pull&amp;#34; of AI&amp;#39;s infinite patience, and the peril of confident inaccuracy. We explore how technology is filling a void of loneliness and why a patient might knowingly choose a flawed answer over no answer at all.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Join us as we break down the essential takeaways for clinicians, AI engineers, and health-tech leaders. Is AI&amp;#39;s rise in healthcare a symptom of a broken system, or a potential cure?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Article links:&lt;/p&gt;&lt;p&gt;https://restofworld.org/2025/ai-chatbot-china-sick/&lt;/p&gt;&lt;p&gt;https://www.theguardian.com/society/2025/oct/28/deepseek-is-humane-doctors-are-more-like-machines-my-mothers-worrying-reliance-on-ai-for-health-advice&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ClinicalAI #LLM #HealthcareInnovation #PatientExperience #MedTech #AIethics #DeepSeek #DoctorAI #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5540466" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1fdc2f73-4033-424c-a716-009a959027a7/stream.mp3"/>
                
                <guid isPermaLink="false">f8a698f0-ad12-49f6-a93a-b11eabb0676e</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1fdc2f73-4033-424c-a716-009a959027a7</link>
                <pubDate>Tue, 04 Nov 2025 08:30:50 &#43;0000</pubDate>
                <itunes:duration>346</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>028 Value-Based Methods and Their Limits - The World of Q-learning</itunes:title>
                <title>028 Value-Based Methods and Their Limits - The World of Q-learning</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Some AIs learn by becoming expert judges, calculating a score for every possible clinical decision before making a move. We explain value-based methods, the &#39;AI Critic,&#39; and why they excel at multiple-choice medicine but falter when the decisions are infinitely complex.</p><p><br></p><p>#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #DeepLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Some AIs learn by becoming expert judges, calculating a score for every possible clinical decision before making a move. We explain value-based methods, the &amp;#39;AI Critic,&amp;#39; and why they excel at multiple-choice medicine but falter when the decisions are infinitely complex.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #DeepLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2474318" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/819b3214-d1fa-46a4-8cc3-e46042bb806c/stream.mp3"/>
                
                <guid isPermaLink="false">cbe29fed-f288-4a8d-8739-bb8a22d99b21</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/819b3214-d1fa-46a4-8cc3-e46042bb806c</link>
                <pubDate>Thu, 30 Oct 2025 07:03:07 &#43;0000</pubDate>
                <itunes:duration>154</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Epic&#39;s New AI - A Crystal Ball for Medicine, or a Look in the Rear-View Mirror</itunes:title>
                <title>Epic&#39;s New AI - A Crystal Ball for Medicine, or a Look in the Rear-View Mirror</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Epic recently unveiled Comet, a new AI model trained on 118 million patient records to predict future health events. The scale is unprecedented, and its initial ability to outperform specialised models is a huge leap forward for clinical AI.</p><p><br></p><p>But what is it really learning from our messy, real-world data? In today&#39;s episode, we break down why Comet is a landmark achievement but also an important wake-up call. We explore the challenges of &#34;semantic drift&#34; and documentation artifacts, and why the model&#39;s success will ultimately depend on an organisation&#39;s own data quality.</p><p><br></p><p>Is Comet a true crystal ball, or a reflection of medicine&#39;s past?</p><p><br></p><p>Paper: Generative Medical Event Models Improve with Scale by Waxler et al</p><p>Link: https://arxiv.org/abs/2508.12104</p><p><br></p><p>#HealthAI #EpicComet #ClinicalAI #DataQuality #DigitalHealth #FoundationModels #RWE #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Epic recently unveiled Comet, a new AI model trained on 118 million patient records to predict future health events. The scale is unprecedented, and its initial ability to outperform specialised models is a huge leap forward for clinical AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;But what is it really learning from our messy, real-world data? In today&amp;#39;s episode, we break down why Comet is a landmark achievement but also an important wake-up call. We explore the challenges of &amp;#34;semantic drift&amp;#34; and documentation artifacts, and why the model&amp;#39;s success will ultimately depend on an organisation&amp;#39;s own data quality.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Is Comet a true crystal ball, or a reflection of medicine&amp;#39;s past?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Paper: Generative Medical Event Models Improve with Scale by Waxler et al&lt;/p&gt;&lt;p&gt;Link: https://arxiv.org/abs/2508.12104&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #EpicComet #ClinicalAI #DataQuality #DigitalHealth #FoundationModels #RWE #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3935085" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4b9c088d-dc20-478b-b88d-c9a39ad5c9bb/stream.mp3"/>
                
                <guid isPermaLink="false">1bcf8b01-7ad8-4fad-9705-473483669c01</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4b9c088d-dc20-478b-b88d-c9a39ad5c9bb</link>
                <pubDate>Wed, 29 Oct 2025 10:00:05 &#43;0000</pubDate>
                <itunes:duration>245</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>027 Curriculum learning - teaching an agent step-by-step</itunes:title>
                <title>027 Curriculum learning - teaching an agent step-by-step</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>You can&#39;t teach an AI complex medicine by throwing it in at the deep end. Curriculum learning applies the principles of medical school to AI, training models on simple tasks before moving to complex ones. Find out why this matters for building safe and effective clinical AI.</p><p><br></p><p>#HealthAI #DigitalHealth #ArtificialIntelligence #MachineLearning #CurriculumLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;You can&amp;#39;t teach an AI complex medicine by throwing it in at the deep end. Curriculum learning applies the principles of medical school to AI, training models on simple tasks before moving to complex ones. Find out why this matters for building safe and effective clinical AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ArtificialIntelligence #MachineLearning #CurriculumLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2681208" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/0a7787fd-5ba0-4ee6-be5d-2808ddb27677/stream.mp3"/>
                
                <guid isPermaLink="false">ad174d2c-d1da-4ffb-b701-e42f3e8294bd</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/0a7787fd-5ba0-4ee6-be5d-2808ddb27677</link>
                <pubDate>Tue, 28 Oct 2025 07:02:05 &#43;0000</pubDate>
                <itunes:duration>167</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>026 Exploration-Exploitation Dilemma</itunes:title>
                <title>026 Exploration-Exploitation Dilemma</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>An AI, like a clinician, faces a constant choice: stick with the proven treatment or explore a novel approach? In this episode, we break down the &#39;exploration-exploitation&#39; dilemma, a core concept in AI that has major implications for how we design and trust medical AI systems.</p><p>#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;An AI, like a clinician, faces a constant choice: stick with the proven treatment or explore a novel approach? In this episode, we break down the &amp;#39;exploration-exploitation&amp;#39; dilemma, a core concept in AI that has major implications for how we design and trust medical AI systems.&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #ArtificialIntelligence #ReinforcementLearning #ClinicalAI #MedEd #MedicalPodcast #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2635232" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/e42b18ee-6fcf-4c3a-96c0-1ebd5d89c9bf/stream.mp3"/>
                
                <guid isPermaLink="false">10fc4728-39b8-4233-850b-aa6280a52e7c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/e42b18ee-6fcf-4c3a-96c0-1ebd5d89c9bf</link>
                <pubDate>Fri, 24 Oct 2025 06:00:54 &#43;0000</pubDate>
                <itunes:duration>164</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI That Speaks the Language of the Cell and Contributes to Biomedical Discoveries</itunes:title>
                <title>AI That Speaks the Language of the Cell and Contributes to Biomedical Discoveries</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Paper: Scaling Large Language Models for Next-Generation Single-Cell Analysis by Rizvi et al</p><p>Paper link: https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2</p><p>This week we&#39;re covering recent research presenting C2S-Scale, a new model from researchers at Yale and Google that teaches Large Language Models the &#34;language of the cell.&#34;</p><p><br></p><p>By translating complex genomic data into &#34;cell sentences,&#34; this AI can predict cellular behaviour, answer complex biological questions, and has even made a novel, lab-validated discovery that could enhance cancer immunotherapy.</p><p>But what&#39;s the real hurdle between this incredible research tool and real-world clinical impact? We break down the ambition, the obstacles, and the concrete steps needed to build clinical trust in arguably one of the most exciting developments in Health AI this year.</p><p><br></p><p>#HealthAI #ArtificialIntelligence #Genomics #SingleCell #DrugDiscovery #LLM #BioTech #DigitalHealth #MedicalInnovation #C2SScale #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p><p><br></p>]]></description>
                <content:encoded>&lt;p&gt;Paper: Scaling Large Language Models for Next-Generation Single-Cell Analysis by Rizvi et al&lt;/p&gt;&lt;p&gt;Paper link: https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2&lt;/p&gt;&lt;p&gt;This week we&amp;#39;re covering recent research presenting C2S-Scale, a new model from researchers at Yale and Google that teaches Large Language Models the &amp;#34;language of the cell.&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;By translating complex genomic data into &amp;#34;cell sentences,&amp;#34; this AI can predict cellular behaviour, answer complex biological questions, and has even made a novel, lab-validated discovery that could enhance cancer immunotherapy.&lt;/p&gt;&lt;p&gt;But what&amp;#39;s the real hurdle between this incredible research tool and real-world clinical impact? We break down the ambition, the obstacles, and the concrete steps needed to build clinical trust in arguably one of the most exciting developments in Health AI this year.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #ArtificialIntelligence #Genomics #SingleCell #DrugDiscovery #LLM #BioTech #DigitalHealth #MedicalInnovation #C2SScale #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="5583516" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/4358372a-6e26-40a3-97af-eb0d6cd1ffbc/stream.mp3"/>
                
                <guid isPermaLink="false">57c7b900-b797-4f19-84b8-0caaf593fdcf</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/4358372a-6e26-40a3-97af-eb0d6cd1ffbc</link>
                <pubDate>Wed, 22 Oct 2025 06:00:47 &#43;0000</pubDate>
                <itunes:duration>348</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>025 Reinforcement learning - rewards and punishments</itunes:title>
                <title>025 Reinforcement learning - rewards and punishments</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How does an AI like ChatGPT learn to be so helpful? The answer is &#34;Reinforcement Learning,&#34; a powerful method of learning through trial-and-error, rewards, and punishments. In this special extended episode, we break down how reinforcement learning works and explain RLHF, the key technique used to train the language models that are transforming our world.</p><p><br></p><p>#ReinforcementLearning #RLHF #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #LLM #ChatGPT #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How does an AI like ChatGPT learn to be so helpful? The answer is &amp;#34;Reinforcement Learning,&amp;#34; a powerful method of learning through trial-and-error, rewards, and punishments. In this special extended episode, we break down how reinforcement learning works and explain RLHF, the key technique used to train the language models that are transforming our world.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#ReinforcementLearning #RLHF #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #LLM #ChatGPT #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3815549" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1ad58434-4262-4b74-b516-e21301a11f02/stream.mp3"/>
                
                <guid isPermaLink="false">31a106a3-c73d-473c-af94-54c26e2f5b1c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1ad58434-4262-4b74-b516-e21301a11f02</link>
                <pubDate>Mon, 20 Oct 2025 06:15:34 &#43;0000</pubDate>
                <itunes:duration>238</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Charting a Course for Safe AI in Medicine to Prevent Us Flying Blind - Report from the JAMA Summit on Artificial Intelligence</itunes:title>
                <title>Charting a Course for Safe AI in Medicine to Prevent Us Flying Blind - Report from the JAMA Summit on Artificial Intelligence</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>We are rolling out powerful AI tools in hospitals and clinics at a breathtaking pace. But are they helping, or are they causing harm? A new report from the JAMA Summit on Artificial Intelligence reveals a key gap in our ability to answer that question. Featuring a stark warning from former FDA Commissioner Robert Califf, this episode breaks down why we are &#34;flying blind&#34; and explores the report&#39;s four-part blueprint to build a trustworthy and effective AI ecosystem. It&#39;s a call to action to move from building impressive models to building the transparent, collaborative framework needed to manage them responsibly.</p><p>#AIinHealthcare #DigitalHealth #PatientSafety #HealthTech #JAMA #ArtificialIntelligence #MedAI #ClinicalAI #AIethics #HealthcareInnovation #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;We are rolling out powerful AI tools in hospitals and clinics at a breathtaking pace. But are they helping, or are they causing harm? A new report from the JAMA Summit on Artificial Intelligence reveals a key gap in our ability to answer that question. Featuring a stark warning from former FDA Commissioner Robert Califf, this episode breaks down why we are &amp;#34;flying blind&amp;#34; and explores the report&amp;#39;s four-part blueprint to build a trustworthy and effective AI ecosystem. It&amp;#39;s a call to action to move from building impressive models to building the transparent, collaborative framework needed to manage them responsibly.&lt;/p&gt;&lt;p&gt;#AIinHealthcare #DigitalHealth #PatientSafety #HealthTech #JAMA #ArtificialIntelligence #MedAI #ClinicalAI #AIethics #HealthcareInnovation #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3524231" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ed73b12a-8dfe-41ec-b611-3d494a2ed0a8/stream.mp3"/>
                
                <guid isPermaLink="false">018ee7f5-c435-4903-94a3-5a9ab39ead93</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ed73b12a-8dfe-41ec-b611-3d494a2ed0a8</link>
                <pubDate>Sat, 18 Oct 2025 06:00:58 &#43;0000</pubDate>
                <itunes:duration>220</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>024 Efficient Learning and Smart Shortcuts for Medical AI - Transfer learning and Active learning</itunes:title>
                <title>024 Efficient Learning and Smart Shortcuts for Medical AI - Transfer learning and Active learning</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Why build an AI model from scratch when you can give it a head start? And why waste expert time on easy cases? This episode explores two powerful strategies for efficient AI development. Discover how Transfer Learning gives your model a foundation of pre-existing knowledge and how Active Learning creates a smart feedback loop where the AI asks for help on only the toughest cases.</p><p>#HealthAI #MedicalAI #TransferLearning #ActiveLearning #MachineLearning #AIinMedicine #DataEfficiency #HealthTech #ai in medicine Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Why build an AI model from scratch when you can give it a head start? And why waste expert time on easy cases? This episode explores two powerful strategies for efficient AI development. Discover how Transfer Learning gives your model a foundation of pre-existing knowledge and how Active Learning creates a smart feedback loop where the AI asks for help on only the toughest cases.&lt;/p&gt;&lt;p&gt;#HealthAI #MedicalAI #TransferLearning #ActiveLearning #MachineLearning #AIinMedicine #DataEfficiency #HealthTech #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3146814" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/40c40b0b-6a26-4d83-b6ad-addb70ff01e2/stream.mp3"/>
                
                <guid isPermaLink="false">c59d3d77-c1bf-411e-a57d-1aa5874bf7f7</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/40c40b0b-6a26-4d83-b6ad-addb70ff01e2</link>
                <pubDate>Thu, 16 Oct 2025 06:00:29 &#43;0000</pubDate>
                <itunes:duration>196</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>023 Self- and Semi-Supervised Learning - Learning from Imperfect Data</itunes:title>
                <title>023 Self- and Semi-Supervised Learning - Learning from Imperfect Data</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How can an AI learn to read a medical scan without a perfect, expert-labeled dataset? In the real world, data is messy. This episode dives into three ingenious techniques (semi-supervised, self-supervised, and weak supervision) that allow AI to learn from a little bit of expert guidance, teach itself from unlabeled data, or make sense of noisy, imprecise information.</p><p><br></p><p>#HealthAI #MedicalAI #SemiSupervisedLearning #SelfSupervisedLearning #WeakSupervision #MachineLearning #DataScience #DigitalHealth #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How can an AI learn to read a medical scan without a perfect, expert-labeled dataset? In the real world, data is messy. This episode dives into three ingenious techniques (semi-supervised, self-supervised, and weak supervision) that allow AI to learn from a little bit of expert guidance, teach itself from unlabeled data, or make sense of noisy, imprecise information.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #MedicalAI #SemiSupervisedLearning #SelfSupervisedLearning #WeakSupervision #MachineLearning #DataScience #DigitalHealth #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4171232" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1a37ce14-bb4a-4d6b-91de-25de66f2bb2c/stream.mp3"/>
                
                <guid isPermaLink="false">44f0f53f-ec0f-4f8e-a162-4669aa3310ad</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1a37ce14-bb4a-4d6b-91de-25de66f2bb2c</link>
                <pubDate>Tue, 14 Oct 2025 06:15:56 &#43;0000</pubDate>
                <itunes:duration>260</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>An AI Doctor Takes on the NEJM: Dr CaBot, the CPC-Bench AI and the Dawn of the Diagnostic Co-Pilot</itunes:title>
                <title>An AI Doctor Takes on the NEJM: Dr CaBot, the CPC-Bench AI and the Dawn of the Diagnostic Co-Pilot</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The New England Journal of Medicine just featured an AI, &#39;Dr. CaBot,&#39; as a guest expert in its legendary diagnostic challenge. This AI can not only find the right diagnosis but can reason and tell a compelling clinical story, sometimes more convincingly than human doctors.</p><p><br></p><p>But does this mean Dr. AI is ready for the ward? We explore the gap between a perfect, curated case and the messy reality of clinical practice, and make the case for the future of AI not as an oracle, but as a &#39;diagnostic co-pilot&#39; that helps every doctor reason like an expert.</p><p><br></p><p>References:</p><p>- NEJM case including Dr CaBot&#39;s synthesis: https://www.nejm.org/doi/full/10.1056/NEJMcpc2412539</p><p>- The project behind Dr CaBot: https://arxiv.org/abs/2509.12194</p><p>  - Advancing Medical Artificial Intelligence Using a Century of Cases by Buckley et al.</p><p><br></p><p>#HealthAI #MedicalAI #DigitalHealth #ClinicalReasoning #AIinMedicine #NEJM #FutureofMedicine #CPCBench #DrCaBot #ArtificialIntelligence #HealthTech #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;The New England Journal of Medicine just featured an AI, &amp;#39;Dr. CaBot,&amp;#39; as a guest expert in its legendary diagnostic challenge. This AI can not only find the right diagnosis but can reason and tell a compelling clinical story, sometimes more convincingly than human doctors.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;But does this mean Dr. AI is ready for the ward? We explore the gap between a perfect, curated case and the messy reality of clinical practice, and make the case for the future of AI not as an oracle, but as a &amp;#39;diagnostic co-pilot&amp;#39; that helps every doctor reason like an expert.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;References:&lt;/p&gt;&lt;p&gt;- NEJM case including Dr CaBot&amp;#39;s synthesis: https://www.nejm.org/doi/full/10.1056/NEJMcpc2412539&lt;/p&gt;&lt;p&gt;- The project behind Dr CaBot: https://arxiv.org/abs/2509.12194&lt;/p&gt;&lt;p&gt;  - Advancing Medical Artificial Intelligence Using a Century of Cases by Buckley et al.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #MedicalAI #DigitalHealth #ClinicalReasoning #AIinMedicine #NEJM #FutureofMedicine #CPCBench #DrCaBot #ArtificialIntelligence #HealthTech #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4480522" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6890b93c-3f72-4b2b-a59b-25ee35976929/stream.mp3"/>
                
                <guid isPermaLink="false">27a8fceb-8b44-4431-9332-9f30e0965097</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6890b93c-3f72-4b2b-a59b-25ee35976929</link>
                <pubDate>Fri, 10 Oct 2025 06:00:31 &#43;0000</pubDate>
                <itunes:duration>280</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>022 Unsupervised learning - finding the patterns we might not have seen</itunes:title>
                <title>022 Unsupervised learning - finding the patterns we might not have seen</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>What if an AI could find patterns in patient data that we&#39;ve never seen before? That&#39;s the power of &#34;unsupervised learning&#34;, a type of AI that learns without an answer key. In this episode, we explain how this method works, and why it&#39;s a powerful tool for discovering new patient subtypes and advancing personalised medicine.</p><p><br></p><p>#UnsupervisedLearning #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #PersonalizedMedicine #PrecisionMedicine #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;What if an AI could find patterns in patient data that we&amp;#39;ve never seen before? That&amp;#39;s the power of &amp;#34;unsupervised learning&amp;#34;, a type of AI that learns without an answer key. In this episode, we explain how this method works, and why it&amp;#39;s a powerful tool for discovering new patient subtypes and advancing personalised medicine.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#UnsupervisedLearning #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #PersonalizedMedicine #PrecisionMedicine #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2735960" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/b84c47a7-a56a-4c32-a4ac-6749b876ffb4/stream.mp3"/>
                
                <guid isPermaLink="false">ddc03ed7-839a-47ab-807b-bade79de2c21</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/b84c47a7-a56a-4c32-a4ac-6749b876ffb4</link>
                <pubDate>Thu, 09 Oct 2025 06:10:59 &#43;0000</pubDate>
                <itunes:duration>170</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>021 Supervised learning - like learning with flashcards</itunes:title>
                <title>021 Supervised learning - like learning with flashcards</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How do we teach an AI to read an ECG? The most common method is &#34;supervised learning,&#34; which is a lot like using flashcards with a medical student. In this episode, we explain this fundamental concept and reveal the two critical questions you should always ask about the data to assess the quality of any medical AI model.</p><p>#SupervisedLearning #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #MedicalAI #MedicalData #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How do we teach an AI to read an ECG? The most common method is &amp;#34;supervised learning,&amp;#34; which is a lot like using flashcards with a medical student. In this episode, we explain this fundamental concept and reveal the two critical questions you should always ask about the data to assess the quality of any medical AI model.&lt;/p&gt;&lt;p&gt;#SupervisedLearning #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #MedicalAI #MedicalData #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2781936" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/85b59892-c594-486e-a338-f8d76d427e88/stream.mp3"/>
                
                <guid isPermaLink="false">79fbd2be-bac7-42dd-804d-a1147d57acfa</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/85b59892-c594-486e-a338-f8d76d427e88</link>
                <pubDate>Tue, 07 Oct 2025 06:10:17 &#43;0000</pubDate>
                <itunes:duration>173</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Caught &#39;Cheating&#39; Its Medical Exams - New Research Paper from Microsoft</itunes:title>
                <title>AI Caught &#39;Cheating&#39; Its Medical Exams - New Research Paper from Microsoft</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Top AI models are acing medical benchmarks, but are they actually ready for the clinic? A groundbreaking study reveals that impressive scores can hide a dangerous lack of real-world robustness. In this episode, we break down the ingenious &#34;stress tests&#34; that expose how AI can succeed on an exam for all the wrong reasons—from guessing answers without seeing medical images to failing when the question format is slightly changed. Tune in to understand why we must move beyond leaderboard scores and start demanding real proof of clinical readiness.</p><p>&#34;The Illusion of Readiness: Stress Testing Large Frontier Models on Multimodal Medical Benchmarks&#34;. Gu et al. 22 Sept 2025.</p><p>Link to the paper: https://arxiv.org/html/2509.18234v1 </p><p>#Microsoft #OpenAI #Gemini #HealthAI #AIinHealthcare #DigitalHealth #MedicalAI #ClinicalAI #PatientSafety #Tech #Innovation #MachineLearning #LLM #ai in medicine Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Top AI models are acing medical benchmarks, but are they actually ready for the clinic? A groundbreaking study reveals that impressive scores can hide a dangerous lack of real-world robustness. In this episode, we break down the ingenious &amp;#34;stress tests&amp;#34; that expose how AI can succeed on an exam for all the wrong reasons—from guessing answers without seeing medical images to failing when the question format is slightly changed. Tune in to understand why we must move beyond leaderboard scores and start demanding real proof of clinical readiness.&lt;/p&gt;&lt;p&gt;&amp;#34;The Illusion of Readiness: Stress Testing Large Frontier Models on Multimodal Medical Benchmarks&amp;#34;. Gu et al. 22 Sept 2025.&lt;/p&gt;&lt;p&gt;Link to the paper: https://arxiv.org/html/2509.18234v1 &lt;/p&gt;&lt;p&gt;#Microsoft #OpenAI #Gemini #HealthAI #AIinHealthcare #DigitalHealth #MedicalAI #ClinicalAI #PatientSafety #Tech #Innovation #MachineLearning #LLM #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4959921" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/d4246199-b3c0-432c-b92c-0e0ce27fe330/stream.mp3"/>
                
                <guid isPermaLink="false">f84d0dad-0938-4325-9a3a-a7e8b6d2b14e</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/d4246199-b3c0-432c-b92c-0e0ce27fe330</link>
                <pubDate>Sat, 04 Oct 2025 08:18:09 &#43;0000</pubDate>
                <itunes:duration>309</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>020 Hyperparameters - the AI&#39;s recipe</itunes:title>
                <title>020 Hyperparameters - the AI&#39;s recipe</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>An AI model doesn&#39;t just learn on its own; it follows a protocol. The settings of that protocol, like the &#34;learning rate&#34;, are called hyperparameters. In this episode, we explain what these crucial settings are, why they are the &#39;art&#39; of AI development, and how they help you judge the quality of a research paper.</p><p><br></p><p>#Hyperparameters #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DataScience #DeepLearning #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;An AI model doesn&amp;#39;t just learn on its own; it follows a protocol. The settings of that protocol, like the &amp;#34;learning rate&amp;#34;, are called hyperparameters. In this episode, we explain what these crucial settings are, why they are the &amp;#39;art&amp;#39; of AI development, and how they help you judge the quality of a research paper.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#Hyperparameters #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DataScience #DeepLearning #MedicalEducation #MedEd #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2868453" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/00e636f6-6c1e-4481-8dbd-a87ed0752d6c/stream.mp3"/>
                
                <guid isPermaLink="false">506cd6f5-9f09-4c74-ac47-4a166af18b27</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/00e636f6-6c1e-4481-8dbd-a87ed0752d6c</link>
                <pubDate>Thu, 02 Oct 2025 06:15:09 &#43;0000</pubDate>
                <itunes:duration>179</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>019 Learning rates and Gradient descent - finding the bottom of the valley</itunes:title>
                <title>019 Learning rates and Gradient descent - finding the bottom of the valley</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Imagine trying to find the lowest point in a valley while blindfolded. How would you do it? The same way an AI finds the best answer: one step at a time, always moving downhill. This process is called &#34;gradient descent,&#34; and it&#39;s one of the engines that powers machine learning. In this episode, we explain how it works, what the &#34;learning rate&#34; is, and why it matters for understanding AI research.</p><p>#GradientDescent #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #AIexplained #DeepLearning #MedicalEducation #MedEd #ai in medicine</p><p>Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Imagine trying to find the lowest point in a valley while blindfolded. How would you do it? The same way an AI finds the best answer: one step at a time, always moving downhill. This process is called &amp;#34;gradient descent,&amp;#34; and it&amp;#39;s one of the engines that powers machine learning. In this episode, we explain how it works, what the &amp;#34;learning rate&amp;#34; is, and why it matters for understanding AI research.&lt;/p&gt;&lt;p&gt;#GradientDescent #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #AIexplained #DeepLearning #MedicalEducation #MedEd #ai in medicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3252140" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1fc28e35-5453-4695-9b9d-e40830a7e096/stream.mp3"/>
                
                <guid isPermaLink="false">6ad5754c-8702-4a56-aecf-ffc3d024fd76</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1fc28e35-5453-4695-9b9d-e40830a7e096</link>
                <pubDate>Tue, 30 Sep 2025 06:15:50 &#43;0000</pubDate>
                <itunes:duration>203</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The UK&#39;s New Health AI MHRA Commission - Rewriting the Rulebook or More Red Tape?</itunes:title>
                <title>The UK&#39;s New Health AI MHRA Commission - Rewriting the Rulebook or More Red Tape?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The UK has just launched a star-studded National Commission to rewrite the rulebook for AI in the NHS. The goal: faster, safer innovation for patients. It could be a powerful accelerator, provided it avoids becoming just another talking shop lost in bureaucracy.</p><p>#HealthAI #AIinHealthcare #DigitalHealth #NHS #HealthTech #Regulation #MHRA #Innovation #MedTech #PatientSafety #FutureofHealthcare #UKInnovation</p><p>#ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;The UK has just launched a star-studded National Commission to rewrite the rulebook for AI in the NHS. The goal: faster, safer innovation for patients. It could be a powerful accelerator, provided it avoids becoming just another talking shop lost in bureaucracy.&lt;/p&gt;&lt;p&gt;#HealthAI #AIinHealthcare #DigitalHealth #NHS #HealthTech #Regulation #MHRA #Innovation #MedTech #PatientSafety #FutureofHealthcare #UKInnovation&lt;/p&gt;&lt;p&gt;#ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3863614" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2284172a-690a-4c63-ab80-9407046d672e/stream.mp3"/>
                
                <guid isPermaLink="false">5ca813d9-8726-452f-b551-2ccfe961e8d1</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2284172a-690a-4c63-ab80-9407046d672e</link>
                <pubDate>Mon, 29 Sep 2025 15:07:07 &#43;0000</pubDate>
                <itunes:duration>241</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>018 The AI&#39;s Scorecard - What is a Loss Function</itunes:title>
                <title>018 The AI&#39;s Scorecard - What is a Loss Function</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How does an AI model quantify a mistake? It uses a &#34;loss function&#34; – a scorecard that penalises different types of errors. In this episode, we explain what a loss function is, why it&#39;s not a one-size-fits-all tool, and how it reveals the true clinical priorities of any AI model. A crucial concept for critically appraising new research.</p><p>#LossFunction #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DigitalHealth #MedicalEducation #MedEd #CriticalAppraisal #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How does an AI model quantify a mistake? It uses a &amp;#34;loss function&amp;#34; – a scorecard that penalises different types of errors. In this episode, we explain what a loss function is, why it&amp;#39;s not a one-size-fits-all tool, and how it reveals the true clinical priorities of any AI model. A crucial concept for critically appraising new research.&lt;/p&gt;&lt;p&gt;#LossFunction #AIinHealthcare #MachineLearning #ClinicalAI #HealthTech #DigitalHealth #MedicalEducation #MedEd #CriticalAppraisal #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3064058" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2c034b31-d0ac-4bb3-ac33-85e5d85b6053/stream.mp3"/>
                
                <guid isPermaLink="false">e02a6111-f4b5-4663-9a16-b8574c321662</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2c034b31-d0ac-4bb3-ac33-85e5d85b6053</link>
                <pubDate>Thu, 25 Sep 2025 06:15:49 &#43;0000</pubDate>
                <itunes:duration>191</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>017 How models actually learn</itunes:title>
                <title>017 How models actually learn</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How does an AI model actually learn to spot disease on a scan? It all comes down to one fundamental goal: minimising error. In this episode, we kick off a new set of episodes on the mechanics of machine learning by explaining this core principle with a simple clinical analogy that will change how you look at AI. Understanding this is the first step to critically appraising any research paper that lands on your desk.</p><p>#ArtificialIntelligence #AIinHealthcare #MachineLearning #ClinicalAI #AIinMedicine #HealthTech #DigitalHealth #MedicalEducation #MedEd #ErrorMinimisation #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How does an AI model actually learn to spot disease on a scan? It all comes down to one fundamental goal: minimising error. In this episode, we kick off a new set of episodes on the mechanics of machine learning by explaining this core principle with a simple clinical analogy that will change how you look at AI. Understanding this is the first step to critically appraising any research paper that lands on your desk.&lt;/p&gt;&lt;p&gt;#ArtificialIntelligence #AIinHealthcare #MachineLearning #ClinicalAI #AIinMedicine #HealthTech #DigitalHealth #MedicalEducation #MedEd #ErrorMinimisation #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2066390" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/cbdb6d9a-5b78-4e4a-9c0e-b35eabb6f7c7/stream.mp3"/>
                
                <guid isPermaLink="false">6f9afe47-2b55-496d-9e15-908754c414dc</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/cbdb6d9a-5b78-4e4a-9c0e-b35eabb6f7c7</link>
                <pubDate>Tue, 23 Sep 2025 07:05:05 &#43;0000</pubDate>
                <itunes:duration>129</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Forecasting Health with AI - A Deep Dive into the Delphi-2M AI Transformer Model for Health Records</itunes:title>
                <title>Forecasting Health with AI - A Deep Dive into the Delphi-2M AI Transformer Model for Health Records</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Shmatko, A., Jung, A.W., Gaurav, K. <em>et al.</em> Learning the natural history of human disease with generative transformers. <em>Nature</em> (2025).</p><p>Link to paper: https://www.nature.com/articles/s41586-025-09529-3</p><p>What if an AI could forecast your health like the weather? A groundbreaking new model called Delphi-2M, published in Nature, claims to do just that — predicting your risk for over 1,000 diseases using technology similar to ChatGPT.</p><p><br></p><p>But is this the dawn of a new era in preventive medicine, or a high-tech crystal ball that reflects the biases of our current system?</p><p><br></p><p>In this week&#39;s episode of The Health AI Brief, we put Delphi-2M through a real-world stress test. We break down the impressive science, interrogate its biggest weakness (generalisability), and define the critical next step needed before a tool like this could ever see the light of day in the NHS.</p><p><br></p><p>Tune in to find out if this is a future game-changer or a promising idea that&#39;s not quite ready for primetime.</p><p><br></p><p>#HealthAI #DigitalHealth #PredictiveMedicine #AIinHealthcare #GenerativeAI #NHS #Delphi2M #FutureofMedicine #UKBiobank #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Shmatko, A., Jung, A.W., Gaurav, K. &lt;em&gt;et al.&lt;/em&gt; Learning the natural history of human disease with generative transformers. &lt;em&gt;Nature&lt;/em&gt; (2025).&lt;/p&gt;&lt;p&gt;Link to paper: https://www.nature.com/articles/s41586-025-09529-3&lt;/p&gt;&lt;p&gt;What if an AI could forecast your health like the weather? A groundbreaking new model called Delphi-2M, published in Nature, claims to do just that — predicting your risk for over 1,000 diseases using technology similar to ChatGPT.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;But is this the dawn of a new era in preventive medicine, or a high-tech crystal ball that reflects the biases of our current system?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this week&amp;#39;s episode of The Health AI Brief, we put Delphi-2M through a real-world stress test. We break down the impressive science, interrogate its biggest weakness (generalisability), and define the critical next step needed before a tool like this could ever see the light of day in the NHS.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Tune in to find out if this is a future game-changer or a promising idea that&amp;#39;s not quite ready for primetime.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #PredictiveMedicine #AIinHealthcare #GenerativeAI #NHS #Delphi2M #FutureofMedicine #UKBiobank #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4878837" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/6f8ecbb7-7880-45fc-a889-a0caed46e341/stream.mp3"/>
                
                <guid isPermaLink="false">7300e61c-b441-4e2e-a344-2440119624df</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/6f8ecbb7-7880-45fc-a889-a0caed46e341</link>
                <pubDate>Thu, 18 Sep 2025 08:45:34 &#43;0000</pubDate>
                <itunes:duration>304</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI in the NHS: A Reality Check on a National Rollout</itunes:title>
                <title>AI in the NHS: A Reality Check on a National Rollout</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>We break down a landmark UCL study on the NHS&#39;s £21m programme to deploy AI in chest diagnostics.</p><p>The researchers uncover the real reasons for significant delays, moving beyond the technology to the critical, real-world barriers: staff capacity, fragmented IT infrastructure, and complex governance. Find out why dedicated project management is the secret to success and what this means for the future of AI in healthcare.</p><p>Essential listening for any clinician, manager, or policymaker involved in digital transformation.</p><p><br></p><p>Paper:</p><p>Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: a rapid, mixed method evaluation</p><p>Ramsay et al, eClinicalMedicine</p><p>https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(25)00414-6/fulltext</p><p><br></p><p>Health AI, NHS, Digital Health, AI Implementation, Radiology AI, Chest Diagnostics, Healthcare Technology, Clinical AI, Change Management, NHS Innovation, Digital Transformation, UCL Study, The Lancet, AI in medicine. Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;We break down a landmark UCL study on the NHS&amp;#39;s £21m programme to deploy AI in chest diagnostics.&lt;/p&gt;&lt;p&gt;They uncover the real reasons for significant delays, moving beyond the technology to the critical, real-world barriers: staff capacity, fragmented IT infrastructure, and complex governance. Find out why dedicated project management is the secret to success and what this means for the future of AI in healthcare.&lt;/p&gt;&lt;p&gt;Essential listening for any clinician, manager, or policymaker involved in digital transformation.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Paper:&lt;/p&gt;&lt;p&gt;Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: a rapid, mixed method evaluation&lt;/p&gt;&lt;p&gt;Ramsay et al, eClinicalMedicine&lt;/p&gt;&lt;p&gt;https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(25)00414-6/fulltext&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Health AI, NHS, Digital Health, AI Implementation, Radiology AI, Chest Diagnostics, Healthcare Technology, Clinical AI, Change Management, NHS Innovation, Digital Transformation, UCL Study, The Lancet, AI in medicine. Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="5510373" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ca5185fc-1cb7-414b-8958-e54433e16fb0/stream.mp3"/>
                
                <guid isPermaLink="false">afc155bb-a6c5-4f69-ae5c-3d7936e0ca56</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ca5185fc-1cb7-414b-8958-e54433e16fb0</link>
                <pubDate>Wed, 17 Sep 2025 06:00:08 &#43;0000</pubDate>
                <itunes:duration>344</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>016 DICOM, HL7, FHIR: The Languages of Medical Data Exchange.</itunes:title>
                <title>016 DICOM, HL7, FHIR: The Languages of Medical Data Exchange.</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Ever been handed a patient&#39;s CT scan on a CD-ROM and wondered why medical systems struggle to communicate? The problem is that they don&#39;t speak the same language. This episode decodes the three essential standards of medical data exchange.</p><p>We break down DICOM (the &#34;courier package&#34; for images), HL7 (the &#34;digital fax machine&#34; for classic hospital data), and FHIR (the modern &#34;smartphone app&#34; enabling a connected future). Understand these acronyms to better appraise AI research and see the path forward for a truly interoperable healthcare system.</p><p>#DICOM #HL7 #FHIR #Interoperability #HealthInformatics #DataStandards #EHR #HealthcareAI #MedicalAI #DigitalHealth #HealthTech #ai in medicine Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Ever been handed a patient&amp;#39;s CT scan on a CD-ROM and wondered why medical systems struggle to communicate? The problem is that they don&amp;#39;t speak the same language. This episode decodes the three essential standards of medical data exchange.&lt;/p&gt;&lt;p&gt;We break down DICOM (the &amp;#34;courier package&amp;#34; for images), HL7 (the &amp;#34;digital fax machine&amp;#34; for classic hospital data), and FHIR (the modern &amp;#34;smartphone app&amp;#34; enabling a connected future). Understand these acronyms to better appraise AI research and see the path forward for a truly interoperable healthcare system.&lt;/p&gt;&lt;p&gt;#DICOM #HL7 #FHIR #Interoperability #HealthInformatics #DataStandards #EHR #HealthcareAI #MedicalAI #DigitalHealth #HealthTech #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3603226" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1e82c599-676f-4664-851f-70c0a962af62/stream.mp3"/>
                
                <guid isPermaLink="false">7a781816-da3d-4a4a-83a1-2fc00a2e15e1</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1e82c599-676f-4664-851f-70c0a962af62</link>
                <pubDate>Tue, 16 Sep 2025 06:39:08 &#43;0000</pubDate>
                <itunes:duration>225</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>015 - Training data vs validation data vs test data</itunes:title>
                <title>015 - Training data vs validation data vs test data</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How do we know if a medical AI has truly learned to spot disease, or just memorised the answers to its practice questions? The same way we evaluate a trainee: with a final, unseen exam.</p><p>This crucial process involves splitting data into three sets: training data (the textbook), validation data (the mock exam), and test data (the final exam). In this episode of The Health AI Brief, we explain why this split is our best defence against overconfident AI, what &#39;overfitting&#39; means for clinical practice, and why the &#39;test set&#39; result is the only number you should trust when appraising a new AI study.</p><p>#TrainingData #ValidationData #TestData #Overfitting #ModelValidation #ArtificialIntelligence #MachineLearning #HealthcareAI #MedicalAI #ClinicalAI #CriticalAppraisal #EvidenceBasedMedicine #DigitalHealth #ai in medicine Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How do we know if a medical AI has truly learned to spot disease, or just memorised the answers to its practice questions? The same way we evaluate a trainee: with a final, unseen exam.&lt;/p&gt;&lt;p&gt;This crucial process involves splitting data into three sets: training data (the textbook), validation data (the mock exam), and test data (the final exam). In this episode of The Health AI Brief, we explain why this split is our best defence against overconfident AI, what &amp;#39;overfitting&amp;#39; means for clinical practice, and why the &amp;#39;test set&amp;#39; result is the only number you should trust when appraising a new AI study.&lt;/p&gt;&lt;p&gt;#TrainingData #ValidationData #TestData #Overfitting #ModelValidation #ArtificialIntelligence #MachineLearning #HealthcareAI #MedicalAI #ClinicalAI #CriticalAppraisal #EvidenceBasedMedicine #DigitalHealth #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3849404" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/47a1c6f8-b302-4458-85d2-25b781e07e69/stream.mp3"/>
                
                <guid isPermaLink="false">a783a93d-a04b-4b56-a1fa-7d803a5ca896</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/47a1c6f8-b302-4458-85d2-25b781e07e69</link>
                <pubDate>Thu, 11 Sep 2025 06:50:53 &#43;0000</pubDate>
                <itunes:duration>240</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>014 Data annotation and labelling</itunes:title>
                <title>014 Data annotation and labelling</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How do you teach an AI to read a chest X-ray? The same way a consultant teaches a resident doctor on a ward round: you point, you trace, and you provide the correct answer.</p><p><br></p><p>This is data annotation, the meticulous, human-led process of &#34;teaching&#34; an algorithm by labelling thousands of examples. In this episode of The Health AI Brief, we explain why the quality of this digital teaching is everything. Discover why you should always ask who did the labelling when reading a new study, how human disagreement limits AI certainty, and why this is becoming a vital new form of clinical practice.</p><p><br></p><p>#ai in medicine #DataAnnotation #DataLabelling #SupervisedLearning #GroundTruth #MedicalAI #HealthcareAI #ExpertAnnotation #AIinRadiology #ClinicalAI #ArtificialIntelligence #MachineLearning #DigitalHealth #HealthTech Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;How do you teach an AI to read a chest X-ray? The same way a consultant teaches a resident doctor on a ward round: you point, you trace, and you provide the correct answer.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This is data annotation, the meticulous, human-led process of &amp;#34;teaching&amp;#34; an algorithm by labelling thousands of examples. In this episode of The Health AI Brief, we explain why the quality of this digital teaching is everything. Discover why you should always ask who did the labelling when reading a new study, how human disagreement limits AI certainty, and why this is becoming a vital new form of clinical practice.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#ai in medicine #DataAnnotation #DataLabelling #SupervisedLearning #GroundTruth #MedicalAI #HealthcareAI #ExpertAnnotation #AIinRadiology #ClinicalAI #ArtificialIntelligence #MachineLearning #DigitalHealth #HealthTech Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3424339" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/bd8e2866-922e-4296-88a9-665d2f4a0854/stream.mp3"/>
                
                <guid isPermaLink="false">4ff1048b-0b71-45a9-a812-d338f9151ee2</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/bd8e2866-922e-4296-88a9-665d2f4a0854</link>
                <pubDate>Tue, 09 Sep 2025 08:55:49 &#43;0000</pubDate>
                <itunes:duration>214</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI stethoscopes research trial - press release vs peer review</itunes:title>
                <title>AI stethoscopes research trial - press release vs peer review</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>The headlines were everywhere: a revolutionary AI stethoscope that could more than double the detection of heart failure in GP clinics. The reported results from the TRICORDER trial sound transformative. But what happens when you look beyond the press release?</p><p>Was it truly the AI that improved diagnosis, or did the trial simply prompt more testing? With reports that 70% of clinics stopped using the device long-term, what does this mean for real-world feasibility? </p><p>#HealthAI #DigitalHealth #AIinHealthcare #Cardiology #PrimaryCare #NHS #MedTech #ClinicalTrials #TRICORDER #ai in medicine Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;The headlines were everywhere: a revolutionary AI stethoscope that could more than double the detection of heart failure in GP clinics. The reported results from the TRICORDER trial sound transformative. But what happens when you look beyond the press release?&lt;/p&gt;&lt;p&gt;Was it truly the AI that improved diagnosis, or did the trial simply prompt more testing? With reports that 70% of clinics stopped using the device long-term, what does this mean for real-world feasibility? &lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #AIinHealthcare #Cardiology #PrimaryCare #NHS #MedTech #ClinicalTrials #TRICORDER #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4987506" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/0ecf9d69-ae38-4aad-a0d7-1522a4a91a83/stream.mp3"/>
                
                <guid isPermaLink="false">dcf8190a-c417-4b9e-b9bf-5908efeadae9</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/0ecf9d69-ae38-4aad-a0d7-1522a4a91a83</link>
                <pubDate>Tue, 02 Sep 2025 14:10:33 &#43;0000</pubDate>
                <itunes:duration>311</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>013 Data cleaning and preprocessing</itunes:title>
                <title>013 Data cleaning and preprocessing</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>A patient&#39;s record is a chaotic mix of notes, lab test results, and codes. We can navigate the mess, but how can an AI? The answer lies in data cleaning and preprocessing – the most critical, yet unglamorous, step in building medical AI.</p><p><br></p><p>This episode of The Health AI Brief explains why this process is like meticulously preparing ingredients for a complex recipe. We break down the key steps, from handling missing values to standardising formats, and offer three essential takeaways for appraising new AI studies and understanding why a tool that works in one hospital might fail in yours.</p><p>#DataCleaning #DataPreprocessing #DataScience #ArtificialIntelligence #MachineLearning #MedicalAI #HealthcareAI #ClinicalAI #DataQuality #DigitalHealth #HealthTech #CriticalAppraisal #ai in medicine Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;A patient&amp;#39;s record is a chaotic mix of notes, lab test results, and codes. We can navigate the mess, but how can an AI? The answer lies in data cleaning and preprocessing – the most critical, yet unglamorous, step in building medical AI.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This episode of The Health AI Brief explains why this process is like meticulously preparing ingredients for a complex recipe. We break down the key steps, from handling missing values to standardising formats, and offer three essential takeaways for appraising new AI studies and understanding why a tool that works in one hospital might fail in yours.&lt;/p&gt;&lt;p&gt;#DataCleaning #DataPreprocessing #DataScience #ArtificialIntelligence #MachineLearning #MedicalAI #HealthcareAI #ClinicalAI #DataQuality #DigitalHealth #HealthTech #CriticalAppraisal #ai in medicine Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3732793" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/ddc9118f-8ef3-42aa-b402-3191f5aeb614/stream.mp3"/>
                
                <guid isPermaLink="false">62144924-71de-4234-9eb9-69fb4858c126</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/ddc9118f-8ef3-42aa-b402-3191f5aeb614</link>
                <pubDate>Tue, 02 Sep 2025 13:18:35 &#43;0000</pubDate>
                <itunes:duration>233</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>012 Data Quality - Junk in, junk out</itunes:title>
                <title>012 Data Quality - Junk in, junk out</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>A critically high potassium result arrives for a patient who looks completely well. Your first instinct isn&#39;t to treat, but to question the sample. Should we be just as sceptical of the data behind medical AI?</p><p><br></p><p>This episode of The Health AI Brief dives into the most fundamental rule of artificial intelligence: junk in, junk out. Dr. Stephen uses the classic example of a haemolysed blood sample to explain why an AI model is only as reliable as the data it’s trained on. Discover how flawed data can mislead even the most sophisticated algorithms and learn three essential takeaways for critically appraising AI tools and trusting your clinical judgement in this new era of medicine.</p><p><br></p><p>#ai in medicine #ArtificialIntelligence #MachineLearning #DataQuality #JunkInJunkOut #GIGO #HealthcareAI #ClinicalDecisionSupport #MedicalAI #AIBias #TrainingData #DigitalHealth #CriticalAppraisal #EvidenceBasedMedicine #HealthTech</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;A critically high potassium result arrives for a patient who looks completely well. Your first instinct isn&amp;#39;t to treat, but to question the sample. Should we be just as sceptical of the data behind medical AI?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This episode of The Health AI Brief dives into the most fundamental rule of artificial intelligence: junk in, junk out. Dr. Stephen uses the classic example of a haemolysed blood sample to explain why an AI model is only as reliable as the data it’s trained on. Discover how flawed data can mislead even the most sophisticated algorithms and learn three essential takeaways for critically appraising AI tools and trusting your clinical judgement in this new era of medicine.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#ai in medicine #ArtificialIntelligence #MachineLearning #DataQuality #JunkInJunkOut #GIGO #HealthcareAI #ClinicalDecisionSupport #MedicalAI #AIBias #TrainingData #DigitalHealth #CriticalAppraisal #EvidenceBasedMedicine #HealthTech&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3768737" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/d402b981-3801-436e-aeaf-f9de0210a734/stream.mp3"/>
                
                <guid isPermaLink="false">ecba93b9-bc8d-4e4b-b87a-e5c9f3bd0581</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/d402b981-3801-436e-aeaf-f9de0210a734</link>
                <pubDate>Fri, 29 Aug 2025 15:55:03 &#43;0000</pubDate>
                <itunes:duration>235</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>011 Structured vs. Unstructured Data</itunes:title>
                <title>011 Structured vs. Unstructured Data</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Some estimates indicate up to 80% of clinical data is &#34;unstructured&#34; narrative. It’s messy, complex, and where the real patient story lives. This episode explains how AI is finally unlocking this treasure trove of information and what it means for your daily practice.</p><p><br></p><p>#HealthTech #ArtificialIntelligence #ClinicalPractice #MedicalInnovation #EHR #PatientData #DataScience #HealthPodcast #AIforDocs #ai in medicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Some estimates indicate up to 80% of clinical data is &amp;#34;unstructured&amp;#34; narrative. It’s messy, complex, and where the real patient story lives. This episode explains how AI is finally unlocking this treasure trove of information and what it means for your daily practice.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthTech #ArtificialIntelligence #ClinicalPractice #MedicalInnovation #EHR #PatientData #DataScience #HealthPodcast #AIforDocs #ai in medicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3420160" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/1795b567-4cb2-4d3d-85eb-a5bcb8460d61/stream.mp3"/>
                
                <guid isPermaLink="false">b27291a0-6b31-4416-acd3-cbcf6ca6ae92</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/1795b567-4cb2-4d3d-85eb-a5bcb8460d61</link>
                <pubDate>Tue, 26 Aug 2025 07:23:13 &#43;0000</pubDate>
                <itunes:duration>213</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI robot surgeon that corrects its own mistakes</itunes:title>
                <title>AI robot surgeon that corrects its own mistakes</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Link to the preprint discussed: https://arxiv.org/pdf/2505.10251</p><p>Link to the project with explanations: https://h-surgical-robot-transformer.github.io/</p><p><br></p><p>A surgical robot that corrects its own mistakes sounds like science fiction.</p><p>New research from Johns Hopkins &amp; Stanford makes it a reality. But is it ready for the operating room?</p><p>The new SRT-H system allows a da Vinci robot to autonomously perform key steps of a gallbladder removal, achieving a 100% success rate in a lab setting. It can even identify and correct its own errors in real-time—a huge leap for surgical AI.</p><p>But the biggest challenge isn&#39;t executing a perfect plan; it&#39;s managing the messy, unpredictable reality of a live patient.</p><p>In the latest episode of The Health AI Brief podcast, we break down:</p><p>- The gap between lab performance and clinical reality.</p><p>- The crucial shift from chasing full autonomy to proving ultra-reliable, supervised autonomy.</p><p>It&#39;s a really interesting and impressive application of AI. This isn&#39;t just about technology. It&#39;s about building trust, managing risk, and creating AI that surgeons can actually rely on.</p><p>Authors of the work: Ji Woong (Brian) Kim1,2, Juo-Tung Chen1, Pascal Hansen1, Lucy X. Shi2, Antony Goldenberg1, Samuel Schmidgall1, Paul Maria Scheikl1, Anton Deguet1, Brandon M. White1, De Ru Tsai3, Richard Cha3, Jeffrey Jopling1, Chelsea Finn2, Axel Krieger1</p><p>1 Johns Hopkins University, 2 Stanford University, 3 Optosurgical</p><p>#AIinHealthcare #SurgicalRobotics #AutonomousSurgery #HealthTech #DigitalHealth #MedTech #AIinSurgery #MachineLearning #daVinciSurgery #PatientSafety #FutureofMedicine #ClinicalInnovation #JohnsHopkins #Stanford</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Link to the preprint discussed: https://arxiv.org/pdf/2505.10251&lt;/p&gt;&lt;p&gt;Link to the project with explanations: https://h-surgical-robot-transformer.github.io/&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;A surgical robot that corrects its own mistakes sounds like science fiction.&lt;/p&gt;&lt;p&gt;New research from Johns Hopkins &amp;amp; Stanford makes it a reality. But is it ready for the operating room?&lt;/p&gt;&lt;p&gt;The new SRT-H system allows a da Vinci robot to autonomously perform key steps of a gallbladder removal, achieving a 100% success rate in a lab setting. It can even identify and correct its own errors in real-time—a huge leap for surgical AI.&lt;/p&gt;&lt;p&gt;But the biggest challenge isn&amp;#39;t executing a perfect plan; it&amp;#39;s managing the messy, unpredictable reality of a live patient.&lt;/p&gt;&lt;p&gt;In the latest episode of The Health AI Brief podcast, we break down:&lt;/p&gt;&lt;p&gt;- The gap between lab performance and clinical reality.&lt;/p&gt;&lt;p&gt;- The crucial shift from chasing full autonomy to proving ultra-reliable, supervised autonomy.&lt;/p&gt;&lt;p&gt;It&amp;#39;s a really interesting and impressive application of AI. This isn&amp;#39;t just about technology. It&amp;#39;s about building trust, managing risk, and creating AI that surgeons can actually rely on.&lt;/p&gt;&lt;p&gt;Authors of the work: Ji Woong (Brian) Kim1,2, Juo-Tung Chen1, Pascal Hansen1, Lucy X. Shi2, Antony Goldenberg1, Samuel Schmidgall1, Paul Maria Scheikl1, Anton Deguet1, Brandon M. White1, De Ru Tsai3, Richard Cha3, Jeffrey Jopling1, Chelsea Finn2, Axel Krieger1&lt;/p&gt;&lt;p&gt;1 Johns Hopkins University, 2 Stanford University, 3 Optosurgical&lt;/p&gt;&lt;p&gt;#AIinHealthcare #SurgicalRobotics #AutonomousSurgery #HealthTech #DigitalHealth #MedTech #AIinSurgery #MachineLearning #daVinciSurgery #PatientSafety #FutureofMedicine #ClinicalInnovation #JohnsHopkins #Stanford&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4549485" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/f80510cd-27a5-47c8-9180-18f5ab0bd2c5/stream.mp3"/>
                
                <guid isPermaLink="false">7ab912e6-7ad8-4bff-a6d2-498f039fcf3c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/f80510cd-27a5-47c8-9180-18f5ab0bd2c5</link>
                <pubDate>Wed, 20 Aug 2025 07:30:08 &#43;0000</pubDate>
                <itunes:duration>284</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>010 The AI Tipping Point in Medicine - Why Now?</itunes:title>
                <title>010 The AI Tipping Point in Medicine - Why Now?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI in medicine has reached a clear tipping point. But what are the specific factors driving this rapid progress? This episode breaks down the three essential pillars: the explosion in clinical data, massive leaps in computation, and recent, powerful breakthroughs in algorithms.</p><p>We explore how mature algorithms from outside of medicine, particularly in image and natural language processing, are now being repurposed for clinical use. You&#39;ll also learn why the biggest hurdles for AI in healthcare are no longer necessarily the algorithms themselves, but the practical challenges of accessing high-quality clinical data, system integration, and the costs of computation.</p><p>This is your essential primer on the core components of modern clinical AI, providing the foundation needed to evaluate new health tech tools.</p><p>Keywords: AI in Healthcare, Machine Learning, Digital Health, Clinical Data, Algorithms, Computation, Medical Imaging, AI for Doctors, AI in medicine</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;AI in medicine has reached a clear tipping point. But what are the specific factors driving this rapid progress? This episode breaks down the three essential pillars: the explosion in clinical data, massive leaps in computation, and recent, powerful breakthroughs in algorithms.&lt;/p&gt;&lt;p&gt;We explore how mature algorithms from outside of medicine, particularly in image and natural language processing, are now being repurposed for clinical use. You&amp;#39;ll also learn why the biggest hurdles for AI in healthcare are no longer necessarily the algorithms themselves, but the practical challenges of accessing high-quality clinical data, system integration, and the costs of computation.&lt;/p&gt;&lt;p&gt;This is your essential primer on the core components of modern clinical AI, providing the foundation needed to evaluate new health tech tools.&lt;/p&gt;&lt;p&gt;Keywords: AI in Healthcare, Machine Learning, Digital Health, Clinical Data, Algorithms, Computation, Medical Imaging, AI for Doctors, AI in medicine&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3454432" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/dcb3fde6-9292-42f5-93b2-047a85ff32da/stream.mp3"/>
                
                <guid isPermaLink="false">87f51f8c-7c3c-4e2b-abc6-8c2830fcfb56</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/dcb3fde6-9292-42f5-93b2-047a85ff32da</link>
                <pubDate>Sat, 16 Aug 2025 08:59:59 &#43;0000</pubDate>
                <itunes:duration>215</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy - Does &#39;helpful&#39; tech actually make us worse?</itunes:title>
                <title>Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy - Does &#39;helpful&#39; tech actually make us worse?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>A new study from The Lancet that has sent a ripple of anxiety through the clinical AI community. The paper suggests that AI tools designed to help doctors may actually cause their skills to decline over time.</p><p>But is the evidence as solid as the headlines suggest? Is AI dependency a real threat to patient safety?</p><p>#HealthAI #ArtificialIntelligence #ClinicalAI #PatientSafety #Deskilling #DigitalHealth #MedTech #Colonoscopy #Gastroenterology #TheLancet #NHS #DeepMind #MedicalPodcast #HealthcareInnovation</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;A new study from The Lancet has sent a ripple of anxiety through the clinical AI community. The paper suggests that AI tools designed to help doctors may actually cause their skills to decline over time.&lt;/p&gt;&lt;p&gt;But is the evidence as solid as the headlines suggest? Is AI dependency a real threat to patient safety?&lt;/p&gt;&lt;p&gt;#HealthAI #ArtificialIntelligence #ClinicalAI #PatientSafety #Deskilling #DigitalHealth #MedTech #Colonoscopy #Gastroenterology #TheLancet #NHS #DeepMind #MedicalPodcast #HealthcareInnovation&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4412395" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/dc5074ad-31c3-43ea-a9a0-892048b353f5/stream.mp3"/>
                
                <guid isPermaLink="false">c1014562-fcf3-4cbf-a696-905b4f37453a</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/dc5074ad-31c3-43ea-a9a0-892048b353f5</link>
                <pubDate>Wed, 13 Aug 2025 07:45:53 &#43;0000</pubDate>
                <itunes:duration>275</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>009 AI Agents: Automating Healthcare or a New Clinical Safety Risk?</itunes:title>
                <title>009 AI Agents: Automating Healthcare or a New Clinical Safety Risk?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI that can do, not just tell. We explore AI Agents: systems that go beyond diagnosis to take action. This leap forward promises to tackle our admin overload but brings a new level of clinical risk. Are we ready?</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>AI Agents, Healthcare AI, Clinical Safety, Physician Burnout, Automation, Patient Safety, Autonomous AI, Human-in-the-Loop, AI Risk, Workflow Automation, Future of Medicine, AI Hallucination, Administrative Tasks</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;AI that can do, not just tell. We explore AI Agents: systems that go beyond diagnosis to take action. This leap forward promises to tackle our admin overload but brings a new level of clinical risk. Are we ready?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;AI Agents, Healthcare AI, Clinical Safety, Physician Burnout, Automation, Patient Safety, Autonomous AI, Human-in-the-Loop, AI Risk, Workflow Automation, Future of Medicine, AI Hallucination, Administrative Tasks&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="2578390" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/607603c9-f994-40a0-9449-82a907c4e193/stream.mp3"/>
                
                <guid isPermaLink="false">a902effe-2bf9-4246-8ff0-b6552c721902</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/607603c9-f994-40a0-9449-82a907c4e193</link>
                <pubDate>Tue, 12 Aug 2025 07:18:41 &#43;0000</pubDate>
                <itunes:duration>161</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>008 Narrow vs General AI, AGI and ASI</itunes:title>
                <title>008 Narrow vs General AI, AGI and ASI</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>We see AI that can read an ECG, but we also hear headlines about a future superintelligence. How do these two realities connect?</p><p><br></p><p>In this episode, we provide an essential reality check. We break down the crucial difference between the AI we have in our clinics today (Narrow AI) and the AI of science fiction (AGI &amp; ASI).</p><p><br></p><p>Understanding this spectrum is key. It helps you ground your expectations when a new tool is presented for your hospital and separates the practical task of clinical validation from the long-term ethical debates.</p><p><br></p><p>🎧 Listen now to cut through the hype and understand the real limits of the AI you&#39;ll encounter in your practice.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>#AIinHealthcare #DigitalHealth #NarrowAI #AGI #ArtificialIntelligence #FutureofMedicine #MedEd #HealthTech</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;We see AI that can read an ECG, but we also hear headlines about a future superintelligence. How do these two realities connect?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we provide an essential reality check. We break down the crucial difference between the AI we have in our clinics today (Narrow AI) and the AI of science fiction (AGI &amp;amp; ASI).&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Understanding this spectrum is key. It helps you ground your expectations when a new tool is presented for your hospital and separates the practical task of clinical validation from the long-term ethical debates.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;🎧 Listen now to cut through the hype and understand the real limits of the AI you&amp;#39;ll encounter in your practice.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;#AIinHealthcare #DigitalHealth #NarrowAI #AGI #ArtificialIntelligence #FutureofMedicine #MedEd #HealthTech&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4188786" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/7c5488ad-eaca-4c10-a276-94b1dcdba54f/stream.mp3"/>
                
                <guid isPermaLink="false">0e4c181b-f18f-482c-afdb-f02b61226e83</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/7c5488ad-eaca-4c10-a276-94b1dcdba54f</link>
                <pubDate>Sat, 09 Aug 2025 07:38:33 &#43;0000</pubDate>
                <itunes:duration>261</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>OpenAI&#39;s New Release: Why &#39;Open-Weights&#39; Isn&#39;t &#39;Open-Source&#39; - And Why They&#39;re Both Relevant For Clinicians</itunes:title>
                <title>OpenAI&#39;s New Release: Why &#39;Open-Weights&#39; Isn&#39;t &#39;Open-Source&#39; - And Why They&#39;re Both Relevant For Clinicians</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>You’ve seen the headlines about OpenAI’s new model, but much of the coverage is confusing &#39;open-weights&#39; with &#39;open-source&#39;. They are not the same, and the distinction is relevant for patient data security and clinical trust.</p><p><br></p><p>In this episode of The Health AI Brief, we decode some of the jargon. Learn:</p><p>- The fundamental difference between open-weights and true open-source AI.</p><p>- The implications for patient privacy and data security when running models locally.</p><p>- The key question you must ask any vendor about their &#34;open&#34; AI model.</p><p><br></p><p>Tune in to understand the risks and benefits before these tools arrive in your hospital.</p><p><br></p><p>AI in Healthcare, Open-Source AI, Open-Weights AI, OpenAI, LLM, Healthtech, Digital Health, Clinical AI, Patient Data, Data Privacy, Data Security, GDPR, HIPAA, Medical AI, Artificial Intelligence, Machine Learning.</p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;You’ve seen the headlines about OpenAI’s new model, but much of the coverage is confusing &amp;#39;open-weights&amp;#39; with &amp;#39;open-source&amp;#39;. They are not the same, and the distinction is relevant for patient data security and clinical trust.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode of The Health AI Brief, we decode some of the jargon. Learn:&lt;/p&gt;&lt;p&gt;- The fundamental difference between open-weights and true open-source AI.&lt;/p&gt;&lt;p&gt;- The implications for patient privacy and data security when running models locally.&lt;/p&gt;&lt;p&gt;- The key question you must ask any vendor about their &amp;#34;open&amp;#34; AI model.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Tune in to understand the risks and benefits before these tools arrive in your hospital.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;AI in Healthcare, Open-Source AI, Open-Weights AI, OpenAI, LLM, Healthtech, Digital Health, Clinical AI, Patient Data, Data Privacy, Data Security, GDPR, HIPAA, Medical AI, Artificial Intelligence, Machine Learning.&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3526739" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/176e03f2-6dcb-432e-9e7e-7414fde96c87/stream.mp3"/>
                
                <guid isPermaLink="false">e127ce78-c851-4060-93ce-f3e24620b572</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/176e03f2-6dcb-432e-9e7e-7414fde96c87</link>
                <pubDate>Wed, 06 Aug 2025 08:54:22 &#43;0000</pubDate>
                <itunes:duration>220</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>007 Parameters</itunes:title>
                <title>007 Parameters</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>You hear about new AI models having &#34;billions of parameters.&#34; It sounds impossibly complex, but the core idea is surprisingly simple, and it&#39;s the single most important concept for understanding how an AI actually works. These parameters determine an AI&#39;s capabilities, its limitations, and its potential for bias.</p><p>In this episode of The Health AI Brief we&#39;ll cover:</p><p>- A simple, intuitive analogy for what a &#39;parameter&#39; actually is.</p><p>- How the process of &#39;training&#39; an AI is really just about adjusting these billions of settings.</p><p>- Why understanding this concept helps you cut through the hype and critically appraise the AI tools being marketed to clinicians.</p><p>This is a foundational concept. Grasp this, and you&#39;ll have a much clearer view of the technology poised to change our practice.</p><p>AI in Healthcare, AI Parameters, Machine Learning, Neural Networks, Large Language Models (LLM), Healthtech, Digital Health, Clinical AI, AI Training, Model Architecture, Artificial Intelligence, Medical AI Explained.</p><p>Music generated by Mubert https://mubert.com/render</p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;You hear about new AI models having &amp;#34;billions of parameters.&amp;#34; It sounds impossibly complex, but the core idea is surprisingly simple, and it&amp;#39;s the single most important concept for understanding how an AI actually works. These parameters determine an AI&amp;#39;s capabilities, its limitations, and its potential for bias.&lt;/p&gt;&lt;p&gt;In this episode of The Health AI Brief we&amp;#39;ll cover:&lt;/p&gt;&lt;p&gt;- A simple, intuitive analogy for what a &amp;#39;parameter&amp;#39; actually is.&lt;/p&gt;&lt;p&gt;- How the process of &amp;#39;training&amp;#39; an AI is really just about adjusting these billions of settings.&lt;/p&gt;&lt;p&gt;- Why understanding this concept helps you cut through the hype and critically appraise the AI tools being marketed to clinicians.&lt;/p&gt;&lt;p&gt;This is a foundational concept. Grasp this, and you&amp;#39;ll have a much clearer view of the technology poised to change our practice.&lt;/p&gt;&lt;p&gt;AI in Healthcare, AI Parameters, Machine Learning, Neural Networks, Large Language Models (LLM), Healthtech, Digital Health, Clinical AI, AI Training, Model Architecture, Artificial Intelligence, Medical AI Explained.&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="3043160" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/0143cf9f-3ec5-41ff-925d-6ae49d335e6c/stream.mp3"/>
                
                <guid isPermaLink="false">52850f0e-507e-40ef-9ac8-9aa202420f98</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/0143cf9f-3ec5-41ff-925d-6ae49d335e6c</link>
                <pubDate>Wed, 06 Aug 2025 08:43:12 &#43;0000</pubDate>
                <itunes:duration>190</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>006 Deep learning</itunes:title>
                <title>006 Deep learning</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>How can an AI analyse a CT scan or pathology slide with expert-level accuracy? The answer is Deep Learning—the engine behind the revolution in medical imaging.</p><p><br></p><p>In this episode, we explain how these &#39;deep&#39; neural networks teach themselves to see complex patterns, much like our own visual cortex processes information from simple edges to complex objects.</p><p><br></p><p>Knowing this matters. It helps you understand why these models are often &#34;black boxes,&#34; why appraising their training data is so critical, and why they are hyper-specialised for a single task.</p><p><br></p><p>🎧 Listen now to understand the technology powering the next generation of medical imaging tools.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>#DeepLearning #AIinMedicine #MedicalImaging #Radiology #Pathology #HealthTech #NeuralNetworks #ClinicalAI #MedEd</p><p><br></p><p>healthaibrief@outlook.com</p><p><br></p>]]></description>
                <content:encoded>&lt;p&gt;How can an AI analyse a CT scan or pathology slide with expert-level accuracy? The answer is Deep Learning—the engine behind the revolution in medical imaging.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we explain how these &amp;#39;deep&amp;#39; neural networks teach themselves to see complex patterns, much like our own visual cortex processes information from simple edges to complex objects.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Knowing this matters. It helps you understand why these models are often &amp;#34;black boxes,&amp;#34; why appraising their training data is so critical, and why they are hyper-specialised for a single task.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;🎧 Listen now to understand the technology powering the next generation of medical imaging tools.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;#DeepLearning #AIinMedicine #MedicalImaging #Radiology #Pathology #HealthTech #NeuralNetworks #ClinicalAI #MedEd&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="4144065" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/709bc9c0-b8c5-449f-b874-30d8c8321815/stream.mp3"/>
                
                <guid isPermaLink="false">64b8be3b-036e-4b17-a881-6349a259eb9e</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/709bc9c0-b8c5-449f-b874-30d8c8321815</link>
                <pubDate>Sun, 03 Aug 2025 17:14:00 &#43;0000</pubDate>
                <itunes:duration>259</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Flok Health - an AI Physio for Back Pain - A cure for health service waiting lists? But is it safe?</itunes:title>
                <title>Flok Health - an AI Physio for Back Pain - A cure for health service waiting lists? But is it safe?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Today we&#39;re discussing the AI-powered physiotherapy app from Flok Health, which has seen widespread media coverage and has gained both CQC and MHRA approval, promising to slash waiting lists for back pain.</p><p><br></p><p>The goal is compelling: automate care for straightforward cases to free up human clinicians for complex ones. But what does the evidence really say?</p><p><br></p><p>In this 5-minute analysis, I break down:</p><p>  - The core challenge of validating safety, especially around &#34;red flag&#34; screening.</p><p>  - Why pilot studies on NHS staff, while promising, aren&#39;t enough.</p><p>  - The specific, robust evidence needed to justify widespread adoption.</p><p><br></p><p>Is this the future of musculoskeletal care or a solution built on speculative claims? My verdict: Keep a Close Eye on This.</p><p><br></p><p>Listen to the full episode to understand the critical next steps for turning this promising idea into a trusted clinical tool.</p><p><br></p><p>#HealthAI #DigitalHealth #NHSInnovation #Physiotherapy #HealthTech #ClinicalEvidence #MedTech #BackPain #AIinHealthcare #DigitalTransformation #Flok</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;Today we&amp;#39;re discussing the AI-powered physiotherapy app from Flok Health, which has seen widespread media coverage and has gained both CQC and MHRA approval, promising to slash waiting lists for back pain.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;The goal is compelling: automate care for straightforward cases to free up human clinicians for complex ones. But what does the evidence really say?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this 5-minute analysis, I break down:&lt;/p&gt;&lt;p&gt;  - The core challenge of validating safety, especially around &amp;#34;red flag&amp;#34; screening.&lt;/p&gt;&lt;p&gt;  - Why pilot studies on NHS staff, while promising, aren&amp;#39;t enough.&lt;/p&gt;&lt;p&gt;  - The specific, robust evidence needed to justify widespread adoption.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Is this the future of musculoskeletal care or a solution built on speculative claims? My verdict: Keep a Close Eye on This.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Listen to the full episode to understand the critical next steps for turning this promising idea into a trusted clinical tool.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;#HealthAI #DigitalHealth #NHSInnovation #Physiotherapy #HealthTech #ClinicalEvidence #MedTech #BackPain #AIinHealthcare #DigitalTransformation #Flok&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="7046373" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/d1ccbe2d-63f6-463c-8429-4c077ceff6ef/stream.mp3"/>
                
                <guid isPermaLink="false">b44fc056-b4bb-44ac-ab60-ac2c6895ebf0</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/d1ccbe2d-63f6-463c-8429-4c077ceff6ef</link>
                <pubDate>Fri, 01 Aug 2025 08:06:43 &#43;0000</pubDate>
                <itunes:duration>440</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>005 Neural networks</itunes:title>
                <title>005 Neural networks</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>When we say a machine &#34;learns&#34; from data, what&#39;s actually happening? What is the engine doing?</p><p><br></p><p>In this episode, we break down the fundamental building block of modern AI: the Neural Network. We explain it as a logical chain for synthesising information—from raw data like ECG signals in the &#39;input layer&#39;, to abstract concepts like &#39;high-risk&#39; in the &#39;hidden layers&#39;, to a final diagnosis.</p><p><br></p><p>Understanding this basic architecture is key to knowing an AI&#39;s limits. It reveals why these systems are powerful pattern-finders but don&#39;t &#34;understand&#34; causality; a crucial distinction for clinical safety.</p><p><br></p><p>🎧 Ready to look under the bonnet of AI? Listen now to grasp the core concept behind almost every AI tool you&#39;ll encounter.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>#NeuralNetworks #AIinHealthcare #MachineLearning #MedEd #HealthTech #DigitalHealth #DataScience #ClinicalAI</p><p><br></p><p>healthaibrief@outlook.com</p>]]></description>
                <content:encoded>&lt;p&gt;When we say a machine &amp;#34;learns&amp;#34; from data, what&amp;#39;s actually happening? What is the engine doing?&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In this episode, we break down the fundamental building block of modern AI: the Neural Network. We explain it as a logical chain for synthesising information—from raw data like ECG signals in the &amp;#39;input layer&amp;#39;, to abstract concepts like &amp;#39;high-risk&amp;#39; in the &amp;#39;hidden layers&amp;#39;, to a final diagnosis.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Understanding this basic architecture is key to knowing an AI&amp;#39;s limits. It reveals why these systems are powerful pattern-finders but don&amp;#39;t &amp;#34;understand&amp;#34; causality; a crucial distinction for clinical safety.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;🎧 Ready to look under the bonnet of AI? Listen now to grasp the core concept behind almost every AI tool you&amp;#39;ll encounter.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;#NeuralNetworks #AIinHealthcare #MachineLearning #MedEd #HealthTech #DigitalHealth #DataScience #ClinicalAI&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;healthaibrief@outlook.com&lt;/p&gt;</content:encoded>
                
                <enclosure length="4630987" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/406cfde1-b271-4db6-83e9-119b6be7f31f/stream.mp3"/>
                
                <guid isPermaLink="false">feb36581-7ae4-4fe3-98f9-0cf5854c6b29</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/406cfde1-b271-4db6-83e9-119b6be7f31f</link>
                <pubDate>Wed, 30 Jul 2025 08:34:23 &#43;0000</pubDate>
                <itunes:duration>289</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>004 Models</itunes:title>
                <title>004 Models</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>In our last episode, we described Machine Learning as the engine that powers AI. So, what is the specific output of that engine? What is created when an AI is &#34;trained&#34;?</p><p>The answer is the &#34;model.&#34; This is the end product of the training process—the distilled knowledge, captured in digital form.</p><p>In this episode, we explain what an AI model is and why it&#39;s not the whole product. We cover why a model is a &#39;frozen&#39; snapshot of knowledge, the danger of &#39;model drift,&#39; and how biased data creates a flawed tool.</p><p>Understanding this will equip you to evaluate the whole car, not just the engine, the next time you see a new AI solution.</p><p>🎧 Listen to Episode 004: &#34;Models&#34; now and learn what questions to ask before you trust the black box.</p><p>Music generated by Mubert <a href="https://www.google.com/url?q=https%3A%2F%2Fmubert.com%2Frender&sa=E" rel="nofollow">https://mubert.com/render</a></p><p>#AIModel #ModelDrift #ResponsibleAI #MedEd #HealthTech #AIinHealthcare #DataBias #ClinicalAI #Podcast</p>]]></description>
                <content:encoded>&lt;p&gt;In our last episode, we described Machine Learning as the engine that powers AI. So, what is the specific output of that engine? What is created when an AI is &amp;#34;trained&amp;#34;?&lt;/p&gt;&lt;p&gt;The answer is the &amp;#34;model.&amp;#34; This is the end product of the training process—the distilled knowledge, captured in digital form.&lt;/p&gt;&lt;p&gt;In this episode, we explain what an AI model is and why it&amp;#39;s not the whole product. We cover why a model is a &amp;#39;frozen&amp;#39; snapshot of knowledge, the danger of &amp;#39;model drift,&amp;#39; and how biased data creates a flawed tool.&lt;/p&gt;&lt;p&gt;Understanding this will equip you to evaluate the whole car, not just the engine, the next time you see a new AI solution.&lt;/p&gt;&lt;p&gt;🎧 Listen to Episode 004: &amp;#34;Models&amp;#34; now and learn what questions to ask before you trust the black box.&lt;/p&gt;&lt;p&gt;Music generated by Mubert &lt;a href=&#34;https://www.google.com/url?q=https%3A%2F%2Fmubert.com%2Frender&amp;sa=E&#34; rel=&#34;nofollow&#34;&gt;https://mubert.com/render&lt;/a&gt;&lt;/p&gt;&lt;p&gt;#AIModel #ModelDrift #ResponsibleAI #MedEd #HealthTech #AIinHealthcare #DataBias #ClinicalAI #Podcast&lt;/p&gt;</content:encoded>
                
                <enclosure length="3685564" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/d07bcb96-9f6e-4807-bc3e-930f6fa84431/stream.mp3"/>
                
                <guid isPermaLink="false">0fdf543a-646b-42e6-9adb-dbb71bf839b8</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/d07bcb96-9f6e-4807-bc3e-930f6fa84431</link>
                <pubDate>Mon, 28 Jul 2025 17:29:22 &#43;0000</pubDate>
                <itunes:duration>230</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>DD - OpenAI and Penda Health&#39;s AI Consult Real-World Study</itunes:title>
                <title>DD - OpenAI and Penda Health&#39;s AI Consult Real-World Study</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>A Deep Dive into the OpenAI &amp; Penda Health AI Study</p><p>In this episode, we provide a critical analysis of the highly publicised paper, &#34;AI-based Clinical Decision Support for Primary Care: A Real-World Study.&#34;</p><p><br></p><p>We go beyond the abstract&#39;s impressive claims to dissect the real-world implications:</p><p>- The Design: Acknowledging the brilliant workflow integration and &#34;on-the-job&#34; training effect.</p><p>- The Data: Why a 60% baseline error rate for &#34;inappropriate treatment&#34; raises serious questions.</p><p>- The Business Case: The hard trade-off between a 3.5-minute increase in consultation time and no measurable change in patient outcomes.</p><p>- The Verdict: Is this a template for the future or a case of &#34;marketing over medicine&#34;?</p><p>Listen now for a pragmatic, no-nonsense verdict on whether this promising tool is truly ready for the real world.</p><p>Music generated by Mubert https://mubert.com/render</p><p>#AIinHealthcare #DigitalTransformation #ClinicalDecisionSupport #PendaHealth #OpenAI #LLMs #HealthTech #AIConsult #GPT-4o</p>]]></description>
                <content:encoded>&lt;p&gt;A Deep Dive into the OpenAI &amp;amp; Penda Health AI Study&lt;/p&gt;&lt;p&gt;In this episode, we provide a critical analysis of the highly publicised paper, &amp;#34;AI-based Clinical Decision Support for Primary Care: A Real-World Study.&amp;#34;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;We go beyond the abstract&amp;#39;s impressive claims to dissect the real-world implications:&lt;/p&gt;&lt;p&gt;- The Design: Acknowledging the brilliant workflow integration and &amp;#34;on-the-job&amp;#34; training effect.&lt;/p&gt;&lt;p&gt;- The Data: Why a 60% baseline error rate for &amp;#34;inappropriate treatment&amp;#34; raises serious questions.&lt;/p&gt;&lt;p&gt;- The Business Case: The hard trade-off between a 3.5-minute increase in consultation time and no measurable change in patient outcomes.&lt;/p&gt;&lt;p&gt;- The Verdict: Is this a template for the future or a case of &amp;#34;marketing over medicine&amp;#34;?&lt;/p&gt;&lt;p&gt;Listen now for a pragmatic, no-nonsense verdict on whether this promising tool is truly ready for the real world.&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;#AIinHealthcare #DigitalTransformation #ClinicalDecisionSupport #PendaHealth #OpenAI #LLMs #HealthTech #AIConsult #GPT-4o&lt;/p&gt;</content:encoded>
                
                <enclosure length="6159464" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/2a343f08-b54c-40d5-93c6-e207680b1203/stream.mp3"/>
                
                <guid isPermaLink="false">a7c74718-d773-4374-bc71-ea0cefae3c2c</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/2a343f08-b54c-40d5-93c6-e207680b1203</link>
                <pubDate>Wed, 23 Jul 2025 10:56:29 &#43;0000</pubDate>
                <itunes:duration>384</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>003 AI vs machine learning</itunes:title>
                <title>003 AI vs machine learning</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>In the last episode we called AI the &#34;toolbox.&#34; Now, we&#39;re looking at the most powerful tool inside: Machine Learning (ML).</p><p>So, what’s the real difference? Think of it this way: AI is the destination, but Machine Learning is the engine getting us there.</p><p>In this episode, we break down the simple but powerful &#34;Train, Detect, Predict&#34; framework. This is the key to understanding how these tools actually work and, more importantly, where they can fail.</p><p>Master this, and you&#39;ll know exactly what critical questions to ask the next time you encounter a new &#34;AI&#34; solution in your practice.</p><p>🎧 Listen to Part 2: &#34;AI vs. Machine Learning&#34; now and feel empowered, not confused.</p><p>Music generated by Mubert https://mubert.com/render</p><p>#AIvsML #MachineLearning #MLinHealthcare #DataScience #MedEd #HealthTech #AIinHealthcare #DigitalTransformation #ClinicalAI #Podcast</p>]]></description>
                <content:encoded>&lt;p&gt;In the last episode we called AI the &amp;#34;toolbox.&amp;#34; Now, we&amp;#39;re looking at the most powerful tool inside: Machine Learning (ML).&lt;/p&gt;&lt;p&gt;So, what’s the real difference? Think of it this way: AI is the destination, but Machine Learning is the engine getting us there.&lt;/p&gt;&lt;p&gt;In this episode, we break down the simple but powerful &amp;#34;Train, Detect, Predict&amp;#34; framework. This is the key to understanding how these tools actually work and, more importantly, where they can fail.&lt;/p&gt;&lt;p&gt;Master this, and you&amp;#39;ll know exactly what critical questions to ask the next time you encounter a new &amp;#34;AI&amp;#34; solution in your practice.&lt;/p&gt;&lt;p&gt;🎧 Listen to Part 2: &amp;#34;AI vs. Machine Learning&amp;#34; now and feel empowered, not confused.&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;#AIvsML #MachineLearning #MLinHealthcare #DataScience #MedEd #HealthTech #AIinHealthcare #DigitalTransformation #ClinicalAI #Podcast&lt;/p&gt;</content:encoded>
                
                <enclosure length="4897227" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/a6a3efce-49e6-4e47-be66-47e658a096bb/stream.mp3"/>
                
                <guid isPermaLink="false">4e1481f6-c793-4624-a3b5-e22baa934bcc</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/a6a3efce-49e6-4e47-be66-47e658a096bb</link>
                <pubDate>Tue, 22 Jul 2025 17:17:09 &#43;0000</pubDate>
                <itunes:duration>306</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>002 What is AI?</itunes:title>
                <title>002 What is AI?</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>AI is &#34;better than a radiologist.&#34; AI is writing your clinic notes. The term &#39;AI&#39; is everywhere in medicine, but what does it really mean? It&#39;s easy to feel like it&#39;s just a confusing buzzword.</p><p><br></p><p>In Part 1 of our new series, we cut through the hype to give you a clear, simple definition.</p><p><br></p><p>Think of AI as the broad toolbox aiming to mimic human intelligence, from seeing to understanding language. But to really understand it, you need to know about the engine that powers most of these tools.</p><p><br></p><p>🎧 Listen to &#34;What is AI, Anyway?&#34; now to get the foundation right.</p><p><br></p><p>Music generated by Mubert https://mubert.com/render</p><p>#AIinHealthcare #DigitalHealth #MedEd #HealthTech #ClinicalAI #FutureofMedicine #Podcast #ArtificialIntelligence #Doctors #Nurses</p>]]></description>
                <content:encoded>&lt;p&gt;AI is &amp;#34;better than a radiologist.&amp;#34; AI is writing your clinic notes. The term &amp;#39;AI&amp;#39; is everywhere in medicine, but what does it really mean? It&amp;#39;s easy to feel like it&amp;#39;s just a confusing buzzword.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;In Part 1 of our new series, we cut through the hype to give you a clear, simple definition.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Think of AI as the broad toolbox aiming to mimic human intelligence, from seeing to understanding language. But to really understand it, you need to know about the engine that powers most of these tools.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;🎧 Listen to &amp;#34;What is AI, Anyway?&amp;#34; now to get the foundation right.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;&lt;p&gt;#AIinHealthcare #DigitalHealth #MedEd #HealthTech #ClinicalAI #FutureofMedicine #Podcast #ArtificialIntelligence #Doctors #Nurses&lt;/p&gt;</content:encoded>
                
                <enclosure length="2842122" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/66edbe0d-fa49-49f4-8460-c4f9abb529df/stream.mp3"/>
                
                <guid isPermaLink="false">72f718b3-60a7-4c77-b802-939813952f5e</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/66edbe0d-fa49-49f4-8460-c4f9abb529df</link>
                <pubDate>Sat, 19 Jul 2025 09:56:59 &#43;0000</pubDate>
                <itunes:duration>177</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>001 Introduction</itunes:title>
                <title>001 Introduction</title>

                
                
                <itunes:author>Stephen A</itunes:author>
                
                <description><![CDATA[<p>Welcome to The Health AI Brief.</p><p>In the first episode, we set the stage for our journey: to demystify artificial intelligence for busy clinicians.</p><p>Roughly five minutes and one concept at a time.</p><p>Music generated by Mubert https://mubert.com/render</p>]]></description>
                <content:encoded>&lt;p&gt;Welcome to The Health AI Brief.&lt;/p&gt;&lt;p&gt;In the first episode, we set the stage for our journey: to demystify artificial intelligence for busy clinicians.&lt;/p&gt;&lt;p&gt;Roughly five minutes and one concept at a time.&lt;/p&gt;&lt;p&gt;Music generated by Mubert https://mubert.com/render&lt;/p&gt;</content:encoded>
                
                <enclosure length="2556238" type="audio/mpeg" url="https://audio3.redcircle.com/episodes/3e805c3a-2f57-49bf-a8f8-3fe2274f5bee/stream.mp3"/>
                
                <guid isPermaLink="false">cf72e074-fd9b-439c-b398-6e972b23b19a</guid>
                <link>https://redcircle.com/shows/7ba0f646-5486-4319-a2d1-6a71de0f4634/episodes/3e805c3a-2f57-49bf-a8f8-3fe2274f5bee</link>
                <pubDate>Thu, 17 Jul 2025 19:42:20 &#43;0000</pubDate>
                <itunes:duration>159</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
    </channel>
</rss>
