Dr. Kedar Mate’s Post

Dr. Kedar Mate

President and CEO at IHI | Co-Host "Turn On The Lights" Podcast | Advancing Health Equity at Rise to the Health Coalition | Health and Health Care Quality Improver | Committed to Advancing Health Equity

Excellent piece published by Harvard Business Review. The authors share a roundup of AI-enabled tools designed from the start to tackle health #inequities. As we study and celebrate advances like these, it’s essential to be just as vocal about the risks.

As AI tools are put into practice to solve real clinical challenges and support clinical decision-making, we need embedded signals telling us when #AI is involved in our care and how. It should be clear: this is an AI-generated conclusion, this is how it was created, these are its elements, this is the training data that was used, and this is how ongoing monitoring ensures the solution is producing the intended results. With that level of transparency, the relative value of an AI tool becomes more apparent as it shapes our clinical decision-making. Then we leave it to the human clinician and the patient to navigate how useful that AI-powered recommendation, suggestion, or idea set is. This transparency is part of how we build AI tools that are mindful of #HealthEquity in the future.

Md Aminul Hasan . PhD

Health Care Expert, specialized in Health Care Reform, Quality of Care, PPP, Leadership & Management; Board Member of ASQua

2mo

The article highlights a critical point: while AI holds great promise in addressing health inequities, transparency in its application is essential for trust and effectiveness. As AI becomes more integrated into clinical decision-making, it is crucial to provide clear information about when and how AI is being used, including its methodology, data sources, and monitoring mechanisms. This transparency not only ensures accountability but also empowers clinicians and patients to assess the relevance and reliability of AI-generated recommendations. In the pursuit of equitable healthcare, AI tools must be developed and deployed with mindful consideration of their impact, ensuring they enhance rather than exacerbate disparities.

Holly Beeman, MD, MBA

Senior Healthcare Executive: Multi-site System Alignment | Strategy and Growth | Culture Builder

2mo

Hi Dr. Kedar Mate, great article; thank you for sharing it on LinkedIn. I am grateful the article does not overpromise that AI is the solution, but frames it as additive. At the end of the day, we (me included) must be hypervigilant about how our own biases affect our interactions with patients. At one of the last IHI Leadership Alliance meetings, many of us shared how we are stratifying our in-process measures, such as sepsis bundle compliance, by race to interrogate behaviors and unrecognized biases we can potentially control.

Zaw Thet

Co-Founder and CEO @ Exer | Clinical AI | Digital Health

2mo

We’re working on accessibility for all with advanced AI assessments in telehealth!
