Healthcare AI Guy Weekly Newsletter | 6/6

Regulation bells ringing, improved genetic risk predictions, GPU drug dealers, and more...


Hello tech-savvy readers, we’re back with more!

Here’s a rundown of everything that happened in healthcare AI this past week:

Our Picks

Highlights if you’ve only got 2 minutes…

1/

Regulation bells ringing 

350+ industry leaders, scientists, academics, tech CEOs, and public figures, including OpenAI CEO Sam Altman, co-signed a one-sentence statement calling global attention to existential AI risk:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”

Is this a real danger, news hype, or a ladder-pull by the big players to make it harder for everyone else to catch up? It’s an interesting debate, as industry leaders are increasingly calling for further regulation. I think these AI concerns are legitimate, but you could steelman both sides. Here’s another article arguing that the AI rollout might be happening too fast and could cause more harm than good. (Link)

In terms of healthcare, regulation can actually serve as an accelerant… For example, before the 2009 HITECH Act created the “Meaningful Use” program, electronic health record (EHR) adoption was minimal. Docs were busy with paper and had no reason or desire to adopt tech for medical record storage or to learn new processes. Then CMS paid docs financial incentives to use EHRs, which drove the sea change that digitized healthcare infrastructure. Long story short: regulation might actually help AI in healthcare!

2/

Researchers use AI to improve genetic health risk predictions in humans

Researchers trained an AI algorithm on genetic information from about 800 primates across 233 species of apes, monkeys, and lemurs, then used it to analyze DNA from ~450,000 people in the UK. The model improved the accuracy of individualized estimates of the risk of developing health problems by 12%, a meaningful gain for genetic risk prediction.
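For intuition, here’s a minimal toy sketch (in Python; my own illustration, not the study’s actual model) of the core idea: weighting each genetic variant’s contribution to a risk score by an AI-derived pathogenicity estimate. All numbers and names below are hypothetical.

    # Toy genetic risk score weighted by AI-derived pathogenicity estimates.
    # Purely illustrative; the real model and data are far more sophisticated.

    # Each variant: (effect_size, ai_pathogenicity in [0, 1], genotype_dosage 0/1/2)
    variants = [
        (0.12, 0.91, 2),   # likely pathogenic variant, two copies carried
        (-0.05, 0.10, 1),  # likely benign variant, one copy carried
        (0.30, 0.75, 1),   # moderately pathogenic variant, one copy carried
    ]

    def weighted_risk_score(variants):
        """Sum each variant's effect x dosage, scaled by its pathogenicity weight."""
        return sum(effect * pathogenicity * dosage
                   for effect, pathogenicity, dosage in variants)

    print(f"Illustrative risk score: {weighted_risk_score(variants):.3f}")

Better variant-level pathogenicity estimates (here, the toy weights) are what sharpen the individual-level risk prediction.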

Academic researchers teamed up with Illumina, the US DNA-sequencing company, to make this study happen. This is a great leap for health research and potentially a proof of concept for scaling ML models with biological knowledge and massive patient datasets. (Link w/out paywall)

3/

Harder to get GPUs than drugs

Those are Elon’s words, not mine: “GPUs at this point are considerably harder to get than drugs.” The AI boom runs on chips, but there aren’t enough to go around. Nvidia is the primary maker of these key GPUs, which explains why its market cap is hovering around $1T. The company is trying to make more and better chips, but demand is simply too high. That has left everyone, especially smaller AI startups with less access, scrambling for compute.

This shortage is going to be one of the roadblocks to AI growth and adoption in healthcare. At the moment no one really has a fix, but whoever comes up with one might be the next Bezos/Gates/Huang/Musk… (Link w/out paywall)

4/

AI algorithms could introduce racial bias into patients’ diagnoses

A study published in JAMA Health Forum found AI algorithms “might exacerbate racial and ethnic inequities.” AI algorithms are used across health systems and can be useful for providers, but based on an analysis of input from 42 clinical, professional, payer, and tech organizations, “they could also be associated with increased misdiagnoses, biases in care and misrepresentation of the needs of minority patients.”

However, according to the study, a provider’s “ability to discuss algorithms could facilitate shared decision-making with patients and allow them to meaningfully consent to the care they receive.”

The study’s authors say standardizing “data collection and risk-adjustment models used in health care algorithms” is needed, and they recommend national standards for algorithm development, testing, and reporting. The AI-enabled digital health gold rush is underway, but AI still has a long way to go, in part because known problems of bias undermine trust. (Link)

5/

Who’s liable for AI creations?

There is an ongoing debate about who should be held responsible for harmful content on the internet, especially when it’s created by AI tools like ChatGPT. Section 230 of the Communications Decency Act protects online platforms from being sued over content created by their users, but it doesn’t clearly cover AI-generated content. The distinction between creating content and sharing information is getting harder to draw as AI tools become more advanced.

Some situations involving harmful or false content will be straightforward; many won’t, which creates real uncertainty around legal responsibility. The challenge is balancing protection for online platforms with encouragement for new technology: giving the people who build these tools a longer leash helps explore AI’s potential, but thoughtful discussions about liability are needed to address the challenges posed by future tech advancements. (Link w/out paywall)

Miscellaneous 🔔

News, podcasts, blogs, tweets, etc…

  • The SVP and Global Head of AI/ML at GSK (a top-10 global pharma company) discusses applying ML to clinical trials, building on open-source tech, and more with an Altimeter Capital partner on the Cross Validated podcast (Link)

  • An explainer on the layers of the AI stack (and some of the technical jargon) to build your knowledge base on generative AI and LLMs (Link)

  • 32% of people can’t tell the difference between a human and an AI bot (Link)

  • OpenAI’s plans according to Sam Altman. The article was removed at the request of OpenAI… (Link)

  • Healthcare providers are embracing AI models with more enthusiasm than you’d expect from a sector that still runs on faxes and paper checks (Link)

Venture Capital Deals 💸

Spotlight on latest capital raises and investments…

  • Hyro, an NYC-based developer of conversational AI tools for health system call centers, raised $20mm in Series B funding. Macquarie Capital led and was joined by Liberty Mutual Strategic Ventures, Black Opal Ventures, and others.

    Recently, the company launched Spot, a GPT-powered virtual assistant designed to answer FAQs. Intermountain Healthcare, Mercy Health, and Baptist Health (FL) are all users of Hyro. (Link)

Tool Box 🧰

Latest on business, consumer, and clinical healthcare AI tools…

  • Epic: UNC Health, UW Health, UC San Diego Health, and Stanford Health Care are partnering with Epic and Microsoft on an AI pilot tool designed to reduce workload stress for clinicians, particularly during off hours (Link)

  • Carbon Health: Launched hands-free charting, an AI-enabled notes assistant, in its EHR across all clinics and providers. Carbon Health’s EHR platform is the first to deploy native AI-assisted charting at scale (Link)

  • Nuance: Physician frustration is leading to uptake of Nuance’s AI-based clinical documentation product, DAX (Link)

  • GE Healthcare: Received FDA clearance for Precision DL, deep learning-enabled software designed to improve medical imaging capabilities (Link)

AI Images of the Week 🤖

Funny memes and pics from around the web…

Extending your favorite memes…

See you next week 👋

That’s it for this week friends! Back to reading — I’ll see you next week.

— Healthcare AI Guy (aka @HealthcareAIGuy)

P.S. I write this newsletter for you, so if you have any suggestions or questions, feel free to reply to this email and let me know.