Backed by Huffington’s mental wellness firm Thrive Global and the OpenAI Startup Fund, the early-stage venture fund closely associated with OpenAI, Thrive AI Health will seek to build an “AI health coach” to give personalized advice on sleep, food, fitness, stress management and “connection,” according to a press release issued Monday.
DeCarlos Love, who previously led fitness and health experiences at Google’s Fitbit subsidiary, primarily on the tech giant’s Pixel Watch wearable, has been appointed CEO. Thrive AI Health counts the Alice L. Walton Foundation, backed by Walmart heir Alice Walton, among its strategic investors, and the Alice L. Walton School of Medicine is one of Thrive AI Health’s initial health partners.
It wasn’t immediately clear how much capital Thrive AI Health’s backers have invested. We’ve reached out for clarification and will update this post once we hear back.
According to Huffington and Altman (via a Time op-ed), Thrive AI Health’s endgame is training an AI health “coach” on scientific research and medical data, leveraging a forthcoming health data platform and collaborations with partners including Stanford Medicine. Huffington and Altman describe a sort of virtual assistant on a smartphone app and in Thrive’s enterprise products that learns from users’ behaviors and offers real-time, health-related “nudges” and suggestions.
“Most health recommendations at the moment, though important, are generic,” Huffington and Altman write. “The AI health coach will make possible very precise recommendations tailored to each person: swap your third afternoon soda with water and lemon; go on a 10-minute walk with your child after you pick them up from school at 3:15 p.m.; start your wind-down routine at 10 p.m. since you have to get up at 6 a.m. the next morning to make your flight.”
Thrive AI Health is the latest in a long string of tech industry efforts to create health-focused apps with AI-driven personalization. Many have run up against intractable business, technical, and regulatory hurdles.
IBM’s Watson Health division, launched in 2015, was supposed to analyze reams of medical data — far faster than any human doctor could — to generate insights that could improve health outcomes. The company reportedly spent $4 billion beefing up Watson Health with acquisitions, but the tech proved to be ineffective at best — and harmful at worst.
Elsewhere, Babylon Health, an NHS-partnered health chatbot startup that once promised that it could “automate away” consultations with medical professionals, collapsed after investigations revealed that there was no evidence that the company’s tech worked better than a doctor. Once valued at over $4.2 billion, Babylon filed for bankruptcy in 2023 — ultimately selling off its assets for less than $1 million.
In some cases, AI has been found to perpetuate negative stereotypes within health research and the broader medical community. For example, a recent study showed that OpenAI’s AI-powered chatbot platform, ChatGPT, often answers questions relating to kidney function and skin thickness in a way that reinforces false beliefs about biological differences between Black and white people.
Even trained clinicians can be fooled by biased AI models, another study found — suggesting that the biases may be challenging to root out.
To stave off critics, Huffington and Altman are positioning Thrive AI Health as a more careful, thoughtful approach to health than those that have come before it — a way to “democratize” health coaching and “address growing health inequities” in an ostensibly secure, privacy-sensitive way. The company has named Gbenga Ogedegbe, director of NYU Langone’s Institute for Excellence in Health Equity, as an advisor. It also claims that the research data its products use will be “peer reviewed,” and that users will have the final say over which information Thrive AI Health’s products tap to inform their recommendations.
But if history is any indication, it could prove exceedingly difficult for Thrive AI Health to strike a balance between “democratizing” its tech and preserving patient privacy.
In 2016, it was revealed that Google’s AI division, DeepMind, had been given data on more than a million patients by the Royal Free NHS Trust in London — without those patients’ knowledge or consent — as part of an app development project. More recent wide-scale data breaches, like the UnitedHealth and 23andMe scandals, show the danger inherent in entrusting sensitive health data to third parties.
Perhaps Thrive AI Health will avoid the pitfalls of its rivals and progenitors. It’s likely to be an uphill climb regardless — and closely watched by skeptics.
Content Courtesy – TechCrunch