Meta launches Muse Spark AI, lets users upload health data amid privacy concerns

Wired AI

Key Points

  • Meta released Muse Spark, a generative AI model that can analyze personal health data.
  • The tool is currently available through the Meta AI app and will expand to Facebook, Instagram and WhatsApp.
  • Meta says over 1,000 physicians helped curate the training data for more accurate medical responses.
  • Experts warn the service is not HIPAA‑compliant and may retain data for future AI training.
  • Doctors expressed nervousness about uploading sensitive biometric information to the platform.
  • Muse Spark generated an extreme low‑calorie diet plan, highlighting risks of unsafe recommendations.
  • Meta positions the AI as an educational aide, not a replacement for a physician.

Meta's Superintelligence Labs has rolled out Muse Spark, a new generative AI model that can analyze users' personal health information, via the Meta AI app. The company says the tool was trained with input from more than 1,000 physicians and will soon appear on Facebook, Instagram and WhatsApp. Health experts warn that the service is not HIPAA‑compliant, may retain data for future training and could expose sensitive information, raising serious privacy and safety questions.

Meta's Superintelligence Labs this week introduced Muse Spark, a generative AI model that promises to crunch users' health data and offer personalized insights. The feature debuted inside the Meta AI app and is slated for integration across the company's flagship platforms, including Facebook, Instagram and WhatsApp, within weeks.

Unlike generic chatbots, Muse Spark asks users to "paste your numbers from a fitness tracker, glucose monitor, or a lab report." It then claims to calculate trends, flag patterns and generate visualizations. The model can, for example, take a series of blood‑pressure readings and highlight potential concerns, according to Meta's rollout announcement.

Meta highlighted a collaboration with more than 1,000 physicians to curate the training data that powers Muse Spark, saying the medical input makes the bot's answers "more factual and comprehensive." The company positions the tool as an educational aide rather than a substitute for a physician.

The ability to upload personal biometric data is not unique to Meta. OpenAI's ChatGPT and Anthropic's Claude both offer health‑focused modes that can pull data from users' Apple or Android health apps, while Google lets Fitbit users share medical information with its AI health coach.

Experts caution that these conveniences come with significant privacy risks. "The more information you give it, the more context it has about you and, potentially, the better the responses," noted Monica Agrawal, an assistant professor at Duke University and co‑founder of the HIPAA‑compliant platform Layer Health. "But on the flip side, there are major privacy concerns to sharing your health data without protections." Muse Spark operates under Meta's standard privacy policy, which allows retained data to be used for future model training and could inform targeted advertising.

Medical professionals voiced unease about the service. Gauri Agarwal, a doctor and associate professor at the University of Miami, said she would not connect her own health records to a system she cannot fully control. "These chatbots now allow you to connect your own biometric data, and honestly, that makes me pretty nervous," she warned.

Critics also pointed to the model's potential to provide dangerous advice. When prompted for an extreme intermittent‑fasting regimen, Muse Spark generated a 500‑calorie‑per‑day plan, a recommendation that could trigger malnutrition or exacerbate eating disorders.

Meta responded that Muse Spark is intended for educational purposes only, likening its role to that of a medical school professor rather than a practicing doctor. The company emphasizes that users should share only the data they are comfortable exposing and that the tool includes prompts to strip personal identifiers before analysis.

As the AI health market expands, regulators and clinicians alike urge caution. Kenneth Goodman, founder of the Institute for Bioethics and Health Policy, said he would need solid research demonstrating health benefits before endorsing such tools. Until clearer safeguards emerge, users are advised to treat Muse Spark as a supplemental resource, not a replacement for professional medical care.

