News
Artificial intelligence meets medical records
OpenAI has launched ChatGPT Health, a dedicated section within ChatGPT for managing conversations about health and wellness. Instead of health chats scattered among countless other topics, users get a space specifically designed to handle sensitive health information.
The data that drove this decision is significant: over 230 million people every week ask questions about medical topics, fitness and wellness. OpenAI recognized the need for a more structured and secure environment.
How ChatGPT Health works
ChatGPT Health separates health conversations from standard chats, creating a dedicated area where users can:
- Connect apps and services
Apple Health, MyFitnessPal and other fitness and wellness services can be integrated, allowing the AI to analyze data on physical activity, nutrition, sleep and workouts.
- Upload medical records
In the United States, users can link electronic medical records through partners like b.well to consult tests, results and health documentation. This feature is currently limited by regulatory and technical constraints.
- Obtain analysis and summaries
ChatGPT Health helps interpret reports, identify trends over time and prepare more informed questions to ask the doctor. The value lies in translating and summarizing information, not in medical authority.
Informative assistant, not a doctor
The positioning is clear from launch: ChatGPT Health does not replace healthcare professionals nor provide diagnoses or clinical indications.
OpenAI reiterates this in the terms of service: the tool is designed to help people better understand their health data and navigate often complex information, but not to prescribe solutions or treatments.
The difference is fundamental: it’s about supporting personal awareness, not medical consultation.
The privacy question
Privacy management is one of the project’s pillars. OpenAI states that, by default, ChatGPT Health content is not used to train base models and remains separate from standard chats; limited access may occur for security purposes, and users can manage their choices in settings.
Users maintain total control: they can disconnect apps and services at any time, deciding which data to share and when to revoke access.
It's a choice that aims to strengthen trust in a sector where protecting personal information is crucial. But the question remains: how much can users actually trust it?
The health tech market
With ChatGPT Health, OpenAI decisively enters the consumer health tech market, where competition is not just about technology but also about the ability to become a reliable interface for increasingly sensitive data.
Other tech players have already made similar moves: Google with Fitbit and Apple with Apple Health. But no one had yet tried to integrate conversational artificial intelligence so directly.
Opportunities and limits
ChatGPT Health promises greater accessibility and clarity, but the typical limits of large language models remain: they generate probabilistic responses and can make mistakes.
For this reason, OpenAI insists on conscious use that complements the traditional healthcare system. AI can help people navigate the complexity of the healthcare world, but the line is thin: accessibility on one side, the risk of replacing the doctor-patient relationship on the other.
ChatGPT Health represents a significant evolution: from generalist assistant to specialized tool, capable of accompanying users in understanding their health without replacing the role of professionals.
It’s an interesting experiment that poses important questions: are we ready to share our health data with an artificial intelligence? How much do we trust the promise that this data won’t be used for other purposes?
The answer, as always with technological innovations, will depend on the balance between perceived utility and acceptable risks. And on OpenAI’s ability to keep its promises on privacy.
Would you entrust your health data to an AI?