Doctors Think AI Has a Place in Healthcare, But Maybe Not as a Chatbot

Doctors say AI in healthcare has clear benefits, but many do not support its use as a direct patient-facing chatbot. Clinicians believe AI works best when it assists doctors behind the scenes rather than replacing medical judgment.

The biggest concern is accuracy. Doctors warn that AI chatbots can produce confident but incorrect medical information, causing patients unnecessary anxiety or leading them to unsafe conclusions. Accountability also becomes unclear when AI advises patients directly instead of supporting clinicians.

As AI companies respond to these concerns, competition is intensifying around healthcare tools built for clinicians rather than for direct patient advice. While OpenAI refines ChatGPT Health, others are emphasizing professional oversight and safety, including Anthropic's launch of Claude for healthcare, as the race to deploy regulated medical AI accelerates.

Medical professionals prefer AI tools that improve efficiency within clinical settings, including:

  • Automating medical documentation
  • Assisting with diagnostics and imaging
  • Supporting triage and clinical workflows
  • Reducing administrative burden on doctors

Some physicians report patients arriving with AI-generated self-diagnoses that are misleading or irrelevant. In one case, a chatbot gave a patient a pulmonary embolism risk estimate that did not apply to their situation, creating confusion rather than clarity.

Meanwhile, OpenAI is adapting its approach with ChatGPT Health, a version of ChatGPT built for private health discussions. The tool allows users to upload medical records and connect data from Apple Health, though doctors continue to raise concerns about privacy, regulation, and data protection.
