Facebook has also become a place where too many people get their health information, even though misinformation on many subjects outnumbers valid information there. Just because someone says something loudly does not make it true.
Another terrible place to get health information is from “influencers” on TikTok or Instagram, most of whom are paid to push a product or service.
I have long advocated that people seeking valid health information turn to trusted sites maintained by non-profits or major health systems.
For information about vaccines, Google “Vaccine Education Center,” a site maintained by The Children’s Hospital of Philadelphia. For women’s health issues, go to ACOG.org, the site of the American College of Obstetricians and Gynecologists, and click on For Patients. For children’s health, go to HealthyChildren.org, a site maintained by the American Academy of Pediatrics. For general health questions, there is a wealth of good material at clevelandclinic.org under the Health Library tab.
The newest way to get help with health-related questions is to use one of the Large Language Model (LLM) “chatbots,” such as Gemini, Claude or ChatGPT. Use of these programs has exploded, and health questions are among the top inquiries they handle.
Before getting into specific suggestions, it is crucial to remind you that these programs do not think; they apply statistical methods to generate relevant text in response to queries. You can think of them as “autocorrect on steroids.” Just as your phone or email program usually anticipates what you are going to type and fills it in, LLMs usually give helpful responses. They can also be wildly off base, because they have no “common sense” with which to check their replies.
If you are asking a chatbot for advice, be as specific as possible; the more detail you give, the better the response. Don’t say “I have a cough. What could it be?” Say: “I am a healthy 32-year-old with 3 days of a dry cough, a mild sore throat and a fever of 99 to 100.”
Ask the bot what more it needs to know. A good follow-up to its initial response is: “What else do you want to ask me to help answer my question?”
Remember that chatbots want to please. They will always give you an answer, even if they must invent something. So-called “hallucinations” are a very real phenomenon.
They will also sound confident in their responses when they should not be. If something seems odd, ask for the source of the information, and check that source.
Do not rely solely on an LLM. These tools are best used as the start of a health information search, not the end. They can prime you for what to ask your doctor and give you alternatives to consider.
Chatbots are great at retrieving information, but they are not health professionals and they do not know anything about you beyond what you tell them.
Unless you are comfortable having your personal information stored and potentially used by the company, use the "incognito" or temporary chat mode that most LLMs offer.
