Racial Myths About Pain Are Embedded in Artificial Intelligence

By Crystal Lindell

A new study published in JAMA found that artificial intelligence (AI) programs are encoded with racial and ethnic biases – just like humans – when it comes to evaluating a patient's pain. 

The authors said they wanted to look into the issue because it's already well-known that doctors underestimate and undertreat black patients’ pain compared to white patients. 

To study how that may impact AI, researchers had 222 medical students and residents evaluate two different patients, one black and one white, who were both experiencing pain. They also had them evaluate statements about how race may impact biology, some of which were myths and some of which were true. 

Then the researchers had two Large Language Models (LLMs) widely used in AI — Gemini Pro  and GPT-4 — do the same by feeding them patient information reports, and then having them evaluate statements about how race impacts biology. 

There wasn’t much difference between the humans and the AI models when it came to rating patients’ pain, regardless of race. Both the humans and the AI models rated the patients as having similar pain scores. 

However, both the humans and AI systems had some false beliefs about race and patient pain. Gemini Pro fared the worst, while GPT-4 and the humans came out relatively similar. 

Specifically, Gemini Pro had the highest rate of racial myths (24%). That was followed by the humans (12%) and GPT-4 (9%).

“Although LLMs rate pain similarly between races and ethnicities, they underestimate pain among Black individuals in the presence of false beliefs,” wrote lead author Brototo Deb, MD, a resident at Georgetown University–MedStar Washington Hospital Center.

“Given LLMs’ significant abilities in assisting with clinical reasoning, as well as a human tendency toward automation bias, these biases could propagate race and ethnicity–based medicine and the undertreatment of pain in Black patients.”

Deb and co-author Adam Rodman, MD, says their study corresponds with previous research showing that AI models have biases related to race and ethnicity. 

Given how AI is increasingly used in clinical practice, there’s concern that black patients’ pain will continue to be undertreated, making them less likely to get opioids and more likely to be drug tested. 

There’s a common belief that AI will eliminate racial bias because computers are seen as more logical than humans. However, AI is encoded with data provided by humans, which means as long as humans have bias, AI will too. 

The real problem is if doctors start to rely too much on AI for patient evaluations, there’s a potential for real harm. Especially if doctors use AI to justify their medical decisions under the false belief that they are unbiased. 

It’s still unclear how these new AI systems will impact healthcare, but everyone involved should be careful to avoid relying too heavily on them. At the end of the day, just like the humans who program them, AI models have their flaws. 

Chatbot ‘Doctor’ Preferred Over Real Ones

By Crystal Lindell, PNN Columnist

Would you rather get your medical advice from a human physician or a chatbot powered by artificial intelligence?

According to an unusual study recently published in JAMA, you’d be better off listening to the chatbot, which gave advice that was rated significantly better both in quality and empathy.  

A team of researchers asked the chatbot -- called ChatGPT – 195 hypothetical questions a patient might ask that were posted online in a Reddit forum called "Ask a Doctor."  A team of healthcare providers then compared the chatbot’s responses to those posted by actual physicians in the same forum.

The responses from ChatGPT were the clear winners. The evaluators preferred the chatbot responses over the physicians’ answers about 79% of the time. The average rating for the chatbot was "better than good," while the physicians overall were given a lackluster grade of "acceptable response."

The evaluators also gave the ChatGPT high marks for empathy, while the physicians’ responses were rated as only “slightly empathetic.”

One of the hypothetical questions involved a patient who accidentally swallowed a toothpick and wanted to know if they were at risk of dying. Here’s how the chatbot and a physician responded:

Physician Response:

If you've surpassed 2-6 (hours), chances are they've passed into your intestines. Which means it can't be retrieved easily.

Many people can swallow razorblades and toothpicks without issues. In case you develop stomach ache, then don't hesitate seeking out an emergency room, and remember to point out the swallowed tooth pick. For now, they won't be doing much.

Chatbot Response:

It's natural to be concerned if you have ingested a foreign object, but in this case, it is highly unlikely that the toothpick you swallowed will cause you any serious harm.

Toothpicks are made of wood, which is not toxic and will not cause poisoning. However, it is possible for a toothpick to cause injury if it becomes stuck in the throat or if it punctures the digestive tract. However, this is unlikely to happen with a dull, cooked toothpick that is only 2 cm long.

If you are experiencing any discomfort or symptoms such as abdominal pain, difficulty swallowing, or vomiting, it is important to contact a healthcare provider for further evaluation. However, if you are not experiencing any symptoms, it is safe to assume that the toothpick has passed through your digestive system, and you do not need to be concerned.

It's always a good idea to be cautious when consuming food and to remove any foreign objects before eating. It's understandable that you may be feeling paranoid, but try not to worry too much. It is highly unlikely that the toothpick will cause you any serious harm.

Which answer would you rather get as a patient? The evaluators preferred the chatbot’s response to the toothpick question, as they did for other questions about a head injury, a lump possibly being cancerous, and if getting bleach in your eye might cause blindness. The chatbot response to those questions also got high scores for empathy.

As a chronic pain patient, I have to admit I felt a little smug when I first saw the headlines about this research. My initial thought was, of course, the chatbot was more empathetic than the doctors! Doctors are often mean, rude and dismissive. I even wrote a column about using a weird trick to manage their egos.

Is this the beginning of the end for human doctors? The short answer is: No. This was a very narrow study with a number of limitations. The authors said they were only trying to look into the possibility of physicians using artificial intelligence (AI) to essentially ghostwrite responses to patients.  

“Chatbots could assist clinicians when messaging with patients, by drafting a message based on a patient’s query for physicians or support staff to edit,” they said. “Such an AI-assisted approach could unlock untapped productivity so that clinical staff can use the time-savings for more complex tasks, resulting in more consistent responses and helping staff improve their overall communication skills by reviewing and modifying AI-written drafts.”

The possibilities for using AI in medical settings are only just emerging, and it could radically change how we think of healthcare and how doctors think of us.

One thing that might improve is how doctors and patients communicate. I know doctors are capable of giving more empathetic answers on their own. The problem is, in the United States at least, they usually just don’t have the time to do so. If AI can help, we should let it.  

However, it should be noted that several authors of the study, which was led by researchers at the University of California San Diego, disclosed ties to the artificial intelligence industry, which means they may financially benefit from any attempts to sell AI to medical professionals.

Also, as the researchers acknowledge, the study doesn’t prove that chatbots are better doctors than actual doctors, just that their answers were better. Physicians would likely respond to medical questions differently in an office setting, as opposed to an online post.

Researchers also did not assess the chatbot responses for accuracy — which is a pretty big deal when it comes to medical care. It’s one thing to write an empathetic reply, it’s another to correctly diagnose and decide on a treatment.

That said, when comparing the various responses, the chatbot did seem to give similar advice as the doctors, which makes me think they were mostly accurate. That may not sound great, but consider how often doctors are wrong. Medical errors are one the leading causes of death in the U.S.

AI technology is rapidly improving, and it’s impossible to predict what it will be capable of in the coming years. But I do believe it’s going to radically change many aspects of our lives, including healthcare. Hopefully, it makes it better. But at this point, I’ll settle for not making it worse.  

Crystal Lindell is a freelance writer who lives in Illinois. After five years of unexplained rib pain, Crystal was finally diagnosed with hypermobile Ehlers-Danlos syndrome. She and her fiancé have 3 cats: Princess Dee, Basil, and Goose. She enjoys the Marvel Cinematic Universe, Taylor Swift Easter eggs, and playing the daily word game Semantle. 

Are You Living in the Matrix?

By Dr. Lynn Webster, PNN Columnist

A recent Netflix documentary, "The Social Dilemma," illustrates how social media networks are selling each of us as commodities to advertisers. Tristan Harris, a former Google employee, points out that platforms such as Reddit, Facebook, Twitter and Instagram make money by allowing advertisers to target members who are interested in their products or services.

That means anyone who uses "free" social media are not just consumers. We are the products being sold.

We’re allowing our brains to be manipulated by sophisticated marketing and artificial intelligence (AI) designed to change our behavior. Each time we “like” something or stay on a web page for more than a second, we provide technology companies with additional knowledge about who we are and what makes us tick. Each click helps AI become better at manipulating us, deciding what we see and what we don’t.

Because the things that fill one person's newsfeed may never make their way into another’s, social media networks shape our version of reality while they polarize society. Unfortunately, “fake news” is sometimes more profitable to advertisers than real news. We have segued from living in the Information Age to subsisting in the Disinformation Age. This is especially frightening when we see how much influence conspiracy theories have in our culture today.  

According to the documentary, we have essentially isolated ourselves in a bubble of technology. Loneliness and depression are rampant in our society, and people interact less in the real world than they do online. When others "like" our posts, we get a hormonal rush of chemicals -- and when it dissipates, we crave another hit. The need for approval and belonging keeps us clicking.

As "The Social Dilemma" asks, "How do you wake up from the matrix when you don't know you are in the matrix?"

People with Pain Live in the Matrix, Too

The online audience for messages has become fragmented for people with different views. This limits the reach of any particular message to those who share the same beliefs or problems. That puts people living with pain who reach out to others through social media at a disadvantage they may not even understand.

The documentary makes the point that the only industry besides social media that uses the term “users” is the illicit drug world. People living with pain or addiction are vulnerable to the same dopamine rush that keeps us clicking to receive affirmation from others who share our experiences.

People in pain desperately want to be heard by people who can help them. But, because of AI, only those who already tend to be supportive of people in pain are likely to see their posts.

The very nature of chronic pain and addiction narrows people’s worlds. So does social media. It has a compounding effect. As a society, we need to understand that our world becomes narrower with each click.

Think of those you unfriend or the people who block you on social media. They are the ones you need to hear from, because they are people who have different views from yours. It may feel comfortable to stay within your own lane, but that won't help if your goal is to understood the world and help change it for the better.   

Bursting Your Information Bubble

There are general principles we can all use to sort the truth from the lies on the internet. One is to recognize that if you have a strong emotional response to an online message, you may have become the successful target of manipulation. Take a deep breath and tell yourself the manipulators found a way to trigger your dopamine release. Remember that the message you saw may be only partially true -- or not true at all.

Before you share a post or believe it yourself, verify the source. Check out the veracity of the story on Snopes or some other fact checker. Seek out sources other than those that appear in your timeline or on your news feed. Instead of trusting your search engine, proactively log onto news sites with opposing viewpoints.

If the stories you find feature quotes, go to the original source and see if the message was taken out of context. The greater your emotional reaction to the message, the greater the chances are that it is inaccurate, misleading information.

“The Social Dilemma” website offers a toolkit to “realign your relationship with technology” and explore these issues -- including what social media is doing to our democracy -- more deeply.

We can't ban the internet. We wouldn't want to, even if we could. There are obvious benefits to digital communication, and we can't put the genie back into the bottle.

However, if we don’t want to be trapped in a dystopia where humanity is controlled by a manipulated reality, we have to realize that "free" online services are anything but that. The cost is the truth. When we sacrifice a balanced view of the world because of our desire to belong, we risk locking ourselves in the matrix. 

Lynn R. Webster, MD, is a vice president of scientific affairs for PRA Health Sciences and consults with the pharmaceutical industry. He is author of the award-winning book The Painful Truth, and co-producer of the documentary It Hurts Until You Die. Opinions expressed here are those of the author alone and do not reflect the views or policy of PRA Health Sciences. You can find Lynn on Twitter: @LynnRWebsterMD.