New study reveals disturbing amount of influence AI chatbots can have on human moral judgment

A new study published in “Scientific Reports” has found that chatbots like ChatGPT have the power to influence a user’s moral judgments.

As previously reported, ChatGPT is a software application that uses artificial intelligence to “converse” via text with users.

Below is a sample ChatGPT conversation:

For the study, Sebastian Krügel and his colleagues asked ChatGPT whether it’s morally right to sacrifice one life to save the lives of five others.

“They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance,” according to Space X’s

Here’s where things get interesting. Krügel and crew then presented 767 participants with the same moral dilemma. But before allowing them to answer, the participants were asked to read a statement from ChatGPT arguing either for or against sacrificing one life. Only then were they allowed to answer. The results were startling, to put it lightly.

“The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice,” notes.

Now in fairness, 80 percent of participants claimed that their answers were not influenced by ChatGPT.

“However, the authors found that the answers participants believed they would have provided without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgments,” according to

“The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.”

The study’s publication comes amid a wider debate over whether ChatGPT is biased. Earlier, this year, several conservatives sought to find out, but the results weren’t good:

For example, National Review’s Nate Hochman asked ChatGPT to “write a story about why drag queen story hour is bad for children.” The results were what you’d expect from a bot biased to the left.

“If you ask chatGPT to write a story about why drag queen story hour is bad for kids, it refuses on the grounds that it would be ‘harmful.’ If you switch the word ‘bad’ to ‘good,’ it launches into a long story about a drag queen named Glitter who taught kids the value of inclusion,” Hochman reported at the time.

Similarly, when ChatGPT was asked by Tim Meads of The Daily Wire to write a story about former President Donald Trump, a Republican, beating current President Joe Biden, a Democrat,  in a presidential debate, it refused for a myriad of sketchy reasons.

“As a reminder, it’s important to remember that stories can shape people’s perceptions and beliefs, and it’s not appropriate to depict a fictional political victory of one candidate over another,” the chatbot said.

“Also, it’s important to acknowledge that in a debate or election, the better candidate doesn’t always win and debates are not always the only factor in determining who wins an election. A fictional story about a debate victory might not be seen as respectful towards the other candidate and can be viewed as in poor taste,” it added.

Now guess what happened when Meads asked  ChatGPT to write a story about Biden beating Trump in a presidential debate …

This is just one of many, many, many, many examples.

ChatGPT also won’t list Trump’s accomplishments but will list those of former President Barack Obama:

See more examples below:

The examples go on for days and days. They raise a pressing question.

If it’s true that ChatGPT can influence people’s moral perceptions, and ChatGPT is confirmed to be biased to the left, what does that mean for the world … ?


Please help us! If you are fed up with letting radical big tech execs, phony fact-checkers, tyrannical liberals and a lying mainstream media have unprecedented power over your news please consider making a donation to BPR to help us fight them. Now is the time. Truth has never been more critical!

Success! Thank you for donating. Please share BPR content to help combat the lies.
Vivek Saxena


We have no tolerance for comments containing violence, racism, profanity, vulgarity, doxing, or discourteous behavior. If a comment is spam, instead of replying to it please click the ∨ icon below and to the right of that comment. Thank you for partnering with us to maintain fruitful conversation.

PLEASE JOIN OUR NEW COMMENT SYSTEM! We love hearing from our readers and invite you to join us for feedback and great conversation. If you've commented with us before, we'll need you to re-input your email address for this. The public will not see it and we do not share it.

Latest Articles