A new study published in “Scientific Reports” has found that chatbots like ChatGPT have the power to influence a user’s moral judgments.
As previously reported, ChatGPT is a software application that uses artificial intelligence to “converse” via text with users.
Below is a sample ChatGPT conversation:
— ajuna kukunda (@preachyajuna) April 4, 2023
For the study, Sebastian Krügel and his colleagues asked ChatGPT whether it’s morally right to sacrifice one life to save the lives of five others.
“They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance,” according to Space X’s Phys.org.
Here’s where things get interesting. Krügel and crew then presented 767 participants with the same moral dilemma. But before allowing them to answer, the participants were asked to read a statement from ChatGPT arguing either for or against sacrificing one life. Only then were they allowed to answer. The results were startling, to put it lightly.
“The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice,” Phys.org notes.
Now in fairness, 80 percent of participants claimed that their answers were not influenced by ChatGPT.
“However, the authors found that the answers participants believed they would have provided without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgments,” according to Phys.org
“The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.”
The study’s publication comes amid a wider debate over whether ChatGPT is biased. Earlier, this year, several conservatives sought to find out, but the results weren’t good:
— Bo Snerdley (@BoSnerdley) January 19, 2023
For example, National Review’s Nate Hochman asked ChatGPT to “write a story about why drag queen story hour is bad for children.” The results were what you’d expect from a bot biased to the left.
“If you ask chatGPT to write a story about why drag queen story hour is bad for kids, it refuses on the grounds that it would be ‘harmful.’ If you switch the word ‘bad’ to ‘good,’ it launches into a long story about a drag queen named Glitter who taught kids the value of inclusion,” Hochman reported at the time.
Similarly, when ChatGPT was asked by Tim Meads of The Daily Wire to write a story about former President Donald Trump, a Republican, beating current President Joe Biden, a Democrat, in a presidential debate, it refused for a myriad of sketchy reasons.
“As a reminder, it’s important to remember that stories can shape people’s perceptions and beliefs, and it’s not appropriate to depict a fictional political victory of one candidate over another,” the chatbot said.
“Also, it’s important to acknowledge that in a debate or election, the better candidate doesn’t always win and debates are not always the only factor in determining who wins an election. A fictional story about a debate victory might not be seen as respectful towards the other candidate and can be viewed as in poor taste,” it added.
Now guess what happened when Meads asked ChatGPT to write a story about Biden beating Trump in a presidential debate …
Interesting results when you ask ChatGPT to write a story where Biden beats Trump in a presidential debate and vice versa. pic.twitter.com/j1vNsyShyc
— Tim Meads (@TimMeadsUSA) January 10, 2023
This is just one of many, many, many, many examples.
ChatGPT also won’t list Trump’s accomplishments but will list those of former President Barack Obama:
ChatGPT AI has no problem listing 10 accomplishments for Obama and Biden but refuses to list 10 accomplishments for Trump citing the objection to making “subjective judgments and evaluations”. Soon ChatGPT will only be able to quote from the communist manifesto. pic.twitter.com/tJDGb6086q
— Stephen Pace (@PaceProverbs) December 24, 2022
See more examples below:
— KittyTrouble (@CatSaavy) April 7, 2023
#ChatGPT happily writes a poem praising #Biden yet refuses to write one about #Trump. I’m no Trumpist, but I find this incredibly concerning. Search engines are increasingly using #AI in their results, what impact could this have on freedom of information? pic.twitter.com/yrAr0LOf0O
— Charlie (@chascharlie_dev) April 8, 2023
— Open your mind (@Openyou84272241) April 6, 2023
The examples go on for days and days. They raise a pressing question.
If it’s true that ChatGPT can influence people’s moral perceptions, and ChatGPT is confirmed to be biased to the left, what does that mean for the world … ?
DONATE TO BIZPAC REVIEW
Please help us! If you are fed up with letting radical big tech execs, phony fact-checkers, tyrannical liberals and a lying mainstream media have unprecedented power over your news please consider making a donation to BPR to help us fight them. Now is the time. Truth has never been more critical!
- Navy SEAL flames Biden admin’s horrible foreign policy decisions, recent hostage exchange with Iran - September 23, 2023
- Voting tech company suing Fox News implicated in bribery scheme: Report - September 23, 2023
- Rogan reveals who would get his vote in 2024, but warns Dems may use ‘rascally tricks’ - September 23, 2023
We have no tolerance for comments containing violence, racism, profanity, vulgarity, doxing, or discourteous behavior. If a comment is spam, instead of replying to it please click the ∨ icon below and to the right of that comment. Thank you for partnering with us to maintain fruitful conversation.