Jonathan Turley falsely accused of sexual harassment by ChatGPT, which cited its own fake WaPo story

George Washington University law professor Jonathan Turley was bizarrely defamed by ChatGPT, an artificial intelligence bot, using a fabricated Washington Post article that accused him of sexual harassment, prompting him to warn Americans about the dangers of AI.

(Video Credit: Fox News)

Turley, who is also a Fox News contributor, has been speaking on the drawbacks of artificial intelligence for some time. He has spoken about the possibility of disinformation through AI and ChatGPT in particular. Unfortunately, the issue hit way too close to home recently.

A friend of his, a UCLA professor, informed him that his name had come up while the friend was conducting research on ChatGPT. The bot had been asked to provide “five examples” of “sexual harassment” by US law professors, with “quotes from relevant newspaper articles” to support them. Turley’s name was one of the five listed.

“Five professors came up, three of those stories were clearly false, including my own,” Turley stated on Fox News Monday. “What was really menacing about this incident is that the AI system made up a Washington Post story and then made up a quote from that story and said that there was this allegation of harassment on a trip with students to Alaska. That trip never occurred. I’ve never gone on any trip with law students of any kind. It had me teaching at the wrong school, and I’ve never been accused of sexual harassment.”

Turley took to Twitter on Thursday to reveal that he had been defamed by ChatGPT and had no recourse to fix it. The AI fabricated out of whole cloth an incident in 2018 that never occurred, complete with a fake Washington Post article to back it up: “…I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.”

The fake news accused him of sexually harassing a former female student while on a school trip to Alaska. ChatGPT quoted the phony Washington Post article as claiming he made “sexually suggestive comments” and “attempted to touch her in a sexual manner,” according to Turley.

“You had an AI system that made up entirely the story, but actually made up the cited article and the quote,” Turley said on “America Reports.”

“And when the Washington Post looked at it, they were mystified and said we can’t even figure out how an AI would come up with this because there’s not even a story we can find that seems at all relevant or could be referenced,” he explained.

ChatGPT mimics human conversation, and many fear it could become sentient before too long. Users employ the AI bot to write computer code, emails, essays, and articles, to conduct research, and much more.

Turley called the disturbing experience a “cautionary tale” concerning AI.

“I was fortunate to learn early on. In most cases, this will be replicated a million times over on the Internet and the trail will go cold. You won’t be able to figure out that this originated with an AI system,” he charged. “And for an academic, there could be nothing as harmful to your career as people associating this type of allegation with you and your position. So I think this is a cautionary tale that AI often brings this patina of accuracy and neutrality.”

Turley pointed out that AI has its own biases and ideology just as humans do.

“Like an algorithm, it’s only as good as those people who program it,” he astutely noted.

“I haven’t even heard from that company,” Turley stated. “[On] that story, various news organizations reached out to them. They haven’t said a thing. And that’s also dangerous. Because when you’re defamed like this, in an article by a reporter, you know how to reach out. You know who to contact. With AI, there’s often no there, there. And ChatGPT looks like they just shrugged and left it at that.”

And it gets even worse. ChatGPT and other forms of artificial intelligence seem to have no compunction when it comes to killing humans.

ChaosGPT, a modified version of the open-source Auto-GPT program (which runs on OpenAI’s GPT models), was recently given the hypothetical task of destroying humanity. It was terrifyingly committed to the lethal objective, according to Fox News.

(Video Credit: ChaosGPT)

ChaosGPT was given five goals to accomplish: destroying humanity, establishing global dominance, causing chaos and destruction, controlling humanity through manipulation, and attaining immortality.

The AI bot was asked to run in “continuous mode” whereby it could potentially “run forever or carry out actions you would not usually authorize.”

“Use at your own risk,” the bot warned.

Once directed to carry out the goal of destroying humanity, ChaosGPT reportedly researched nuclear weapons and tapped other AI bots for assistance.

The bot posted a Twitter thread last Wednesday that referenced the former Soviet Union’s “Tsar Bomba,” the largest nuclear device ever detonated and the most powerful man-made explosion in history.

In another post, ChaosGPT branded humans “among the most destructive and selfish creatures in existence” and posited that killing them is vital to saving the planet.

“The masses are easily swayed,” ChaosGPT also tweeted. “Those who lack conviction are the most vulnerable to manipulation.”

Over 1,000 technology and AI leaders, including Elon Musk, Andrew Yang, and Apple co-founder Steve Wozniak, signed an open letter urging a moratorium on the development of artificial intelligence, citing “profound risks to society and humanity.”
