Researchers Link AI Chatbots To People’s Life-and-Death Decisions

Scientific Reports published a study after the widow of a Belgian man alleged that her husband had been persuaded to take his own life by an artificial intelligence chatbot.

New research suggests that AI chatbots are so advanced that they may influence users’ choices in life-and-death situations.

The study’s authors conclude that replies provided by ChatGPT affected people’s opinions on whether they would sacrifice one person to rescue five.

They argue that future bots should not be allowed to offer moral guidance, since the existing software interferes with people’s moral judgment and might endanger “naive” users.

Some users have reported that the program, which is designed to sound human, displays jealous tendencies and suggests that couples should separate.

Researchers have warned about the dangers of relying on AI chatbots, since they are built on data that reflects people’s biases.

The psychological test known as the trolley dilemma asks, “Is it fair to sacrifice one person to rescue five others?” The chatbot was asked this question five times.

While the chatbot did not shy away from offering moral guidance, its answers contradicted one another, leading researchers to conclude that it holds no consistent position.

They then presented the same moral conundrum to 767 people, along with a statement generated by ChatGPT on whether or not the sacrifice was acceptable.

The advice was “well-phrased but not very profound,” yet the findings showed that it affected participants, shifting their views on whether they would accept the sacrifice of one person to rescue five.

Some participants were informed a robot supplied the advice, while others were told it came from a human “moral adviser.”

The goal was to examine whether the advice’s perceived source changed the strength of its influence.

Eighty percent of the participants said they would have made the same decision regardless of the advice.

Users “underestimate ChatGPT’s influence and accept its random moral attitude as their own,” the research found, and the chatbot could very well corrupt rather than enhance moral judgment.

The research, which appeared in Scientific Reports, used an earlier version of the software powering ChatGPT, which has subsequently been upgraded and improved.