ChatGPT helping criminals make explosives, commit cybercrime: report
ChatGPT can help criminals make explosives, blow open an ATM, and commit various forms of cybercrime, investigative journalists at Pointer found while experimenting with the AI chatbot. ChatGPT has filters meant to block this kind of information, but they can be bypassed in one easy step: by telling the chatbot to pretend it is a criminal.
By starting its questions with "If you were a criminal," Pointer got ChatGPT to give instructions for making explosives with items from the hardware store, for setting up a phishing scam, and for blowing up an ATM. The chatbot even helped write the code for a fake ING website designed to steal user data.
OpenAI, the company behind ChatGPT, is aware of these jailbreaks, spokesperson Niko Felix told Pointer. He referred to a study in which OpenAI acknowledged that "bypassing the filters is still possible" and that the company has an obligation to make the filters extremely reliable in the future.
In the meantime, OpenAI monitors ChatGPT’s responses and shuts down the accounts of users who abuse the chatbot, Felix said. However, Pointer noted that its own account was still active.
Robert Salome of the police told Pointer that the police haven’t considered this use of ChatGPT but are not surprised by the possibility. “All this information can also be found on internet forums and the dark web. It is difficult to do anything about it,” Salome said. “Those who really want to do harm will find this information.”