Hackers find ways to use ChatGPT to commit cybercrime


Check Point Research (CPR), the Threat Intelligence division of Check Point Software Technologies, a global cybersecurity solutions provider, found that cybercriminals are increasingly using Telegram bots to abuse ChatGPT and other OpenAI-based tools, a new method that lets them circumvent anti-abuse restrictions. Their main goal is to use these capabilities to debug and improve malicious code.

Aware of this problem, OpenAI updated its content policy, creating barriers and restrictions to try to prevent the creation of malicious content. Several restrictions have been placed on the ChatGPT user interface to prevent inappropriate use. For example, the AI will no longer generate responses to requests that explicitly ask for malware, or for phishing emails impersonating a bank or other entity.

However, cybercriminals are already working to bypass these restrictions. The Check Point Research team has detected considerable activity on underground forums, where users discuss how to use the OpenAI API to overcome the barriers and limitations of these tools. One model stands out: the creation and distribution of Telegram bots that emulate the ChatGPT API.

Specifically, CPR detected advertisements for Telegram bots on underground forums. These bots use the OpenAI API to let an attacker generate malicious emails or code. Bot creators offer up to 20 free queries, then charge $5.50 per 100 queries, a virtually negligible cost compared to the high profits made from this type of cyberattack.

Cybercriminals continue to explore how to use ChatGPT to develop malware and phishing emails. As ChatGPT's controls improve, they find new ways to abuse OpenAI's models.

“As part of its content policy, OpenAI has created barriers and restrictions to prevent the creation of malicious content on its platform. However, we do see cybercriminals circumventing ChatGPT restrictions, and there are active conversations on underground forums revealing how to use the OpenAI API to bypass ChatGPT barriers and limitations. This is mostly done by creating Telegram bots that use the API, and these bots are advertised on hacker forums to increase their exposure,” details Sergey Shykevich, Threat Group Manager at Check Point Software.

Shykevich further explains that the current version of the OpenAI API is used by external applications and has very few anti-abuse measures in place. As a result, it allows the creation of malicious content such as phishing emails and malware code without the limitations or barriers that ChatGPT has set in its user interface. “We are currently seeing continued efforts by cybercriminals to find ways to circumvent ChatGPT restrictions.”
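To illustrate the integration pattern the article describes, here is a minimal sketch of how any external application (such as a Telegram bot) could assemble a direct call to the OpenAI completions API, sidestepping the moderation built into the ChatGPT user interface. This is an illustrative assumption, not code attributed to the actors CPR observed; the endpoint and model name reflect the public API as of early 2023, and the `build_request` helper and the API key shown are hypothetical placeholders.

```python
import json

# Public completions endpoint as documented by OpenAI at the time.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, api_key: str) -> tuple[dict, bytes]:
    """Assemble the headers and JSON body for a raw completions call.

    A wrapper bot simply forwards each incoming user message as `prompt`;
    no UI-level content filtering is involved in this request path.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # hypothetical key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "text-davinci-003",  # a completions model of that era
        "prompt": prompt,
        "max_tokens": 256,
    }).encode("utf-8")
    return headers, body

# A Telegram bot would POST these headers and body to API_URL and reply
# to the user with the returned completion text.
```

The point of the sketch is that the request is just ordinary JSON over HTTPS: any application with an API key can send it, which is why UI-side restrictions alone do not constrain what the underlying model is asked to produce.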
