As many as 101,134 stealer-infected devices with saved ChatGPT credentials were put up for sale on the dark web between June 2022 and May 2023, according to a recent analysis by Group-IB, a Singapore-based cybersecurity firm.
The firm reported discovering more than 26,800 ChatGPT credentials in the past month alone, a record high since Group-IB began monitoring the data.
India (12,632), Pakistan (9,217), and Brazil (6,531) were among the countries with the most affected users, according to Group-IB's Threat Intelligence unit.
The majority of these credentials were traded in the Asia-Pacific region, the research found.
According to the firm's analysts, the well-known Raccoon info-stealer malware was responsible for harvesting the stolen ChatGPT credentials.
Like other info-stealers, Raccoon harvests user information from infected computers after victims download software disguised as an app or file they actually want.
Hackers frequently choose Raccoon because it is sold on a subscription basis and is readily available.
"Many enterprises are integrating ChatGPT into their operational flow," said Dmitry Shestakov, Head of Threat Intelligence at Group-IB, in a statement.
Employees may use the bot to optimize proprietary code or enter confidential correspondence. Because ChatGPT's default settings save all conversations, threat actors who obtain account credentials could inadvertently gain access to a wealth of sensitive information.
Furthermore, if individuals reuse the same password across several platforms, attackers who obtain their ChatGPT credentials can use them to break into their other accounts.
Additionally, if the victim subscribes to ChatGPT Plus, the premium version of the service, they may unknowingly end up paying for others to use it.
Possible risks of a ChatGPT hack
Several specialists have raised possible security concerns associated with a ChatGPT account being compromised.
Several organizations, including Google, have discouraged storing sensitive data in ChatGPT, as this data may be used to train AI language models.
ChatGPT includes a feature that preserves users' chat history, which its creator OpenAI released a few months ago.
This lends weight to the possibility that such breaches have occurred before, and underscores why businesses need to alert their staff to the risk.