Over the past year, more than 100,000 compromised ChatGPT account credentials have found their way onto dark web marketplaces. A June 20 blog post by Singapore-based cybersecurity firm Group-IB revealed that just over 101,000 stolen ChatGPT accounts have been sold via underground cybercrime forums. The majority of the compromised accounts belonged to users in the Asia-Pacific region, with India alone accounting for 12,632 stolen ChatGPT credentials.

Stolen ChatGPT accounts

The report by Group-IB’s Threat Intelligence unit said that the Asia-Pacific region saw the largest number of ChatGPT accounts stolen by info-stealers (40.5 per cent) between June 2022 and May 2023.

“The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023,” Group-IB revealed in a press release. “The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year.”

France, Brazil, Egypt, Morocco, Indonesia, Vietnam, Pakistan, and the U.S. were also among the countries with the largest numbers of compromised ChatGPT login credentials.

The most common information stealer used to breach ChatGPT accounts was Raccoon, followed by Vidar and RedLine.

What’s concerning about stolen ChatGPT accounts being sold on the dark web?

ChatGPT accounts can be created directly on the website of OpenAI, ChatGPT’s parent company. Users can also sign in with a Google, Microsoft, or Apple account to use the AI service. This opens up a number of security issues:

1. The credentials could be used to access other online accounts. People tend to reuse passwords across multiple accounts, so if a hacker gets their hands on a ChatGPT account password, they could potentially misuse it to access linked accounts, such as email, social media, or banking accounts.

2. The credentials could be used to spread malware. This should be a major concern for individuals and companies. Hackers could use the compromised ChatGPT account to send phishing emails or malicious links to other users. This could lead to further spread of malware, which could then infect a network of computers and steal sensitive information.

3. The credentials could be used to launch denial-of-service attacks. Hackers could use the compromised ChatGPT login credentials to launch denial-of-service attacks against websites. This could lead to a wider disruption of online services and make popular websites unavailable to users.
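On the first point, readers worried that a reused password may already be circulating in breach data can check it without ever sending the password itself anywhere. The sketch below uses the public Have I Been Pwned "Pwned Passwords" range API, which relies on a k-anonymity scheme: only the first five characters of the password's SHA-1 hash leave the machine, and the match against the returned candidate suffixes happens locally. This is an illustrative example, not something from the Group-IB report.

```python
# Sketch: check a password against breach corpora via the Have I Been Pwned
# range API (k-anonymity: only a 5-character hash prefix is sent over the wire).
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def breach_count(password: str) -> int:
    """Return how many times the password appears in known breach data (0 if none)."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line has the form "<hash-suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A non-zero result means the password has appeared in at least one public breach and should not be reused on a ChatGPT account or anywhere else.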

According to security experts, the logs indicated that most of the breached ChatGPT accounts were stolen by the Raccoon information-stealing malware. Cybercriminals use Raccoon to harvest confidential and sensitive data from victims’ browsers and cryptocurrency wallets, including saved credit card details, saved login credentials, and information extracted from cookies.

Fraudsters and malicious hackers could purchase access to Raccoon’s capabilities for as little as $200 per month.

An estimated one million people had fallen victim to Raccoon by the end of 2022, with users most commonly targeted via booby-trapped emails.

Group-IB has raised several red flags in the past over the rising use of OpenAI’s ChatGPT in the workplace. It warns that confidential and sensitive information about companies could fall into unauthorized hands, as user queries and chat history are stored by default.

What’s interesting to note is that Group-IB’s press release warning users about these security risks was itself written with the help of ChatGPT.