New report on AI reveals how cybercriminals are using ChatGPT to conduct cyberattacks and write malware

Summary: OpenAI has stopped more than 20 cyberattack campaigns in which threat actors used ChatGPT for phishing, malware development, and misinformation operations.


AI has both positive and negative uses. It is exploited not only for deepfakes but also for serious illegal activity such as cyberattacks. OpenAI's most recent report reveals how its chatbot, ChatGPT, has been misused to create malware and support other malicious activities.

 

According to the report, 'Influence and Cyber Operations: An Update,' hackers use ChatGPT to create malware, write code, launch social engineering campaigns, and carry out post-compromise activities.

 

Cybersecurity firm Proofpoint spotted one of the first signs of the problem in April 2024, when the threat actor TA547 (nicknamed "Scully Spider") was suspected of using an AI-written PowerShell loader to deliver its final payload, the Rhadamanthys infostealer.

 

Then, in September 2024, researchers at HP Wolf Security revealed that threat actors were using AI tools to write scripts used in a multi-step malware infection chain targeting French users.

 

OpenAI has disrupted more than 20 hostile cyber operations abusing ChatGPT since the beginning of 2024, affecting governments and industries across multiple countries.

 

In its analysis, OpenAI confirmed that cybercriminals were misusing ChatGPT, focusing on two threat groups: CyberAv3ngers and SweetSpecter.

 

Cisco Talos first documented the Chinese cyber-espionage group SweetSpecter in November 2023. The group targeted OpenAI directly by sending spear-phishing emails to the personal email addresses of OpenAI employees. The emails carried malicious ZIP attachments disguised as support requests.

 

When opened, the attachments triggered an infection chain that installed the SugarGh0st remote access trojan (RAT). Investigating further, OpenAI found that SweetSpecter was also using ChatGPT accounts for harmful activities such as vulnerability analysis and scripting.

 

The second case concerned CyberAv3ngers, a group connected to the Islamic Revolutionary Guard Corps (IRGC) and affiliated with the Iranian government. The group has a history of attacking critical infrastructure systems in Western countries.

 

OpenAI discovered that CyberAv3ngers used ChatGPT accounts to create custom Bash and Python scripts, obfuscate code, and look up default credentials for widely used programmable logic controllers (PLCs).

 

The group also used ChatGPT to plan its post-compromise activities, learn how to steal passwords on macOS computers, and exploit specific vulnerabilities.

 

To counter these threats, OpenAI has begun terminating the accounts involved and notifying cybersecurity partners of pertinent indicators of compromise (IOCs), such as IP addresses, along with the attack techniques observed.
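On the defensive side, shared IOCs like these are typically consumed by matching them against network or server logs. The following is a minimal, hypothetical sketch of that process; the IOC addresses and log lines are invented placeholders (using RFC 5737 documentation ranges), not real indicators from OpenAI's report.

```python
# Hypothetical sketch: checking firewall/server log lines against a set
# of IP-address IOCs shared by a security partner. All addresses below
# are invented placeholders from RFC 5737 documentation ranges.
import ipaddress
import re

# Example IOC set (placeholder data, not real indicators)
IOC_IPS = {"203.0.113.7", "198.51.100.42"}

# Rough pattern for IPv4-looking tokens; validated properly below
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_ioc_hits(log_lines, ioc_ips):
    """Return (line_number, ip) pairs where a known IOC IP appears."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for candidate in IP_RE.findall(line):
            try:
                ip = str(ipaddress.ip_address(candidate))
            except ValueError:
                continue  # skip malformed tokens such as 999.1.1.1
            if ip in ioc_ips:
                hits.append((lineno, ip))
    return hits

logs = [
    "2024-10-12T08:01:02 ACCEPT src=203.0.113.7 dst=10.0.0.5",
    "2024-10-12T08:01:03 ACCEPT src=192.0.2.10 dst=10.0.0.5",
]
print(find_ioc_hits(logs, IOC_IPS))  # -> [(1, '203.0.113.7')]
```

In practice this kind of matching is handled by SIEM tooling at scale, but the principle is the same: extract candidate indicators from telemetry and compare them against the shared IOC feed.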