The surge in AI popularity has attracted both well-intentioned individuals and those with more sinister motives. Security experts are raising concerns about AI models being turned to malicious ends, so-called “hackbots,” which have become a popular tool among threat actors and are now sold as turnkey, subscription-based services.
On one hand, cybersecurity professionals have recognized the potential of AI-enabled tools to strengthen system defenses; on the other, threat actors have eagerly embraced the same technologies to exploit security vulnerabilities. Recent years have seen a sharp increase in the malicious use of AI applications, prompting security teams to develop effective strategies against AI-related threats.
British cybersecurity specialists likewise consider AI an emerging risk, one that operates in uncharted waters and is constantly evolving.
The UK’s National Cyber Security Centre (NCSC) predicts that the first quarter of 2024 will see more AI-related cybercrime than was recorded in 2022 and 2023. Criminals have already used language models for social engineering, for example producing fake videos or audio recordings of celebrities for phishing schemes or to defeat voice-recognition systems. Vasu Jakkal, corporate vice president of security at Microsoft, addressed the issue at the 2024 RSA Conference: the concern is less the tools themselves than their increasingly widespread availability for password cracking, which has coincided with a tenfold increase in identity-related attacks.
Some researchers have found that specific prompts can push chatbots into actively developing malware. While publicly available services like ChatGPT and Gemini have implemented safeguards against malicious misuse, hackers have managed to bypass many of these protections through sophisticated prompt engineering techniques.
The growing trend of “hackbot-as-a-service” is becoming prevalent in cybercrime. Recent studies indicate that publicly available language models generally fail to exploit software security weaknesses, although OpenAI’s GPT-4 has shown it can produce working exploits for known vulnerabilities. These limitations have likely driven the development of prototype malicious chatbots purpose-built to assist cybercriminals in their activities.
These malicious chatbots are advertised on dark web forums and marketplaces, where cybercriminals can rent them to support attacks, fueling the hackbot-as-a-service model. Trustwave SpiderLabs published a blog post in August 2023 highlighting the growing number of malicious language models hosted on hidden web message boards for profit.
According to Trustwave, WormGPT, one of the best-known malicious language models used by hackers, first appeared in June 2021. These chatbots are offered through infrastructure hosted on the dark web and are used to support cyber attacks. Another malicious language model, FraudGPT, was discovered by threat researchers at Netenrich in July 2023 on dark web marketplaces before it appeared on Telegram.
These tools enable attackers to create assets for social engineering attacks, such as phishing emails, deepfakes, and cloned voices. Their creators, however, claim that the real value lies in vulnerability exploitation: hackers can feed code related to a specific vulnerability into these models, which may produce several proof-of-concept (PoC) exploits for attackers to test.
These products are sold on covert dark web markets, where hackers pay a monthly license fee to use the hackbot, much like the ransomware-as-a-service (RaaS) model that underpins much of the ransomware activity companies face today.
While WormGPT was the first large-scale malicious language model, other unethical models like BlackHatGPT, XXXGPT, and WolfGPT quickly emerged, carving out a new segment of the cyber black market.
The effectiveness of hackbots as a threat is still debated. Trustwave’s researchers set out to test the advertised tools by comparing their output with that of legitimate chatbots, and found that ChatGPT could generate Python malware when given the right prompts. However, the code required further modification before it could be deployed, and the prompt had to claim the request was for white-hat purposes before the model would comply.
Similarly, ChatGPT can produce realistic phishing messages, but only when the instructions are specific and the user explicitly asks for them. Hackbots therefore appeal to cybercriminals chiefly as a shortcut: a simpler way to mount attacks than the tedious work of building phishing pages or writing malware themselves.
While this industry is still new and the threats are constantly evolving, companies must have a clear picture of their current level of protection. The content-generation capabilities of AI systems can be exploited to spread disinformation, a gap that can only be addressed by developing robust AI security programs and identity management tools.
Whether existing solutions can keep pace with this growing problem is still debated. Recent ransomware attacks have shown that cybercriminals can match, if not outpace, the advances made by software developers.