OPINION | How hackers can weaponise generative AI
For decades, the cybersecurity ecosystem has grappled with a thorny problem: the dual-use nature of new technologies.
Everything from the internet to encryption has delivered connectivity and cutting-edge capabilities like never before, but each advance has also handed malicious actors a new playing field.
Now, the breakneck and largely unregulated development of generative AI, led by large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, is adding yet another alarming dimension to the cybercrime equation.
The forbidden fruit once reserved for skilled nation-state hackers and highly sophisticated criminal syndicates is now within reach of a far broader pool of actors, less skilled but, at times, geopolitically more dangerous.
The question very soon won’t be if hackers are weaponising generative AI, but how extensively and how quickly they are doing it.
Lowering the barrier to entry for cybercrime
Generative AI has given humans unprecedented powers, to say the least. Its ability to produce human-like text, code, and even multimedia is a game-changer, and nowhere more so than on the darker side of the internet.
Why, you may ask? Because the technology has lowered the technical bar for a range of illicit activities even further than it stood a decade ago, or even five years ago, turning would-be “script kiddies” into far more formidable threats.
Hyper-realistic phishing and social engineering
The first lesson in any corporate “Cybersecurity 101” training session used to be spotting the poorly translated email riddled with grammatical errors, the classic hallmark of a phishing attempt. Those days are gone.
Generative AI is helping cybercriminals craft highly convincing and, more importantly, contextually relevant phishing emails, spear-phishing messages, and even complex social engineering scripts.
Your social profiles are a treasure trove for cybercriminals in the age of AI. LLMs can process all your publicly available data within seconds; all a cyber crook needs to do is feed in links to your social media profiles or, in some cases, just a target’s name.
Once the model has studied that data, it can craft messages tailored to specific individuals, messages that are incredibly difficult to distinguish from legitimate communications and that mimic the tone, style, and vocabulary of real colleagues, superiors, or trusted institutions.
In recent times, with geopolitical tensions flaring across the globe, the digital landscape has become a battlefield and AI an ally. AI smooths over the language barriers and cultural missteps that previously made cross-border scams and cyber threats less effective. Criminal groups operating across borders can now flawlessly translate and localise content, including disinformation, for a specific demographic.
Emails and geo-targeted content are only part of how cybercriminals are leveraging LLMs. The models now power sophisticated chatbot interfaces for real-time social engineering, allowing attackers to engage victims in longer conversations and to build trust or exploit vulnerabilities without immediate human intervention. That removes the slip-ups a human attacker might inadvertently reveal, leaving victims with fewer red flags to detect.
Facilitating malware development and evasion
Google has said a quarter of its new code is written by AI, and Microsoft chief executive Satya Nadella has put the figure at around 30 per cent of the code in Redmond. The technology is not yet independently writing zero-day exploits, but its role in assisting malware development is increasingly apparent.
With AI, a less technically experienced hacker can now write a Python script for basic file encryption or network scanning and then adapt it for nefarious purposes. Researchers and cyber crooks alike have tried their hand at generating malicious payloads, with a measure of success.
Beyond emails, imagine a distinct variant of the same malware for every other person in an organisation, each one different from the next. AI can be used to generate such variations, making malware more polymorphic and harder for signature-based detection systems to catch.
Continuously altering the code’s structure while preserving its malicious functionality can make life a nightmare for security teams.
Finding new zero-days remains difficult, but AI can parse vast amounts of research, vulnerability databases such as the CVE lists, and public exploit code to identify existing vulnerabilities, suggest methods of exploitation, and even write scripts, making it easier for attackers to capitalise on known weaknesses.
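To give a sense of how little effort that programmatic parsing takes, here is a minimal, benign sketch in Python; it is an illustration, not a description of any attacker’s actual tooling. It assumes the requests library and the public NVD CVE API (version 2.0), and the same lookup underpins everyday defensive vulnerability management.

```python
# Illustrative sketch only: querying the public NVD CVE database (API v2.0).
# Assumes the `requests` library; defenders run the same kind of lookup
# when triaging which known weaknesses apply to their own software.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def known_cves(keyword: str, limit: int = 5):
    """Return (CVE id, English summary) pairs for entries matching a keyword."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    pairs = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        summary = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
            "",
        )
        pairs.append((cve["id"], summary))
    return pairs

if __name__ == "__main__":
    for cve_id, summary in known_cves("openssl"):
        print(f"{cve_id}: {summary[:100]}")
```

Feeding the returned summaries into an LLM and asking it to rank or explain them is exactly the kind of low-effort triage described above; whether that effort serves a patch cycle or an attack depends entirely on who is running it.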
Enhanced information gathering and reconnaissance
Just as AI can help identify vulnerabilities, it can also be used extensively to analyse a target’s entire environment. Generative AI significantly streamlines this reconnaissance phase.
An LLM can sift through massive volumes of open-source intelligence (OSINT)—social media posts, corporate websites, news articles, public records—to quickly build detailed profiles of targets (individuals or organisations). This includes identifying key personnel, organisational structures, potential vulnerabilities, and even personal habits that could be exploited in social engineering.
While not performing active scans itself, AI can process scan results, correlate them with known vulnerabilities, and suggest potential attack vectors. It can also analyse network documentation (if acquired) to map out critical systems and potential entry points.
The ‘democratisation’ of hacking
The most troubling aspect of generative AI’s weaponisation is perhaps its potential to democratise cybercrime. Until recently, launching convincing phishing campaigns or developing malware required specialised skills and knowledge.
Now, individuals with far less technical prowess can leverage AI as an intelligent assistant, enabling them to bypass technical limitations, scale operations, and reduce human error.
This lowering of the entry barrier poses a significant challenge for law enforcement and cybersecurity professionals, as the sheer volume of potential attackers and the sophistication of their attacks could skyrocket.
AI arms race?
The weaponisation of generative AI is not a future threat but a current reality. As LLMs become more powerful, accessible, and integrated into everyday tools, their potential for misuse will continue to expand.
The cybersecurity landscape is entering an AI arms race, in which both attackers and defenders will increasingly leverage sophisticated machine intelligence.
For tech companies, the directive is clear: develop AI responsibly, with built-in guardrails that prevent misuse. The digital future will be defined not just by what AI can do, but by how effectively we control its darker capabilities.
The author is the Senior Director and Head - Solutions Engineering at Cyble.
The opinions expressed in this article are those of the author and do not purport to reflect the opinions or views of THE WEEK.