What do you get when cybercriminals amplify phishing and disinformation campaigns with the unprecedented power of generative AI?

With the latest iterations of chatbots such as ChatGPT and Bard, generative AI has reached an unprecedented level of human-like language ability, making mundane tasks such as creating content and writing software code easy for people.

On the industry side, fields such as materials science and pharmaceuticals are also being swept up in the AI adoption race to make processes more efficient and reliable.

However, this technology is also a double-edged sword: it presents attackers with new opportunities to disrupt operations and steal data.


Scott Hesford, Director, Solutions Engineering (APAC), BeyondTrust

How generative AI can be abused by cybercriminals

ChatGPT’s ability to automate code writing allows those with minimal programming skills to create new attack patterns, or mutate existing ones, with ease. This lowered barrier can embolden attackers to target bigger organizations and to strike more frequently for maximum payouts. Also:

    • Malware source code can be bought from the Dark Web and re-purposed with new code that allows it to persist inside machines for weeks or months on end without being detected.
    • Attackers who want to conduct phishing campaigns can use generative AI to create convincing, professional-sounding emails that hook unsuspecting users, in addition to using deepfakes for vishing and phishing campaigns.
With these evolving threats plaguing the business landscape, business leaders and their security teams will be under even greater pressure to double down on their efforts to reinforce their infrastructure.

Where security teams go from here

Lowering employees’ risk of becoming targets requires a two-pronged strategy that involves both training and a “reinforced defense” framework. The former equips employees with the knowledge to spot the signs of fake or malicious emails, several of which can also be screened for automatically, as the sketch after this list shows:

    • sender addresses that do not match the purported sender
    • suspicious links
    • unexpected file attachments
    • outdated logos and unusual requests for personal details
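
Several of these signs lend themselves to simple automated screening alongside user training. The following is a minimal sketch in Python, using only the standard library; the expected_domain parameter, the URL pattern, and the individual rules are illustrative assumptions rather than a vetted detection engine.

    import email
    import re
    from email.utils import parseaddr

    # Hostnames extracted from http(s) URLs in the message body.
    URL_RE = re.compile(r"https?://([^/\s\"'>]+)", re.IGNORECASE)

    def phishing_warnings(raw_message: bytes, expected_domain: str) -> list:
        """Flag the warning signs listed above in a raw RFC 822 message."""
        msg = email.message_from_bytes(raw_message)
        warnings = []

        # Sign: a sender address that does not match the purported sender.
        _, from_addr = parseaddr(msg.get("From", ""))
        if from_addr and not from_addr.lower().endswith("@" + expected_domain):
            warnings.append(f"From address {from_addr} is outside {expected_domain}")
        _, reply_to = parseaddr(msg.get("Reply-To", ""))
        if reply_to and reply_to.split("@")[-1].lower() != from_addr.split("@")[-1].lower():
            warnings.append(f"Reply-To domain differs from From domain: {reply_to}")

        for part in msg.walk():
            # Sign: unexpected file attachments.
            if part.get_content_disposition() == "attachment":
                warnings.append(f"Attachment present: {part.get_filename()}")
            # Sign: links pointing at hosts unrelated to the sender.
            elif part.get_content_type() in ("text/plain", "text/html"):
                body = (part.get_payload(decode=True) or b"").decode("utf-8", "replace")
                for host in URL_RE.findall(body):
                    host = host.lower()
                    if host != expected_domain and not host.endswith("." + expected_domain):
                        warnings.append(f"Link to external host: {host}")

        return warnings

Surfacing such warnings as banners in the mail client is one option; heuristics like these deliberately err on the side of noise and complement, rather than replace, user vigilance.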

Simultaneously, a “reinforced defense” framework requires organizations to prevent threats from executing or moving laterally across the network even if a breach has already occurred. For this approach:

    1. Local administrator rights should be the main focus area, as these provide attackers with the keys to critical systems and assets. Removing or restricting these rights can be achieved through endpoint privilege management, which limits attackers’ ability to use endpoints or workstations as a gateway to other connected devices (a simple audit of who holds such rights is sketched after this list).
    2. A privileged access management (PAM) solution is needed to provide teams with granular visibility and control over user and device access levels, preventing those accounts and devices from becoming exploitable targets.
    3. On occasions where employees need enhanced privileges to conduct their duties, organizations can also integrate multi-factor authentication (MFA) to challenge and verify identities before granting access (see the TOTP sketch after this list).
    4. Furthermore, organizations should leverage guides that lay out best practices to ensure they are insulated against the threats that arise from generative AI. Not only do such reference guides lay out ways to harden the organization’s security posture; they can also strengthen organizational resilience in the face of an increasingly complex cyber landscape.
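
To make point 1 concrete, a first step is simply knowing who holds local administrator rights on each endpoint. Below is a minimal audit sketch for Windows, assuming English-locale output from the built-in net command and an illustrative allowlist; a real endpoint privilege management product enforces policy rather than merely reporting on it.

    import subprocess

    # Illustrative allowlist; real deployments would pull this from policy.
    EXPECTED_ADMINS = {"Administrator", "CORP\\ITOps"}

    def local_admins() -> set:
        """List members of the local Administrators group (Windows)."""
        lines = subprocess.run(
            ["net", "localgroup", "Administrators"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        # Member names sit between the dashed separator and the status line.
        start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
        return {line.strip() for line in lines[start:]
                if line.strip() and not line.startswith("The command")}

    for account in sorted(local_admins() - EXPECTED_ADMINS):
        print(f"Unexpected local admin: {account}")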
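
For point 3, one common way to challenge identities before granting elevated access is a time-based one-time password (TOTP). The sketch below uses the open-source pyotp library; the user name, issuer, and console prompts are illustrative assumptions, and a production PAM deployment would add secure secret storage, rate limiting, and audit logging.

    # pip install pyotp  (open-source RFC 6238 TOTP implementation)
    import pyotp

    # Enrollment: generate a per-user secret and share it with the user's
    # authenticator app; storing it encrypted server-side is assumed.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("Provisioning URI for an authenticator app:",
          totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

    # Challenge at privilege-elevation time, before granting the session.
    code = input("Enter the 6-digit code: ")
    if totp.verify(code, valid_window=1):  # tolerate one 30-second step of drift
        print("MFA passed: grant the elevated session")
    else:
        print("MFA failed: deny elevation")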

Generative AI can be a source of good and bad in the world, depending on who is using it and for what purpose. Regardless of the circumstances, organizations need to prepare themselves to face ever-evolving cyber threats to maintain trust and compliance as well as ensure smooth-running operations.

By building their defenses upon a strong identity security foundation, organizations will be able to mitigate risks while satisfying employees’ need for frictionless productivity.