While AI adoption proceeds steadily across industries and government, cybercriminals have already weaponized it in preparation for large-scale global campaigns.

A recent Forrester Consulting study assessing the global cyberthreat landscape across industries and regions found that 86% of respondents believe AI will be a strong enabler for cybercriminals.

Close to half of the cybersecurity decision-makers surveyed expect AI-driven attacks to become publicly visible within the next year. The increased scale and speed of attacks emerged as the top concern about weaponized AI among cybersecurity professionals.

The study, commissioned by cyber AI specialist Darktrace, notes that the gradual widespread rollout of 5G will not only revolutionize networks and connectivity but also expose wider attack surfaces to AI-empowered cybercriminals.

This is already creating an urgent need for stronger countermeasures against threats and breaches in the cyber environment. Key findings in the report signal a trend toward weaponized, offensive AI that could affect industries and organizations:

  • Firms are slow to respond to offensive AI attacks. Reliance on humans keeps them from quickly detecting and responding to the scale and speed of attacks, compromising business continuity, IP, and reputation
  • Over 80% of respondents value tools that increase autonomous decision making and automate response actions to offset the shortcomings of human-based detection, interpretation and response
  • Decision-makers in charge of cybersecurity say their expanded infrastructure has made security more complex, while security threats have become faster and advanced attacks have increased
  • Weaponized AI will create new types of cybercrime, such as digital eavesdropping, deep-fakes, and malicious speech processing, that will directly affect the running of a business
  • Machines are already attacking machines, and humans are already attacking human trust—the future will include machines attacking human trust at speed and scale. Businesses are not ready for this, and neither are consumers. AI-enabled deep-fakes will cost businesses a quarter of a billion dollars in losses in 2020
  • 88% believe it is inevitable that AI-driven attacks will go mainstream. That future is not as distant as some might think: close to half of cybersecurity decision-makers expect AI attacks to become publicly visible within the next year
  • Traditional defenses that rely on prior assumptions will be outmatched by supercharged AI attacks. Organizations are aware of the need for speed-to-response, yet triaging an incident (discovery, investigation, eradication, and recovery) can consume precious hours, by which time the damage may have reached critical mass

The study's central point is that AI-driven attacks demand autonomous AI vigilance and response rather than manual or merely AI-guided response.