Good AI. Bad AI.


Australia, May 30, 2024

The constant security battle of how Artificial Intelligence is applied

Over recent years, more and more businesses have moved to develop their capabilities around Artificial Intelligence, especially since Generative AI arrived as a major tech development last year.

In our Global CIO Report 2024, The Future Face of Tech Leadership, AI was identified as the number one priority for CIOs this year, made clear with these headlines:

  • 87% of survey respondents reported they have established working groups dedicated wholly to AI.
  • 86% said they are committed to developing stronger AI skills among their employees.
  • 85% have earmarked budgets solely for AI development and implementation.

This trend of companies ramping up their AI capabilities was also clear in a recent Gartner poll, in which over half of respondents reported increasing their generative AI investment over the previous twelve months.

“Organisations are not just talking about generative AI – they’re actually investing time, money and resources to move it forward and drive business outcomes,” 

said Frances Karamouzis, Distinguished VP Analyst at Gartner.  

“In fact, 55% of organisations reported increasing investment in generative AI since it surged into the public domain ten months ago. Generative AI is now on CEOs’ and boards’ agendas as they seek to take advantage of the transformative inevitability of this technology.”

For example, one beneficial security application of AI, already visible in these early days, is the analysis of massive datasets of network activity: machine learning models learn what normal behaviour looks like and flag anomalies that could indicate an incoming attack, anomalies a human analyst might otherwise miss.
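As a minimal sketch of the idea, with hypothetical traffic numbers and a simple statistical baseline standing in for the machine learning models real security platforms use, anomaly flagging can look like this:

```python
import statistics

def flag_anomalies(counts, threshold=3.5):
    """Flag observations far from the baseline using a robust z-score
    (median / MAD), so the anomalies themselves don't skew the baseline.
    A crude stand-in for the ML models real security platforms use."""
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - median) / mad > threshold]

# Hypothetical per-minute connection counts; the spike at index 5
# could indicate scanning activity preceding an attack.
traffic = [102, 98, 105, 99, 101, 950, 100, 97, 103, 100]
print(flag_anomalies(traffic))  # prints [5]
```

The median-based baseline is deliberately robust: a large outlier inflates a mean-and-standard-deviation baseline enough to hide itself, which is exactly the behaviour a defender wants to avoid.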

Indeed, Generative AI takes this further by helping to predict and design potential exploits, even for Zero Day vulnerabilities. In addition, AI can automate a range of security responses, such as isolating infected machines or blocking suspicious traffic. This significantly improves the ever-critical Time to Respond, helps to contain attacks earlier, and minimises the damage.
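The automated responses described above can be sketched as a simple rule-driven dispatcher. The alert shapes and actions below are hypothetical; in a real deployment each branch would call firewall or endpoint-protection APIs rather than return a string, but the point stands: Time to Respond is bounded by code, not by an analyst's queue.

```python
def respond(alert):
    """Map an alert (a dict) to a containment action.
    Hypothetical event shapes for illustration only."""
    if alert.get("type") == "malware" and alert.get("host"):
        # Contain the compromised machine before lateral movement.
        return f"isolate host {alert['host']}"
    if alert.get("type") == "anomalous_traffic" and alert.get("src_ip"):
        # Cut off the suspicious source at the perimeter.
        return f"block traffic from {alert['src_ip']}"
    # Anything unrecognised still needs human judgement.
    return "escalate to analyst"

print(respond({"type": "malware", "host": "ws-042"}))
print(respond({"type": "anomalous_traffic", "src_ip": "203.0.113.7"}))
```

The fallback branch reflects the balance the article describes: automation contains the routine cases quickly, while humans remain in the loop to interpret what the tooling cannot classify.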

Ultimately, with business outcomes including improved efficiency, better customer experience, and higher productivity, it’s no surprise everyone is leaning into “Good AI” and how it is set to revolutionise many industries.

“AI poses the most significant threat to the future of cyber-security, yet it also represents our greatest defence. How we harness its transformative capability is the responsibility of us all.”

Mike Fry, Security Practice Director, Logicalis


But where there’s Good, there’s Bad.

Although organisations have increased their knowledge and skills in recent years, threat actors are always looking for new and more malicious applications of AI to improve their own efficacy and profitability.

In our Global CIO Report 2024, Logicalis research found 64% of technology leaders worried about AI threatening their core business propositions, and 72% apprehensive about the challenges of regulating AI use internally.

What’s more, 83% of CIOs reported their businesses had experienced a cyber-attack over the last year, and over half (57%) said their organisation was still unequipped to withstand another security breach.

To get on top of the current threat landscape, tech and security leaders around the world would also do well to read what the National Cyber Security Centre (NCSC) in the UK has to say.

NCSC’s recent report on Artificial Intelligence includes eight key judgements. Here are three to provide a quick snapshot:

  1. AI provides capability uplift in reconnaissance and social engineering, almost certainly making both more effective, efficient, and harder to detect.
  2. AI will almost certainly make cyber-attacks... more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively and use it to train AI models.
  3. Moving towards 2025 and beyond, commoditisation of AI-enabled capability in criminal and commercial markets will almost certainly make improved capability available to cyber-crime and state actors.

As Mike Fry, Security Practice Director at Logicalis, comments:

“We all need to recognise that AI is yet another digital capability in the ever-evolving cyber future.

“The real issue is if ‘Bad AI’ moves faster and more effectively than current legacy defences, and we fail to embrace AI as defenders. When that happens, the threat actors will nearly always win.

“In other words, if we are not operating with AI-enabled defences to counter AI-enabled attacks, how can we expect to overcome this risk to our businesses? The goal is to fight these advanced attacks with automation tools and facilitate human intervention that detects, interprets, and responds to the threat before it has a chance to make an impact.” 


Download your copy of our whitepaper 

Rethink security in the era of AI

Tracking how businesses can maximise the opportunities and minimise risk

Download your copy today
