
UK Cyber Week Blog


22 Nov 2023

Generative AI: Cyber Security's Friend, Foe or Distraction?


The rapid rise of AI has been a game-changer for businesses across every industry on a global scale. This innovative technology offers immense potential for novel ways of protecting against cyber threats. But, as with any new technology, there will inevitably be bad actors looking to exploit these platforms for more sinister ends.

So what are the benefits of AI in cyber security? Will it help in the battle against cybercrime or end up just another weapon in the cybercriminal’s arsenal? Let's find out. 

Firstly, What is Generative AI? 

Generative AI is a type of machine learning that can produce new, original content, such as text, imagery and audio, from human prompts. This groundbreaking technology has proved extremely popular thanks to its wide range of use cases. It's no surprise, then, that ChatGPT, OpenAI's generative chatbot, has seen its user base balloon to 100 million, with over a billion monthly visits in under a year. That massive influx of users has also drawn the inevitable attention of cybercriminals, with around 100,000 compromised accounts being sold on the dark web since launch.

So with many people engaging with generative AI, both in their personal and professional lives, what does this mean for cybersecurity?

A recent survey of cybersecurity professionals found that most were concerned that AI-powered cybercrime will outpace cybersecurity, with a quarter believing this will happen within the next year - a major cause for concern.

But what are some of the risks associated with generative AI? And how are cybercriminals using the technology to their advantage?

Automated attacks 

Bad actors can use AI to automate attacks and scale their efforts like never before. AI systems can be, and have been, trained to automatically identify vulnerabilities in software and generate attacks without the need for human intervention, resulting in a rapid rise in both the scale and sophistication of attacks.

Polymorphic malware

A polymorphic virus is a form of malware that continuously changes its code or appearance to evade detection by antivirus software. With generative AI on their side, hackers can now produce new variants faster than ever before, creating an even more relentless type of attack.
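
To see why this poses a problem for defenders, here is a minimal, purely illustrative Python sketch (the byte strings are invented placeholders, not real malware) showing how a naive hash-based signature check catches a known sample but misses a trivially mutated variant:

```python
import hashlib

# Two functionally equivalent payload stand-ins (invented strings, not real
# malware); the second has been trivially mutated between "infections".
original_payload = b"connect(c2); fetch(stage2); run(stage2);"
mutated_payload = b"connect(c2); wait(0); fetch(stage2); run(stage2);"

# A naive blocklist of known-bad file hashes - the core of simple
# signature-based detection.
known_bad_hashes = {hashlib.sha256(original_payload).hexdigest()}

def flagged(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known-bad signature."""
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

print(flagged(original_payload))  # True  - the known variant is caught
print(flagged(mutated_payload))   # False - a tiny mutation evades the signature
```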

Data poisoning 

AI is trained on large data sets, and cybercriminals look to exploit this with what is known as data poisoning. This involves tampering with the training data to manipulate the outcome of the AI's decision-making process. Attackers can use this to their advantage in many ways, from bypassing security measures such as facial recognition or malware detection to getting AI to reveal sensitive information. A famous example of this is the poisoning of Google's Gmail spam filter: bad actors sent millions of emails to Gmail to confuse its spam-classification algorithm, allowing more emails containing malware and other cyber threats to reach people's inboxes.
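
As a rough illustration of the idea (a toy model built with scikit-learn, not Google's actual filter), the sketch below trains a simple spam classifier twice: once on clean data, and once after an attacker has flooded the training set with spam-like messages that end up labelled as legitimate:

```python
# Toy data-poisoning demonstration; all messages and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

spam = ["win a free prize now", "claim your free prize", "free money click now",
        "urgent prize claim now", "click here for free money"]
ham = ["meeting moved to monday", "please review the attached report",
       "lunch at noon tomorrow", "quarterly report attached", "see notes from the meeting"]

# The attacker floods the system with spam-like messages that get recorded
# as legitimate, mimicking the Gmail poisoning campaign described above.
poison = ["claim your free prize now and get free money"] * 50

texts = spam + ham + poison
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(texts)

clean_labels = [1] * len(spam) + [0] * len(ham)       # 1 = spam, 0 = legitimate
clean_model = MultinomialNB().fit(X[: len(spam) + len(ham)], clean_labels)

poisoned_labels = clean_labels + [0] * len(poison)    # poison labelled "legitimate"
poisoned_model = MultinomialNB().fit(X, poisoned_labels)

test = vectoriser.transform(["claim your free prize now"])
print(clean_model.predict(test))     # [1] - correctly flagged as spam
print(poisoned_model.predict(test))  # [0] - the poisoned model waves it through
```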

Data leaks 

With over a million people in the UK using generative AI at work, the risk of data leaks has risen sharply. The problem arises when employees put sensitive company information into the likes of ChatGPT and other large language models, whose systems may then be breached by cybercriminals.
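
One practical safeguard is to scrub obvious sensitive values from prompts before they leave the organisation. Below is a minimal sketch of that idea; the patterns and placeholder values are assumptions made purely for illustration, and no real LLM API is called:

```python
import re

# Illustrative patterns only - a real deployment would need far broader
# coverage (customer records, credentials, internal project names, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),          # national insurance number
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive-data pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Summarise this ticket from jane.doe@example.com, API key sk-abcdefghijklmnop1234."
print(redact(raw))
# Summarise this ticket from [EMAIL REDACTED], API key [API_KEY REDACTED].
```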

Deep fakes

Generative adversarial networks (GANs) can create realistic fake videos, audio recordings, and imagery. This technology can then be used by malicious actors in social engineering and phishing attempts, or even to bypass more basic biometric security measures.

Password hacking

Thanks to generative AI, attackers can generate and test vast numbers of password guesses with minimal effort, drastically increasing their chances of successfully gaining access to people's private accounts.
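
Some back-of-the-envelope arithmetic (the guessing rate below is an assumed figure for illustration, not a measured benchmark) shows why password length still matters even against automated, high-volume guessing:

```python
# Worst-case search space for exhaustive guessing at an assumed attack rate.
GUESSES_PER_SECOND = 1e10    # assumption: a well-resourced automated attacker
CHARSET = 26 + 26 + 10 + 32  # lower + upper + digits + common symbols = 94

for length in (8, 12, 16):
    combinations = CHARSET ** length
    years = combinations / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)
    print(f"{length} characters: {combinations:.2e} combinations, ~{years:.2e} years to exhaust")
```

At that assumed rate an eight-character password falls within days, while sixteen characters pushes the worst case far beyond any practical timescale - which is why length, unique passwords and multi-factor authentication remain the best defences.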

So What Are the Benefits of AI in Cyber Security?

Undoubtedly, AI is helping companies enhance their cyber security with more sophisticated and efficient ways of working. When it comes to cyber threats, speed is very much of the essence, and AI supercharges how fast threats can be identified and dealt with.

Take the report by IBM, which found that AI had the biggest impact on the speed at which data breaches were identified and contained, with the potential to shorten the data breach life cycle by a staggering 108 days. In fact, UK organisations that have extensively utilised security AI and automation have saved an average of £1.6m in data breach costs.

How to Protect Against AI Threats 

The simple answer is to fight fire with fire and develop AI-driven cyber security that can mitigate the risk of AI-powered attacks. Companies need to invest in AI-based security tools that can proactively detect and block malicious activity, working alongside more traditional cyber security measures.
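
As a flavour of what such a tool does under the hood, here is a minimal anomaly-detection sketch; the features, numbers and choice of model are assumptions for illustration rather than a recommendation of any particular product:

```python
# Learn what "normal" login telemetry looks like, then flag outliers for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline behaviour: [hour of login, MB downloaded, failed attempts] - invented values.
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # logins clustered around office hours
    rng.normal(50, 15, 500),  # modest data transfer
    rng.poisson(0.2, 500),    # failed attempts are rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# New events: one routine login and one that looks like credential abuse.
candidates = np.array([
    [11, 55, 0],     # typical working pattern
    [3, 900, 12],    # 3 a.m. login, huge download, many failed attempts
])
print(detector.predict(candidates))  # 1 = normal, -1 = flagged as anomalous
```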

The start of any good defence is to stay informed and gain a better understanding of the technology and the threats that are out there. Once you understand what you are up against, you can invest in the right cybersecurity solutions, conduct regular audits, and focus on employee training to keep your organisation safe from cyber attacks.

Fail to Prepare, Prepare to Fail with Generative AI

Unfortunately, many companies seem ill-prepared for the threat of AI-powered attacks. According to the McKinsey Global Survey, 40 per cent of organisations were looking to increase their investment in AI due to the advances in generative AI. However, only 21 per cent of respondents said their organisation had a policy in place governing employees' use of AI at work.

When it comes to cyber security, one of the biggest threats to valuable data is human error, and with so many organisations using AI in the workplace, proper training is essential. Without it, organisations run the risk of employees inputting sensitive information into generative AI models or being fooled by increasingly clever phishing and social engineering scams.

As AI continues to grow and become part of our everyday lives, policymakers and cyber industry experts will need to work together to mitigate threats that will only become more widespread and sophisticated. AI will remain a double-edged sword, fuelling an arms race between bad actors and those looking to protect themselves.

But we must not let ourselves get completely distracted by the buzz around AI, as cybercriminals are still using traditional types of cybercrime with extremely effective results. Organisations cannot afford to treat these as an afterthought and need to invest in cyber security measures that take all threats into account.

If you are interested in finding out more about the benefits of AI in cyber security and how to use it to your advantage, make sure to attend UK Cyber Week - Expo & Conference on 17th and 18th April 2024.

Hear from more than 100 cybersecurity experts, hackers and industry leaders at an event that promises next-level insights and world-class advice on how to keep you and your organisation safe from ever-evolving cyber threats.

Register your interest here.
