Artificial Intelligence features more and more prominently in the news, across just about every sphere of IT, whatever vertical it serves.

What exactly is AI?

Artificial intelligence (AI) describes computer systems which can perform tasks usually requiring human intelligence. This could include visual perception, speech recognition or translation between languages.

Of course, that’s not the only description you’ll find if you use your best research tool, Google, but it’s one used by the National Cyber Security Centre, so it’ll do for me.

I’m willing to bet that many of you, perhaps most of you, have some form of AI app downloaded on your devices. ChatGPT is arguably the most popular amongst the general populace, but it’s not the only game in town, and these apps are becoming ever more available and popular. ChatGPT is an artificial intelligence chatbot developed by OpenAI, a US tech startup. It’s based on GPT-3, a language model released in 2020 that uses deep learning to produce human-like text. The underlying technology has been around much longer, but this blog isn’t about the technicalities of AI; it’s about how AI affects SMEs as they go about their business.

I’ve been arguing that perhaps the biggest potential threat, in terms of proliferation (that is, the number of relatively low-level attacks aimed at quick wins scamming money), is the re-emergence of the script kiddie. I wrote, some time ago, about how AI could quite easily be used to write code for insertion into a ransomware attack.

Script Kiddie

A script kiddie was what we called someone of relatively low skill who would go onto the dark web and purchase scripts that more advanced criminals had put up for sale. The script kiddie would then use these scripts to mount an attack on a company or organisation. These attacks often failed because the script kiddie wasn’t all that bright, but just occasionally they were harmful, even devastating.

Another scam, reported in The Washington Post, was an update on the very well reported CEO scam, in which someone impersonates the CEO of a company using a spoofed email. This time, though, AI was used to spoof the CEO’s voice in an attempt to scam money from the company. It worked, and the company lost a lot of money.

So why am I bringing this up again so soon? Well, AI continues to be reported in the news, sometimes with a good result and sometimes not so good. So, I thought I’d ask ChatGPT what it thought about AI and cyber security. I asked two questions. The first was, ‘Is AI a danger in terms of Cybersecurity?’. The answer came back:

Double-Edged Sword

AI can be both a boon and a potential danger in terms of cybersecurity. On one hand, AI can enhance cybersecurity by detecting and mitigating threats more efficiently, analysing vast amounts of data for anomalies, and automating certain security tasks. On the other hand, AI can also pose risks if it falls into the wrong hands or is used maliciously. Sophisticated AI-powered attacks could exploit vulnerabilities, evade detection, or launch targeted attacks at an unprecedented scale. It is crucial to develop robust safeguards, ethical guidelines, and responsible AI practices to ensure AI remains a force for good in cybersecurity.

So even ChatGPT thinks this is a double-edged sword. The apparent problem for my target clientele, the SME space, is that AI-powered tools to detect and mitigate threats and analyse large amounts of data would seem too costly for them. But because many defences now use AI themselves, such protection is actually quite affordable. So, when we talk about developing robust safeguards, understanding the risks becomes even more vital than it has ever been before. It also brings Zero Trust solutions sharply into focus, and finding such solutions at an affordable price is going to be crucial.
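
To make the defensive side of that concrete, here is a minimal sketch of the kind of anomaly detection an AI-powered tool performs under the hood. It is purely illustrative: it assumes Python with the scikit-learn library, and the login data is invented.

    # Illustrative only: a toy anomaly detector over login records, using
    # scikit-learn's IsolationForest. Real AI-powered security tools are
    # far more sophisticated than this.
    from sklearn.ensemble import IsolationForest

    # Hypothetical features per login: [hour of day, MB transferred, failed attempts]
    normal_logins = [
        [9, 12, 0], [10, 8, 1], [14, 20, 0], [11, 15, 0],
        [9, 10, 0], [16, 25, 1], [13, 18, 0], [10, 9, 0],
    ]

    model = IsolationForest(contamination=0.1, random_state=42)
    model.fit(normal_logins)

    # A 3 a.m. login moving 900 MB after 6 failed attempts should stand out.
    suspicious_login = [[3, 900, 6]]
    print(model.predict(suspicious_login))  # -1 flags an anomaly, 1 means normal

The point is not the code itself, but that this sort of capability now ships inside affordable, off-the-shelf defences rather than being the preserve of big-budget security teams.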

The next question I asked was, ‘How do we ensure that AI isn’t a cybersecurity threat?’.  The answer came back:

Key Considerations

Ensuring that AI is not a cybersecurity threat requires a multi-faceted approach. Here are some key considerations:

1. Robust Security Measures: Implement strong security protocols, encryption, and access controls to safeguard AI systems and their underlying data from unauthorized access or manipulation.

2. Rigorous Testing and Validation: Thoroughly test and validate AI algorithms and models to identify and fix vulnerabilities, including potential avenues for cyberattacks, before deployment.

3. Adversarial Testing: Conduct adversarial testing to simulate potential attacks and evaluate the resilience of AI systems against different cybersecurity threats.

4. Data Privacy and Governance: Establish clear policies and practices for data privacy, ensuring that sensitive information is properly handled, stored, and shared in compliance with relevant regulations.

5. Ongoing Monitoring and Updates: Continuously monitor AI systems for potential security risks, promptly address vulnerabilities, and keep software and hardware up to date with the latest security patches.

6. Ethical AI Development: Integrate ethical considerations into the design and deployment of AI systems, emphasizing transparency, fairness, and accountability to mitigate potential biases or unintended consequences.

7. Collaboration and Information Sharing: Foster collaboration among industry, academia, and government entities to share knowledge, best practices, and emerging threats, enabling a collective effort to combat AI-related cybersecurity challenges.

8. Education and Awareness: Promote cybersecurity awareness and provide training to AI developers, users, and stakeholders, emphasizing responsible AI usage and the identification of potential threats.

By following these guidelines and continuously adapting to emerging risks, we can enhance the security of AI systems and minimize the potential for them to be exploited as cybersecurity threats.
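
No. 3 in that list is worth a moment’s illustration. Adversarial testing simply means probing your own defences the way an attacker would, before a criminal does, and even a toy example shows why it matters. The naive phishing filter below is entirely hypothetical, written in Python, and far cruder than anything a real product would use:

    # Illustrative only: a toy adversarial test of a naive phishing filter.
    # Real adversarial testing is far more systematic, but the principle is
    # the same: probe your own defences the way an attacker would.
    BLOCKLIST = {"password", "urgent", "verify your account"}

    def naive_filter(message: str) -> bool:
        """Return True if the message is flagged as suspicious."""
        lowered = message.lower()
        return any(phrase in lowered for phrase in BLOCKLIST)

    # The obvious lure is caught...
    print(naive_filter("URGENT: verify your account now"))   # True
    # ...but a trivial character swap slips straight past, which is exactly
    # the sort of blind spot adversarial testing exists to expose.
    print(naive_filter("URG3NT: ver1fy your acc0unt now"))   # False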

Next Steps & Relevance

Again, looking at this from an SME perspective, some of it isn’t relevant. No. 6, for example: SMEs aren’t going to be developing their own AI solutions. And much of it I would have come up with on my own, without the aid of a machine. It would appear that AI uses some common sense, which is nice. Take No. 8: I bang on and on about this. It is low cost and easy to implement, and it’s staggering how many companies don’t do it. The list also shows the value of Zero Trust solutions and of encryption, which on its own vastly reduces the risk to data, particularly PII (personally identifiable information, as referenced under UK GDPR).
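
To show how low the barrier to encryption has become, here is a minimal sketch. It assumes Python with the cryptography package (my choice of library for illustration, not a product recommendation), and the PII record is invented:

    # Illustrative only: encrypting a piece of PII at rest using the
    # cryptography package's Fernet recipe (symmetric, authenticated).
    from cryptography.fernet import Fernet

    # In practice the key must live somewhere safe (a key vault, say),
    # never alongside the data it protects.
    key = Fernet.generate_key()
    f = Fernet(key)

    pii = b"Jane Doe, 42 Acacia Avenue, jane@example.com"  # invented record
    token = f.encrypt(pii)     # this ciphertext is what you store
    print(f.decrypt(token))    # only the key holder recovers the original

A few lines like these, plus sensible key management, are enough to mean that a stolen database yields nothing usable.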

The argument, then, is that AI might encourage a proliferation of low-level attacks, largely aimed at SMEs, who generally have the lowest defences. Quite low-level criminals can use AI to carry out attacks that would heretofore have been beyond their skill. Common cyber sense can go a long way towards mitigating these attacks. Technology evolves and attacks evolve, but the basic understanding that threat + vulnerability = risk has never gone away. Understand that and you stand a good chance of staying safe.
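
That formula is a rule of thumb rather than mathematics, but a back-of-an-envelope score shows the idea. The scale and thresholds below are my own invention, not a formal risk methodology:

    # Illustrative only: a crude score in the spirit of
    # "threat + vulnerability = risk". Scales and thresholds are made up.
    def risk_score(threat: int, vulnerability: int) -> str:
        """Both inputs on a 1 (low) to 5 (high) scale."""
        score = threat + vulnerability
        if score >= 8:
            return "high"
        if score >= 5:
            return "medium"
        return "low"

    # A capable attacker (4) against an unpatched, unmonitored system (5):
    print(risk_score(4, 5))  # high
    # The same attacker against a patched, well-defended system (1):
    print(risk_score(4, 1))  # medium

Lower either side of the equation and the risk comes down with it, which is the whole point of getting the basics right.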
