At the risk of boring you about the risks inherent in AI, I’m going to have another go, simply because it’s a fascinating subject.  AI really can be the gift that keeps on giving.  We’ve always played catch-up with the cyber criminals, trying and often failing to anticipate what the next attack will be, or the next series of attacks.  Will it be ransomware, denial of service, or perhaps a new and more sophisticated scam?  Who knows?  But there is no doubt that AI is raising the bar.

I have talked a lot about the re-emergence of the script kiddie and how AI is enabling this particular breed of wannabe criminals.  But it’s also true that the more skilled and sophisticated criminal is making use of AI, finding new and innovative ways of relieving you of your hard-earned cash.

There is a lot of talk within the IT and cyber industry about the ethical usage of AI and its ethical development, and the big IT system integrators have a cast of thousands working on such ethical development and usage.  Fine, I applaud them.  But what does that mean for cyber security, and indeed data protection?  Well, I have to say, in my humble opinion, not a great deal.  I say that simply because no matter how ethical we are, the criminal doesn’t give a damn; he or she will continue on their own sweet way and do what criminals have always done, which is to completely disregard ethics.  So, whilst we can applaud and support those companies producing software and systems which use AI ethically, for the good, just like old times the criminals will do their own thing.

So, let’s take a look at some of what is at risk in terms of our data and systems:

  • Data Protection.  AI systems tend to be extremely good at analysing, organising, and harvesting vast amounts of data, raising concerns about privacy breaches and unauthorised access to sensitive information.  A good AI-powered attack could capture huge amounts of personally identifiable information (PII) in a ridiculously short amount of time (the first sketch after this list shows just how little code that takes).
  • Data Integrity.  In the good old days (please indulge me – I’ve been around a long time), we used to talk about CIA, no, not the infamous US intelligence agency, but Confidentiality, Integrity, and Availability.  We now have something we call the Adversarial Attack.  This is where attackers manipulate AI algorithms by feeding them misleading data, causing them to make incorrect predictions or classifications, in turn destroying the integrity of your data, not just rendering it useless, but dangerous (the second sketch after this list shows the idea in miniature).
  • Model Vulnerabilities.  This next one is relatively new, at least to me, and as I never tire of saying, I’ve been in this game as long as there’s been a game.  It’s something called Model Vulnerabilities.  AI models can be vulnerable to exploitation, such as through model inversion attacks or model extraction, where attackers can reverse-engineer proprietary models (the third sketch after this list shows how few queries that can take).  So, if you’re in the dev game, this is a very real nightmare.
  • Bias and Fairness.  AI systems may inherit biases from their training data, leading to unfair or discriminatory outcomes, which can have legal, ethical, and reputational implications.  This could be used as another form of extortion: tampering with the integrity of your data to the point where you can no longer trust it.
  • Malicious Actors.  These can compromise AI systems at various stages of development, deployment, or maintenance, posing risks to organisations relying on those systems.  This also has implications for supply chain security.
  • AI-Enhanced Attacks.  Attackers can leverage AI techniques to enhance the effectiveness of cyberattacks, such as automated spear-phishing, credential stuffing, or malware detection evasion.
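To make the data protection point concrete, here is a minimal sketch, in Python, of the sort of automated PII harvesting I’m talking about.  It’s deliberately crude, using nothing but the standard library’s regular expressions, and the sample text and patterns are purely my own invention, but even a few lines like these can strip emails and phone numbers out of text at scale; AI-powered tooling goes far beyond this.

```python
import re

# Hypothetical sample text standing in for a leaked document dump.
leaked_text = """
Contact Jane Doe at jane.doe@example.com or on 07700 900123.
Invoice queries: accounts@example.org, backup line 020 7946 0958.
"""

# Simple patterns for two common PII types.  Real harvesting tools use
# far richer models, but even crude regexes scale to gigabytes quickly.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d ]{8,12}\d")

def harvest_pii(text: str) -> dict:
    """Return every email address and phone-like string found in text."""
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": [p.strip() for p in PHONE_RE.findall(text)],
    }

print(harvest_pii(leaked_text))
```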
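And here is the adversarial attack, or more precisely data poisoning, in miniature.  This is only a toy sketch, and it assumes you have scikit-learn installed; the exact figures it prints will vary, but the pattern is the point: flip a slice of the training labels and watch the model’s accuracy fall away.

```python
# Toy label-flipping data poisoning: train the same classifier twice,
# once on clean labels, once after an "attacker" flips 30% of them,
# then compare accuracy on an untouched test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoned copy: flip the labels on 30% of the training rows.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_bad = y_tr.copy()
y_bad[flip] = 1 - y_bad[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```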
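Finally, model extraction.  Again a toy, again assuming scikit-learn: the “victim” model below is exposed only through its predictions, much as a commercial API would be, yet a surrogate trained on nothing but those answers ends up agreeing with it most of the time.  If that model is your intellectual property, you can see the problem.

```python
# Toy model extraction: query a black-box model, harvest its answers,
# and train a look-alike surrogate without ever seeing its weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)  # "proprietary" model

# Attacker: generate probe inputs, harvest the victim's answers as labels.
rng = np.random.default_rng(1)
probes = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(probes)

surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)

# How often does the copy agree with the original on fresh inputs?
fresh = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(fresh) == victim.predict(fresh)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of queries")
```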

Addressing these risks requires a multi-faceted approach, including robust security measures, thorough testing, ongoing monitoring, and regular updates to mitigate emerging threats.

The real danger is complacency.  AI isn’t a hypothetical future threat; it is very real, here now, and already making itself felt, for both good and bad.
