As the task of protecting digital networks against attack becomes more difficult, and the volume of data transiting systems grows, cyber-defence professionals and suppliers are increasingly looking to automation for help. Various forms of artificial intelligence (AI) are widely used in cyber-security systems, particularly machine learning (ML) technologies, which can be deployed to detect anomalous patterns of behaviour on networks and to react to them far more quickly than a human analyst could.
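The kind of network anomaly detection described above can, at its simplest, be reduced to flagging statistical outliers in traffic data. A deliberately simplified sketch (the host names, counts and threshold here are invented for illustration; production systems use far richer features and models):

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return hosts whose connection count is an outlier by the
    modified z-score (median absolute deviation) test - a robust
    baseline often used before heavier ML models are applied."""
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all hosts behave identically; nothing to flag
        return []
    return [host for host, v in counts.items()
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical hourly connection counts: ws-05 is beaconing heavily
traffic = {"ws-01": 110, "ws-02": 95, "ws-03": 102,
           "ws-04": 99, "ws-05": 4800}
print(flag_anomalies(traffic))  # → ['ws-05']
```

The median-based score is used rather than a plain mean-and-standard-deviation test because a single extreme host would otherwise inflate the baseline and mask itself.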
Of course, once a technology has proved effective in defence, its potential as an offensive tool is also evident. Yet one of the UK's foremost cyber-security practitioners - Paul Chichester, director of operations at the UK's centralised cyber-defence authority, the National Cyber Security Centre (NCSC) - says the country is not seeing widespread use of AI by adversaries. This is not so much because they lack the resources or capability, but because they do not need to do anything quite so sophisticated.
"The one thing I always get asked about - beyond what the Russians are doing, and the Chinese, and the Iranians - is what are terrorists doing in terms of cyber; and the latest one is, are our adversaries using AI to hack us?" says Chichester, addressing an audience in DSEI's Security Theatre. "The brutal truth is, they don't need to. The harsh reality is that our adversaries still use far too simple techniques to compromise our networks. Out of the 600 incidents we deal with every year, I can promise you that none of them involve our adversaries using AI. They are all far more simple and human in that space."
However, the NCSC is seeing evidence that entities attacking UK government, business and infrastructure are using AI to help them better exploit data obtained through hacking.
"We do see adversaries using AI, but in very particular ways," he says. "Where AI is hugely helpful is in terms of target development - if I'm trying to work out who my target is, or where's the best place to start. The opportunity that ML gives you is taking vast quantities of data and giving you very clear signposts of where to go. And, certainly, we do see adversaries collecting large amounts of data - and that's what they're doing it for: they're using it to find vulnerabilities in people and technology."