A branch of artificial intelligence is showing real promise for one of cybersecurity’s biggest challenges.
Administrators trying to secure networks have a big problem: speed. Computer systems handle traffic on a per-second basis, and security algorithms must spot attacks in that traffic in real time to stop intruders from sneaking in unnoticed. That makes cybersecurity difficult for administrators to manage.
This is where a branch of artificial intelligence known as machine learning can help. It applies a new data processing approach to the tasks facing security administrators, helping to prevent account misuse.
The need for real-time analysis is becoming an increasing problem in identity and access management (IAM). An IAM system manages user identities and ensures that they have access to only the applications and data that they need. It stops a junior sales rep from accessing information about the customers not assigned to her, but lets the VP of sales examine the entire customer list, for example.
IAM may stop users from abusing accounts beyond their pay grade, but what happens if an attacker steals a user’s account credentials? This happens all the time, in both the consumer and enterprise worlds. A Forrester report on data breaches in 2016 revealed the compromise of one billion records during the year.
These compromises would have spanned the consumer and enterprise sectors, but even the theft of consumer accounts can lead to the compromise of enterprise ones, as people often reuse their passwords. Once attackers have access, they have the same privileges as the user that they stole from, which puts enterprise data and applications at risk.
Administrators could code rules to try to stop unauthorised logins. The obvious approach is to figure out how users should be accessing the network, and then establish rules that stop them from doing anything else.
In practice, though, a world of flexible working, remote contractors and changing business conditions means that access patterns vary between employees, and evolve over time. That makes it more difficult for IT administrators to define rules accurately and keep them up to date.
This is where machine learning comes in. Rather than hand-coding these rules individually, companies can instead use algorithms that ‘learn’ how users behave over time.
Machine learning software takes a different approach to processing data. Instead of following explicit step-by-step rules to analyse each new piece of data in the same way, it makes the computing equivalent of a judgement call, based on data that it has already seen. Programmers ‘teach’ machine learning software to look for certain characteristics in data by feeding it lots of historical information up front.
Feed a machine learning algorithm many audio recordings of people saying ‘hello’, for example, and it will find common data patterns in those recordings. It will search for these patterns in any new audio that it hears, and identify the phrase when spoken.
Companies are already applying these pattern-matching capabilities in many areas. Machine learning software is recognising images and detecting speech. It is approving loans based on patterns of consumer behaviour, and helping to spot financial fraud.
The characteristics of these algorithms map nicely to the IAM problem. A machine learning tool can analyse historical access data from an IAM system, such as who accessed an application, when they accessed it, and from where. Other data such as what they specifically requested access to, and from which device, can also help to refine these models.
Just as machine learning algorithms can learn what a face looks like, they can also learn what normal access patterns look like. This then produces a level of confidence in an access request that administrators can use to help quantify the risk of unauthorised account usage.
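To make this concrete, here is a minimal sketch of the idea in Python. It is not any vendor’s implementation: the log fields (user, hour of day, location) and the frequency-based scoring are illustrative assumptions, standing in for the richer models a real IAM system would learn. It builds a per-user profile from historical access records and scores a new request by how well it matches that history.

```python
from collections import Counter, defaultdict

# Hypothetical historical IAM access log: (user, hour_of_day, location).
history = [
    ("alice", 9, "london"), ("alice", 10, "london"), ("alice", 9, "london"),
    ("alice", 14, "london"), ("alice", 11, "london"),
    ("bob", 22, "new_york"), ("bob", 23, "new_york"), ("bob", 21, "new_york"),
]

# Build a per-user profile: how often each hour and location appears.
profiles = defaultdict(lambda: {"hours": Counter(), "locations": Counter(), "total": 0})
for user, hour, location in history:
    p = profiles[user]
    p["hours"][hour] += 1
    p["locations"][location] += 1
    p["total"] += 1

def confidence(user, hour, location):
    """Score an access request in [0, 1]: the average of how often this
    hour and this location appeared in the user's own history."""
    p = profiles.get(user)
    if p is None or p["total"] == 0:
        return 0.0  # unknown user: no confidence at all
    hour_freq = p["hours"][hour] / p["total"]
    loc_freq = p["locations"][location] / p["total"]
    return (hour_freq + loc_freq) / 2

# Alice logging in at a usual hour from her usual location scores high...
print(confidence("alice", 9, "london"))    # 0.7
# ...while the same credentials used at 3am from elsewhere score zero.
print(confidence("alice", 3, "shanghai"))  # 0.0
```

A production system would learn far more features (device, application, request type, as the article notes) and use a proper statistical model rather than raw frequencies, but the shape of the decision is the same: history in, confidence score out.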
Machine learning-enhanced IAM systems can be highly granular, varying their confidence levels based on individual users, without requiring high-maintenance fine-grained rules. Instead, the machine learning algorithm can apply these confidence scores in real time to make smarter decisions.
The IAM system might allow someone access using basic authentication mechanisms if their score doesn’t deviate beyond a certain point, for example. On the other hand, it might escalate the security process if the access pattern seems too unusual.
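That escalation logic can be sketched as a simple threshold policy. The function and its cut-off values below are hypothetical, chosen only to illustrate how a confidence score might be mapped to an authentication decision.

```python
def authentication_requirement(score, allow_at=0.6, challenge_at=0.3):
    """Map a confidence score in [0, 1] to an action.

    The thresholds are illustrative: a real deployment would tune them
    per application, per user, and against observed false-positive rates.
    """
    if score >= allow_at:
        return "allow"     # basic authentication is enough
    if score >= challenge_at:
        return "step_up"   # escalate, e.g. request a second factor
    return "deny"          # too unusual: block the request and alert

print(authentication_requirement(0.85))  # allow
print(authentication_requirement(0.45))  # step_up
print(authentication_requirement(0.05))  # deny
```

Because the score is recomputed for every request, the same user can sail through from their usual laptop in the morning and be challenged for a second factor from an unfamiliar network that evening, with no hand-written rule describing either case.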
We may not be able to stop cybercriminals from stealing user accounts using phishing, database hacks or device theft, but we can at least make it more difficult for them to use those account credentials in any meaningful way. Using artificial intelligence, we can spot when a malicious actor is trying to misuse a system by watching for unusual behaviour – and then lock the account down. It’s a good example of applied AI in action.