Is AI Wrong for Cybersecurity?
Blog: Enterprise Decision Management Blog
I’ve just participated in a debate over analytics vs. encryption for cybersecurity, on the InformationWeek Dark Reading website. This is a sign of the times — the cybersecurity space is so hot that technologies are being treated as rivals, jockeying to win your infosec budget.
The truth is, it isn’t an either/or proposition. As I note in my article, arguing against encryption would be a bit like arguing against locks on doors. Strong encryption – like firewalls and user authentication – is a basic defense against the damage that might flow from a successful attack on information infrastructure. But encryption is not foolproof, and it shouldn’t be your only means of defense.
As artificial intelligence and analytics have come into play, there has been some criticism — often from competing vendors who misunderstand or misrepresent how AI works.
In the past, cybersecurity analytics focused on gathering data about compromises, developing threat “signatures,” and using those signatures to protect against future threats. By contrast, advanced detection analytics identify emerging threats by recognizing anomalous patterns in real time. While many firms label their signature-based detection methods as “analytics,” those analytics are largely static and built to block known threats, and they therefore fall into the category of basic defenses.
Advanced analytics, including those based on machine learning or AI, find anything unusual or threatening that gets by your basic defenses. Here are two ways FICO uses advanced analytics to achieve this:
• Self-calibrating models continuously recalibrate their baseline of normal traffic behavior for each monitored entity, and score anomalies by the extent of their deviation from that norm.
• Self-learning analytics improve with each resolved alert, serving to systematically automate the insights of human security analysts as they work cases.
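To make the first idea concrete, here is a minimal sketch of self-calibrating anomaly scoring: a per-entity running baseline (mean and variance, updated online via Welford’s algorithm) against which each new observation is scored as a z-score. The entity names and traffic values are invented for illustration, and FICO’s production models are far richer than this toy.

```python
from collections import defaultdict
from dataclasses import dataclass
import math

@dataclass
class EntityStats:
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0  # running sum of squared deviations (Welford's algorithm)

class SelfCalibratingScorer:
    """Keeps a running baseline of behavior per monitored entity and
    scores each new observation by its deviation from that baseline.
    Illustrative sketch only, not FICO's actual model."""

    def __init__(self):
        self.stats = defaultdict(EntityStats)

    def score(self, entity: str, value: float) -> float:
        s = self.stats[entity]
        # Score the observation against the baseline learned so far.
        if s.n < 2:
            z = 0.0  # not enough history yet to judge deviation
        else:
            std = math.sqrt(s.m2 / (s.n - 1))
            z = abs(value - s.mean) / std if std > 0 else 0.0
        # Fold the observation into the baseline (the self-calibration step).
        s.n += 1
        delta = value - s.mean
        s.mean += delta / s.n
        s.m2 += delta * (value - s.mean)
        return z

scorer = SelfCalibratingScorer()
for v in [100, 102, 98, 101, 99]:      # normal traffic for this entity
    scorer.score("server-a", v)
print(round(scorer.score("server-a", 500), 1))  # prints 253.0 -- a spike scores far from the norm
```

Because the baseline updates with every observation, the model adapts to gradual drift in an entity’s behavior while still flagging abrupt departures, which is the essence of scoring anomalies by deviation from a continuously recalibrated norm.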
These technologies work in real time – providing, for the first time, the ability to sense and respond to the most egregious threats as they happen, and before damage is done.
If you’ve followed the world of anti-fraud technology, this will sound familiar. AI/machine learning analytics have been protecting most of the world’s credit cards for years. The fraud teams at card issuers use these systems not only to detect fraud, but to set the level of risk that triggers investigation or card blocking, in order to balance loss prevention with a positive customer experience.
One argument against AI in cybersecurity is that it will require everyone to hire an army of analytic techies. Not true: By crunching data to prioritize the biggest threats, analytics-based systems simplify the lives of fraud professionals, and the same would hold true in information security.
So, is AI wrong for cybersecurity? Not at all. It doesn’t replace other defenses; it adds to them. And who, looking at today’s headlines, can say we don’t need that?
For my full article — as well as the “rebuttal” — see “Encryption Has Its Place But It Isn’t Foolproof” on the InformationWeek Dark Reading site.
The post Is AI Wrong for Cybersecurity? appeared first on FICO.