Artificial intelligence, whether we know it or not, has already become natural. It delivers ads to match our shopping tastes, plays songs it thinks we want to hear, and is literally taking the driver’s seat.
You can debate the pros and cons of AI (Bill Gates and Stephen Hawking are among the big brains with concerns). But with cyberwarfare ranging from fictional chaos (TV’s excellent “Mr. Robot”) to the very real hacking of Sony and Target, AI has an inevitable role to play in protecting us and our data.
CybeRisk Security Solutions says attacks on individuals, corporations and government bodies account for nearly $400 billion in losses every year, with about 90 percent of companies reporting they’ve been victimized. What’s more, well-funded bad guys outnumber the good guys, Symantec CTO Amit Mital told Fortune magazine last year.
The promise of AI defense systems is that they will react to threats in real time – far faster than human beings overwhelmed by reams of data – and will keep learning, anticipating future attacks and how to prevent or respond to them.
Many systems are already entering the fray, including one from a startup called PatternEx. Its AI2 platform uses an unsupervised algorithm that combs networks for suspect activity. But because automated systems can currently detect only abnormalities, not confirmed attacks, the platform was designed to go easy on the alerts; it surfaces just 100 to 200 potential threats a day for human analysts to review.
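To make that idea concrete, here is a minimal sketch of the general pattern described above: an unsupervised model scores network events for abnormality, and only a small daily budget of the most suspicious ones is handed to human analysts. This is not PatternEx's actual AI2 implementation; the feature names, the alert budget, and the use of scikit-learn's IsolationForest are assumptions made purely for illustration.

```python
# Illustrative sketch: unsupervised anomaly scoring plus a capped daily
# alert budget for human review. Not the AI2 system itself.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-host features for one day of traffic:
# [bytes_out, failed_logins, distinct_ports_contacted]
normal_traffic = rng.normal(loc=[5000, 1, 5], scale=[1500, 1, 2], size=(10_000, 3))
odd_traffic = rng.normal(loc=[50_000, 20, 60], scale=[5000, 5, 10], size=(20, 3))
day_of_events = np.vstack([normal_traffic, odd_traffic])

# Unsupervised step: learn what "normal" looks like and score everything.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(day_of_events)
anomaly_scores = model.score_samples(day_of_events)  # lower = more anomalous

# Triage step: surface only the N most anomalous events for analysts,
# mirroring the "100 to 200 potential threats a day" figure above.
ALERT_BUDGET = 150
alert_indices = np.argsort(anomaly_scores)[:ALERT_BUDGET]

print(f"Flagged {len(alert_indices)} of {len(day_of_events)} events for analyst review")
# Analyst verdicts (attack vs. benign) would then feed a supervised model,
# which is how such systems "keep learning" over time.
```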
Eventually, many experts say, AI systems will become smart enough to distinguish between a harmless glitch and a vicious attack.
That’s the dream – but could it also become a nightmare? Because AI is unpredictable and no legal framework exists for information warfare, the InfoSec Institute asks:
- What constitutes a cyberweapon and what justifies its use?
- Who would be responsible for disproportionate responses to an attack?
- Who is accountable if AI systems violate international law, including the Geneva Conventions?
- How much control will humans have in the case of an instantaneous attack?
- Could AI systems themselves be vulnerable to attack?
And then there’s the “Terminator” Skynet scenario: Would AI systems come to regard human beings as competitors who should be eliminated? CybeRisk more calmly asserts that “since AI is in essence a technology like any other, future applications will require some assured measures of quality control, so that software and systems will behave as they’ve been programmed to do – and not overstep their bounds.”
As Alan Boyle writes in GeekWire, AI is still far behind human capabilities. Right now it’s just smart enough to delight – or irritate – us (I already know what music I like, thank you). But we’re already asking artificial intelligence to do far more. Our livelihoods, maybe even our lives, could one day depend on it.