
You Wouldn’t Put A Four-Year-Old In Charge Of Security… Would You?

Posted by Admin | March 6, 2017 | IT Security

Kirsten Bay, CEO and President of Cyber adAPT, outlines the limitations of AI in cyber security and why the human brain remains our greatest asset in the battle against attacks.

Let’s start by stating the obvious, shall we? Cyber security is a huge issue. According to official statistics, 90 per cent of all large organisations have reported suffering a security breach. In fact, it is no longer a matter of “if” you suffer a breach, but “when”. There has been a 144 per cent increase in successful cyber-attacks on businesses, a charted 267 per cent increase in ransomware attacks in 2016, and the average cost of a data breach is now estimated at $4 million.

None of this should come as a shock. We have been fed these stats over and over again by an industry estimated to be worth $170 billion by 2020. The enormity of the challenge and the complexity of the solution are mind-boggling. As an industry, we are scrabbling for solutions. How can we survive this tidal wave of threats? One answer is to automate it – to get robots, artificial intelligence or AI, to do it for you. Sure, sounds good. But let’s dig a little deeper. Can AI really help?

First of all, let’s consider how smart AI actually is. The answer is “pretty smart”. Plenty of machines can do impressive things. Many of us will remember chess grandmaster Garry Kasparov being trounced by IBM’s Deep Blue nearly 20 years ago. Even more of us will recall IBM’s Watson beating the human contestants of the TV quiz show Jeopardy! in 2011.

But it is not all about supercomputers. Many of us experience AI every day when we talk to Siri or Cortana on our smartphones. Some of us even allow AI to do the cleaning: Amazon recently sold 23,000 robotic vacuum cleaners in a single day, each one then let loose to learn how to spruce up our living rooms. Even Tesla’s autopilot is a form of AI. When it comes to commercial deployments, AI is doing entry-level jobs, like offering holiday shoppers travel ideas and developing personalised marketing. AI is smart and will rapidly get smarter.

So smart, in fact, that some believe in something called the “Singularity” – the point at which AI becomes as powerful as the human brain. If it happens at all, it is predicted to do so sometime around 2045.

The point is this: AI is good. In fact, it’s amazing. But it has a long way to go. At the moment, the common theme in the use of AI is a narrow scope of application: play chess, answer general-knowledge questions, clean the floor, drive a car. While impressive, AI is still in its infancy – quite literally. A team from the Massachusetts Institute of Technology developed an AI system able to take an IQ test designed for a young child. The results showed it had the intelligence of a four-year-old.

Some take the view that AI will never trump the human brain. Danko Nikolic, a neuroscientist at the Max Planck Institute for Brain Research in Frankfurt, recently stood up in front of an audience of AI researchers and made a bold claim: we will never make a machine that is smarter than we are. As he puts it: “You cannot exceed human intelligence, ever. You can asymptotically approach it, but you cannot exceed it.” Even if we could, implicit in the prophecy of the Singularity is the idea that AI is currently nowhere near as clever as a fully developed human and will not be for nearly 30 years.

As a result, we, as humans, continue to run rings around our computer friends in most respects. And cyber security is no exception. There are extremely successful hackers out there right now. Collectively, they steal billions of dollars, with groups such as the Carbanak gang pulling off one of the greatest heists of all time without the slightest bit of tunnelling into gold-laden vaults. With their human criminal minds they stole more than $1bn from more than 100 institutions in 30 countries over a period of two years.

These people are smart and they do not just rely on malware to do the job for them. Yes, they need to know how to code and deploy malware, but they also need to be brilliant at social engineering; they need to have an understanding of finances and law enforcement; and they need to be one step ahead of security teams. To achieve what they have requires emotional and technical intelligence as well as an automated army of bots doing the dirty work.

With criminal brilliance like this ready and willing to strike, would you be happy putting your defences in the hands of AI? Think very carefully. Given the potential disaster that can be unleashed in the event of a breach, would you be happy putting the equivalent of a four-year-old on the front line? No. Me neither.

Sure, that seasoned criminal is using machines and malware to infiltrate networks – tools arguably less smart than the AI defending them – but behind every piece of malware is a person with a specific and very human intent: to steal credentials, to undertake reconnaissance, to shut something down or to embarrass someone. AI alone cannot beat this. It is not a machine-versus-machine battle, and treating it as such is to misunderstand the nature of cyber security.

This raises the question: how do we deal with the tsunami of cyber threats we now face if we can’t rely on AI? Until the Singularity happens – if it does – the answer lies in a human approach. Hackers are human, with human intentions. It stands to reason that they need to be fought with human insight.

This is why the best defense combines the smartest minds with the best software. In looking for a security partner, organisations keen to defend their networks need to find vendors who have real practitioners from both the security and hacking world.

Combined with the expertise of network and mobile technology specialists, these practitioners need the space to monitor millions of packets of real world traffic so that statisticians can develop models that make a difference. Only by doing so can they focus on codifying patterns of behavior that will find attacks others won’t.
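The article does not describe any vendor’s actual models, but the idea of codifying patterns of behaviour from monitored traffic can be illustrated with a minimal sketch: learn a per-host baseline of traffic volume, then flag hosts that deviate sharply from their own history. The host addresses, the byte-count feature and the z-score threshold below are all illustrative assumptions, not Cyber adAPT’s method.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a per-host baseline (mean, stdev) of bytes sent per
    interval from historical traffic observations."""
    return {host: (mean(volumes), stdev(volumes))
            for host, volumes in history.items()}

def flag_anomalies(observed, baseline, z_threshold=3.0):
    """Flag hosts whose current volume deviates more than z_threshold
    standard deviations from their own learned baseline."""
    flagged = []
    for host, volume in observed.items():
        mu, sigma = baseline.get(host, (0.0, 0.0))
        if sigma == 0:
            continue  # no variance learned for this host; cannot score it
        z = (volume - mu) / sigma
        if abs(z) > z_threshold:
            flagged.append((host, round(z, 1)))
    return flagged

# Hypothetical usage: one quiet host, one host suddenly exfiltrating data.
history = {"10.0.0.5": [100, 110, 95, 105, 90],
           "10.0.0.9": [500, 480, 520, 510, 490]}
baseline = build_baseline(history)
print(flag_anomalies({"10.0.0.5": 100, "10.0.0.9": 5000}, baseline))
```

A real deployment would obviously use far richer features than byte counts – timing, destinations, protocol mix – which is exactly why the statisticians and practitioners the article describes are needed: choosing which behaviours to codify is human work.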

In conclusion, remember: you would not put Siri in charge of the White House, you would not allow a robot vacuum to manage hygiene in a hospital, and you would not ask the office junior to chair board meetings. So why put AI in charge of your security? It is simply not up to the job. Yet.

Source: informationsecuritybuzz
