The pairing of human and machine intelligence
As machine learning becomes increasingly embedded in our everyday lives, Dr David Bray, a 2015 Eisenhower Fellow, Visiting Executive In-Residence at the Harvard Kennedy School and CIO at the US Federal Communications Commission, explores both the positive opportunities it offers and the cyber-security challenges it presents.
The exponential changes being driven by digital technology are typically painted in a highly positive light. Just look at the average Internet minute during 2013: more than 204 million emails were sent, more than 4 million Google searches were performed, more than 2.4 million items of Facebook content were uploaded and more than 72 hours of video were uploaded to YouTube.
But it’s worth remembering that technology itself is amoral: it’s how we humans choose to use it that decides whether it’s good or bad.
The more challenging side of the exponential era is evident elsewhere: in 2013, cyber-security technology company McAfee identified 200 new threat vectors every minute; cyber-threat forensics specialist FireEye has shown that a new item of malware resulting in an advanced persistent threat is discovered every three minutes; and a government agency such as the US Department of Energy deals with an average of over 6,900 attacks a minute.
If the growth on the negative side is just as exponential — and is also tracking Moore’s Law, doubling in capability every two years or so — then that raises some interesting questions about how we can possibly address the threats that will emerge in this new world. Essentially, we’re heading into ‘terra incognita,’ somewhere that has never been charted before, and the non-linear changes we are facing are going to challenge every organization, every society and every nation state.
Behavioral cyber-security and privacy by design
My discussions as a 2015 Eisenhower Fellow in Taiwan and Australia suggested that the answer is certainly not simply to apply ever-increasing amounts of human effort to IT cyber-security. Nor is it to do what much of today’s cyber-security does: focus on the signatures of known attacks, looking for dangers that have already been encountered. Rather, we need to focus on cyber-related behaviors. We need machines to begin characterizing typical behaviors across the Internet as a whole: the normal behavior patterns of people, of organizations and of other machines.
Such a focus on behavior becomes even more vital with the arrival of the Internet of Everything. This year there will be 14 billion devices on the planet, double the number in 2013, when the number of devices equaled the total human population. By the time we get to 2022, there could be anywhere between 75 billion and 300 billion Internet-connected devices on Earth, relative to 8 billion people.
So if your refrigerator is spotted accessing your organization’s payroll data at 11pm on a Friday night, you probably want to be alerted so you can examine what’s going on. And the only way you’ll do that in the exponential era ahead is by pairing machines and humans to focus on behaviors: machines continuously watch for and learn from behaviors, distinguish between normal and abnormal patterns, and then ‘tip-and-cue’ [autonomously trigger a response to an event], prompting a human expert to pay attention.
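The refrigerator scenario can be sketched in a few lines of Python. This is a deliberately minimal illustration, not a production approach: the class, event fields and device names below are all hypothetical, and a real system would learn statistical baselines rather than exact sets.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Learns which (resource, hour) pairs are normal for each device,
    then flags any access that falls outside that learned pattern so a
    human expert can be 'cued' to look at it."""

    def __init__(self):
        # device -> set of (resource, hour-of-day) pairs seen during learning
        self.seen = defaultdict(set)

    def learn(self, device, resource, hour):
        self.seen[device].add((resource, hour))

    def is_anomalous(self, device, resource, hour):
        # Tip-and-cue: an unseen behavior triggers an alert for human review
        return (resource, hour) not in self.seen[device]

baseline = BehaviorBaseline()
baseline.learn("fridge-01", "firmware-update-server", 3)   # normal nightly check-in
alert = baseline.is_anomalous("fridge-01", "payroll-db", 23)  # fridge reading payroll at 11pm
```

Here the machine does the continuous watching and pattern matching, and only the anomalous event (`alert` is true) is escalated to a person.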
There are models for how this level of behavioral analysis could work without having a negative impact on privacy. Analyzing behavior is not unlike gathering public-health data the way the US does at the Federal level: the authorities do not know who individuals are, just the signs, symptoms and demographics of disease. We can apply that model to cyber-security by focusing on the behaviors of these new Internet-connected devices, whether they are in a car, something you’re wearing or a system managing your home. We can protect privacy by designing privacy protections in from the start.
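A hedged sketch of what that public-health-style reporting might look like: the collector sees a behavior category and a coarse device class, but never a stable device identity. The function, field names and salting scheme below are illustrative assumptions, not a prescribed design.

```python
import hashlib

def report_event(device_id, device_class, behavior, salt):
    """Emit an aggregate-friendly record: the raw device_id is replaced by
    a salted one-way hash, so the collector can count 'symptoms' per device
    class without learning which specific device was involved. Rotating the
    salt periodically prevents long-term tracking of any one token."""
    token = hashlib.sha256((salt + device_id).encode()).hexdigest()[:12]
    return {"token": token, "class": device_class, "behavior": behavior}

rec = report_event("fridge-01-serial-8831", "appliance", "unusual-db-access", "2025-w01")
# rec carries the signs and symptoms, but not the identity
```

As with disease surveillance, the useful signal is the pattern of abnormal behaviors across a population of devices, not the identity of any individual owner.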
Smart together: humans and machines
It is not only in cyber-security that we are increasingly able to pair humans with machines.
In 2012, for example, a US charitable foundation ran a competition to see if anyone could build an algorithm that would mark student papers as well as a third-grade teacher. A British particle physicist, a data analyst for the US National Weather Service and a graduate student from Germany jointly won the $60,000 first prize.
And in 2014, Xerox developed an intelligent copier that could scan students’ handwritten tests and grade them for teachers, with no a priori programming involved.
Right now such approaches are still too expensive to replace professionals like teachers, but we are getting to the point where we can pass rote work to intelligent machines: letting machines handle tasks that require large-scale pattern recognition, paired with the things humans do well.
At this stage, such machines have the pattern-recognition capabilities of a five-year-old child but the growth in their intelligence signals how we are going to be able to address some of the exponential changes we’re now all facing. My visits to Taiwan and Australia as an Eisenhower Fellow convinced me that our exponential future is going to require us to explore new ways of humans working with machines so that collectively we’re smart together as a result.