Why corporates need to pool security intelligence — and instil trust in AI

David Needle – November 2019

Companies continue to invest billions battling cyberthreats, but there’s no sign the war is being won. MIT professor Daniel Weitzner has some advice on what needs to happen to fix the situation.

Security products abound, but will there ever be a proverbial magic bullet that keeps organizations’ data safe? Not if the security industry continues to behave as it has for decades, according to professor Daniel Weitzner, principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory.

“Enterprises do the best they can, [selecting from] a range of exciting technologies and carefully deploying solutions. But there is not a principled understanding of security risks or serious investment strategy when it comes to defending against cyber threats,” Weitzner said in his keynote at the recent Fujitsu Laboratories Advanced Technology Symposium in Santa Clara, California.

“When you have an honest talk with CISOs and ask them how much they spend on security and whether it’s enough, they tend to make comparisons to their peers [rather than assess the effectiveness of the spend]. No one is able to measure the ROI on different cybersecurity defensive approaches,” he added.

Weitzner noted that security budgets continue to grow as companies pile on security innovations in an effort to protect their organizations. He cited JPMorgan Chase CEO Jamie Dimon, who stated in the company’s recent earnings report that it spends almost $600 million a year on cybersecurity and has 3,000 employees focused on the issue.
Shared cyber-intelligence

But Weitzner, who is also director of the MIT Internet Policy Research Initiative, is doing more than pointing out the problem; he’s doing something about it. He is working with a group of 10 large enterprises that have agreed to share information through a secure platform about their cybersecurity defenses: what kinds of attacks they’ve faced, where those defenses have failed and what the cost of those failures has been. “We are gradually moving to a point where we will be able to make concrete formal claims about what kinds of defenses are effective and which ones are not,” he told the audience of leading technology researchers and executives.

Weitzner has been shocked to find that such collaborations are rare or non-existent in many industries, especially as the early results he’s seen are promising. “We are starting to learn there are bundles [of security solutions] that are effective, and others that aren’t,” he said. “We have to move beyond companies just piling on defense after defense, new technology after new technology, and really understand how to set priorities and security investment strategies — how to instrument the environment better so we know how to respond.”
Global standard for privacy

Digital privacy was also an issue Weitzner explored, building on earlier remarks from Dr Hirotaka Hara, CEO of event host Fujitsu Laboratories.

In a recent Fujitsu-sponsored survey of 900 business leaders across nine countries, 82% felt it was important for them to have full control over their personal data. “They are worried about how their data is being handled,” said Hara. He pointed to Fujitsu’s IDYX authentication technology as one potential solution because it gives users control over their online identity and over what can and cannot be made available to other services.

But challenges remain. “Privacy is so hard because it means so many different things,” said Weitzner. “If customers feel overly tracked, the companies doing it will be challenged in the marketplace.

“Even today, we still don’t have the tools to enable anyone who holds personal data to really police it adequately, to control it and make sure the context in which it is collected is respected,” he added.

The European Union’s General Data Protection Regulation is not without its critics, but Weitzner said it has forced other countries and organizations to rethink how they handle data and privacy concerns.

What’s needed now, said Weitzner, is a broader agreement — ideally worldwide — on data protection. “We need a global approach to privacy, and [in particular] how AI is governed,” said Weitzner.
Opening the ‘black box’ of AI

In the Fujitsu survey, a majority of respondents (52%) said they don’t trust the judgments that emerge from AI, yet 63% said they would trust AI if it were clear how the technology came to a decision.

“Customers gain confidence with AI when there’s an explanation of what the system is doing,” said Hara. Too often in today’s AI systems, he said, there’s a tendency to ‘black box’ the technology as something that just delivers answers, so most users have no idea how AI-powered decisions are made or whether they can trust the outcomes.

Weitzner called for a greater industry and academic focus on ensuring that we know how AI systems reach their conclusions. Indeed, at a higher level, he argued that much more needs to be done around industry standards to establish trust in technology.

As an example, he detailed how a study by MIT’s Internet Policy Research Initiative showed that early facial recognition systems were deeply flawed. “The accuracy was pretty good if you were a light-skinned male and not so bad if you were a light-skinned female, but if you were a person of color, the chances of you being identified were extraordinarily low,” he said.

AI wasn’t the source of the problem; rather, the data on which it was based was the culprit. Weitzner explained that early facial-recognition photographic systems used images of white women because they photographed more clearly within the limits of the earlier technology. “The photographic technology we are building facial recognition on had a built-in bias,” said Weitzner.

Once the problem was recognized, Weitzner said, some companies scaled back their marketing of facial recognition technology. But he worries that others continue to offer “sub-standard technology” that threatens to cast AI in a highly negative light in the eyes of business and consumers. “There is a lot of activity in the political realm to try to limit the use of facial recognition technology, particularly in law enforcement applications, because of the bias problem,” he said. “To me this is really a cautionary tale in what happens when we don’t take governance considerations into account in developing new technology. Taking an entire and very promising sector of the AI marketplace and putting a black mark on it says, ‘We don’t really know if it can be trusted.’

“It will be possible to restore trust but it would have been much better if we had not had to go through this cycle of skepticism and resistance to this technology. It’s something the industry needs to figure out how to be much better at,” he said.

“We are now in the second phase of the internet where we see [these powerful] technologies pervasively connected to people’s lives, and realize we have to take a broader view of questions of governance in order to have a more trusted environment,” he concluded.

Watch the full video of Daniel Weitzner’s keynote, and other presentations, at Fujitsu Laboratories Advanced Technology Symposium 2019

Download a free report on the Top 12 Cyber Security Predictions for 2020

First published November 2019