The technology industry’s moment of truth

April 2019

Rapidly advancing digital technology has the power to both create and erode trust. Speakers at a recent Fujitsu Executive Discussion Evening weighed the responsibilities this places on tech creators, business leaders and policymakers:

Margaret Heffernan: CEO, broadcaster and author
David Gentle: Director of strategy and foresight, Fujitsu
William Tunstall-Pedoe: AI entrepreneur, former Amazon product principal and co-creator of Echo and Alexa.

Margaret Heffernan  CEO, broadcaster and author

In the context of technology, and particularly AI, trust and how it is built is what matters. I would argue that building trust requires four key ingredients: benevolence, competency, consistency and integrity.

But, even in these early days of AI, those qualities have been applied sporadically and unreliably in the implementation of some systems. And we're seeing a whole host of problems as a result.

First, there’s the issue of consent. This year, for instance, it emerged that a big AI education vendor had inserted social psychological interventions into one of its commercial learning programs to test how thousands of students would respond, unbeknown to them, their parents or their teachers.

Then there’s bias. What is made is a reflection of the people who make it. In the case of AI, we’ve already started in a wildly unrepresentative place — and that’s deeply troubling, not least because 96% of the world’s code is written by men.

Consider AI and datasets: some UK police forces have signed up for tech that claims it can predict whether specific individuals are likely to commit crimes. The AI system learns to recognize the faces of potential criminals by using data on existing prisoners — data that, of course, represents only those who’ve been caught.

Meanwhile, it has been shown that AI systems used by some fast-food companies for recruitment purposes have unlawfully screened out applicants with any history of mental illness.

In another case, an attempt by the city of Boston to use AI to make the allocation of school places fairer backfired. It turned out that the AI programmers had little or no understanding of the demands on the schedules of poorer families where parents are holding down multiple jobs.

There are many other examples of where AI has crossed the fundamental boundary between objective analysis and moral judgment. Who is making these judgments? In whose interests? According to whose values? In pursuit of what goals?

When such judgments are made, we deserve the chance to understand them, validate them and, if necessary, contest them.

The propaganda surrounding AI states — wrongly — that the technology is inevitable and unequivocally productive, so there’s no point in asking questions. This is not language that builds trust.

96% of the world’s code is written by men, which is a deeply troubling starting place for bias in AI.

I may sound like a naysayer, but I love technology – it’s been my career. I simply want AI to live up to its promise, because trust won’t be an issue if it does.

But this requires a number of things to happen. One of them is an end to the propaganda of inevitability, which is a strange brew of sales hype and determinism. It’s misleading, dishonest and confusing. It isn’t the language of benevolence; it’s the language of the bully.

Another requirement is that all AI systems need to be designed so that they can easily be audited for their regulatory compliance. And the technology community needs to involve citizens in deciding what AI does and how far it goes. Only by participating in decisions about the limits and boundaries of AI will it earn sufficient public trust.

The speed at which AI is developing makes this a matter of urgency — for citizens who fear unfair and unlawful treatment; for employees who are worried about what they’re being roped into and made responsible for; and for companies that can flourish only where they are highly trusted.

David Gentle  Director of strategy and foresight, Fujitsu

There is little doubt that trust is of critical importance in our lives. All of our personal relationships are built around trust; and no business could function without a high degree of trust.

So, if trust has always been important, why is it such a hot topic today? A core reason is simply the level of change we are seeing — particularly the rapid advancement of technology and the depth to which technology is changing the world around us.

My sense is that we are now at something of an inflection point with technology. Take just one major milestone that was passed last year — surprisingly without much fanfare — which was that more than 50% of the world’s population is now online. So being connected, being digital has now become the norm and, going forward, being offline and non-digital will increasingly become the exception.

It is really noticeable, too, just how immersive the online world is becoming; how deeply engaged we are in it as individuals. And for people living in the West, that translates into spending, on average, around a third of their waking time online.

Maybe even more significant are the changes in commercial environments, where we are seeing rapidly advancing digital companies. Of course, it’s nothing new for organizations to grow large and have hundreds of millions of customers. But what is really new is the speed and the scale at which digital companies have been advancing, coupled with the depth of knowledge they have of their customers.

However, to use an analogy from theatre, it feels like we have reached the end of act one, which was all about the emergence of exciting new technologies and companies. Now we are moving into act two and it’s all about consequences and dealing with the implications of protagonists’ actions.

And trust is central to that, even though it’s an abstract concept that is not easily defined. The business school lecturer and author Rachel Botsman calls trust “a confident relationship with the unknown.”

That helps us understand the key challenge around trust in the digital age: the importance of finding ways to build greater confidence in this more uncertain, complex world and so develop better levels of trust.

William Tunstall-Pedoe  AI entrepreneur, former Amazon product principal and co-creator of Alexa

What do AI and other advanced technologies mean for trust? We can think of trust simply as a mental model of the ability, reliability and truthfulness of something or someone. It’s what drives interactions between people and businesses.

But several things are changing the nature of trust. One is that a lot of our day-to-day interactions have become virtual, where we’re not engaging directly with anyone.

Another big factor is the influence of social media, which can record and amplify much more of life. These networks enable anyone to create content that can reach a potential audience of millions. Single incidents posted on social media have been able to destroy trust in a way that simply wasn’t possible in the past.

Bias is a big concern too. Machine-learning systems are built to learn from historical data. So a system looking at recruitment and selection data, for instance, will replicate any existing biases and then apply those at scale.
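The mechanism is simple enough to sketch. The toy example below uses entirely hypothetical hiring data (the group labels, numbers, and the frequency-based "model" are all illustrative assumptions, not a real recruitment system): past human decisions favoured one group, and a model that learns only from those outcomes reproduces the same preference for every future applicant.

```python
# Minimal sketch of bias replication at scale. All data here is
# hypothetical: group "A" was favoured by past human decisions, and
# the records simply encode that history.
from collections import defaultdict

history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Training": estimate each group's historical hire rate.
stats = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    stats[group][0] += hired
    stats[group][1] += 1

def predict_hire(group):
    """Recommend 'hire' when the group's historical hire rate exceeds 50%."""
    hired, total = stats[group]
    return hired / total > 0.5

print(predict_hire("A"))  # True  -- the historical preference is learned
print(predict_hire("B"))  # False -- and applied to every new applicant
```

A real machine-learning pipeline is far more elaborate, but the failure mode is the same: the model has no notion of which historical patterns were legitimate signal and which were discrimination, so it encodes both and applies them uniformly.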

‘Black-box’ decision-making also has very serious implications for trust. Modern machine-learning systems intrinsically cannot explain their decisions in a way that makes sense to humans. If a lender’s AI system refuses your loan application it is possible that the bank will not be able to tell you why.

AI even has the potential to destroy trust in things that are intrinsically trustworthy to us. We trust our own senses, but some recent technological advances have actually called these into question.

The AI assistant Google Duplex, for instance, can hold a completely natural conversation when making a restaurant booking over the phone. This application (already live in the US) shows that we’ve already reached a point where it’s possible for a person talking to a machine to have no idea they’re talking to a machine.

Despite all these problematic trends, I am optimistic. I believe technology will eventually be the solution to many of the trust issues it has created.


Illustrations: Paddy Mills/Synergy Art

• Attend the next Fujitsu Executive Discussion Evening


First published April 2019
