The rise of corporate digital responsibility
The technologies reshaping business are also forcing a rethink of corporate responsibility. Three IT leaders, from DHL, Investec and Fujitsu, explore what it takes to be a good corporate citizen in a digital age.
At Deutsche Post DHL (DPDHL), we’ve always placed importance on traditional corporate social responsibility. Our Disaster Response Teams help coordinate relief goods in regions affected by natural disasters. With our Mission Zero we want to reduce all logistics-related emissions to zero by the year 2050. And our Upstairs program supports the career prospects of our employees’ children, making sure they can acquire the tech skills needed to participate in today’s digital world.
But as the world becomes more digital, companies will be faced with an ever-growing need to adopt a robust corporate digital responsibility (CDR) approach to protect both customers and employees. CDR is about making sure new technologies — and data in particular — are used both productively and wisely.
There are four drivers that call for a CDR strategy: the increasing concerns from customers and governments about the use and abuse of personal data; the impact and challenges of automation and robotics; the potential for unethical use of new technologies; and finally, the so-called digital divide.
To us, CDR means taking these concerns and fears seriously and addressing them in a way that will profit us, our employees, our customers and the countries in which we do business.
Change is frightening for most people, particularly when it involves technology. What we need to do is ‘translate’ the change, to explain what these technological advances mean for them. We need to let people know what we’re doing to safeguard their data and how they can benefit from these technologies. At DPDHL we have a comprehensive data security framework, as well as an employee engagement program, which educates our employees about how they can handle digital information. We are equally focused on data protection and security. And for our customers, we have an IT disaster resilience solution that prevents loss of data and productivity in the event of a catastrophe.
On the hot topic of robotics and automation, we need to be upfront: yes, the job market will change but this development will likely create new, if different, jobs. We’re dealing with nothing short of a transformation of work, and our workforce strategy has to be in tune with that. We need to be smart and invest in upskilling our people, honing change management skills and creative, problem-solving abilities.
Corporations need to be ready to account for their business practices at all times if they want to stay in the market. At DPDHL, we’re very explicit about this with our Responsible Business Practices (RBP) program. We are committed to adhering to laws and regulations, ethical standards and international norms, and we make sure our suppliers and subcontractors do so too. Our customers want to be sure their entire supply chain follows ethical and environmental standards. A good RBP program helps with recruitment, retention and engagement of employees. And RBP is an important factor in making sure we remain a responsible corporation and do our part for the betterment of society.
While there’s still no agreed understanding of the term ‘corporate digital responsibility,’ most of the activity in this area today relates to protecting data.
As with any innovation, I’d argue that it is safer to be principles-based first and solutions-based second. So rather than just applying the rules to avoid a big data breach fine, you need to build the principle of safeguarding customer data into the fabric of your systems, with a set of ‘tramlines’ guiding how data is used.
Indeed, through the centuries business has become quite good at creating the tramlines needed to guide people’s behaviour and ethics, which has ultimately resulted in better business decisions. And that certainly applies to one of the most fascinating areas of technology today: the rise of AI and robots.
We are at quite an early stage in our robotics journey, but it’s already fascinating to me how many tech partners are approaching this as a piece of kit that you could deploy to do a job. Robotics is so much more than that, and we need to ask how to make its application safe.
What if we were able to make robots available to anybody in the business with just a little training — and it took off? What kind of mess would that create? We need to think about how we guide such applications. The light-bulb moment came when I realized robots need to be treated as if they were employees.
Start with HR processes: compliance, starters and leavers, on-boarding and so on. If you treat robots as if they were digital people — knowing what they are doing, managing the scope of their activities and making someone responsible for their work — then all the existing people rules can be applied to them. For example, in areas such as financial services there are strict rules about segregation of duties; you can’t have the same person making financial trades and settling them. The same needs to be applied to robots: segregation of robot duties is needed, with a line manager in place to oversee them. By treating robots as people, all sorts of control issues become obvious.
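The "robots as digital employees" idea can be made concrete in code. Below is a minimal sketch — not any real RPA platform's API — assuming hypothetical duty names and a simple conflict table, showing how the segregation-of-duties rule from financial services might be enforced when granting work to a robot:

```python
from dataclasses import dataclass, field

# Hypothetical duty labels for illustration; a real platform
# would define its own catalogue of robot activities.
MAKE_TRADE = "make_trade"
SETTLE_TRADE = "settle_trade"

# Pairs of duties that one robot (like one person) must never hold together.
CONFLICTING_DUTIES = {frozenset({MAKE_TRADE, SETTLE_TRADE})}

@dataclass
class Robot:
    """A software robot treated like a digital employee:
    it has a name, a responsible line manager, and a scoped set of duties."""
    name: str
    line_manager: str
    duties: set = field(default_factory=set)

def grant_duty(robot: Robot, duty: str) -> None:
    """Grant a duty only if it does not violate segregation of duties."""
    for pair in CONFLICTING_DUTIES:
        if duty in pair and (pair - {duty}) & robot.duties:
            raise PermissionError(
                f"{robot.name}: '{duty}' conflicts with its existing duties"
            )
    robot.duties.add(duty)

bot = Robot(name="trade-bot-01", line_manager="alice")
grant_duty(bot, MAKE_TRADE)        # allowed
try:
    grant_duty(bot, SETTLE_TRADE)  # blocked: same 'person' can't do both
except PermissionError as err:
    print(err)
```

The point of the sketch is that once a robot is modelled like an employee — with an owner and an explicit scope of work — existing people controls (here, a conflict-of-duties check) apply to it with almost no translation.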
In the age of digitalization, IT is driving a major shift in how we define both ourselves and the world around us. The prevailing expectation is that the digital revolution will help create a better quality of life for all of us, enabling sustainable growth and innovation. However, as with all major changes, it comes with risks, as well as opportunities. Current socio-political debates center on privacy concerns, consumer rights and the question of when technology becomes intrusive — or even takes over. Mastering the balancing act between digital innovation and respecting each individual’s rights — to freedom, safety, autonomy and a working life — is a major challenge for industry, academia and policy-makers alike.
Dealing with these uncertainties, systematically addressing the risks of digitalization and setting criteria for the responsible use of IT are key tasks for organizations that take corporate digital responsibility seriously. These are topics that need to be discussed at boardroom level, embedded into the very core of the organization and translated into day-to-day operations. This needs to be a continuous process, always reviewing and defining what constitutes responsible and ethical behavior in a digital economy. And any considerations must always center on the rights and needs of people — the users of new technologies: from protecting their human rights to ensuring adequate data privacy and autonomy in a digital world.
While digital technologies per se are neither bad nor good, whether they make the world a better and safer place depends largely on how they are developed and applied — and by whom. You can create an AI algorithm for evil purposes just as you can for something good. The challenge is to bring your business values into technology development so you are still a good corporate citizen. That is the thinking behind Fujitsu’s Human Centric AI Zinrai, which combines AI, robotics and cybersecurity so that AI is embraced and implemented to make our daily lives better.
Corporate digital responsibility is a voluntary commitment. It starts with the need to conform to legal requirements and standards — for handling customer data, confidential information, intellectual property and so on — but it also extends to wider ethical considerations and the fundamental values that an organization operates by.