Artificial Intelligence, 5G Networks, Quantum Computing, Virtual Reality, Blockchain: It’s Important To Pause And Think Of Potential Risks, Negative Outcomes, And Unintended Consequences

There’s a familiar pattern when a new technology is introduced: It grows rapidly, comes to permeate our lives, and only then does society begin to see and address the problems it creates. But is it possible to head off those problems before they take hold? While companies can’t predict the future, they can adopt a sound framework that will help them prepare for and respond to unexpected impacts. First, when rolling out new tech, it’s vital to pause and brainstorm potential risks, consider negative outcomes, and imagine unintended consequences. Second, it can be clarifying to ask, early on, who would be accountable if the organization had to answer for the unintended or negative consequences of its new technology, whether that means testifying to Congress, appearing in court, or answering questions from the media. Third, appoint a chief technology ethics officer.

We all want the technology in our lives to fulfill its promise – to delight us more than it scares us, to help much more than it harms. We also know that every new technology needs to earn our trust. Too often the pattern goes like this: A technology is introduced, grows rapidly, comes to permeate our lives, and only then does society begin to see and address any problems it might create.

This is not exclusively a modern phenomenon. Consider the early days of the mass-produced automobile. As drivers embraced an exciting new mode of transport, accidents and fatalities were many times more likely than they are today. Imagine if the growth of the automobile had gone differently, with seatbelts developed, safer roadways built, and better traffic laws implemented much earlier, right along with the engine innovations that gave us more power and greater speed. Many risks could have been mitigated and many tragedies avoided.

Companies have to learn to be responsible stewards of the artificial intelligence (AI) they deploy, the 5G networks they have begun to build, and so much more that’s coming toward us, from quantum computing to virtual reality (VR) to blockchain. Technologies that drive sweeping change and are central to the growth of the economy should be trustworthy. Companies that deliver technological advances need to act ethically. The stakes are high. And yet, knowing what needs to be done is not the same as knowing how.

Organizations should try to anticipate and address the potential effects of the technologies they deploy. While they can’t predict the future, they can adopt a sound framework that will help them prepare for and respond to unexpected impacts.

Such a framework would need to fundamentally shift how we develop and deploy new technologies, revamping existing processes along the way. It should reflect that this is a team effort, not just the job of engineers and managers: It should cut across disciplines and open doors to new ways of thinking about the challenges. The goal here is to describe a framework that can do these things.

Involve Specialists

Those who are immersed in the world of software engineering – and I count myself in this group – are often inclined to see, first and foremost, the promise of a technology and the opportunity to create value. Even as questions about a technology’s impacts become more common, engineers still have a long way to go in understanding the potential harms. Engineers and software developers do not necessarily have all the expertise they need to understand and address the ethical risks their work might raise.

In other words, there is a clear role here for specialists from other disciplines. We need to shift our priorities to help technology development teams think with more foresight and nuance about these issues, guided by those with the most relevant knowledge.

Consider, for instance, the development of a VR training tool that immerses the user in a difficult or dramatic emergency response situation. As the technology evolves, VR simulations are becoming so realistic that the possibility of actual trauma from a virtual experience might need to be addressed. The team would want to have a psychologist involved, working side by side with the software engineers, to tap into the body of knowledge about what can cause trauma or how it might be identified and addressed.

Take cloud manufacturing and 3-D printing as another example. As companies pursue these technologies, which have the potential to dramatically change the skillset needed on the factory floor, they might talk to labor economists who can shed light on larger workforce issues. As 5G connectivity brings factories online that can be managed entirely remotely, companies may want to consult with specialists in plant security, cybersecurity, and perhaps even philosophy to understand the potential pitfalls created by factories that don’t employ people.

Pause And Plan

During the strategic planning stage, a team will naturally focus its attention on what’s possible. That’s where the excitement and enthusiasm lie. But there also has to be attention paid to understanding what can go wrong. It’s vital to pause and brainstorm potential risks, consider negative outcomes, and imagine unintended consequences.

This might mean that, as businesses rush toward a 5G future with a giant leap in data speeds and ubiquitous connectivity, they should pause to consider and address new privacy concerns – perhaps well beyond those we are already grappling with. Or they could consider how this leap in data speeds may exacerbate inequity in our society and widen the digital divide.

This step in the planning process, in which risks are brainstormed and analyzed, should be documented just as clearly as the value proposition or the expected return on investment. The mandate to document this work helps ensure that it actually becomes part of the effort. Deloitte has developed more than 300 questions exploring ethical and trust dimensions that can inform the strategic planning process and its results.

The tendency is to think of tech ethics in the context of issues that have already surfaced, such as discriminatory bias in social media marketing or talent acquisition systems. But this is a flawed way of thinking, because it can stop us short of seeing other, potentially far greater, ethical risks, and it fails to consider the nuances that cut across organizations, industries, customers, and products.

After assessing the potential negative impacts of a particular new technology and building a team with relevant experience, it’s important to go deeper. The team has to do the research, and from there it can begin to establish the guardrails that minimize risks and mitigate potential harm. Leaders should mandate the development of ethical risk mitigation strategies as part of the planning for any new technology project, as sketched below.
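
To make that mandate concrete, each brainstormed risk could be recorded in a structured form that sits alongside the business case. The Python sketch below is a minimal illustration only: the EthicalRiskEntry template, its field names, and the example values are hypothetical assumptions for demonstration, not Deloitte’s actual question set or a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class EthicalRiskEntry:
    """One brainstormed risk, documented alongside the value proposition."""
    technology: str              # what is being deployed
    risk: str                    # the potential harm or unintended consequence
    affected_parties: list       # who could be harmed
    severity: Severity           # triage level for prioritizing mitigation work
    mitigation: str              # the guardrail the team commits to building
    accountable_owner: str       # who answers if this risk materializes (hypothetical field)
    specialists_consulted: list = field(default_factory=list)


# Example entry, using the VR training scenario discussed earlier:
vr_trauma_risk = EthicalRiskEntry(
    technology="VR emergency-response training tool",
    risk="Highly realistic simulations may cause real trauma in trainees",
    affected_parties=["trainees"],
    severity=Severity.HIGH,
    mitigation="Psychologist reviews scenarios; opt-out and debrief protocols",
    accountable_owner="Chief technology ethics officer",
    specialists_consulted=["clinical psychologist"],
)

print(f"{vr_trauma_risk.risk} -> {vr_trauma_risk.mitigation}")
```

Whatever form the record takes, the point is that an unmitigated risk with no named owner should be as visible in the plan as a missing revenue projection. The accountable_owner field above anticipates the question taken up in the next section.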

Assign Accountability

It can also be clarifying to ask, early on, who would be accountable if an organization has to answer for the unintended or negative consequences of its new technology. Accountability should be considered when documenting the approach to potential impacts during the strategic planning process.

When a company is called to account for the technology it developed or deployed, someone could end up testifying to Congress, appearing in court, or answering questions from the media. Will that person be the CEO or the CIO, a data scientist, a founder, or somebody else?

A discussion on this subject may well encourage more rigorous thinking about what could go wrong. Of course, more rigorous attention to such negative outcomes is exactly the point – and may improve the chances that no one ever has to appear before policy makers or a judge to address how a piece of technology came about in the first place.

Appoint A Chief Tech Ethics Officer

The best methods for addressing the ethics of new technologies will not be one-size-fits-all. A broad range of potential impacts may need to be examined, and a varied collection of potential risks may have to be mitigated. But most organizations would likely benefit from placing a single individual in charge of these processes. This is why organizations should consider appointing a chief ethics officer – or a chief technology ethics officer – who would have both the responsibility and the authority to marshal the necessary resources.

Some industries have grappled with trust and ethics challenges for decades. Hospitals and research centers have long employed ethics officers to oversee questions in research projects and clinical medical practice, for instance. Technology can certainly raise new concerns, even here: Think of a medical school implementing a VR tool to help augment the competency of surgeons and the importance of examining whether the tool works equally well across race or gender. But the broader point is that trust and ethics issues can be managed effectively – as long as the proper leadership commitments are made.

With a chief technology ethics officer in place, it remains important to involve specialists from a number of different disciplines, as discussed previously. These people may come from the fields of anthropology, sociology, philosophy, and other areas. Depending on the issues presented by a specific technology or application, it may be necessary to seek out people who bring knowledge of law, politics, regulation, education, or media.

Challenges And Opportunities

Our ability to manage technology ethically and to build trust in our tools will only grow in importance in the coming years, as technology evolves, accelerates, and reaches more deeply into our lives. This will challenge every company and business, and it may hold bruising lessons for organizations that fail to keep pace.

But opportunities abound. If you can imagine a world in which vehicle and traffic safety had progressed as fast as, or faster than, the automobile itself, then you can imagine the benefits that await those who get these challenges right.

Originally posted on hbr.org by Beena Ammanath.

About the Author: Beena Ammanath is the Executive Director of the global Deloitte AI Institute, author of the book “Trustworthy AI,” founder of the non-profit Humans For AI, and leads Trustworthy and Ethical Tech for Deloitte. She is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains, with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, and E*TRADE.