IDG Contributor Network: Taking responsibility for responsible AI
Artificial Intelligence (AI) affords a tremendous opportunity not only to increase efficiencies and reduce costs, but to help rethink businesses and solve critical problems. Yet for all the promises AI holds, there’s an equal amount of anxiety, across economies and societies. Many people feel that advanced technologies will bring profound changes that are predestined and inevitable.
As AI becomes more sophisticated, it will start to make or assist decisions that have a greater impact on individual lives. This will raise ethical challenges as people adjust to the larger and more prominent role of automated decision making in society.
As such, business leaders and policy makers need to take these fears seriously – and take responsibility for enabling and applying ‘Responsible AI.’
Responsible AI can be described as adhering to the following principles:
- Accountability and transparency
- Security and safety
- Human-centric design
Accountability and transparency
If AI is used to evaluate applications for a new job, how can applicants be assured of a fair and impartial review? How can potential employees be confident that AI isn’t prone to errors or inherently biased based on how it was trained?
There is a danger that automated processes can ‘learn’ patterns that lead to outcomes we may not desire, through procedures we may not understand or cannot explain. Algorithms might perpetuate existing bias and discrimination in society, adding to the lack of trust around implementing AI.
For instance, researchers at the University of Virginia trained an AI program on a widely used photo data set and discovered that the system amplified traditional gender biases: it went so far as to label a man standing at a stove as a woman, a mistake a human reviewer would be unlikely to make.
Leaders need to make machine decision-making transparent. Their AI will have to generate explainable results, promote algorithmic accountability and eliminate biases. This means leaders need to find ways to, for example, audit algorithms and set up assessments of processed data so that bias is caught. It must also be clear where liability lies when systems make mistakes.
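One simple form such an audit can take is comparing outcome rates across groups. The sketch below is purely illustrative (the function name, data shape and 0.8 threshold are assumptions, loosely modeled on the "four-fifths rule" used in US employment-discrimination guidance), not a complete fairness methodology:

```python
# Hypothetical bias audit: compare approval rates across groups and
# flag any group whose rate falls well below the best-treated group's.
# All names, data and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def audit_selection_rates(decisions, threshold=0.8):
    """decisions: iterable of (group, approved) pairs.
    Returns the groups whose approval rate is below `threshold`
    times the highest group's rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Synthetic example: group A approved 80% of the time, group B 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(audit_selection_rates(decisions))  # → {'B': 0.5}
```

A check like this catches only one narrow kind of disparity; in practice audits would also examine the training data, feature choices and error rates per group.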
Security and safety
Safety and security will be required to build consumer trust in AI and automated systems. Establishing financial liability is one way to do this, but ensuring physical safety and security is the key.
Autonomous vehicles are where the discussion is most advanced. In 2017, Germany passed a law that apportions liability between the driver and the manufacturer depending on whether the driver or the autonomous vehicle was in control. Following this approach, car manufacturers have said they will assume responsibility if an accident takes place while their ‘traffic jam pilot’ is in use.
What is new here is considering liability where services are provided by car manufacturers, in the context of legacy insurance constructs. Organizations can help address some concerns by clarifying and developing a more universal understanding of liability. Beyond establishing financial liability, however, there are legitimate concerns around safety and security that organizations need to address, including ensuring that security by design is applied in autonomous systems.
This includes making sure that an AI is trained to “raise its hand” and ask for intelligent, experienced human support to make the final call in borderline cases, instead of proceeding when it is only 51 percent sure of something.
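That “raise its hand” behavior amounts to a confidence threshold on the model's output. A minimal sketch, assuming a hypothetical `decide_or_defer` wrapper and an arbitrary 90 percent threshold (neither comes from the article):

```python
# Illustrative "raise its hand" policy: act on the model's label only
# when its confidence clears a bar; otherwise escalate to a human.
# The function name and 0.90 threshold are assumptions for this sketch.
def decide_or_defer(label, confidence, threshold=0.90):
    """Return ('decided', label) when confidence >= threshold,
    else ('deferred_to_human', label) to route the case for review."""
    if confidence >= threshold:
        return ("decided", label)
    return ("deferred_to_human", label)

print(decide_or_defer("approve", 0.97))  # → ('decided', 'approve')
print(decide_or_defer("approve", 0.51))  # deferred: 51% sure is not enough
```

Where to set the threshold is itself a judgment call that depends on the cost of a wrong automated decision versus the cost of human review.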
Human-centric design
AI should enable enhanced judgement and help people identify and address biases, not invisibly alter their behavior to reflect the desired outcome. This goes beyond eliminating bias in an AI’s own decisions.
Leaders should also recognize the power of humans and AI having complementary skills. Human-centric AI capitalizes on the critical-thinking skills that humans excel at and combines them with the massive computational power of AI.
For example, a Harvard-based team of pathologists created an AI-based technique to identify breast cancer cells with greater precision. Pathologists beat the machines with 96 percent accuracy versus 92 percent. But the biggest surprise came when humans and AI combined forces. Together, they accurately identified 99.5 percent of cancerous biopsies.
In many cases, AI is faster at making decisions; that doesn’t necessarily mean it makes better decisions. Leaders need to ensure that their organization’s values are embedded in their AI. Deploying AI without anchoring it to robust compliance and core values may expose a business to significant risks, including employment/HR, data privacy, and health and safety issues. In some cases, AI is even becoming the primary face of the organization, in the form of chatbots and digital customer service agents. This public-facing role of AI means that customers must be able to see the company’s values come through.
My colleagues at Fjord recently proclaimed that we’re witnessing the rise of an Ethics Economy, which means whether organizations have values and defend them has never been more business-critical than today – and AI needs to reflect them.
This article is published as part of the IDG Contributor Network.