As more organizations integrate artificial intelligence into their processes, leaders are taking a closer look at AI bias and ethical considerations. Consider these key questions.
Do you have some anxiety about AI bias or related issues? You’re not alone. Nearly all business leaders surveyed for Deloitte’s third State of AI in the Enterprise report expressed concerns about the ethical risks of their AI capabilities.
There is certainly some cause for uneasiness. Nine out of ten respondents to a late 2020 Capgemini Research Institute survey were aware of at least one case in which an AI system had created ethical issues for their businesses. Nearly two-thirds had experienced problems with biased AI systems, six out of ten said their organizations had attracted legal scrutiny as a result of AI applications, and 22 percent said they had lost customers because of decisions reached by AI systems.
As Capgemini leaders pointed out in a recent blog post: “As enterprises explore the potential of AI, they must ensure that they apply it in the right way.”
7 Artificial Intelligence ethics questions leaders often hear
While organizations aggressively pursue increased AI capabilities, they will look to IT and data science leaders to explain the risks and best practices around ethical and trusted AI. “AI is becoming pervasive, so adopters should be thoughtful, become smarter AI consumers, and establish themselves as transparent stewards of customer data to remain relevant and stay ahead of the competition,” says Paul Silverglate, U.S. technology sector leader for Deloitte.
Here, AI experts address some common questions about ethical AI. You may hear these from colleagues, customers, and others. Consider them in the context of your organization:
Isn’t AI itself inherently ethical and unbiased?
It may seem that technology is neutral, but that is not exactly the case. AI is only as equitable as the humans that create it and the data that feeds it. “Machine learning that supports automation and AI technologies is not created by neutral parties, but instead by humans with bias,” explains Siobhan Hanna, managing director of AI data solutions for digital customer experience services provider Telus International.
“We might never be able to eliminate bias, but we can understand bias and limit the impact it has on AI-enabled technologies. This will be important as the cutting-edge, AI-supported technology of today can and will become outdated rapidly.”
What is ethical AI?
While AI or algorithmic bias is one concern that the ethical use of AI aims to mitigate, it is not the only one. Ethical AI considers the full impact of AI usage on all stakeholders, from customers and suppliers to employees and society as a whole. Ethical AI seeks to prevent or root out potentially “bad, biased, and unethical” uses of AI. “Artificial intelligence has limitless potential to positively impact our lives, and while companies might have different approaches, the process of building AI solutions should always be people-centered,” says Telus International’s Hanna.
“Responsible AI considers the technology’s impact not only on users but on the broader world, ensuring that its usage is fair and responsible,” Hanna explains. “This includes employing diverse AI teams to mitigate biases, ensure appropriate representation of all users, and publicly state privacy and security measures around data usage and personal information collection and storage.”
How big a concern is ethical AI?
It’s top of mind from board rooms (where C-suite leaders are becoming aware of risks like biased AI) to break rooms (where employees worry about the impact of intelligent automation on jobs).
“As organizations become more pervasive in their use of AI, it’s essential that they have a standard framework and practices for the board, C-suite, company, and third-party ecosystem to manage AI risks and build trust with both their customers and business partners,” says Irfan Saif, AI co-leader at Deloitte.
How does diversity (or lack thereof) impact ethical AI?
A culturally diverse team is a powerful way to detect and eliminate conscious and unconscious biases before they are baked into AI, Hanna says.
“By tapping into the strength of this diversity, your brand might be better positioned to think and behave differently about trust, safety, and ethics and then transfer that knowledge and experience into AI solutions,” says Hanna, who recommends a “human-in-the-loop” approach. This ensures that algorithms programmed by humans with inherent blind spots and biases are reviewed and corrected in the early phases of development or deployment.
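One simple way the “human-in-the-loop” approach Hanna describes is often put into practice is a confidence gate: model outputs the system is unsure about get routed to a human reviewer rather than acted on automatically. The sketch below is illustrative only; the function name, labels, and 0.85 threshold are assumptions, not part of any specific product.

```python
def route_prediction(label, confidence, threshold=0.85):
    """Route a model's output: act automatically only when confidence is high.

    label:      the model's predicted decision (e.g. "approve")
    confidence: the model's confidence score, 0.0 to 1.0
    threshold:  illustrative cutoff below which a person must review
    """
    if confidence >= threshold:
        return ("auto", label)        # high confidence: act on the output
    return ("human_review", label)    # low confidence: queue for a reviewer

# Example: a confident prediction is automated; a shaky one goes to a person
print(route_prediction("approve", 0.97))  # ('auto', 'approve')
print(route_prediction("deny", 0.62))     # ('human_review', 'deny')
```

In practice, reviewers’ corrections can also be fed back into training data, which is where the approach helps surface the blind spots and biases Hanna mentions.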
What else are leading companies doing to address the ethical risks of AI?
At a tactical level, avoiding data-induced AI bias is critical. CompTIA advises ensuring balanced label representation in training data, as well as making sure the purpose and goals of the AI models are clear enough that proper test datasets can be created to test the models for biases. Open-source toolkits from organizations such as Aequitas can help measure bias in datasets. Themis-ml, an open-source machine learning library, aims to reduce data bias using fairness-aware algorithms. There are also tools available to identify flawed algorithms, and extensible open-source toolkits that combine several bias-mitigation algorithms to help teams detect problems in machine-learning models.
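To make “measuring bias in a dataset” concrete, here is a minimal, self-contained sketch (plain Python, not tied to Aequitas or any other toolkit) of one widely used fairness metric, disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The group labels and toy data are illustrative assumptions.

```python
def disparate_impact(outcomes, groups, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged rate / privileged rate.

    outcomes: list of 0/1 decisions (1 = favorable, e.g. loan approved)
    groups:   parallel list of group labels for each decision
    A common rule of thumb flags ratios below 0.8 for closer review.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy data: group A is approved 4 of 5 times, group B only 2 of 5 times
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups))  # 0.5 -- well below the 0.8 rule of thumb
```

A low ratio like this does not prove discrimination on its own, but it flags exactly the kind of imbalance that the toolkits above are designed to surface and mitigate.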
Some organizations are creating ethical guidelines for AI development as well as clear processes for informing users about the use of their data in AI applications. According to the Capgemini Research Institute survey, the share of organizations with such guidelines in place rose dramatically from just 5 percent in 2019. However, the 59 percent of organizations last year that informed customers about the ways AI decisions might affect them was a drop from the 73 percent that did so the year prior.
Will AI steal my job?
Myth-busting can also be a meaningful aspect of ethical AI. “There are several misunderstandings that business leaders should be aware of,” says Hanna of Telus International. “For instance, AI will not replace the human workforce, but support it. While the technology has proven to be helpful across several industries, including outperforming humans in diagnosing cancer or reviewing legal documents, a cataclysmic impact on human jobs is not in our future.”
Business leaders should focus their efforts not on automating as many employees as possible out of their roles, but rather on what new work those employees may be freed up to do. What business opportunities does that automation open up?
For more advice on managing job loss fears, read Adobe CIO Cynthia Stoddard’s article on how IT automation became a team eye-opener. As Stoddard puts it, “When you introduce the term ‘automation,’ everyone’s natural reaction is to wonder what will happen to their job. But once our team saw the results – that they could participate in forward-looking work, experiment with new technologies, and focus on building new skills – IT automation became a real eye-opener.”
Is ethical AI IT’s job?
Ethical AI is important to everyone, but IT leaders can play a powerful role. “AI greatly impacts our lives and is being integrated into every single industry. Trust is everything in the digital era, and enterprise IT leaders should be educated on what constitutes trusted and ethical AI to lead their organizations to establish the correct guidelines and frameworks into their programs,” Hanna says. “While industry standards in this area are still maturing, there is widespread recognition that product architecture and development should be based on appropriate ethics.”