How to build trust in AI

By Catherine Lian, Managing Director, IBM Malaysia.

As the world grapples with a global public health crisis and its economic and social fallout, we need the combined power of humans and machines more than ever. Machine intelligence now goes by two names – machine learning (ML) and artificial intelligence (AI) – and both have become buzzwords. Businesses are investing big money in ML and AI to get them to deliver on their hyped-up potential.

No wonder, then, that businesses and governments together are set to invest US$98 billion in AI-related solutions and services by 2023. That is more than two and a half times the US$37.5 billion they will spend this year, according to International Data Corp (IDC) estimates.

Malaysia is likewise gearing up to leverage AI and ML by empowering the nation through data and boosting investments in AI. The country plans to do so by formulating supportive policies, creating jobs across emerging sectors, and building a data ecosystem that fosters innovation in AI-related solutions.

The Malaysian government has announced the formulation of Malaysia’s National Data and AI Policy to complement the National AI Framework, according to the Malaysian Investment Development Authority (MIDA). The policy aims to position the country as a hub in South-East Asia for grooming AI talent and building a commercial AI ecosystem.

What exactly is AI? Simply put, AI is about getting computers to perform tasks or processes that would be considered intelligent if done by humans. An autonomous car, for example, is not just making suggestions to the human driver; it is the one doing the driving.

But cracks are beginning to appear. A recent survey from Deloitte says that 56 percent of organisations are slowing their AI adoption because of emerging risks related to AI governance, trust and ethics. “As usage has grown, so has an awareness of the various risks of AI – from unintended bias to determining accountability,” Deloitte notes. “What appears to have not grown enough is the adoption of specific actions to help mitigate those risks, even by the most skilled adopters.”

Companies that have adopted AI are facing a raft of concerns, including the lack of explainability and transparency in AI decisions, the business impact of poor decisions made by AI, and data breaches. These concerns, which relate to trust and ethics, are in addition to significant challenges from regulatory issues and cyber threats.

The problem is that most companies are ill-prepared due to a shortage of tools and skillsets to ensure that the AI’s output is fair, safe, and reliable. There are also questions about having access to relevant, clean and unbiased data. Despite the gap between risk and preparedness, many companies are not actively addressing these concerns with the right risk management practices.

The possible solution? To unlock those benefits, we need to remove the barriers to AI adoption, and for that, AI needs to be trusted and ethical. Trust allows organisations to manage AI decisions in their business while maintaining full confidence in the protection of data and insights. Organisations will be more likely to trust AI if vendors and users ensure that models are tested for bias. With that comes the need to understand exactly how AI decisions are made, so that potential flaws can be detected.

The tech sector cannot tackle the issues of ethics in AI alone. Stakeholders must agree to the creation of guardrails so we can trust what the tech offers, ensuring the process is transparent and explainable. Unless we have AI that users can trust, we will have a tougher time solving the world’s problems. We’re already beginning to see how AI can help organisations tackle the unforeseen demands of the pandemic.

Ethical AI

Technology can play a pivotal role in mitigating, responding to and recovering from the global Covid-19 pandemic, which comes with its own set of privacy and ethical considerations. AI ethics should not be treated as a separate business objective to be bolted on after an AI system has been deployed. For this reason, IBM has a set of trust and transparency principles that embed ethics in AI development and use. The purpose of AI is to augment human intelligence, and AI systems must be transparent and explainable. It is only by embedding ethical principles into AI applications and processes that companies can build systems people can trust to be fair, transparent and beneficial.

Another example is AI Fairness 360, bias-detection software that IBM has released to the open-source community, empowering companies to take control of their systems. We are also working with government agencies worldwide to assess regulatory frameworks, and we have studied the frameworks that shape ethical considerations in technology development and deployment. One example: IBM has partnered with the University of Notre Dame to establish a Tech Ethics Lab that helps scholars and industry leaders explore and evaluate ethical frameworks and ideas[1].

Open Toolkits

One significant finding from this work is the role of the open-source community in fostering trusted AI workflows. Open source is an excellent enabler of trust because the code and techniques are available for everyone to inspect. IBM has donated toolkits to the Linux Foundation AI project so that the broader community can co-create tools under the governance of the Linux Foundation.

The toolkits are vital for faster software development and testing. Some examples: the AI Fairness 360 Toolkit includes fairness metrics and bias mitigation algorithms that help AI developers and data scientists examine and repair bias in AI models; the Adversarial Robustness 360 Toolkit helps evaluate a neural network’s ability to resist security threats; and the AI Explainability 360 Toolkit contains methodologies for gathering insights into how neural networks arrive at their decisions.
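To make this concrete, here is a minimal sketch of the check-and-mitigate workflow that AI Fairness 360 supports, based on the toolkit’s published examples. The UCI Adult dataset and the protected attribute "sex" are illustrative choices, and AIF360 expects the raw Adult data files to be downloaded into its data directory before this will run.

```python
# Minimal sketch using IBM's open-source AI Fairness 360 (pip install aif360),
# following the toolkit's published examples. The UCI Adult dataset and the
# protected attribute "sex" are illustrative choices; AIF360 requires the raw
# Adult data files to be placed in its data directory first.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

privileged = [{"sex": 1}]    # group encoded as privileged in this dataset
unprivileged = [{"sex": 0}]

# Load the dataset and measure bias: statistical parity difference is the gap
# in favourable-outcome rates between the unprivileged and privileged groups
# (0.0 means parity).
data = AdultDataset()
metric = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparity before mitigation:", metric.statistical_parity_difference())

# Mitigate: Reweighing adjusts instance weights so the groups are treated
# comparably, then the same metric is re-checked on the transformed data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
data_transf = rw.fit_transform(data)
metric_transf = BinaryLabelDatasetMetric(
    data_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparity after mitigation:", metric_transf.statistical_parity_difference())
```

After reweighing, the disparity measured on the transformed data should move close to zero, which illustrates why pairing fairness metrics with mitigation algorithms in a single open toolkit matters: developers can both detect and repair bias before a model is deployed.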

In April 2019, Brickfields Asia College (BAC) partnered with IBM to enhance digital skills and knowledge by integrating IBM’s Innovation Center for Education (ICE) programme into BAC’s law curriculum. BAC embedded IBM ICE’s AI, cybersecurity and blockchain syllabus into its law courses.

The aim is to equip the next generation of professionals with an education which is aligned with ethical values. The courses are designed to help students understand the fundamentals of AI and how it can be applied in the workplace. IBM is one of the first major tech partners to take a direct role in Malaysia’s training ecosystem. It’s time for a new partnership between business, government and the labour force.

The upcoming National Data and AI Policy will accelerate the momentum to advance a human-centric approach to AI to facilitate innovation and safeguard public trust. It is important that Malaysia build the necessary data and AI capabilities to ensure that industries, government and people can take advantage of the opportunities data and AI have to offer.


[1] Source: https://www.ibm.org/responsibility/2019/case-studies/aiethicsboard
