Co-founder & CEO of TheMathCompany.
With more leaders recognizing AI’s utility in unlocking actionable insights and effective problem-solving, AI models are increasingly becoming core value drivers for large enterprises. These models, however, are not risk-free. Gartner estimates that through 2022, 85% of AI-generated insights will produce incorrect results due to bias in data, algorithms or the teams in charge of managing them.
With AI still in the early stages of adoption across the globe, risks such as learning limitations, cyberattacks and a lack of user understanding give rise to issues of trust. This not only limits organizations’ scalability but also creates gray areas in the quest to successfully align AI efforts and business strategy, decelerating digital transformation.
Considering that successful AI adoption demands human trust, the gulf of faith that currently persists between AI systems and decision-makers needs to be bridged. The only way teams can do this is by developing trust-optimized models that balance intelligibility with fairness. But before diving into how AI systems can be made more accessible and responsible, let’s look at what constitutes this fundamental “problem of trust.”
Data often comes in incomplete or skewed forms, leading to biases getting “baked” into algorithms. Machine learning (ML) models are prone to algorithmic and cognitive biases, which snowball into analytical errors, skewed outcomes and compromised accuracy. In real-world scenarios, this translates into missteps such as the infamous internal AI recruiting engine that, trained on historical recruitment data, selected a pool of candidates that was 60% male, reflecting bias against people of other genders.
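To make the mechanism concrete, here is a minimal Python sketch, with purely illustrative numbers and labels, showing how a naive model that learns only the base rates of skewed historical hiring data ends up reproducing that skew in its recommendations:

```python
# Hypothetical illustration: a naive scoring model trained on skewed
# historical hiring data simply mirrors the historical imbalance.
from collections import Counter

# Imbalanced "training data": 60 male hires, 40 female hires (made-up numbers)
historical_hires = ["male"] * 60 + ["female"] * 40

def naive_prior_model(history):
    """'Learns' nothing but the base rates present in the training data."""
    counts = Counter(history)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

priors = naive_prior_model(historical_hires)
# The model's "preference" is exactly the historical skew:
# priors == {"male": 0.6, "female": 0.4}
```

The point is not that real recruiting systems are this simple, but that any model optimizing for fit to biased history inherits the bias unless the data or objective is explicitly corrected.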
AI models’ accuracy is often inversely proportional to their interpretability. Add to this “closed-box” ML algorithms, and it gets tougher for teams to understand why and how a model generates a result, impeding users’ trust.
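The tension can be illustrated with a toy Python sketch (hypothetical data, threshold and rule): a transparent one-rule model can state why it predicts, while an opaque memorizer returns answers with no rationale at all.

```python
# Hypothetical churn data: (hours_of_usage, churned) -- illustrative only
data = [
    (1, True), (2, True), (3, True), (8, False), (9, False), (10, False),
]

THRESHOLD = 5  # the transparent model's single learned rule: low usage -> churn

def transparent_model(hours):
    """Interpretable: every prediction ships with a human-readable reason."""
    prediction = hours < THRESHOLD
    reason = f"hours ({hours}) {'<' if prediction else '>='} threshold ({THRESHOLD})"
    return prediction, reason

def opaque_model(hours):
    """'Closed box': returns the nearest memorized example's label, no rationale."""
    nearest = min(data, key=lambda row: abs(row[0] - hours))
    return nearest[1]

pred, why = transparent_model(4)  # -> True, "hours (4) < threshold (5)"
```

Both models may agree on a given input, but only the first can justify itself; richer closed-box models buy accuracy at exactly this cost.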
Lack Of Traceability
This lack of transparency leads to another risk: a lack of traceability. With shadow IT services proliferating and teams turning to speculative SaaS applications, threats to API security have increased: Attackers can remotely execute malicious code, making it nearly impossible for teams to trace back to the point where input parameters were altered on the network.
This can have serious implications for data security. For instance, if attackers manipulate the request path of a classifier application that creates customer buckets through social listening, the application can misclassify customer cohorts, undermining customer-centric decision-making efforts.
Why (And How) Should We Trust In AI?
Until now, business leaders have benchmarked AI efficiency against performance (how well it performs), process (the functions it serves) and purpose (the value it delivers). However, the aforementioned factors have made apparent the need for a new yardstick to assess AI’s utility: trust.
Building trust in AI solutions is pivotal. The act of doing so, however, stands at a crossroads between applying simple algorithms for transparency’s sake and opting for opaque models that offer increased efficiency. This dilemma can be resolved if trust in AI is optimized at a few key levels.
Integrating Ethics At The Heart Of AI
The concept of ethical AI goes beyond implementing best practices during model development: It entails changing the very fabric of AI. Infusing AI with ethical values at its very core would involve creating governing bodies and introducing enterprise-level AI ethics programs that align with business and industry regulations.
Operationalizing ethics across systems — for instance, factoring in impacts on society, climate and resources and employing responsible AI-driven technology to optimize supply chains and minimize waste — is one step firms can take in this regard. Institutionalizing ethics in AI in such ways will not only help businesses resolve issues surrounding data bias and transparency but also actively include human centricity in long-term policy, bolstering customer trust.
Centering On Humanization And Empathy
It is we humans who require that an AI system be trustworthy; it is therefore imperative that such systems be human-centric.
AI is already performing near-human functions, recognizing speech through natural language processing (NLP) and images through computer vision; however, it lacks a key human trait: empathy. To infuse empathy into AI would mean developing algorithms possessing humanized decision-making abilities and more “sensible” data fabrics that account for data’s accuracy, reliability and confidentiality. For instance, AI-enabled learning platforms embedded with capabilities to observe the stress, confidence and difficulties faced by students could help develop personalized course recommendations and foster individualized learning.
Leveraging AI coded with empathy, businesses can obtain granular individual-focused data, enabling hyper-personalized experiences alongside improvements in data quality and completeness, which is key to deepening trust.
Bolstering Transparency With Explainability
With AI systems becoming increasingly complex, understanding their decision-making rationale has become nearly impossible. However, teams can gain a better understanding of such systems by ingraining “explanation” methods at different levels of a model and extending these to machine reasoning (MR), a field of AI that computationally mimics abstract thinking to make inferences about uncertain data.
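One simple family of explanation methods perturbs each input feature and records how much the model’s output shifts. The Python sketch below, using a made-up scorer and hypothetical feature names, illustrates the idea:

```python
# Minimal sketch of a perturbation-based explanation (a common XAI technique).
# The model, weights and feature names here are hypothetical stand-ins.

def model(features):
    """Stand-in 'closed-box' scorer over named features."""
    return 0.7 * features["income"] + 0.2 * features["tenure"] + 0.1 * features["age"]

def sensitivity(model, features, delta=1.0):
    """Attribute importance to each feature by nudging it by `delta`
    and recording the absolute change in the model's output."""
    base = model(features)
    scores = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        scores[name] = abs(model(perturbed) - base)
    return scores

example = {"income": 5.0, "tenure": 3.0, "age": 4.0}
importances = sensitivity(model, example)
# For this linear toy model, the scores recover the weights: income matters most.
```

Production-grade techniques such as LIME or SHAP are far more sophisticated, but they rest on the same intuition: probe the closed box with controlled inputs and summarize which features drive its answers.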
Impactful use cases for explainable AI (XAI) include context-aware systems in hospitals, where models can analyze location data, staff availability and patient data — including vital signs, medical history and imaging reports linked to electronic health records — to “reasonably” issue patient condition alerts, mobilize staff and improve patient outcomes. For leaders, explainable AI provides better visibility into such systems’ behaviors and risks, giving them the confidence to assume greater responsibility for a system’s actions and subsequently encouraging further trust in AI adoption.
A Future Built On Trust
Across industries, AI is rewriting the rules of engagement, and we can only trust it when we trust its inner workings. Enhancing a technology with trust will not only de-risk innovation but also inspire responsible innovation. At this point, both AI developers and incubators need to ensure that they build systems that comply not only with legal requirements but also with ethical and emotional rubrics. Ultimately, AI that’s underpinned by trust, transparency and traceability will facilitate unambiguous and robust models, reinforcing confidence in a secure tomorrow.