
Incorporating an AI Scorecard to build explainable and ethical AI
By: Rajesh Rathod, Chief Technology Officer, Prodapt
In the last few years, artificial intelligence (AI) adoption has grown steadily across sectors worldwide. AI integration and deployment have touched many human lives and enhanced engineering and industrial output globally. But with AI’s widespread adoption comes the responsibility of ethical and safe decision-making by these non-human agents.
Administrations and governments have been working to define regulatory frameworks that govern the deployment of AI in products and services. An explainable AI framework is a crucial enabler for service providers to comply with these regulations and build trust in the minds of consumers.
Impact of an explainable AI framework
AI unleashes the power of data, and businesses have no choice but to embrace this disruption. However, one cannot ignore the risks posed by AI, which need to be mitigated or even prevented. The explainability of the AI algorithm is one of the cornerstones of mitigating AI-induced risk.
Explainability is the ability of the human mind to understand the rationale behind decisions taken by an AI algorithm. It aims to create an easy-to-understand input-output model that lets us trace a decision (output) back to the data (input) and quantify the impact of changes in the input on the output.
Not surprisingly, some machine learning algorithms are easier to ‘understand’ than others – logistic regression, for example, is much easier to ‘explain’ than a deep neural network with several thousand trainable parameters. Explainability significantly impacts the AI system in several ways, some of which we examine next.
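To make the contrast concrete, here is a minimal sketch of what ‘explaining’ a logistic regression looks like: the learned coefficients themselves state how each input moves the decision. The feature names and data below are illustrative assumptions, not from any real system.

```python
# A minimal sketch of why a linear model is easy to "explain":
# its learned coefficients directly quantify each feature's
# contribution to the decision. Feature names and data are
# hypothetical, chosen only for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a churn-prediction example
feature_names = ["tenure_months", "monthly_spend", "support_tickets"]
X = np.array([[24, 50.0, 1], [2, 80.0, 5], [48, 30.0, 0], [6, 90.0, 4]])
y = np.array([0, 1, 0, 1])  # 1 = churned

model = LogisticRegression().fit(X, y)

# Each coefficient is the change in log-odds per unit change in a
# feature -- a human-readable rationale for the model's decisions.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A deep neural network offers no such direct readout, which is why it needs post-hoc explanation techniques of the kind discussed later in this article.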
Efficiency
Explainable AI systems are more efficient because we can understand them better. Moreover, they can be improved and tuned to give more desirable outcomes. Insights available from the input-output model help discover blind spots and possible behavioral anomalies that may sometimes have disastrous consequences.
These blind spots could arise from insufficiencies in the training data or may be inherent in the model. Explainability is also an effective tool for selecting among different successful machine-trained models: other things being equal, the better-explained model is the more efficient choice.
Ethics
Modern societies expect ethical decision-making, and explainability is at the core of it. For example, we can assess a model for its fairness by examining the model behavior under what-if scenarios without waiting for them to occur for real.
So, if the age of an individual from a particular ethnic group significantly impacts the benefits made available to them, while it has no bearing on someone of another ethnicity, a better-explained AI system makes it much easier to flag this behavior. The human-in-the-loop can then assess whether it is fair play or not. AI algorithms pick up biases from their training data; the human mind can detect such biases using contextual and background information and plain intuition.
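As an illustration, a what-if fairness probe can be as simple as flipping the sensitive attribute and measuring how much the prediction moves. The sketch below assumes a trained binary classifier with a scikit-learn-style predict_proba interface; the function and column layout are hypothetical, not from any particular toolkit.

```python
# A minimal what-if fairness probe, assuming a trained binary
# classifier `model` and a feature matrix where column 0 encodes a
# binary sensitive attribute (e.g., group A = 0, group B = 1).
import numpy as np

def counterfactual_gap(model, X, sensitive_col=0):
    """Flip the sensitive attribute for every row and measure how
    much the predicted probability shifts. Large shifts suggest the
    decisions depend on the protected attribute."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    p_original = model.predict_proba(X)[:, 1]
    p_flipped = model.predict_proba(X_flipped)[:, 1]
    return np.abs(p_original - p_flipped).mean()

# Usage: gap = counterfactual_gap(model, X_test)
# A gap near 0 means the sensitive attribute has little influence;
# a large gap is a signal for the human-in-the-loop to investigate.
```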
Trust
Explainable AI demystifies the “black box” of the ML algorithm. Consumers of AI-enabled services therefore trust the outcome, as they can better understand why the ‘black box’ works or fails. In effect, the black box is converted to a white box (or at least a grey one!). It is hard to overstate the value of trust in any business, society, or government.
Regulatory compliance
Governments are at work defining regulations for the deployment of AI. The European Commission has issued draft guidelines to “provide AI developers, deployers and users with clear requirements and obligations regarding the specific use of AI” to address the risks created by AI systems. Such regulations propose enforcement, governance, and conformity assessment of AI systems. To meet these compliance requirements, the explainability of the machine-learned model is imperative.
AI Scorecard: An approach to implementing explainable AI
The AI Scorecard is computed by embedding tools and software into the development and deployment of AI systems, evaluating each system along the following axes:
- Explainability
- Fairness
- Privacy
- Model Performance
AI platforms generally available now include ‘Responsible AI’ toolkits that can score an AI system against KPIs contributing to the above criteria. For example, explainability is quantified with measures like variable importance, partial dependence, LIME, SHAP, and counterfactual analysis.
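For illustration, the sketch below computes two of these measures, variable importance and partial dependence, using scikit-learn’s model-inspection utilities on a synthetic model. The data and model are assumptions for demonstration only; dedicated packages such as LIME and SHAP provide richer, per-prediction equivalents.

```python
# A minimal sketch of two explainability measures for a scorecard:
# variable importance and partial dependence. Data and model are
# synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Variable importance: how much does shuffling a feature hurt accuracy?
vi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("importance per feature:", vi.importances_mean)

# Partial dependence: the model's average prediction as feature 0 varies
pd_result = partial_dependence(model, X, features=[0])
print("partial dependence of feature 0:", pd_result["average"][0][:5])
```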
If incorporated into the AI development pipeline, such a scorecard can flag out-of-bound KPIs, enabling AI developers and data scientists to tune their algorithms in-process, create the right AI system, and differentiate their offerings, as sketched below. Research shows that organizations that outperform others in the use of AI employ techniques and processes that make their AI explainable.
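One way such in-pipeline flagging could look is sketched below: each axis of the scorecard gets a KPI value and an acceptable bound, and out-of-bound KPIs fail the build. The KPI names and thresholds are hypothetical assumptions, not a prescribed standard.

```python
# A minimal sketch of wiring scorecard thresholds into a development
# pipeline. KPI names and bounds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float
    lower: float  # minimum acceptable value
    upper: float  # maximum acceptable value

    def in_bounds(self) -> bool:
        return self.lower <= self.value <= self.upper

def check_scorecard(kpis: list[KPI]) -> bool:
    """Print a flag for every out-of-bound KPI; return overall pass/fail."""
    ok = True
    for kpi in kpis:
        if not kpi.in_bounds():
            print(f"FLAG: {kpi.name}={kpi.value:.3f} outside "
                  f"[{kpi.lower}, {kpi.upper}]")
            ok = False
    return ok

# Usage in a CI step: block deployment if any KPI is out of bounds
scorecard = [
    KPI("model_accuracy", 0.91, 0.85, 1.0),
    KPI("counterfactual_fairness_gap", 0.12, 0.0, 0.05),  # out of bounds
]
if not check_scorecard(scorecard):
    raise SystemExit("AI Scorecard check failed; tune the model before deploying")
```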
About the author
Rajesh Rathod assists the CEO & the Board in building strategic growth drivers focused on next-generation technology and capability development. Rajesh is also responsible for scaling Prodapt Labs, building next-gen technology offerings and IP frameworks to boost future growth, in addition to driving the M&A technology strategy. As part of the role, he also oversees the learning & development function to prepare Prodaptians for the next phase of growth.