
How enterprises dealt with AI biases in 2020
While investors predict the AI market will reach $37 billion by 2025, technology leaders need to master the art of preventing biases in AI and ML models to avoid reputational, regulatory, and revenue losses.

One technology that gained remarkable momentum in 2020 is Artificial Intelligence. AI has made small, medium, and large-scale companies alike eager to explore its use cases and its impact on business growth.

Yet one of the most neglected aspects of AI is how to successfully eliminate biases in these deployments. Though companies are investing heavily in AI, CIOs and their specialized AI and data science teams have to learn how to identify and prevent harmful discrimination in their models, or their businesses will suffer reputational, regulatory, and revenue consequences.

ETCIO brings you the detailed strategies of a few companies across industry verticals in India that went all out to manage biases in their AI and ML models this year.

Reliance General Insurance is treating data to mitigate AI bias

RGI is coupling AI solutions with business rules and risk scorecards to assist human decision-making. According to Rakesh Jain, ED & CEO of Reliance General Insurance, the company worked tirelessly on data ingestion and model training before it started investing in AI. RGI’s recent AI deployment project was broken into phases to eliminate as many biases as the team possibly could. His tech team believes the final decision should be taken by humans with the support of AI and a data-driven solution framework, an approach they call Assisted Decision Making.

“Data filtration and treatment is our priority for the development of every AI use case, as data is the solution to many of our problems, such as eliminating biases. There are relevant and appropriate validations across life cycles for both structured and unstructured data, supported by a scalable and elastic technology stack,” Jain told ETCIO.
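The ingest-time validation Jain describes can be pictured as a simple filtering pass before any record reaches model training. The sketch below is purely illustrative (the field names and rules are invented, not RGI's actual pipeline): rows missing required fields are rejected with a stated reason, so the training set stays clean and the rejections stay auditable.

```python
# Illustrative sketch of a data-validation pass at ingestion time.
# Field names ("policy_id", "claim_amount", "region") are hypothetical.

def validate_records(records, required_fields):
    """Split records into valid rows and rejects, each reject with a reason."""
    valid, rejects = [], []
    for row in records:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            rejects.append((row, f"missing fields: {missing}"))
        else:
            valid.append(row)
    return valid, rejects

records = [
    {"policy_id": "P1", "claim_amount": 12000, "region": "west"},
    {"policy_id": "P2", "claim_amount": None, "region": "north"},
]
valid, rejects = validate_records(records, ["policy_id", "claim_amount", "region"])
print(len(valid), len(rejects))  # 1 valid row, 1 rejected for a missing field
```

In a real pipeline the same pattern extends to range checks, schema checks, and representation checks across data sources, with the reject log reviewed by humans.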

AI bias is not a challenge for PayPal

PayPal is leveraging AI and ML to understand its customers intelligently. The company believes that in the lifecycle of an AI solution, the attention paid to the training period can make all the difference in eventual outcomes, not just in terms of the solution's effectiveness and efficiency from a business-value perspective, but also in terms of aspects like bias.

According to Guru Bhat, GM & VP, Omni Channel & Customer Success at PayPal India, despite the scale at which the company operates, it relies heavily on “augmented” intelligence rather than just “artificial” intelligence. His team focuses on extensive human supervision, input, modification, and augmentation during the initial phases of learning, when decisions made by the underlying machine-intelligence framework are scrutinized not just for accuracy from a business perspective but also for aspects like bias, inclusion, and fairness.


“This is not just the right thing to do, but it is also the smart thing to do in the fintech domain where the “explainability” of the algorithms is a must-have especially when it comes to decision-making in scenarios like credit where we are legally obligated by regulations to have no bias in our credit decisioning platforms. Eliminating bias from our algorithms is just as important to us as eliminating bias in our interview process or in our compensation/promotion process – because, in the end, it is about making our products inclusive, fair, and inherently usable by ALL our customers, not just a few,” Bhat explained to ETCIO.
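One common way such scrutiny is operationalized, shown here as an illustrative sketch rather than PayPal's actual method, is to compare a model's decisions across groups, for example the gap in approval rates between groups (a demographic-parity check). The group labels and decisions below are invented:

```python
# Illustrative fairness check: per-group approval rates and the gap
# between the best- and worst-treated group. Data is made up.

def approval_rates(decisions):
    """Per-group approval rate from (group, approved) records."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, round(gap, 2))  # a large gap flags the model for human review
```

A gap above an agreed threshold would route the model back for the kind of human modification and augmentation Bhat describes, before it ever reaches production.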

PayU’s AI framework eliminates biases in its credit models

Keeping AI-induced bias out of credit models is already a major concern in Western markets and will become one in India as well as regulations evolve. Digital companies in the finance sector are already working on this. One such company is PayU, which has created a responsible-AI framework for its credit models.

PayU is trying to ensure that its models do not use any data or variables that can introduce biases of gender, religion, political belief, etc. into credit-line decisions. The company’s data science team has worked on deploying ML algorithms that are interpretable, so it is possible to explain why someone was denied credit.
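An interpretable credit model of the kind described, a linear scorecard whose per-feature contributions double as denial reasons, can be sketched as follows. The features, weights, and excluded-attribute check are illustrative assumptions, not PayU's actual model:

```python
import math

# Illustrative interpretable scorecard: protected attributes are refused
# as inputs, and each feature's contribution to the score is exposed so a
# denial can be explained. Feature names and weights are hypothetical.

EXCLUDED = {"gender", "religion", "political_belief"}  # never model inputs
WEIGHTS = {"income_lakh": 0.08, "on_time_repayments": 0.5, "utilization": -1.2}
BIAS = -1.0

def score(applicant):
    assert not EXCLUDED & applicant.keys(), "protected attribute in inputs"
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    prob = 1 / (1 + math.exp(-(BIAS + sum(contributions.values()))))
    return prob, contributions

prob, why = score({"income_lakh": 6, "on_time_repayments": 2, "utilization": 0.9})
print(round(prob, 2), min(why, key=why.get))  # most negative contribution = top denial reason
```

Because every score decomposes into named contributions, a denied applicant can be told which factor weighed most against them, which is exactly the explainability property interpretable credit models are chosen for.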

According to Sachin Garg, Head of Data Science at PayU, the company has several safeguards in place to eliminate biases in its AI models, but the team still insists on human review before deploying any new model, to determine whether it is robust and bias-free.

“In the next 12-18 months, PayU wants to use data intelligence to further build its core capabilities in digital lending. In addition to cross-leveraging our data and team, we are also exploring newer AI techniques such as knowledge graphs and embeddings to better predict credit risk,” Garg told ETCIO.

https://cio.economictimes.indiatimes.com/news/next-gen-technologies/how-enterprises-dealt-with-ai-biases-in-2020/80038859
