
Introduction

Promoting ethical AI practices
Many companies are embracing artificial intelligence (AI), and some are leading the charge in innovation. But success in this journey requires more than technical skill; it also demands a commitment to ethical practices. Ignoring ethics can lead to reputational damage, regulatory problems, or financial losses.

A Responsible Artificial Intelligence (RAI) strategy is a key reference point. It emphasizes ethics, inclusivity, and transparency. This directs businesses toward sustainable success and integrity in their AI efforts.

Challenges

Balancing innovation with fairness requirements for protected classes
After implementing machine learning models, our client, a top financial institution, encountered a crucial issue: ensuring that these models complied with government-mandated fairness standards, especially regarding protected classes like gender and race. Balancing innovation with strict regulatory compliance became paramount.

Challenge 1
The client needed a solution that could balance innovation with strict compliance standards to meet government-mandated fairness requirements in machine learning.

Challenge 2
Creating a system to thoroughly test AI models before using them became crucial. This system needed to identify and stop any model that didn’t meet fairness standards set in advance, avoiding possible regulatory problems.

The transformative solution

Crafting a tailored AI assessment tool
It took us 11 months to create a customized AI evaluation tool. The first nine months were focused on development, and the following two were spent refining the code, fixing bugs, and ensuring it met standards. The result was a user-friendly tool designed for efficiency and ease of use, perfectly tailored to the client’s needs.
This tool allows users to easily enter model data, choose various data sources and protected classes, and specify the desired types of model outputs. It culminates in generating a comprehensive PDF report, rich with visual graphics and critical bias metrics. This report is very valuable for users, providing insights to thoroughly assess AI models, make informed decisions, and address biases effectively.

What We Provided:
A secure and user-friendly AI tool
We carefully chose our technology stack: Python for backend and API tasks, React JS for creating a responsive frontend user interface (UI), and Snowflake for strong internal data handling. Prioritizing data security, we deployed our models on Domino Data Lab to guard against unauthorized breaches. We deeply embraced RAI principles — fairness, transparency, accountability, and ethics — guided by specific KPIs and metrics for thorough evaluation.

Metrics
  • Disparate impact: Measures differences in outcomes among groups in a dataset, showing if a model unfairly benefits or harms a particular group.
  • Equal opportunity difference: Evaluates differences in true positive rates between groups, assessing equal opportunity for classification.
  • Average odds difference: Compares error rates between groups, considering both false positives and false negatives.
  • F1 score difference: Quantifies F1 score differences between groups to measure the model’s performance.
  • True favorable rate parity: Assesses differences in favorable outcomes between groups, ensuring equal opportunity.
  • Date differentiation: Uses date input to calculate the age of individuals and the model development date for appropriate data evaluation.
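To make the first two metrics concrete, here is a minimal sketch of how disparate impact and equal opportunity difference can be computed from model predictions. The group labels and loan-decision data below are hypothetical, and a production tool would typically rely on a fairness library such as AIF360 or Fairlearn rather than hand-rolled functions.

```python
def rate(y_pred, groups, group):
    """Favorable-outcome rate (share of predictions == 1) for one group."""
    sel = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(sel) / len(sel)

def disparate_impact(y_pred, groups, unprivileged, privileged):
    """Ratio of favorable rates (unprivileged / privileged).
    1.0 indicates parity; values below 0.8 are a common red flag."""
    return rate(y_pred, groups, unprivileged) / rate(y_pred, groups, privileged)

def equal_opportunity_difference(y_true, y_pred, groups, unprivileged, privileged):
    """Difference in true positive rates between groups; 0.0 indicates parity."""
    def tpr(group):
        # Restrict to actual positives within the group, then take the hit rate.
        preds = [p for t, p, g in zip(y_true, y_pred, groups)
                 if g == group and t == 1]
        return sum(preds) / len(preds)
    return tpr(unprivileged) - tpr(privileged)

# Hypothetical loan-approval outcomes for two groups, A and B
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]   # ground-truth creditworthiness
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # model decisions

print(disparate_impact(y_pred, groups, "B", "A"))                      # 0.5
print(equal_opportunity_difference(y_true, y_pred, groups, "B", "A"))  # -0.5
```

In this toy data, group B receives favorable outcomes at half the rate of group A (disparate impact 0.5, well under the 0.8 threshold), so a pre-production gate like the client's would block this model.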
Expertise and user experience were our compasses throughout the development journey. We enlisted subject matter experts to ensure our tool was user-friendly and rich in contextual relevance. Performance was a priority. We refined the code for optimal speed and supplemented the user interface with comprehensive documentation and helpful UI annotations. This strategic approach ensures an intuitive user experience and smooth team integration, supporting operational success since the tool’s adoption.

The results

Enhanced transparency, compliance, and fairness
The immediate impact
The client was provided with detailed graphs and plots, showcasing the outcomes of fairness and bias evaluations for protected classes within the internal dataset. Before entering production, models underwent thorough testing to pinpoint biases in the AI tool’s outputs.

Improved PDF reports
  • The AI tool generates detailed reports with clear explanations, offering deeper insights into model fairness and bias.
Expanded bias check
  • Now evaluates potential bias across 15 protected classes, significantly expanding from the initial five, for a more comprehensive fairness assessment.
Handling external data sets
  • Analyzes and assesses bias in both internal and external data sets, enabling versatile model performance evaluations.
Reduced regulatory & compliance issues
  • Proactive bias identification and mitigation reduce regulatory and compliance risks, aligning with government regulations and industry standards.