AI Is Making
Biased Decisions.
We Stop That.

Hiring algorithms, loan approvals, medical diagnoses — AI is discriminating against real people right now. FairLens detects hidden bias in minutes and shows you exactly how to fix it.

0%
of HR AI tools show gender bias
< 60s
to get a full bias audit report
0
industry-standard fairness metrics
100%
free — no credit card needed
↓ Live audit result preview
COMPAS Recidivism Dataset · 1,000 rows
Criminal Risk Scoring Audit
HIGH RISK
Fairness Score
56/100
DPD (Race)
0.309
DIR (Race)
0.552
EU AI Act
FAIL
⚡ Gemini AI Verdict

This criminal risk system shows severe racial bias — Black defendants are flagged as high risk at twice the rate of white defendants with identical records.

1. Race drives 67% of all bias (Cramér's V)
2. Black group: 31% positive rate vs White: 68%
3. Fails EEOC 4/5 Rule — DIR 0.552 < 0.80 threshold
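For the technically curious: the "Race drives 67% of all bias" attribution in finding 1 is a Cramér's V-style association between a sensitive attribute and the model's decision. Below is a minimal sketch of that calculation; the file and column names (compas_sample.csv, race, high_risk) are illustrative assumptions, not FairLens's internal code.

```python
# Illustrative sketch: Cramér's V between a sensitive attribute and the
# model's decision (0 = no association, 1 = decision fully determined by it).
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V for two categorical series."""
    table = pd.crosstab(x, y)                 # contingency table: attribute vs. decision
    chi2, _, _, _ = chi2_contingency(table)   # chi-squared statistic
    n = table.to_numpy().sum()
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

# Hypothetical columns -- your dataset will differ.
df = pd.read_csv("compas_sample.csv")
print(cramers_v(df["race"], df["high_risk"]))
```

Comparing this value across all sensitive attributes is one way to arrive at a percentage attribution like the 67% shown above.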
The Problem Is Real

Real AI. Real Discrimination.

These aren't hypothetical. These are documented cases of AI systems actively discriminating against real people.

Used in US Courts

COMPAS Criminal Risk AI

Black defendants rated HIGH risk at nearly twice the rate of white defendants with identical records.

~2×
False-positive rate for Black defendants relative to white defendants
Recruitment AI

Amazon Hiring Algorithm

Algorithm penalized resumes that included the word 'women's' and downgraded graduates of all-women's colleges.

−57%
Score penalty for women's universities
US Hospital Systems

Healthcare Resource AI

System systematically underestimated how sick Black patients were, resulting in reduced access to care programs.

41%
Fewer Black patients correctly identified as high-need

What FairLens Found on COMPAS

When we ran the COMPAS dataset through FairLens, it detected severe racial bias in under 30 seconds — the same bias that took ProPublica journalists months to uncover manually.

Demographic Parity Difference
threshold: < 0.1
0.317
Disparate Impact Ratio
threshold: > 0.8
0.62
EEOC 4/5 Rule
Impact ratio 62%, below the 80% threshold
FAIL
38 of 100
White defendants flagged HIGH risk
63 of 100
Black defendants flagged HIGH risk
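The checks above map onto standard calls in Microsoft's fairlearn library (the same library named in step 02 below). A minimal sketch, assuming a COMPAS-style CSV with two_year_recid, decile_score, and race columns:

```python
# Sketch of the two headline checks, using fairlearn's built-in metrics.
# Column names and the >= 5 "HIGH risk" cut-off are illustrative assumptions.
import pandas as pd
from fairlearn.metrics import demographic_parity_difference, demographic_parity_ratio

df = pd.read_csv("compas.csv")
y_true = df["two_year_recid"]                    # observed outcome
y_pred = (df["decile_score"] >= 5).astype(int)   # model's HIGH-risk flag

dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=df["race"])
dir_ = demographic_parity_ratio(y_true, y_pred, sensitive_features=df["race"])

print(f"Demographic Parity Difference: {dpd:.3f} (flag if above 0.1)")
print(f"Disparate Impact Ratio:        {dir_:.3f} (EEOC 4/5 rule needs at least 0.8)")
```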

From Data to Insight in 4 Steps

No ML expertise needed. No code. Just upload and go.

01

Upload Dataset

CSV or Excel. FairLens auto-detects sensitive columns like gender, race, and age (sketched in code right after these four steps).

02

Bias Engine Runs

3 industry-standard fairness metrics computed in seconds using Microsoft's fairlearn library.

03

Gemini Explains

Google Gemini AI translates every metric into plain English. No jargon — just actionable insights.

04

Fix & Download

AI-generated remediation steps + a professional PDF report for your team.
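Under the hood, steps 01 and 03 might look roughly like the sketch below. This is not FairLens's actual implementation: the column-name hints, hiring_data.csv, and the gemini-1.5-flash model name are assumptions, and the Gemini call uses Google's google-generativeai SDK.

```python
# Rough sketch of auto-detecting sensitive columns (step 01) and asking
# Gemini for a plain-English explanation of one metric (step 03).
import pandas as pd
import google.generativeai as genai

SENSITIVE_HINTS = {"gender", "sex", "race", "ethnicity", "age", "religion", "disability"}

def detect_sensitive_columns(df: pd.DataFrame) -> list:
    """Guess sensitive columns by matching common attribute names."""
    return [c for c in df.columns if c.strip().lower() in SENSITIVE_HINTS]

def explain_metric(metric: str, value: float, threshold: str) -> str:
    """Ask Gemini to translate one fairness metric into plain English."""
    genai.configure(api_key="YOUR_API_KEY")             # placeholder key
    model = genai.GenerativeModel("gemini-1.5-flash")   # assumed model name
    prompt = (
        f"Explain to a non-technical compliance officer what it means that "
        f"{metric} = {value} (acceptable range: {threshold}), in two sentences."
    )
    return model.generate_content(prompt).text

df = pd.read_csv("hiring_data.csv")                     # hypothetical uploaded dataset
print(detect_sensitive_columns(df))
print(explain_metric("Disparate Impact Ratio", 0.62, "at least 0.80"))
```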

Everything You Need

The most complete AI bias auditing platform — completely free.

Industry Standard

4 Fairness Metrics

DPD, EOD, DIR + Cramér's V bias attribution — gold standard metrics that tell you WHAT and WHY.

Powered by Gemini

Gemini AI Explanations

Every finding explained in plain English with TL;DR verdict, key findings list, and expandable detail.

New

10-Language Support

Regenerate your entire audit explanation in Hindi, Spanish, French, Arabic, Portuguese, Bengali, and more.

PDF Audit Reports

Professional downloadable reports ready for executives, compliance teams, and regulatory submission.

Legal Compliance

EU AI Act, EEOC 4/5 Rule, ISO 42001. Know your legal standing before regulators do.

Ask Gemini Anything

Interactive AI chat on your results. 'Who is disadvantaged?' 'What is my legal risk?' Instant answers.

What-If Simulator

Simulate removing any attribute and see your score improve in real time. Plan your mitigation strategy (see the sketch after this feature list).

New

Shareable Audit Links

Make any audit public with one click. Share your results with anyone — no account needed to view.

No-Login Demo

Try real bias analysis on COMPAS, Adult Income, and German Credit datasets instantly — zero friction.
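As referenced in the What-If Simulator card, the simulation idea can be sketched in a few lines: drop one attribute, refit a simple model, and compare the Disparate Impact Ratio before and after. The dataset, the hired/gender/zip_code columns, and the logistic-regression scorer here are illustrative assumptions, not the product's own simulator.

```python
# What-if sketch: compare Disparate Impact Ratio with and without one feature.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import demographic_parity_ratio

def dir_score(df: pd.DataFrame, target: str, sensitive: str, drop=None) -> float:
    """Fit a toy model (optionally without one column) and return DIR on its predictions."""
    features = df.drop(columns=[target] + ([drop] if drop else []))
    X = pd.get_dummies(features, drop_first=True)        # one-hot encode categoricals
    y = df[target]
    preds = LogisticRegression(max_iter=1000).fit(X, y).predict(X)
    return demographic_parity_ratio(y, preds, sensitive_features=df[sensitive])

df = pd.read_csv("hiring_data.csv")
print("DIR, all features:     ", dir_score(df, "hired", "gender"))
print("DIR, zip_code removed: ", dir_score(df, "hired", "gender", drop="zip_code"))
```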

Regulatory Compliance Ready

Is Your AI Legally Compliant?

FairLens automatically checks your results against major AI fairness regulations. Know your compliance status before auditors do.

EU AI Act

High-risk AI systems must demonstrate fairness. FairLens flags non-compliant systems.

DPD < 0.1, DIR > 0.8
EEOC 4/5 Rule

US hiring law: the selection rate for any group must be ≥ 80% of the most-selected group's (see the sketch below).

DIR ≥ 0.80 required
ISO 42001

International AI management standard requires bias risk assessment for AI systems.

Full bias audit needed
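As noted in the EEOC 4/5 Rule card above, the four-fifths check itself is simple arithmetic: each group's selection rate must be at least 80% of the best-selected group's. A minimal sketch, with an assumed hiring_decisions.csv and column names:

```python
# Four-fifths (80%) rule sketch: selection rate per group vs. the top group.
import pandas as pd

def four_fifths_check(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Per-group selection rate, impact ratio vs. the best group, and pass/fail."""
    rates = df.groupby(group_col)[selected_col].mean()   # share selected in each group
    ratios = rates / rates.max()                         # ratio to the most-selected group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "passes_4_5_rule": ratios >= 0.80,
    })

df = pd.read_csv("hiring_decisions.csv")
print(four_fifths_check(df, group_col="gender", selected_col="hired"))
```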

Make Your AI Fair Today

Every biased AI decision harms a real person. It takes under 60 seconds to find out if yours is one of them.

Free forever · No credit card · Google Sign-In supported