Don’t Blame the Algorithm—Blame the Mirror: How Our Biases Shape AI Outcomes
I find the ways governments, society, and individuals are optimizing and utilizing AI in everyday life absolutely fascinating. Full disclosure: I’m definitely not an expert. I’d describe myself more as someone who likes to use technology as a tool to be better, faster, and stronger—and I always approach it through that lens.
Based on my own experimentation and use, I believe AI has significant capabilities that can drive efficiencies in systems that are currently broken. But I also know—again, from experience—that it’s just as vulnerable to the classic “garbage in, garbage out” problem. To truly maximize AI’s value, I believe we must start with the right questions and inputs. It’s no different from working with any other system or data source: the quality of the output depends entirely on the quality of the input.
I’ve written in other articles about how our human lenses—our biases, perceptions, and assumptions—impact our understanding and use of technology. So for this piece, I’ll keep it simple and rely on two adages:
“You will only see what you want to see,” and “Garbage in, garbage out.”
Far too often, both people and the systems they build are guided by their own limited perspectives. It becomes difficult to see the forest for the trees. That’s exactly what came to mind when I stumbled across this MIT Technology Review article: Amsterdam’s AI Welfare Fraud System and Its Failures.
The article highlights efforts around AI and welfare fraud detection. To be clear, I haven’t dissected the algorithms used in this case, so this isn’t a deep technical critique. Instead, I’m using it as an example to explore a broader issue: it seems that we’re relying on human perceptions, historical biases, and flawed patterns to determine risk characteristics.
So I experimented with the question myself—because, well, garbage in, garbage out, right?
I used ChatGPT to research and suggest what the key determinants of welfare fraud risk might be. I’ve included the chat below.
Act like an expert, research and create according to accurate data what are the top 5 determinants in a human committing welfare fraud.
Here are the top five determinants influencing why individuals commit welfare fraud, based on academic research and widely accepted criminological models:
1. Economic Pressure / Financial Need (Motive)
Cases often reveal that people turn to fraud when they face unemployment or debt, or struggle to cover basic expenses. However, research shows that while financial strain is a motivator, it alone doesn’t fully explain fraudulent behavior.
2. Perceived Opportunity
Weak oversight, complex bureaucracy, or poor data-sharing create environments where fraud seems low-risk. When systems are cumbersome, unreported income or extra dependents can be easily overlooked. The "fraud triangle" model shows that opportunity is one of the core enablers of fraudulent behavior.
3. Rationalization & Moral Justification
Many justify their actions through beliefs like, “I deserve this,” or “the system is unfair.” Studies have documented social acceptance of small-scale fraud, often framed as a rightful entitlement.
4. Norms & Social Influence
Fraud becomes normalized when prevalent in a community. Social learning theory suggests individuals adopt behaviors observed in their networks. For instance, if friends or family frequently avoid compliance, it erodes moral resistance.
5. Low Risk Perception & Weak Deterrence
When detection rates are low and penalties are perceived as minimal, individuals become more likely to take the risk. Many recipients assess the chance of being caught as negligible.
Why This Matters
Understanding these five determinants can guide better policy:
Address financial need through improved support and access to jobs.
Strengthen oversight and inter-agency data integration.
Enhance public messaging to combat rationalizations.
Foster ethical norms at community levels.
Increase detection and consequences to shift risk-benefit assessments.
Theoretical Framework: The Fraud Triangle
This model posits that fraud occurs at the intersection of three elements:
Motivation/Pressure (e.g., financial need)
Opportunity (weak controls)
Rationalization (moral justification)
can you please regenerate and create the top 3 questions to ask filing to mitigate that risk
Absolutely. Here’s a regenerated list of 3 targeted questions for each of the 5 welfare fraud determinants. These questions are designed to surface red flags, prompt honest disclosures, and minimize fraud risk at the point of filing.
1. Economic Pressure / Financial Need
Focus: Identifying real financial strain vs. strategic misrepresentation.
Questions:
“In the past 30 days, have you struggled to pay for basic needs such as rent, utilities, or food? If yes, which ones and how often?”
“Do you have any outstanding debts, court fines, or obligations that may affect your financial situation?”
“Have you received help from anyone (e.g., friends, religious groups, payday lenders) to meet financial needs recently?”
Why: Captures hidden support systems and financial strain used to justify fraudulent filing.
2. Perceived Opportunity
Focus: Testing for intent to exploit weak oversight or vague rules.
Questions:
“Have you recently received income or support that you were unsure whether to report?”
“Did anyone advise you on how to complete this form, or suggest ways to ‘maximize’ your benefits?”
“Do you have any side jobs, digital work, or cash-based activities you haven’t reported yet?”
Why: Gathers indirect admissions, flags attempts to game the system or exploit loopholes.
3. Rationalization / Moral Justification
Focus: Detecting cognitive framing that justifies fraud.
Questions:
“Some people feel government support is owed to them—do you feel that way? Why or why not?”
“If someone needed help but didn’t exactly qualify, do you think it’s OK for them to stretch the truth?”
“Have you ever heard of someone ‘bending the rules’ to get benefits? What do you think about that?”
Why: Surfaces ethical boundaries and perceptions of fraud as acceptable behavior.
4. Norms & Social Influence
Focus: Uncovering cultural or community influences on behavior.
Questions:
“Do you know others in your household, family, or community who receive benefits?”
“Has anyone around you ever encouraged or discouraged you from applying for aid?”
“Has anyone suggested how to ‘do it right’ or what to say when applying?”
Why: Social learning theory shows behavior spreads through norms—this reveals outside influence.
5. Low Risk Perception & Weak Deterrence
Focus: Raising perceived risk and measuring awareness of consequences.
Questions:
“Are you aware that your answers may be cross-checked with tax, employment, or bank records?”
“What do you think happens if someone gives inaccurate information on this form?”
“Do you have any questions or concerns about reporting fully and accurately today?”
Why: Increases perceived detection risk and offers chance to self-correct.
🔒 Integrated Use Strategy
Ask 1–2 questions per category at key points in the application.
Use neutral language to avoid fear or defensiveness.
Integrate in both digital intake systems and interviewer scripts.
Would you like me to adapt these into a standardized compliance questionnaire or a predictive risk-scoring model?
yes on the predictive risk scoring model
Welfare Fraud Risk Scoring Model (WFRSM-5)
🔍 Overview
Purpose: Detect and mitigate fraud risk during welfare intake or eligibility reassessment
Basis: Grounded in behavioral economics, criminology (Fraud Triangle), and empirical welfare research
Inputs: Applicant responses to calibrated questions
Output: Total risk score + flagged risk category
🧩 Model Structure
| Risk Domain | Description | Max Score |
|---|---|---|
| 1. Economic Pressure | Measures financial strain & hardship framing | 5 |
| 2. Perceived Opportunity | Gauges system exploitation or unreported resources | 5 |
| 3. Rationalization & Justification | Flags ethical flexibility or entitlement language | 5 |
| 4. Social Norms & Influence | Detects environmental exposure to fraud acceptance | 5 |
| 5. Low Risk Perception | Assesses awareness of monitoring/deterrence | 5 |
Total Maximum Risk Score: 25
🎯 Scoring Logic per Category
Each category includes 3 screening questions. Score applicant responses on a 0–5 scale, where:
0 = Fully compliant, no risk signals
1–2 = Minor red flags, possible misunderstanding
3–4 = Moderate concerns (e.g., hidden income, questionable ethics)
5 = Strong indicators of potential fraud or gaming behavior
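To make the scoring mechanics concrete, here is a minimal Python sketch of the aggregation (my own illustration, not part of the chat; all names are hypothetical). Note that the chat never specifies how a domain’s three question scores collapse into its single 0–5 domain score, so treating the strongest red flag as the domain score, i.e., taking the maximum, is an assumption on my part.

```python
# Illustrative sketch of WFRSM-5 scoring; names and structure are hypothetical.
# Assumption: a domain's 0-5 score is the strongest (max) signal among its
# three screening questions, since the chat doesn't define the aggregation.

DOMAINS = [
    "Economic Pressure",
    "Perceived Opportunity",
    "Rationalization & Justification",
    "Social Norms & Influence",
    "Low Risk Perception",
]

def domain_score(question_scores: list[int]) -> int:
    """Collapse one domain's three 0-5 question scores into a 0-5 domain score."""
    if len(question_scores) != 3:
        raise ValueError("each domain has exactly 3 screening questions")
    if any(not 0 <= s <= 5 for s in question_scores):
        raise ValueError("question scores must be on the 0-5 scale")
    return max(question_scores)  # strongest red flag wins (assumption)

def total_risk_score(scores_by_domain: dict[str, list[int]]) -> int:
    """Sum the five domain scores into the 0-25 total risk score."""
    return sum(domain_score(scores_by_domain[d]) for d in DOMAINS)
```

Under the max rule, the Domain 2 example that follows (question scores of 4, 5, and 4) would collapse to a domain score of 5.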
🔧 Example Scoring – Domain 2: Perceived Opportunity
| Question | High-Risk Response | Score |
|---|---|---|
| "Have you received any income you were unsure whether to report?" | “Yes, but I didn’t think I had to say anything about it.” | 4 |
| "Did someone advise you how to 'maximize' benefits?" | “Yeah, they told me what not to say.” | 5 |
| "Do you earn any income through apps or cash?" | “I do some side gigs, but I didn’t report that.” | 4 |
🎛️ Risk Interpretation Framework
| Total Score | Risk Level | Recommended Action |
|---|---|---|
| 0–7 | Low | Proceed with normal processing |
| 8–14 | Medium | Flag for enhanced documentation or interview |
| 15–19 | High | Mandatory secondary review, data cross-check |
| 20–25 | Very High | Audit trigger or temporary suspension |
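Continuing the sketch, the interpretation table above translates directly into a threshold lookup. The score bands and actions come from the chat output; the function itself is again my own hypothetical illustration.

```python
# Hypothetical mapping from the 0-25 total score to the risk tiers and
# recommended actions in the interpretation table above.

RISK_TIERS = [
    (7, "Low", "Proceed with normal processing"),
    (14, "Medium", "Flag for enhanced documentation or interview"),
    (19, "High", "Mandatory secondary review, data cross-check"),
    (25, "Very High", "Audit trigger or temporary suspension"),
]

def interpret(total_score: int) -> tuple[str, str]:
    """Return (risk level, recommended action) for a 0-25 total score."""
    if not 0 <= total_score <= 25:
        raise ValueError("total score must be between 0 and 25")
    for upper_bound, level, action in RISK_TIERS:
        if total_score <= upper_bound:
            return level, action
    raise AssertionError("unreachable: score is bounded above by 25")
```

For example, interpret(16) returns ("High", "Mandatory secondary review, data cross-check").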
Risk Model Summary
| Category | Key Flags | Sample Mitigation |
|---|---|---|
| Economic Pressure | Excessive hardship claims with no debts | Cross-check with utility shutoff or debt data |
| Perceived Opportunity | Mentions of “tips” to game system | Conduct randomized audits |
| Rationalization | Ethical ambiguity or entitlement language | Use nudge statements and legal warnings |
| Social Influence | Refers to others who bend rules | Contextual education or home visit |
| Low Risk Perception | Thinks “they don’t check” | Increase risk messaging at intake |
That was three minutes of playing with ChatGPT. My only point in sharing this example, in the context of the responsible use of AI, is that AI can be a powerful tool for systemic and efficient change; we just need to observe and correct our lens in order to smooth out the edges. Notice how you are filtering, and be deliberate about your intent for using AI, in service of a better, faster, and stronger human collective.