FairPlay AI

Financial Services

Los Angeles, CA 1,637 followers

The world's first Fairness-as-a-Service company.

About us

The world's first Fairness-as-a-Service company. Our clients achieve higher profits and fairer results, with no increase in risk. Play Fair, Win Big with FairPlay.

Website
http://fairplay.ai
Industry
Financial Services
Company size
11-50 employees
Headquarters
Los Angeles, CA
Type
Privately Held
Founded
2020
Specialties
fintech, artificial intelligence, algorithmic decision-making

Locations

Employees at FairPlay AI

Updates

  • FairPlay AI reposted this

    Kareem Saleh

    Founder & CEO at FairPlay | 10+ Years of Applying AI to Financial Services | Architect of $3B+ in Financing Facilities for the World's Underserved

    The current state of fair lending in the BaaS ecosystem: There has been a marked increase in fair lending scrutiny across the BaaS ecosystem, particularly as shown by a recent FDIC consent order against a major sponsor bank. This heightened scrutiny extends across all lending products, as well as credit-like products such as buy-now-pay-later and earned wage access. Bank regulators are also increasingly focused on the fairness of deposit-related practices, which can include holds, denials, and transaction limits, often prompted by customer complaints.

    Sponsor banks and the fintechs that originate through them should be aware of the following: regulators are examining and testing fairness outcomes across all customer touchpoints: marketing, fraud detection, income/identity verification, underwriting, pricing, line assignment, and collections. In each of these areas, regulators are testing:

    ➖ Are the data inputs and decision outcomes fair?
    ➖ What variables are driving any observed disparities?
    ➖ Do disparities persist even after controlling for risk?
    ➖ Have less discriminatory strategies been considered to reduce disparities?

    In addition, the new normal for lenders is shifting from annual, retrospective fair lending testing to a growing regulatory expectation of ongoing monitoring. If you need evidence of this, consider the recent 34-page consent order against a major sponsor bank, in which the term "monitoring" appears 26 times.

    Fairness is also becoming a key consideration in model validation. Among other things, models may be evaluated to ensure they don't exhibit differential predictive performance for protected groups, a practice that might be considered disparate treatment. This intensified focus on fairness is driving up compliance costs for participants in the BaaS ecosystem, affecting both banks and fintechs.

    To mitigate compliance costs and fair lending risks, sponsor banks and fintechs should:

    ➡ Invest in fair lending technology: Leverage software to automate fairness testing, monitoring, and regulatory reporting.
    ➡ Integrate fairness analytics across the customer journey and credit policy waterfall: From marketing to fraud detection, identity and income verification, underwriting, pricing, line assignment, account management, and servicing.
    ➡ Search for Less Discriminatory Alternatives: Proactively explore and implement alternative variables, models, or decision processes that achieve your business objectives while reducing disparities.

    Given the rising stakes, failure to enhance fair lending practices could lead to severe regulatory consequences, reputational damage, strained customer relationships, and potential legal liabilities. Is your approach to fair lending keeping pace with today's heightened compliance environment?
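The disparity tests described in the post often begin with a simple ratio metric. A minimal sketch, using entirely hypothetical approval data, of the Adverse Impact Ratio (AIR) and the common "four-fifths rule" screen (this is an illustration, not FairPlay's actual methodology):

```python
def approval_rate(decisions):
    """Fraction approved; each decision is 1 (approve) or 0 (deny)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, control):
    """Approval rate of the protected group divided by the control group's.
    An AIR below 0.8 (the "four-fifths rule") is a common red flag."""
    return approval_rate(protected) / approval_rate(control)

# Hypothetical decision outcomes for two groups of applicants
protected = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 3 of 10 approved
control   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 7 of 10 approved

air = adverse_impact_ratio(protected, control)
print(f"AIR = {air:.2f}")                      # prints "AIR = 0.43"
print("flag for review" if air < 0.8 else "within four-fifths threshold")
```

A raw AIR gap is only a first screen; as the post notes, examiners then ask which variables drive the disparity and whether it persists after controlling for risk.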

  • FairPlay AI reposted this

    Kareem Saleh

    Fair Lending teams are becoming more quantitative. As decisioning processes, data sources, and financial products grow more complex, fair lending teams must increasingly turn to quantitative and technology-driven methods to uncover and address potential disparities. This shift represents a significant evolution in fair lending compliance.

    Traditionally, Fair Lending teams focused primarily on qualitative assessments, reviewing policies and procedures to ensure compliance with anti-discrimination laws such as the Equal Credit Opportunity Act and the Fair Housing Act. While these approaches remain essential, data-driven analytics now play a crucial role in detecting bias that might elude qualitative review.

    The move toward more quantitative fair lending methods is driven by several factors:

    Big Data and Advanced Analytics: The explosion of big data and the application of AI across the customer journey offer both opportunities for unfairness and a wealth of data for analysis.

    Regulatory Expectations: Regulatory bodies increasingly expect financial institutions to employ robust statistical methodologies to test and monitor for fair lending compliance. This expectation is driven by:

    ➡ Emerging best practices from organizations like FinRegLab on machine learning explainability and fairness
    ➡ The precedent set by the NAACP's public monitorship of Upstart Inc.
    ➡ A recent Federal Deposit Insurance Corporation (FDIC) consent order against a major partner bank, emphasizing the importance of ongoing monitoring of fair lending risks

    Technological Leap: The advent of powerful computing resources and sophisticated software means that for the first time ever, lenders can:

    ▶ Conduct complex analyses with unprecedented efficiency and frequency
    ▶ Execute large-scale simulations to model various scenarios
    ▶ Implement advanced debiasing techniques to proactively address potential issues

    The future of fair lending is data-driven. In this new era, the right tools and expertise can make all the difference. If you're looking to implement best-in-class quantitative approaches to fair lending that align with regulatory expectations and uncover hidden biases, let's explore how FairPlay can guide you through this minefield.
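One concrete quantitative check this shift implies, and one the ecosystem increasingly tests for, is whether a model's predictive performance differs across groups. A minimal sketch with hypothetical scores and repayment outcomes (not any particular lender's data or method); a large AUC gap between groups may warrant further review:

```python
def auc(scores, labels):
    """AUC via pairwise comparison: the probability that a randomly chosen
    positive (repaid) case scores higher than a negative one; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores and repayment outcomes (1 = repaid) for two groups
scores_a, labels_a = [0.9, 0.8, 0.4, 0.3, 0.7, 0.2], [1, 1, 0, 0, 1, 0]
scores_b, labels_b = [0.6, 0.5, 0.55, 0.4, 0.45, 0.5], [1, 0, 1, 0, 1, 0]

auc_a, auc_b = auc(scores_a, labels_a), auc(scores_b, labels_b)
print(f"AUC group A = {auc_a:.2f}, group B = {auc_b:.2f}, gap = {auc_a - auc_b:.2f}")
```

In practice this would run on large holdout samples with confidence intervals; the toy numbers here only illustrate the shape of the comparison.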

  • FairPlay AI reposted this

    Kareem Saleh

    How do Americans feel about AI making decisions that impact their everyday lives? According to a recent Consumer Reports survey: increasingly uneasy. Key findings:

    ⚫ 72% of respondents expressed concern about AI screening during job interviews, with 45% being "very uncomfortable."
    ⚫ About two-thirds are uncomfortable with banks using AI for loan underwriting and landlords using it for tenant screening.
    ⚫ Over half express discomfort with facial recognition in law enforcement and the use of AI in medical diagnosis and treatment.

    The survey highlights a growing sense of lost control over personal data and anxiety around AI's influence on critical life decisions. These statistics should be a wake-up call for business leaders: your customers are watching how you implement AI, and their trust is at stake. To build and maintain that trust, consider:

    ▶ Adopting a "privacy first" approach to AI development
    ▶ Implementing robust data access rights
    ▶ Establishing clear explanation and appeals processes for AI-driven decisions

    Developing and deploying trustworthy AI isn't just about compliance—it can set you apart in an increasingly AI-driven world. Hat tip to Jonathan Joshua for bringing this survey to my attention. https://lnkd.in/gWn35nR6

    Consumer Reports survey: Many Americans concerned about AI, algorithms - CR Advocacy

    https://advocacy.consumerreports.org

  • FairPlay AI reposted this

    Kareem Saleh

    🏠 Hey homeowners: When was the last time you checked the moss levels on your roof? If you haven't recently, don't worry — your insurance company's drones probably did. Insurers today are using advanced aerial imagery to inspect every inch of your property, from aging shingles to overhanging tree limbs, all without ever setting foot on your lawn.

    These digital eyes in the sky are part of a broader trend toward ultra-precise risk assessment. Algorithms now process countless data points to determine premiums, promising a more personalized approach to underwriting. But with great data comes great responsibility. Connecticut's Department of Insurance recently warned that while aerial technology might boost accuracy, it can also lead to unintended and potentially unfair consequences for homeowners.

    Amid an ongoing crisis in home insurance — driven by climate change, rising construction costs, and regulatory challenges in high-risk areas — advanced tech like aerial imaging and AI analysis offers insurers powerful tools to reduce costs and improve risk selection. But as states like Connecticut and New York remind us, alternative data and AI systems need to be used responsibly and within legal boundaries. With the right approach, this technology can lead to more accurate risk assessment AND fairer pricing. But to achieve that, we need to keep an eye on the roof — and an eye on the algorithms watching it.

  • FairPlay AI reposted this

    Kareem Saleh

    Is your neighborhood tax assessor playing favorites? A new study on Philadelphia's property tax fairness took a hard look at residential tax assessments — and, spoiler alert: it's not all brotherly love in the City of Brotherly Love. 🏠💔

    This study by the Reinvestment Fund dives deep into Philadelphia's 2023 residential taxes, and here's what it found: neighborhoods with higher percentages of Black, Hispanic, or low-income residents are more likely to experience:

    • Inaccurate assessments
    • Over-assessments
    • Under-assessment of high-value properties relative to low-value ones

    The study is a stark reminder that seemingly objective systems can hide biases that hit vulnerable communities the hardest. It's also a clear call to cities across the country to take a hard look at their own assessment practices. If we can build AI that outsmarts grandmasters at chess, surely we can design a system that fairly assesses property taxes — no matter the neighborhood.
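Assessment-fairness studies of this kind typically compare assessment-to-sale-price ratios across neighborhoods. A minimal sketch with hypothetical figures (not the Reinvestment Fund's actual data or methodology):

```python
def median(xs):
    """Median of a list of numbers (no external dependencies)."""
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def assessment_ratios(assessed_values, sale_prices):
    """Assessment-to-sale-price ratio per property: ratios near 1.0 mean the
    assessment tracks market value; systematically higher ratios in one
    neighborhood mean those homes are over-assessed relative to value."""
    return [a / p for a, p in zip(assessed_values, sale_prices)]

# Hypothetical assessed values vs. recent sale prices for two neighborhoods
nbhd_a = assessment_ratios([95_000, 88_000, 102_000], [100_000, 90_000, 110_000])
nbhd_b = assessment_ratios([240_000, 450_000, 310_000], [300_000, 560_000, 400_000])

print(f"median ratio, neighborhood A: {median(nbhd_a):.2f}")
print(f"median ratio, neighborhood B: {median(nbhd_b):.2f}")
```

In this toy example the lower-priced neighborhood carries the higher median ratio, i.e. a proportionally heavier tax burden, which is the pattern the study flags.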

  • FairPlay AI reposted this

    Spring Labs

    How are leading sponsor banks and fintechs utilizing AI to serve their customers better? Discover cutting-edge insights from industry leaders and explore real-world case studies on AI's current and future deployment at the AI-Native Banking & Fintech Conference. Hear from experts:

    🎙️ Brian Brooks, Former Comptroller of the Currency
    🎙️ Kareem Saleh, CEO of FairPlay AI
    🎙️ Walter J. Mix III, Former Commissioner of the California Department of Financial Protection and Innovation (DFPI)
    🎙️ Dan Pillemer, CEO of CardWorks
    And many more!

    🗓️ Date: October 7th
    🕒 Time: 9 am to 6 pm MT
    📍 Where: The University of Utah, Salt Lake City

    Co-hosted with the Utah Bankers Association, the Utah Governor's Office of Economic Opportunity, and the American Fintech Council! Don't miss the chance to gain exclusive insights, share knowledge, and discuss innovative ideas with those who share your passion for advancing AI responsibly.

    🎟️ Early bird tickets on sale now until September 15th: https://lnkd.in/gj964_X8

    #GenerativeAI #ArtificialIntelligence #AINative #Banking #Fintech

  • FairPlay AI reposted this

    Kareem Saleh

    Are We Building AI on Shaky Foundations? Imagine constructing a skyscraper without thoroughly vetting your materials. That's essentially what we've been doing in AI, according to a recent study in Nature. (Link 👇 in the comments)

    Researchers introduced a novel framework to evaluate datasets used in biometrics and healthcare through three critical lenses: fairness, privacy, and regulatory compliance.

    Fairness looks at how well a dataset represents diverse groups. It considers diversity (Does the dataset include a wide range of demographics?), inclusivity (Are all groups meaningfully represented?), and label reliability (How trustworthy is the attached information?). A fair dataset should reflect the rich tapestry of humanity, with accurate, self-reported descriptions.

    The privacy assessment asks whether datasets could potentially identify individuals or disclose sensitive information. Researchers checked for personal identifiers and sensitive attributes in the data. A high privacy score indicates minimal personal information, safeguarding individual identities.

    Regulatory compliance looks at whether the data was reviewed by an ethics board, whether informed consent was obtained from participants, and whether there are mechanisms for data correction and deletion. A compliant dataset would tick all these boxes.

    The study analyzed 60 datasets, and the results are sobering. Fairness scores averaged a mere 0.96 out of 5: our AI systems are learning from woefully unrepresentative data — like teaching a child about society using a book depicting only one type of person. Privacy scores were slightly better, but a "fairness-privacy paradox" emerged: to make datasets fairer, we often need more demographic data, but gathering more data can heighten privacy risks. Regulatory compliance scores were truly dismal, averaging just 0.58 out of 3. Many datasets lack even basic safeguards like institutional review board approval, individual consent, and mechanisms for data correction or deletion.

    The path forward? Researchers recommend:

    1. Securing proper approvals and individual consent
    2. Implementing mechanisms for data correction and deletion
    3. Striving for diversity while safeguarding privacy
    4. Developing comprehensive datasheets documenting dataset characteristics

    As we stand at the precipice of an AI-driven future, are we comfortable building our digital world on sand, or should we take the time to lay a solid bedrock for the AI revolution? The choice is ours.
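The checklist-style scoring the post describes can be illustrated with a toy compliance scorecard. The three items mirror the compliance questions above, but the function and data are illustrative, not the paper's exact rubric:

```python
def compliance_score(irb_approved, consent_obtained, correction_mechanism):
    """One point per safeguard, echoing the 0-3 compliance scale above."""
    return int(irb_approved) + int(consent_obtained) + int(correction_mechanism)

# Hypothetical dataset: consent was obtained, but there was no ethics-board
# review and no mechanism for data correction or deletion
score = compliance_score(irb_approved=False, consent_obtained=True,
                         correction_mechanism=False)
print(f"regulatory compliance score: {score} / 3")   # prints "... 1 / 3"
```

Averaging such scores across many datasets is what yields the aggregate figures the study reports.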

  • FairPlay AI reposted this

    Kareem Saleh

    Does Your Grocery List Hold the Key to Your Credit Score? 🛒💳 Imagine a world where your love for fresh kale could give your credit score a boost, while that late-night ice cream run might actually hurt it. Sound far-fetched? Not according to a fascinating new study exploring the use of grocery data in credit decisions. (Hat tip to Jonathan Joshua for bringing this research to my attention!)

    Picture this: two shoppers, Alice and Bob. Alice consistently buys fresh milk, beans, and vinegar dressings. She shops every Friday, spends similar amounts each trip, and is a savvy deal hunter. Bob, on the other hand, frequently purchases energy drinks and processed meats. His shopping trips are erratic, with wildly varying basket sizes. According to the study, Alice is more likely to pay her credit card bills on time than Bob — even after accounting for factors like income and traditional credit scores.

    Key findings that might make you rethink your grocery list:

    ➡ Healthier, Less Convenient Foods: Opting for raw ingredients over microwave meals is linked to more responsible payment behavior.
    ➡ Routines Matter: Consistent shopping habits, like sticking to the same shopping day and similar basket sizes, predict lower default risk.
    ➡ Deal-Seeking Pays Off: Shoppers who regularly take advantage of promotions tend to have better credit histories.

    This data could be transformative for the "credit invisible" — those who lack traditional credit scores. But it also opens a Pandora's box of ethical questions:

    🔷 Fairness Concerns: Could using grocery data for underwriting unfairly penalize lower-income and marginalized shoppers who may not have access to fresh, healthy foods?
    🔷 Ethical Dilemmas: Is it right to judge someone's creditworthiness based on whether they buy cigarettes or junk food?
    🔷 Balancing Act: How do we weigh the benefits of financial inclusion against the privacy concerns of using such personal data?

    While the study acknowledges these challenges, it stops short of offering definitive answers. The next time you're in the grocery aisle, remember: your choices might be feeding more than just your appetite — they could be feeding your financial future.

  • FairPlay AI

    Do lenders have to trade accuracy for fairness? "'This is the thorniest, most difficult question in algorithmic fairness right now,' said Kareem Saleh, founder and CEO of FairPlay AI, a company that conducts fairness testing on AI models. 'What you see in the impasse is actually two really thoughtful groups trying to grapple with this question.'" Read the full article here in American Banker:

    In AI-based lending, is there an accuracy vs. fairness tradeoff?

    americanbanker.com

  • FairPlay AI reposted this

    Kareem Saleh

    Last week the Consumer Financial Protection Bureau sent a spicy letter to the Treasury Secretary about AI in financial services. Here's the scoop:

    First, if your business model depends on side-stepping a regulation, it's an immediate red flag. Case in point: Earned Wage Access? Now treated as loans. Buy Now, Pay Later? Consider it a credit card.

    🤖 Generative AI: The CFPB is giving it major side-eye, at least in consumer-facing applications. They're worried about misinformation, flimsy dispute resolution, and a Pandora's box of privacy and security risks. Handle with care!

    🔍 Fairness First: The CFPB is cracking down on bias in everything from fraud detection to servicing, debt collection, and valuation models. They want regular testing for disparate treatment and impact. And they're serious about searching for Less Discriminatory Alternatives (LDAs) — they mentioned it twice. 👀

    ❌ RIP sandboxes and no-action letters: No more regulatory relief for "innovative" companies. The CFPB says these programs "fell short of their intended purpose." Apparently, waivers and approvals were misrepresented as stamps of endorsement.

    🔓 Open Banking: The CFPB says competition should be based on product quality, not data lock-in. The future is open!

    🛒 Comparison Shopping Tools: If you're displaying multiple offers to consumers and manipulating results, collecting kickbacks, or using dark patterns — be warned. The CFPB sees these practices as anti-competitive and potentially illegal.

    📱 Big Tech Beware: If you're a major player in digital wallets or payments — looking at you, Google, Apple, Samsung — get ready to be supervised like a bank. And whistleblowers? The CFPB wants to hear from them!

    In short, AI and Big Tech are squarely in the CFPB's sights. The message? Follow our rules, or face the music. https://lnkd.in/g-CMQp3D

    CFPB Comment on Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector | Consumer Financial Protection Bureau

    consumerfinance.gov

Funding

FairPlay AI 3 total rounds

Last Round

Series A

US$ 10.0M
