Massachusetts AG Settles Fair Lending Action Based Upon AI Underwriting Model

By Kris D. Kully on July 24, 2025

While federal regulatory agencies retreat from enforcing disparate impact discrimination, at least one state agency has stepped forward. Massachusetts Attorney General Andrea Joy Campbell announced on July 10, 2025, a settlement with a student loan company, resolving allegations that the company’s artificial intelligence (“AI”) underwriting models resulted in unlawful disparate impact based on race and immigration status.

The disparate impact theory of discrimination in the lending context has been controversial. It has been 10 years since the Supreme Court held in Inclusive Communities that disparate impact is available under the Fair Housing Act if a plaintiff points to a policy or policies of the defendant that caused the disparity. In the fair lending context, then, disparate impact applies to mortgage loans. However, for other types of consumer credit – like auto loans or student loans – a plaintiff or government enforcer claiming discrimination would need to rely on the Equal Credit Opportunity Act (“ECOA”). While ECOA prohibits discrimination against an applicant with respect to any aspect of a credit transaction, there has been much debate over whether it applies to discrimination in the form of disparate impact.

The federal government for years relied heavily on ECOA to bring credit discrimination actions. The Biden Administration pursued a vigorous redlining initiative against mortgage lenders. The government used the vast amount of data obtained under the Home Mortgage Disclosure Act (“HMDA”) and compared the activities of various lenders within a geographic area to determine whether a lender was significantly lagging its peers in making loans to certain protected groups. The government then looked to the lender’s branch locations, advertising strategies, the racial/ethnic make-up of its loan officers, and other factors to assert that the lender had discouraged loan applicants from protected classes. Through that redlining initiative, the government settled dozens of cases, resulting in well over $100 million in payments.
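The redlining screen described above is, at bottom, a peer-comparison exercise. The following is a minimal sketch of that kind of analysis in Python, assuming a hypothetical HMDA-style dataset with one row per originated loan; the column names, the 50% lag threshold, and the simplified comparison are illustrative assumptions, not the government’s actual methodology.

```python
# Illustrative only: a simplified peer-comparison screen, using a
# hypothetical HMDA-style DataFrame with columns
# ['lender', 'county', 'minority_borrower'] (minority_borrower is boolean).
import pandas as pd

def flag_lagging_lenders(loans: pd.DataFrame, lag_ratio: float = 0.5) -> pd.DataFrame:
    """Flag lenders whose share of loans to minority borrowers in a county
    falls below `lag_ratio` times the combined peer share in that county."""
    # Each lender's share of loans made to minority borrowers, by county
    lender_share = (
        loans.groupby(["county", "lender"])["minority_borrower"]
        .mean()
        .rename("lender_share")
        .reset_index()
    )
    # Peer share: all lenders in the county combined
    peer_share = (
        loans.groupby("county")["minority_borrower"]
        .mean()
        .rename("peer_share")
        .reset_index()
    )
    merged = lender_share.merge(peer_share, on="county")
    merged["lagging"] = merged["lender_share"] < lag_ratio * merged["peer_share"]
    return merged
```

Actual redlining analyses also weigh branch locations, advertising, loan officer composition, and statistical significance, none of which a simple share comparison like this captures.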

HMDA data provides extensive, if imperfect, demographic data on mortgage lending activities and has been key to building claims of lending discrimination, particularly disparate impact. However, that level of data is not generally available for other types of lending, like student loans. Without such data, the Office of the Massachusetts Attorney General (“OAG”) in this case reviewed the lender’s algorithmic rules, its use of judgmental discretion in the loan approval process, and internal communications, which the Attorney General described as exhibiting bias.

Disparate Impact Based on Race, National Origin

In that review, the OAG looked back to the scoring model the lender used prior to 2017, which relied in part on a Cohort Default Rate – the average rate of loan defaults associated with specific higher education institutions. The OAG asserted that the use of that factor in the lender’s underwriting model resulted in disparate impact in approval rates and loan terms, disfavoring Black and Hispanic applicants in violation of ECOA and the state’s prohibition against unfair or deceptive acts or practices (“UDAP”). The public settlement order did not disclose the magnitude of the statistical disparities. In addition, the OAG asserted that, until 2023, the lender also included immigration status in its algorithm, knocking out applicants who lacked a green card. That factor “created a risk of a disparate outcome against applicants on the basis of national origin,” and as such violated ECOA and UDAP, according to the OAG. The settlement order prohibits the lender from using the Cohort Default Rate or the knock-out rule for applicants without a green card (although it appears the lender had discontinued those considerations years ago).

The OAG’s settlement order also asserted that the lender failed to take steps to mitigate fair lending risks: it failed to test the algorithmic models and their weighted inputs for disparate impact; failed to test the judgmental underwriting processes for fair lending concerns; trained the model on arbitrary, discretionary human selections of particular variables without appropriately determining whether those variables were predictive of default; and failed to ensure compliance with existing fair lending policies.
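The failure-to-test allegations track what a basic fair lending testing program looks like in practice. As a rough illustration only – the settlement order does not prescribe any particular statistical test – a first-pass screen often compares approval rates across demographic groups and flags large gaps for further review; the column names and reference group below are hypothetical.

```python
# Illustrative only: a first-pass approval-rate comparison of the kind a
# fair lending testing program might run; column names are hypothetical.
import pandas as pd

def approval_rate_disparity(decisions: pd.DataFrame,
                            group_col: str = "race_ethnicity",
                            approved_col: str = "approved",
                            reference_group: str = "White") -> pd.DataFrame:
    """Return each group's approval rate and its ratio to the reference
    group's rate. Ratios well below 1.0 flag a potential disparity to
    investigate; they are not, by themselves, a legal conclusion."""
    rates = decisions.groupby(group_col)[approved_col].mean().rename("approval_rate")
    out = rates.to_frame()
    out["ratio_to_reference"] = out["approval_rate"] / rates[reference_group]
    return out
```

A fuller program would extend the same idea to loan terms and pricing, to each weighted model input, and to the judgmental overrides the order also addresses.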

The lender agreed to pay $2.5 million to resolve the matter without admitting any of the OAG’s allegations, and agreed to develop, implement, and maintain a corporate governance system of fair lending testing, controls, and risk assessments for the use of AI models. The lender also must develop written compliance policies to ensure that its models comply with fair lending requirements. The settlement order included a detailed roadmap for the lender’s use of AI models going forward. Those steps include:

  • Conducting an inventory at least annually of all underwriting models (illustrated in the sketch following this list), including:
    • Algorithms used to train each model;
    • Data used to train and test each model;
    • The parameters of each model in active use;
    • The dates each model was in active use; and
    • Fair lending testing results of each model.
  • Testing, monitoring, training, retraining, or otherwise modifying all algorithmic models to ensure compliance with fair lending and consumer protection laws.
  • Identifying trigger events (e.g., model updates, complaints, regulatory changes) that would require additional fair lending testing.
  • Documenting all algorithmic underwriting decisions and retaining that documentation for four years.
  • Maintaining account-level data, including the underwriting inputs, the model results, the outcome (approval or denial), and the pricing and performance of loans.
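To make the inventory requirement above more concrete, here is one hypothetical way a lender might structure an annual inventory record. The fields follow the elements listed in the order, but the schema, names, and example values are our own illustration, not language from the settlement.

```python
# Illustrative schema (Python 3.10+) for an annual model-inventory record
# tracking the elements listed in the settlement order; field names are our own.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryRecord:
    model_name: str
    training_algorithm: str               # algorithm used to train the model
    training_and_test_data: list[str]     # data used to train and test the model
    parameters: dict[str, float]          # parameters of the model in active use
    active_from: date                     # dates the model was in active use
    active_to: date | None = None         # None if still in use
    fair_lending_test_results: list[str] = field(default_factory=list)

# Example entry (hypothetical values):
record = ModelInventoryRecord(
    model_name="student-loan-underwriting-v3",
    training_algorithm="gradient boosted trees",
    training_and_test_data=["2019-2023 application data"],
    parameters={"approval_threshold": 0.62},
    active_from=date(2024, 1, 15),
    fair_lending_test_results=["2025-Q1 approval-rate disparity review"],
)
```

Pairing records like this with the four-year documentation and account-level data retention requirements would give examiners a trail from model inputs to loan outcomes.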

For the lender’s judgmental underwriting processes, the OAG is requiring the lender to ensure that underwriters receive fair lending training and to monitor and document their decisions, including overrides or adjustments to scores or prices, so that those decisions do not violate fair lending or consumer protection laws.

Vermont Addresses Discrimination Based on Citizenship, Immigration Status

As Massachusetts focuses on algorithmic bias in student lending, other states also are prioritizing anti-discrimination in housing and lending. Vermont recently enacted legislation amending its antidiscrimination statute by adding the protected classes of citizenship and immigration status. Accordingly, among other prohibitions, it is unlawful in the state to discriminate against a person in the making of mortgage loans based on the person’s citizenship or immigration status. However, the amendment clarifies that it is not unlawful discrimination for a mortgage lender to consider an applicant’s immigration status to the extent such status has bearing on the lender’s rights and remedies regarding loan repayment, so long as such consideration is consistent with applicable federal law or regulation.
