On May 2, the U.S. Department of Housing and Urban Development (HUD) released two sets of guidance addressing how the Fair Housing Act (FHA) applies in areas where, in the agency’s view, algorithmic processes and artificial intelligence (AI) pose particular concerns: tenant screening and the advertising of housing opportunities through online platforms that use targeted ads. The purpose of HUD’s guidance is to make housing providers, tenant screening companies, advertisers, and online platforms aware that the FHA applies to tenant screening and housing advertising, including when algorithms and AI are used to perform those functions.

Tenant Screening

HUD’s guidance on tenant screening focuses on the increasing use of AI and machine learning in the screening process and describes the fair housing issues created by screening processes that rely on that technology. HUD states that, while innovative, the use of AI and machine learning in tenant screening can obscure the reasons for a denial from both the housing provider and the applicant, reducing transparency in the process.

HUD notes that the FHA applies to housing decisions regardless of who makes them and the technology used. As a result, HUD asserts that both housing providers and tenant screening companies have a responsibility to avoid using AI in a discriminatory manner.

According to the guidance, housing providers remain responsible for ensuring their decisions comply with the FHA (and may be vicariously liable), even where they have outsourced screening to a third-party screening company.

To prevent discriminatory uses of AI in tenant screening, HUD recommends a set of “guiding principles” to help housing providers and tenant screening companies comply with the FHA:

  • Screening applicants only for information relevant to the likelihood that the applicant will comply with their tenancy obligations. Screening in a more precise way may have a less discriminatory outcome.
  • Ensuring records are accurate. Certain types of inaccuracies are more likely to occur for members of some demographic groups compared to others.
  • Considering only records inside the scope of a housing provider’s stated screening policy. For example, misdemeanors or civil violations should not be considered under a policy of screening for felony convictions. Also, housing providers should not ask applicants about any criminal, credit, or housing history that falls outside the scope of screening policies.
  • Being transparent with applicants by ensuring tenant screening policies are in writing, made public, and readily available. Prior to applying, applicants should be given a copy of the policies or told where they can find them online.
  • Allowing applicants the opportunity to challenge any potentially disqualifying information. The applicant should be sent the screening report along with the precise standards at issue and given the opportunity to dispute the accuracy or completeness of any negative information.
  • Designing and testing complex models for FHA compliance. Complex models, such as those that use machine learning, should be programmed following best practices for nondiscriminatory model design.

HUD’s guidance also provides “additional considerations” for housing providers and tenant screening companies when conducting three types of screenings likely to pose fair housing concerns: credit history, eviction history, and criminal records. HUD states that overbroad screenings in these areas, including screenings that fail to account for individual circumstances, are especially likely to have an unjustified discriminatory effect based on personal characteristics protected under the FHA.

Our Take on Tenant Screening Guidance

HUD’s recent guidance continues its focus on tenant screening, including what information can be considered and how. In early April, HUD issued a Notice of Proposed Rulemaking, discussed here, seeking public comment on its proposal to amend existing regulations that govern admission to public housing and housing programs for applicants with criminal records, and eviction or termination of assistance of persons on the basis of illegal drug use, drug-related criminal activity, or other criminal activity. The proposed rule would require that, prior to any discretionary denial or termination for criminal activity, public housing agencies (PHAs) and assisted housing owners take into consideration multiple sources of information, including but not limited to the recency and relevance of prior criminal activity. The proposed rule also seeks to clarify existing PHA and owner obligations and reduce the risk of violation of nondiscrimination laws.

This new guidance on AI in the context of tenant screening continues HUD’s attempt to limit the types of information that it believes should be reviewed, as well as the processes that it believes are required for any such review of data during the screening process. HUD also remains focused on what it considers to be the prospect for disparate impacts in the screening process. This latest announcement, therefore, is consistent with HUD’s focus on all aspects of the screening process, which it continues to try to regulate, both through formal rulemaking and informal pronouncements.

Advertising on Online Platforms

HUD’s guidance on the applicability of the FHA to advertising of housing, credit, and other real estate-related transactions on online platforms highlights the potential risks of deploying automated targeted advertisement tools that use algorithms or AI. The agency states that discriminatory advertising can contribute to, reinforce, and perpetuate residential segregation and other harms addressed by the FHA.

HUD states that violations of the FHA may occur when certain ad targeting and delivery functions unlawfully discriminate on the basis of protected characteristics, such as limiting or denying consumers information about housing opportunities. FHA violations may also arise when targeting vulnerable consumers for predatory products or services; discouraging or deterring potential consumers; advertising different prices or conditions to consumers; steering home-seekers to particular neighborhoods; or charging advertisers higher amounts to show ads to some consumers.

To prevent discriminatory uses of AI in advertising, HUD recommends the following practices for advertisers:

  • Utilizing ad platforms that manage the risk of discriminatory delivery of housing-related ads through audience selection tools and algorithmic functions.
  • Following ad platform instructions to ensure that advertisements related to housing are identified as such and receive the appropriate treatment.
  • Carefully considering the source and analyzing the composition of audience datasets used for custom and mirror audience tools for housing-related ads.
  • Monitoring outcomes of advertising campaigns for housing-related ads to identify and mitigate discriminatory outcomes.

HUD also recommends the following practices for ad platforms:

  • Ensuring that housing-related ads are run in a separate process and specialized interfaces are designed to avoid discrimination in audience selection and ad delivery.
  • Avoiding providing targeting options for housing-related ads that directly describe or relate to FHA-protected characteristics, or that are effectively proxies for such protected characteristics — either alone or in combination.
  • Conducting regular end-to-end testing of advertising systems to detect any discriminatory outcomes, such as by running pairs of ads for equivalent housing opportunities at the same time and comparing the demographics of the delivery audience.
  • Proactively identifying and adopting less discriminatory alternatives for AI models and algorithms, including by assessing data used to train AI models and verifying that the technologies measure lawful attributes that predict valid outcomes.
  • Ensuring that algorithms are similarly predictive across protected class groups and making adjustments to correct for any disparities in predictiveness or directing the algorithm to develop additional information that will enhance predictiveness for certain groups.
  • Ensuring that ad delivery systems are not resulting in differential charges on the basis of protected characteristics, or charging more to deliver ads to a non-discriminatory audience.
  • Documenting, retaining, or publicly releasing in-depth information about ad targeting functions and internal audits.

In its press release, HUD states that the issuance of this guidance fulfills its pledge to apply civil rights laws to new technology in accordance with President Biden’s Executive Order on the safe, secure, and trustworthy development and use of AI, issued in October 2023, which applies to HUD and other federal agencies. HUD’s tenant screening guidance fulfills its commitment in the Biden-Harris Administration’s Blueprint for a Renters Bill of Rights.

Our Take on the Advertising Guidance

HUD’s advertising guidance goes much further than any similar guidance previously issued by HUD or any other regulator. HUD’s previous charge of discrimination against Facebook in 2019, discussed here, which ultimately led to the settlement between Meta and the Department of Justice in 2023, discussed here, alleged violations of the FHA based on the intentional use of protected characteristics in targeting housing advertisements. Here, HUD goes all-in to suggest that a disparate impact theory can apply to housing and housing-related ads, even asserting that marketing models need to be tested for equally accurate prediction among various groups and assessed for less discriminatory alternatives. We think it is unrealistic for HUD to take this position with regard to advertising models, which necessarily have to rely on less information about a consumer who has not submitted an application or provided any personal information. HUD’s guidance also implies a duty for housing providers to ensure that advertisements are seen equally across members of all protected classes, which will likely be difficult or impossible to achieve, or even test for. It will be interesting to see if other regulators like the Department of Justice adopt these views, or if HUD will remain an outlier with respect to advertising issues.

David N. Anthony

David is an experienced trial attorney with a concentration in litigating financial services and business disputes, including class actions related to the FCRA, FDCPA, TCPA and other consumer protection statutes.

Cindy D. Hanson

Cindy Hanson focuses her practice on class action defense. Cindy has handled hundreds of matters under the Fair Credit Reporting Act, including over two dozen class actions.

Lori Sommerfield

With over two decades of consumer financial services experience in federal government, in-house, and private practice settings, and a specialty in fair lending regulatory compliance, Lori counsels clients in supervisory issues, examinations, investigations, and enforcement actions.

Tim J. St. George

Tim St. George defends institutions nationwide facing class actions and individual lawsuits. He has particular experience litigating consumer class actions, including industry-leading expertise in cases arising under the Fair Credit Reporting Act and its state law counterparts.

Chris Willis

Chris is the co-leader of the Consumer Financial Services Regulatory practice at the firm. He advises financial services institutions facing state and federal government investigations and examinations, counseling them on compliance issues including UDAP/UDAAP, credit reporting, debt collection, and fair lending, and defending them in individual and class action lawsuits brought by consumers and enforcement actions brought by government agencies.

Chris also leverages insights from his litigation and enforcement experience to help clients design new products and processes, including machine learning marketing, fraud prevention and underwriting models, product structure, advertising, online application flows, underwriting, and collection and loss mitigation strategies.

Chris brings a highly practical focus to his legal advice, informed by balancing a deep understanding of the business of consumer finance and the practical priorities of federal and state regulatory agencies.

Chris speaks frequently at conferences across the country on consumer financial services law and has been featured in numerous articles in publications such as the Wall Street Journal, the New York Times, the Washington Post, American Banker, National Law Journal, BNA Bloomberg, and Bank Safety and Soundness Advisor.