Tech Trends: The Intersection of Housing and AI

Nicholas Schmidt is partner and artificial intelligence practice leader at BLDS LLC and founder and CTO of SolasAI. This article is adapted and edited from written testimony delivered last month before the U.S. Senate Subcommittee on Housing, Transportation, and Community Development’s hearing “Artificial Intelligence and Housing: Exploring Promise and Peril.”

Before we can hope to craft effective laws or regulations around the use of artificial intelligence, we must first define it and understand its scope. While the term “AI” often conjures visions of futuristic, sentient machines, in practical terms it encompasses a wide array of far less radical technologies.

Contrary to the popular focus on – and hype around – generative AI, AI’s impact on society extends through many types of machine learning (ML) and AI applications, a number of which are already transforming the housing industry. What can be defined as AI ranges from predictive analytics to automated decision-making systems, all of which affect the affordability, accessibility, and equity of housing in the United States.

In practical terms, ML and AI refer to a class of mathematical algorithms that learn patterns and rules from data. Those learned rules can then be applied to new data to inform real-world decisions. It is therefore important to remember that the rules AI develops are derived mathematically (i.e., not by human insight), but that they are learned from historical data that encode many kinds of human bias. And, as I discuss below, even though the computer derives the rules, human decisions shape how those rules are developed and how the systems are used. Understanding this is essential for writing effective legislation.

Human Decisions Drive Algorithms

Beyond the confusion over what “AI” is, many people are unaware of how much human involvement is required to build and deploy an AI (or “algorithmic decisioning”) system. In fact, particularly among technology companies, there seems to be a fatalistic attitude implying that nothing can be done to improve these systems.

This notion, dangerous in the extreme, is easily proven wrong. There are numerous points, both before and during deployment, at which humans interact with AI systems and shape whether the algorithms make reasonable, safe, and fair decisions. Understanding how heavily an algorithm’s output depends on the decisions of the people who build it is essential, because each of these decision points is an opportunity for humans to make better choices that render the algorithms more fair, accountable, and transparent.

Using a mortgage delinquency algorithm as an example, the human steps required to build such a model include the following (a short sketch after the list illustrates these choices in code):

1. Choosing what the model will define as a delinquency. A data scientist might define delinquency as 60 days or 90 days of non-payment. For people with less income security, but who are likely to be able to pay their bills over a longer period, choosing 60 days instead of 90 days may make the difference between being approved for a loan and being rejected. Importantly, this is an entirely human decision.

2. Choosing which data will be used to predict delinquency. The computer derives its rules only from the data it is provided. The person building the model might include only data clearly related to delinquency (e.g., the existing level of debt, past payment history, etc.), or they may include data that is not clearly causally related to delinquency (e.g., education or purchase history). The choices the modeler makes will affect the fairness of the model, its accuracy, its reasonableness, and its reliability. While algorithms will choose how to weight different data (and possibly exclude certain data altogether), choosing which data to provide is an entirely human decision.

3. Choosing the type of algorithm that will build these rules. Model developers have many options regarding what model architecture they will use to develop the model (i.e., how the model will learn from the data). These include architectures like deep neural networks, gradient-boosted trees, or traditional linear regression. This decision has direct implications in terms of the transparency of the model and whether the model’s decisions will be explainable. Some kinds of models, such as regression models, are inherently explainable. This means it is possible to know exactly how the model arrived at a decision. Others, such as neural networks, are not transparent. The decisions surrounding model architecture have practical and legal implications.

4. Choosing whether the algorithm is sufficiently accurate for the task. No algorithm is perfect. One issue is that a model might be very effective for some people, but not others. For example, it might be highly accurate for high-credit-quality individuals but far less accurate for those with lower credit quality. A person working for the lender will ultimately decide whether that trade-off is acceptable before putting the model into production. More generally, whether to test for different forms of inaccuracy, how to balance the varying costs of inaccuracy, and what minimum level of accuracy is required are all human decisions.

5. Choosing how to implement and use the algorithm. Generally, an AI or ML model does not make a decision on its own. Frequently, only a subset of applicants will be scored by the model; which applicants are scored is a human decision. Once applicants are scored, what that score means and how it is used must be determined. For example, will a cutoff be used, or will the model work alongside another model or subjective rules to make a decision? Will the score be considered in light of other variables or factors? All of these are human decisions. In fact, when all of these decisions are examined together, it is clear that the entire model system, in which the model itself may play only a relatively modest role, is largely made up of interlocking human decisions that produce the ultimate decision.
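
To make these decision points concrete, here is a minimal, hypothetical sketch of a delinquency-model pipeline in Python. The synthetic data, column names, features, and thresholds are illustrative assumptions, not any lender’s actual system; the point is simply to show where human choices (the delinquency definition, the data, the architecture, the accuracy check, and the cutoff) enter the code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Stand-in historical data; in practice this would come from the lender's records.
loans = pd.DataFrame({
    "existing_debt": rng.gamma(2.0, 10_000, n),
    "payment_history_score": rng.normal(650, 75, n),
    "days_past_due": rng.integers(0, 180, n),
})

# Decision 1: what counts as "delinquent"? 60 vs. 90 days is a human choice.
DELINQUENCY_DAYS = 90
loans["delinquent"] = (loans["days_past_due"] >= DELINQUENCY_DAYS).astype(int)

# Decision 2: which data the model may learn from is a human choice.
FEATURES = ["existing_debt", "payment_history_score"]

X_train, X_test, y_train, y_test = train_test_split(
    loans[FEATURES], loans["delinquent"], test_size=0.3, random_state=0
)

# Decision 3: model architecture. A logistic regression is chosen here because
# it is inherently explainable; a neural network would be less transparent.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Decision 4: is the model accurate enough, and for whom? Accuracy should be
# checked for subgroups (e.g., lower-credit-quality applicants), not just overall.
print("Overall accuracy:", model.score(X_test, y_test))

# Decision 5: how the score is used. The cutoff that turns a predicted
# probability into an approve/decline decision is also a human choice.
CUTOFF = 0.20
declined = model.predict_proba(X_test)[:, 1] >= CUTOFF
print("Share declined:", declined.mean())
```

Each commented “Decision” corresponds to one of the numbered choices above; changing any of them changes who is approved, even though the learning algorithm itself is untouched.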

I take pains to point out each of these decision points because, as NIST SP 1270 puts it, “A fallacy of objectivity can often surround these processes, and may create conditions where technology’s capacity and capabilities are oversold.”

This fallacy of objectivity has led many to conclude that not much can be done to regulate or effectively manage these technologies. However, because so much of the use of AI is driven by choices that people make, regulators and the law do not need to “surrender” to these emerging technologies; the space is ripe for regulation of human decisions. In fact, effective regulation of these human decisions can create fairer, more equitable outcomes without stifling innovation in this space. And beyond benefiting consumers, well-defined and reasonable regulations would give companies greater certainty about how they can safely use AI systems.

The Use of AI Systems in Housing

AI systems are increasingly being used across the housing industry, as companies find that many facets of housing decisions can be handled far faster, more cheaply, and more reliably by algorithms than by humans. While, in past years, there was significant discussion about whether algorithms should be making such decisions, we are now in a world where their use is commonplace; companies that do not employ them are at great risk of losing out to those that do.

Many entities have used algorithms to underwrite and price mortgages for decades. Fannie Mae’s Desktop Underwriter (DU) and Freddie Mac’s Loan Product Advisor (LPA) have histories dating back nearly 30 years, while FICO introduced its first consumer score in 1989. More recently, human appraisals are being replaced with automated valuation models (AVMs), which allow fast and – hopefully – accurate assessment of a home’s value. Algorithms are also used to provide and price insurance, screen rental applicants, automate loan servicing, and price rental units.

There are two noteworthy takeaways from these applications of AI in the housing industry. First, there is a long history of the use of algorithms in housing. Correspondingly, there is a wealth of experience in building these algorithms fairly, reliably, and transparently. As such, we do not have to reinvent the wheel when it comes to effectively regulating the responsible use of algorithms.

Second, the reach of algorithms in the housing industry is growing fast, which will profoundly affect people’s housing decisions. It is imperative that we learn from what we know about safely implementing algorithms in housing and apply that knowledge to these newer applications.

The Role of Regulators and Policymakers in Ensuring Responsible Innovation in AI

As we consider regulations for AI in housing, the primary goal should be to maximize the responsible use of this technology: because AI has the potential to cause extreme harm at scale, its safe and sound implementation is paramount. At the same time, because it also has the potential to deliver real and meaningful societal benefits, we should aim to keep regulation from becoming so burdensome that it stifles innovation.

Effectively accomplishing this goal is significantly more likely if we consider two key factors. First, any approach to regulation should not be overly prescriptive. Instead, we should focus on setting clear risk-based principles that encourage and enforce responsible AI development and use, where the most impactful systems (i.e., the systems with the most potential to harm or benefit people, society, or the environment) receive the most oversight.

Second, it will be valuable to recognize that a significant body of existing work, regulation, and industry practice can be applied to AI systems to make them safer. Looking to these tried and validated frameworks and policies should guide our approach to making effective regulations for the use of AI.

The Benefits of a Non-Prescriptive Regulatory Environment

A principles-based and less prescriptive approach to AI regulation can encourage innovation while ensuring the responsible development of AI. It recognizes the dynamic nature of technology and compliance and provides flexibility for continuous improvement and adaptation.

A principles-based framework allows for innovation in both technology and compliance methods. On the compliance side, advancements like improved Less Discriminatory Alternatives (LDA) search and enhanced techniques for providing Adverse Action Notices (AANs) demonstrate how technology and compliance can complement each other and evolve together. Further, innovations in technology, such as Shapley values, explainable boosting machines, and Wells Fargo’s Python Interpretable Machine Learning (PiML) package, illustrate the rapid development of new AI tools and methods that encourage responsible model building. A less rigid but strong regulatory environment encourages such advancements.
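
As one illustration of how explanation tools can support compliance, the following is a minimal, hypothetical sketch of turning per-feature contributions into adverse action reason codes. It uses the closed-form Shapley values of a simple logistic regression (each coefficient times the feature’s deviation from its mean, assuming independent features); packages such as shap and PiML generalize this idea to more complex models. The feature names and data are invented for illustration.

```python
# Hypothetical sketch: ranking the features that pushed an applicant toward a
# denial, as raw material for adverse action reason codes. For a linear model
# (and assuming independent features), each feature's Shapley value is simply
# coef * (x - mean(x)); libraries such as shap or PiML generalize the idea to
# more complex models. All names and data below are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["debt_to_income", "payment_history_score", "credit_utilization"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                     # stand-in applicant data
y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)            # y = 1 means delinquency

def reason_codes(applicant, top_k=2):
    """Return the features contributing most to this applicant's risk score."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    worst_first = np.argsort(-contributions)      # largest push toward denial first
    return [feature_names[i] for i in worst_first[:top_k]]

print(reason_codes(X[0]))                         # top reasons for this applicant
```

In practice, reason codes must reflect the factors the model actually used and be phrased so that applicants can act on them; the point here is that explainability tooling makes this tractable even for complex models.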

Overly prescriptive regulations, on the other hand, risk stifling innovation, as they may lead to a “design-around” mentality in which the focus shifts from responsible development to merely meeting specific regulatory criteria. This can hinder genuine progress and the exploration of new AI techniques, negating the intended benefits of the regulations. It also risks enforcing technical requirements that quickly drift into irrelevance as technology evolves.

Key Principles to Consider for AI Regulation

Four principles are foundational for effective AI regulation: materiality, fairness, accountability, and transparency. Developing regulations with these as guideposts will help ensure that AI systems serve the public interest while advancing technological progress.

Materiality: This principle advocates for a risk-based approach to governing AI systems. By focusing more stringent regulation on higher-risk AI applications, resources will be allocated more effectively. For example, a company should not spend as much time reviewing a marketing model as it would an underwriting model that enormously impacts both consumers and the business. Adopting such a risk-based approach ensures that the systems with the most significant potential impact are monitored most carefully, and it promotes innovation by not overburdening lower-risk initiatives with unnecessary regulatory constraints.

Fairness: The principle of fairness is central to the responsible deployment of AI. Establishing a clear understanding and expectation of fair AI practices is crucial, particularly in applications that significantly impact individuals, such as housing. Regulators should set the expectation that bias and discrimination in AI systems will be identified and mitigated. Existing frameworks for measuring and mitigating disparate impact, disparate treatment, and proxy discrimination should guide further regulation of AI fairness.

Accountability: AI systems must have accountability mechanisms, especially those with high impact. This involves providing individuals affected by AI decisions a right to appeal, ensuring that there is recourse for those who may be adversely impacted. Additionally, entities that deploy AI systems irresponsibly should face appropriate consequences.

Transparency: The principle of transparency mandates clear explanations for decisions made by AI systems. This is fundamental to building trust in AI. Understanding the “why” and “how” behind AI-driven decisions is crucial for public acceptance and confidence in these technologies, and it is also necessary for ensuring that systems are fair.

By focusing on these critical principles – materiality, fairness, accountability, and transparency – we can create a regulatory environment that encourages the development of innovative AI technologies and safeguards against potential harms and biases. This principles-based approach to AI regulation is particularly pertinent in housing, where the impact of AI can have profound implications on people’s lives and the fabric of communities.

Using Existing Regulations and Frameworks to Guide Further AI Regulation

Congress and regulators will not need to devise laws and regulations from scratch to achieve the objectives that I have laid out. There are many regulations, standards, and frameworks with a proven track record of setting standards for human decisions related to AI, holding relevant actors accountable for those standards, and supporting the development and deployment of these technologies. Importantly, in many (but, of course, not all) cases, the industry has welcomed these for providing clear and reasonable standards.

● SR 11-7, a supervision and regulation letter from the Board of Governors of the Federal Reserve System, constructs accountability mechanisms and organizational structures to ensure adequate, risk-based governance of credit models. While the document highlights the importance of technical processes such as testing and monitoring, its primary focus is on principles such as effective governance structures (e.g., independent validation teams with high stature and strong financial incentives), risk management executives with independent reporting chains, and documentation requirements.

● NIST SP 1270, a special publication from the National Institute of Standards and Technology (NIST), describes technical, process, and cultural problems and solutions relating to AI bias. It highlights that many aspects of data and AI systems are strongly influenced by human behavior and decisions, and it suggests that approaches from model risk management (e.g., SR 11-7), coupled with more novel approaches such as structured feedback activities (e.g., bug bounties or red teaming), human-centered design, and information sharing, are strong mitigants for managing bias in AI systems. Another prominent theme of NIST SP 1270 is that basic scientific rigor in AI needs to be improved.

● The disparate impact, disparate treatment, and proxy discrimination framework is a legal doctrine developed over decades under the Fair Housing Act (FHA), the Equal Credit Opportunity Act (ECOA), and Title VII of the Civil Rights Act of 1964. This framework sets forth requirements for measuring discrimination in any decision tool (including AI or ML models) and provides a conceptual approach for mitigating any meaningful disparities found, by conducting an LDA search (one commonly used disparity metric is sketched after this list). Many AI tools that affect consumers in the housing market are likely already covered by this framework via the FHA. Setting the expectation that it would be extended to all high-risk AI use cases throughout the housing industry would ensure that companies move toward adopting fairer models.

● The NIST AI Risk Management Framework puts forward four risk management functions for organizations – (1) Govern, (2) Map (understanding the risk of AI systems in their operational contexts, with less emphasis on their development), (3) Measure, and (4) Manage – and seven trustworthy characteristics for AI systems: (1) Safe, (2) Valid and Reliable, (3) Accountable and Transparent, (4) Explainable and Interpretable, (5) Privacy-enhanced, (6) Fair with Harmful Bias Managed, and (7) Secure and Resilient. Its governance guidance is largely aligned with the risk-based principles laid out in SR 11-7 but introduces additional governance concepts from data privacy, information security, and more recent academic research. The AI Risk Management Framework has two distinct strengths: it acknowledges (1) that many AI risks arise from real-world problems, not bugs in computer code; and (2) that the overlaps and connections between risks, risk controls, and governance must be recognized. It does all this while orienting governance toward traditional risk-based principles focused on human accountability.
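
To make “measuring disparate impact” concrete, here is a minimal, hypothetical sketch of one commonly used disparity metric, the adverse impact ratio (AIR): the rate of favorable outcomes for a protected-class group divided by the rate for the control group. The group labels and decisions below are invented, and a real fair-lending analysis involves far more than this single statistic, but it illustrates the kind of measurement these frameworks call for.

```python
# Hypothetical sketch of the adverse impact ratio (AIR), one common disparate
# impact metric: the favorable-outcome rate for a protected-class group divided
# by the rate for the control group. Group labels and decisions are invented.

import numpy as np

def adverse_impact_ratio(approved, group, protected, control):
    """approved: 0/1 decisions per applicant; group: group label per applicant."""
    approved = np.asarray(approved)
    group = np.asarray(group)
    rate_protected = approved[group == protected].mean()
    rate_control = approved[group == control].mean()
    return rate_protected / rate_control

approved = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
group    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

air = adverse_impact_ratio(approved, group, protected="A", control="B")
print(f"AIR = {air:.2f}")  # ratios below roughly 0.80 are often flagged for review
```

An LDA search then asks whether an alternative model or cutoff could achieve comparable business performance with a smaller disparity.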

The most significant harms associated with AI are not the fantastical scenarios often depicted in science fiction, but real-world issues such as discrimination, data privacy violations, unaccountable decision-making, and fraudulent activities. Effectively regulating AI systems requires recognizing these facts. As AI evolves and impacts more aspects of housing, policymakers, regulators, public advocacy groups, and industry professionals must remain vigilant and proactive. We each play a key role in ensuring that AI systems are not only technically sound and effective, but are also fair, transparent, and accountable.
