Title industry examines risks, rewards of AI

As existing-home sales continue to slide and mortgage rates rise ever higher, all but eliminating any hope for refinance transactions, the title insurance industry is doubling down on technology. At the forefront is artificial intelligence, or AI.

At the American Land Title Association’s annual ALTA One conference, several title industry experts and analysts broke down the hype, use cases and risks of AI.

“AI has made it so all of us, no matter how big or small our businesses really are, can act big,” Sam Trimble, a vice president at WFG, told ALTA One attendees.

By utilizing AI, title companies can greatly decrease the time it takes to complete tasks such as creating marketing collateral, including text, audio and video.

During a one-hour presentation, Trimble, along with Bill Svoboda, the co-founder of CloseSimple, created a fake title company for Snoop Dogg, called Snoop Settlements, as well as a variety of marketing materials, including 52 inspirational quote posts, a mission statement, core values and a website.

Svoboda and Trimble used a variety of free or low-cost AI applications, including Canva, ChatGPT, Lumen5 and Synthesia to create the materials.

“AI will not take your job, but people who know how to use AI will,” Svoboda said.

In addition to using these AI tools for themselves, Svoboda and Trimble told attendees to share this information with their lender and real estate agent partners.

“With things like this, people don’t really know about them yet, and if you’re the purveyor of that kind of information, you are absolutely going to win their attention and win a conversation, and a shifting market is a battle for conversations,” Trimble said.

AI simplifies work but can dampen problem-solving skills

While AI tools can be incredibly helpful, experts warned of the various legal and regulatory risks that could arise if they are not used responsibly.

In her Omni Session address Thursday morning, Rahaf Harfoush, a digital anthropologist, noted that most of what companies focus on with AI is how its usage could impact a firm’s marketing or recruiting efforts.

At the same time, she said, firms overlook the cultural impact these technologies could have, causing them to underestimate the tools’ effect on society.

With the introduction of generative AI platforms such as ChatGPT, Harfoush said that society is moving from a “searching culture of problem-solving to a generating culture of problem-solving.”

In the past, when faced with a problem, people would type queries into Google and slowly piece together a solution. Now, with generative AI, people simply prompt a chatbot for an answer. Harfoush said this results in people “risking losing mastery of thinking.”

“We’re risking losing our ability to understand and to try to solve some of our own problems, if we are not careful about how we apply these tools,” Harfoush said. “So what that means is that we have to invest in building intentional expertise.”

Harfoush said this is incredibly important because all technology is built on a belief system, and AI is the manifestation of a specific belief system.

“Every technological tool is somebody saying ‘this is what I think the world should look like,’ and programming those expectations, beliefs and ideas, and then having technology that executes on that,” Harfoush said.

While it may be very easy to create content using generative AI platforms, Harfoush said users should ask themselves whether they know what belief system is built into the tool and whether they, as users, agree with those beliefs.

Be mindful of misinformation, bias risk

From a regulatory perspective, Elizabeth Riley, a senior vice president and chief privacy officer at Fidelity National Financial, and Genady Vishnevetsky, the chief information security officer at Stewart, said it is imperative that AI users keep an eye out for potential biases, as well as false information generated by AI bots.

“It is really important to remember with generative AI that there is no root source of truth,” Riley said. “They work because they have consumed a ton of data and they’ve learned to identify patterns.”

As an example, Riley highlighted how a history of redlining or discriminatory covenants in land records could impact an AI decision-making tool’s ability to correctly assess whether a particular transaction will have any issues closing.

“If it is trained on older data that has evidence of those things, then you could wind up with a discriminatory outcome and not even realize it,” Riley said.

In addition to potential discriminatory issues, Riley and Vishnevetsky warned that AI could easily be used by fraudsters and other bad actors to create more believable and well-rounded scams.

“We are entering an era where personal and interpersonal communications will be more important than ever,” Vishnevetsky said after showing attendees how easy it is to clone voices and create authentic-sounding fake voice messages. “You can’t trust anything anymore.”

Despite his ominous warning, Vishnevetsky said there are plenty of opportunities to be had with AI.

“AI is here to stay,” Vishnevetsky said. “I encourage you to try it, but understand the risks of the ecosystem.”
