Medical experts recruited to advise Crisis Text Line say they weren’t consulted about its data sharing and objected to the arrangement. “This would never have passed a sniff test,” one says.
Five years after a top suicide prevention hotline got off the ground, it spun out a for-profit arm, in part to help keep its lights on.
The organization, Crisis Text Line, provides around-the-clock support through text messaging for people struggling with mental health emergencies including thoughts of suicide. The then-fledgling nonprofit decided it would collect and anonymize data from its text conversations with people in crisis and share that information with its for-profit spinoff, Loris.ai, to help it build and sell customer service software. Crisis Text Line was then Loris’ majority shareholder, and the plan was for Loris to eventually share some of its revenue with the nonprofit.
Crisis Text Line started as the passion project of a tech entrepreneur with close ties to Silicon Valley and backing from some of its best-known founders and billionaires. The nonprofit would also strike partnerships with Meta and Google — some of the most powerful companies on the planet that also helped pioneer the collecting and selling of individuals’ personal information as a standard way to generate revenue.
It’s perhaps no surprise, then, that Crisis Text Line’s Loris spinoff, formed in 2018, was supported by the tech industry and widely applauded in tech publications from the Bay Area to New York, where the nonprofit is based. And against the backdrop of Silicon Valley, where data sharing had become the norm, it also may not have occurred to Crisis Text Line that some might view what it was doing as wrong.
The backlash that followed a Politico report on the arrangement ultimately forced Crisis Text Line to cease data-sharing with the company it had created years earlier with little pushback. But had the organization consulted its own data ethics advisory board, it might have found a way to serve its mission without the controversy.
Megan Ranney, an emergency room doctor, academic dean for Brown University’s School of Public Health and herself a member of Crisis Text Line’s ethics advisory board, called the nonprofit’s sharing of data with its for-profit subsidiary “a poor decision on multiple levels.”
“It was not an illegal decision; it was just not a decision that I would have ever advised anyone to make,” she told Forbes. “It is nothing that I would have wanted to be associated with.”
The advisory board that wasn’t
Crisis Text Line assembled a data, ethics and research advisory board in 2015 to, in the words of its own website, “advise us on data privacy and security.” It was a who’s who of top academics and medical experts affiliated with Ivy League universities and prestigious health institutions like the Kaiser Family Foundation. Their names and titles were featured prominently on Crisis Text Line’s website as recently as this month.
Yet the ethics board was not consulted on the organization’s decision to share sensitive data from people’s darkest moments with its for-profit spinoff, members of the board say. Some members told Forbes they did not know about the data-sharing arrangement and that the committee itself hadn’t been convened in years. And one of its members, whose name was listed on the organization’s website until this month, died a year ago.
“We are all profoundly disappointed that we were not engaged at any point in discussions about Loris — like, literally at any point,” Ranney says. “This feels like a betrayal of the trust of the folks that used Crisis Text Line,” she adds. “It’s also a betrayal of the trust of those of us that were willing to lend our name and our expertise to the organization.”
Within about a day of Forbes contacting ethics board members in February, their names and titles were removed from Crisis Text Line’s website. One of the names that had remained on the site’s ethics board list until this month was that of Dr. Robert J. Levine of the Yale School of Medicine, who died last February.
Crisis Text Line told Forbes in an emailed statement that advisory board members provide guidance “on a periodic basis.” Asked when the committee had last met, Crisis Text Line did not respond.
“In the nonprofit sector, advisory board members offer advice and opinions without governing power or responsibility,” the email said. Asked why the data ethics board had not, in the words of its own members, been engaged in discussions about the data-sharing with Loris, Crisis Text Line did not respond directly. Instead, it emphasized that “the primary purpose of assembling the data, ethics and research advisory group was to seek advice on the design of the organization’s academic research collaboration program.”
Crisis Text Line’s CEO is Dena Trujillo. Loris CEO Etie Hertz did not respond to a request for comment. Some members of the data ethics committee did not respond to multiple requests for comment or declined to do so.
Eric Perakslis, the chief science and digital officer at the Duke Clinical Research Institute, says that while Crisis Text Line is in no way obligated to use the experts it had assembled, the Loris case is one where “they really would have benefited from the use of that.” Perakslis went on to describe Crisis Text Line’s relationship with its advisory board to date as possible “window dressing theater.”
“I would argue that this [organization] would be considered guilty of that, even if it was done unintentionally,” he told Forbes, by having the names of “these really, really smart, really knowledgeable people who they were not including in business decisions — in quite frankly, business decisions that totally horrify a lot of those people now that they realize what they were.”
Last month, days after publication of the Politico report, Crisis Text Line announced that it had ended its data-sharing with Loris and would ask the for-profit entity to delete the anonymized data it had been given by the crisis intervention hotline. “During these past days, we have listened closely to our community’s concerns,” the hotline said in a statement posted on its website. “We understand that you don’t want Crisis Text Line to share any data with Loris.”
Microsoft researcher and Crisis Text Line board member danah boyd shared a personal blog post on Twitter that same day (“not the perspective of the organization or the board”) about how the Loris data-sharing team-up had given her pause. “I struggled with this decision at the time and ever since. … This decision weighed heavily on me, but I did vote in favor of it,” she wrote. “Knowing what I know now, I would not have.”
Late Wednesday, Crisis Text Line sent a follow-up email to Forbes saying that “we are incredibly grateful to our advisors and the expertise they have offered so we can continually become a better and stronger organization that supports an increasing number of people in need.” The group added that “we are committed to reconstituting our advisory groups and meaningfully engaging our advisors more consistently going forward.”
The nonprofit, for-profit team-up
Crisis Text Line and Loris shared the same CEO, Dress for Success founder Nancy Lublin, for at least a year and a half. Until this month, Loris’ website described its relationship with Crisis Text Line as “a blueprint for ways for-profit companies can infuse social good into their culture and operations, and for nonprofits to prosper.”
Anonymized data the hotline had collected from its text conversations with people in crisis became an important tool toward that goal. Crisis Text Line, which has grown rapidly on both sides of the Atlantic, says it wields “the largest mental health data set in the world.” In an April 2021 press release, Loris said that insights gleaned from nearly 200 million of Crisis Text Line’s messages had helped the company develop AI-powered customer service tools that are aimed at assisting customer service agents in live chats with upset customers.
Loris’ “AI assistant” plugs into platforms like Zendesk, Salesforce and Twilio to help coach companies’ customer service reps through tough conversations and de-escalate disputes — interpreting language in the chats and providing real-time recommendations on how an agent can respond. The AI was trained in part by analyzing Crisis Text Line techniques and troves of its anonymized data, a fact Loris has touted in marketing as its “proprietary edge.”
“We utilize our extensive experience working through the most challenging conversations in the crisis space to create a machine-learning based software platform that helps companies handle their hard customer conversations with empathy,” Loris says on its LinkedIn page. “Through millions of conversations with people in extreme crisis situations, this approach has proven to lead to consistently positive outcomes that are now being applied to the business arena.”
Crisis Text Line told Politico that the two entities were “mission-aligned” — that it viewed the relationship as “a valuable way to put more empathy into the world” — and wrote on its website that it founded Loris “to leverage the lessons learned from operating our service to make customer support more human.”
“Simply put,” Crisis Text Line said of the Loris arrangement in 2018, “why sell t-shirts when you can sell the thing your organization does best?”
Crisis Text Line’s data-sharing with Loris was legal, and it was also disclosed in the hotline’s terms of service, which texters receive a link to when they reach out for help. (“By texting further with us, you agree to our Terms,” says the first, automated message.)
“Someone seeking help in a crisis shouldn’t have to worry about their data being sold for a giant corporation’s profit.”
Yet independent privacy and ethics experts expressed concerns. Some argued that Crisis Text Line may not have gotten meaningful consent from texters — who they said were unlikely to read some 50 paragraphs of disclosures in the midst of an emergency — and that making commercial use of this particular data, even if anonymized, was wrong. (Crisis Text Line’s general counsel, Shawn Rodriguez, told Politico that “sensitive data from conversations is not commercialized, full stop.”) Although it’s common for medical data to be anonymized and put to use, experts said that’s typically for research purposes or to improve care for patients, and that non-medical use of the data by a for-profit company was unusual.
Regulators and politicians also raised objections to the data-sharing practice. Federal Communications Commissioner Brendan Carr was among those who spoke out against the nonprofit’s “data-sharing and monetization practices” after the initial report in Politico. Carr took to Twitter to demand that Crisis Text Line and its for-profit spinoff end their “disturbingly dystopian” sharing of sensitive mental health data. And though he welcomed their eventual decision to cease data-sharing, Carr said in an email to Forbes last week that “questions remain regarding their current practices” and that as a result, “my office is continuing to meet with Crisis Text Line leadership to ensure that its data practices comply with the law.” Members of his staff met with Crisis Text Line leaders last week to examine their practices more closely, and Carr himself will attend another meeting on the matter in March. Members of Congress have also condemned the data-sharing; Democratic Senator Kirsten Gillibrand of New York wrote on Twitter that “someone seeking help in a crisis shouldn’t have to worry about their data being sold.”
Some of the same concerns were later raised by members of the ethics board.
“Data-sharing with a for-profit company? That’s bad… and you can’t get consent from people for that in the moment when they’re signing up with [a] crisis,” Ranney says. “This would never have passed a sniff test.”
Members of the ethics board also said they had not been consulted on Crisis Text Line’s response to outcry after the Politico story was published. Crisis Text Line put out a statement on Twitter emphasizing that “experts in the fields of data and mental health have praised our data ethics, policies and practices” and pointing to “leading academics” who’d validated Crisis Text Line’s “ethical data sharing” through a study published in the Journal of Medical Internet Research in 2019. (More than half of the authors of that paper, which focused on data-sharing with academic researchers rather than with Loris, were members of Crisis Text Line’s data ethics advisory board.)
“There was never any discussion of sharing data with a for-profit entity — like zero ever, nothing.”
That response drew fire from experts in technology ethics.
“What bothers me most about this statement is that, in addition to not addressing the core concern about for-profit data sharing, it implies (seemingly inaccurately) that this practice has been externally vetted,” Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who studies tech ethics and internet law, responded on Twitter.
Ranney, an author of that study, says the nonprofit’s defense of its actions took the research out of context.
“There was never any discussion of sharing data with a for-profit entity — like zero ever, nothing,” she told Forbes. She says academic researchers’ access to data in a restrictive, privacy-protected setting, for the benefit of Crisis Text Line’s population, “is in no way, shape or form the same thing as providing unfettered access to data for a for-profit company.” (Crisis Text Line said Loris did not have open-ended access to its data and had told Politico that such sharing might occur every few months. The organization said later that Loris had not accessed any data since the beginning of 2020.)
Ranney said she and others on the advisory board still “believe strongly” in the value of Crisis Text Line and its original mission to help people. The pandemic has intensified the need for mental health services, and the organization “plays a critically important role” in helping to meet that demand, she said. “The question is: was this a mistake that they are now in good faith going to address?”