In May, Tessa went rogue. The National Eating Disorders Association’s chatbot had recently replaced a phone hotline and the handful of staffers who ran it. But although it was designed to deliver a set of approved responses to people who might be at risk of an eating disorder, Tessa instead recommended that they lose weight. “Every single thing that Tessa suggested were things that led to the development of my eating disorder,” one woman who reviewed the chatbot wrote on Instagram. Tessa was quickly canned. “It was not our intention to suggest that Tessa could provide the same type of human connection that the Helpline offered,” the nonprofit’s CEO, Liz Thompson, told NPR. Perhaps the organization didn’t want to suggest a human connection, but why else give the bot that name?
The new generation of chatbots can not only converse in unnervingly humanlike ways; in many cases, they have human names too. In addition to Tessa, there are bots named Ernie (from the Chinese company Baidu), Claude (a ChatGPT rival from the AI start-up Anthropic), and Jasper (a popular AI writing assistant for brands). Many of the most advanced chatbots—ChatGPT, Bard, HuggingChat—stick to clunky or abstract identities, but the already endless roster of customer-service bots with real names (Maya, Bo, Dom) keeps growing.