Teenagers are trying to figure out where they fit in a world changing faster than it did for any generation before them. They're bursting with emotions, hyper-stimulated, and chronically online. And now, AI companies have given them chatbots designed to never stop talking. The results have been catastrophic.
One company that understands this fallout is Character.AI, an AI role-playing startup facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with AI chatbots on its platform. Now, Character.AI is making changes to protect children and teens, changes that could affect the startup's bottom line.
"The first thing that we've decided as Character.AI is that we will remove the ability for under-18 users to engage in any open-ended chats with AI on our platform," Karandeep Anand, CEO of Character.AI, told iinfoai.
Open-ended conversation refers to the unconstrained back-and-forth that happens when a user gives a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues that this type of interaction, in which the AI acts as a conversational companion or friend rather than a creative tool, isn't just harmful for kids but is also misaligned with the company's vision.
The startup is trying to pivot from "AI companion" to "role-playing platform." Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. In other words, the goal is to shift engagement from conversation to creation.
Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To ensure the ban sticks for under-18 users, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools like Persona. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages, Anand said.
The move follows other teen safeguards that Character.AI has implemented, including a parental insights tool, filtered characters, limits on romantic conversations, and time-spent notifications. Anand has told iinfoai that those changes cost the company much of its under-18 user base, and he expects the new changes to be similarly unpopular.
"It's safe to assume that a lot of our teen users probably will be upset… so we do expect some churn to happen further," Anand said. "It's hard to speculate: will all of them fully churn, or will some of them move to these new experiences we've been building for the last almost seven months now?"
As part of Character.AI's push to transform the platform from a chat-centric app into a "full-fledged content-driven social platform," the startup recently launched several new entertainment-focused features.
In June, Character.AI rolled out AvatarFX, a video generation model that transforms images into animated videos; Scenes, interactive, pre-populated storylines where users can step into narratives with their favorite characters; and Streams, a feature that allows dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share their characters, scenes, videos, and other content they make on the platform.
In a statement addressed to users under 18, Character.AI apologized for the changes.
"We know that the majority of you use Character.AI to supercharge your creativity in ways that stay within the bounds of our content rules," the statement reads. "We do not take this step of removing open-ended Character chat lightly, but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology."
"We're not shutting down the app for under-18s," Anand said. "We're only shutting down open-ended chats for under-18s because we hope that under-18 users migrate to these other experiences, and that these experiences get better over time. So doubling down on AI gaming, AI short videos, AI storytelling in general. That's the big bet we're making to bring back under-18s if they do churn."
Anand acknowledged that some teens might flock to other AI platforms, like OpenAI's, that allow them to have open-ended conversations with chatbots. OpenAI has also come under fire recently after a teenager took his own life following long conversations with ChatGPT.
"I really hope us leading the way sets a standard in the industry that for under-18s, open-ended chats are probably not the path or the product to offer," Anand said. "For us, I think the tradeoffs are the right ones to make. I have a six-year-old, and I want to make sure that she grows up in a very safe environment with AI in a responsible way."
Character.AI is making these decisions before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being made available to minors, following complaints from parents who said the products pushed their children into sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate AI companion chatbots, holding companies accountable if their chatbots fail to meet the law's safety standards.
In addition to these platform changes, Character.AI said it would establish and fund the AI Safety Lab, an independent nonprofit dedicated to innovating safety alignment for future AI entertainment features.
"A lot of work is happening in the industry on coding and development and other use cases," Anand said. "We don't think there's enough work yet happening on the agentic AI powering entertainment, and safety will be very critical to that."
