Character.AI, a leading platform for chatting and roleplaying with AI-generated characters, unveiled its upcoming video generation model, AvatarFX, on Tuesday. Available in closed beta, the model animates the platform's characters in a variety of styles and voices, from human-like characters to 2D animal cartoons.
AvatarFX distinguishes itself from competitors like OpenAI's Sora because it isn't only a text-to-video generator. Users can also generate videos from preexisting images, allowing them to animate photos of real people.
It's immediately evident how this kind of tech could be leveraged for abuse: users could upload photos of celebrities or people they know in real life and create realistic-looking videos in which they do or say something incriminating. The technology to create convincing deepfakes already exists, but incorporating it into popular consumer products like Character.AI only exacerbates the potential for it to be used irresponsibly.
We've reached out to Character.AI for comment.
Character.AI is already facing safety issues on its platform. Parents have filed lawsuits against the company, alleging that its chatbots encouraged their children to self-harm, to kill themselves, or to kill their parents.
In one case, a 14-year-old boy died by suicide after he reportedly developed an obsessive relationship with an AI bot on Character.AI based on a "Game of Thrones" character. Shortly before his death, he had opened up to the AI about having thoughts of suicide, and the AI encouraged him to follow through on the act, according to court filings.
These are extreme examples, but they show how people can be emotionally manipulated by AI chatbots through text messages alone. With the incorporation of video, the relationships people have with these characters could feel even more realistic.
Character.AI has responded to the allegations against it by building parental controls and additional safeguards, but as with any app, controls are only effective when they're actually used. Oftentimes, kids use tech in ways their parents don't know about.