
In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights
A federal judge in Florida has rejected arguments from an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now.

The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company's chatbots pushed a teenage boy to kill himself.

In an order issued Wednesday, U.S. Senior District Judge Anne Conway is allowing the case to go forward, in what legal experts say is among the latest constitutional tests of artificial intelligence.