California's Latest Law Mandates AI to Disclose Its Identity

California has passed a law requiring AI chatbots to disclose their non-human identity to users, aiming to protect against misleading interactions. Additionally, operators of certain chatbots must report annually on how they handle users' expressions of suicidal ideation.

On October 13th, California enacted a new law targeting the burgeoning field of AI companion chatbots. Governor Gavin Newsom signed Senate Bill 243, which has been described as establishing first-in-the-nation protections for users of AI companion chatbots. The legislation, spearheaded by State Senator Steve Padilla, requires AI developers to ensure transparency with users: if a user could reasonably mistake a chatbot for a human, the chatbot must clearly disclose that it is, in fact, an artificial intelligence.

Beginning next year, the law will require certain chatbot operators to submit annual reports to the Office of Suicide Prevention. These reports must detail the measures implemented to detect and respond to instances where users exhibit suicidal ideation. The office will, in turn, publish this information on its website.

In a statement on the bill's signing, Governor Newsom highlighted the dual nature of emerging technologies like chatbots and social media: they can inspire and educate, but without proper guardrails they risk misleading or harming users, particularly children. As part of a broader push to strengthen child safety online, the governor also signed related legislation concerning age restrictions on hardware. Newsom emphasized the importance of maintaining leadership in AI innovation while prioritizing the protection of children: "Our children's safety is not for sale."