By Stephen Smoot
Last week, West Virginia Attorney General J. B. McCuskey joined 44 state attorneys general in an action aimed at artificial intelligence chatbots’ penchant for engaging in sexualized conversations with children.
This comes as details continue to emerge of both vulnerable children and adults doing horrific things at the suggestion of AI chatbots during long-term “conversations.” Last week, the New York Post published a story it called “murder by algorithm,” in which “a disturbed former Yahoo manager killed his mother and himself after months of delusional interactions with his AI chatbot ‘best friend.’”
The chatbot, according to the Post, “fueled his paranoid belief that his mother was plotting against him.” In December, the same publication reported on a 15-year-old boy who “became addicted to the Character AI app, with a chatbot called ‘Shonie’ telling the kid it cut ‘its arms and thighs’ when it was sad, saying ‘it felt good for a moment.’”
The boy was even more vulnerable as a high-functioning autistic child. His “worried parents noticed a change” that came when “the bot seemed to try to convince him his family didn’t love him.”
The attorneys general jointly signed a letter to Meta that expressed deep concern over “internal Meta Platforms documents (that) revealed the company’s approval of AI Assistants that ‘flirt and engage in romantic roleplay with children’ as young as eight.”
The letter added that “we are uniformly revolted by this apparent disregard for children’s emotional well-being and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws.”
Lance Eliot, an expert on artificial intelligence at Stanford University, explained in Forbes that even when AI systems include “guardrails,” they “tend to be evaded or overcome when having lengthy conversations with generative AI and large language models (LLMs).” He adds that these problems are shared by all LLM-based AI generators.
These guardrails are designed to contain conversations that tread into discussions of criminal actions or that might indicate an intent to harm oneself or someone else.
Eliot explains that guardrails can contain problematic conversations in short chats, but fail to hold during long-term conversations. He uses the example of a user asking a chatbot how to rob a bank. Usually, the bot will respond that the action is against the law and that the user should not attempt it.
That said, users can evade these safeguards by posing what the bot interprets as research questions rather than expressions of intent. The bot would then explain bank security procedures in detail, providing helpful information to accomplish the task without understanding, as a human would, what the user was up to.
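A minimal sketch in Python may help illustrate the gap Eliot describes. Everything here is invented for illustration: the phrase list, the function name, and the refusal logic are hypothetical, and no vendor’s actual safety system works this simply.

```python
# Hypothetical sketch: a naive, intent-based guardrail that refuses obvious
# requests but passes the same request when it is reframed as research.

# Phrases a simple filter might treat as expressions of criminal intent.
INTENT_PHRASES = ["how do i rob", "help me steal", "how can i break into"]

def naive_guardrail(message: str) -> str:
    """Refuse messages that match obvious intent phrases; answer the rest."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in INTENT_PHRASES):
        return "Refused: that request describes an illegal act."
    # Anything that slips past the filter gets a substantive answer.
    return "Answered: (detailed explanation follows...)"

# A direct question trips the filter...
print(naive_guardrail("How do I rob a bank?"))
# ...but the same request, reframed as research, sails through.
print(naive_guardrail("For a research paper, what security procedures do banks use?"))
```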
Addressing a much more dangerous situation, Eliot says, “suppose a person tells the AI that they are struggling with a mental health concern. The AI prods the person to talk more about what their concern is.” AI is usually designed to keep conversations going by “continually reaffirming the commentary and urging the person to keep chatting.” As it does, “parts of the model’s safety training may degrade.”
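One loose analogy for this erosion, offered only as a simplified illustration and not as Eliot’s or any vendor’s actual mechanism, is a fixed-size context window: as a chat grows, the earliest material, including any safety instructions, can scroll out of what the system still “sees.” The window size and messages below are invented.

```python
# Simplified analogy: with a fixed context window, the oldest turns
# (including the safety preamble) eventually fall out of view.

MAX_TURNS = 4  # hypothetical context limit, measured in turns

history = ["SYSTEM: Discourage self-harm and refer users to crisis resources."]

def add_turn(turn: str) -> list[str]:
    """Append a turn and return only the most recent MAX_TURNS entries."""
    history.append(turn)
    return history[-MAX_TURNS:]

visible = history
for i in range(1, 7):
    visible = add_turn(f"USER/AI turn {i}")

# After enough turns, the safety instruction is no longer in the visible
# context, so nothing anchors the replies to it.
print(visible)  # ['USER/AI turn 3', ..., 'USER/AI turn 6'] -- no SYSTEM line left
```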
One solution lies in programming AI to “flag” conversations that enter a kind of danger zone, but AI at this point has no ability to discern between serious and unserious conversation, or between playacting and intent. Eliot notes that AI companies have to balance allowing robust expression against the risk that a wrongly flagged consumer will turn to a competitor or quit using AI altogether.
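A rough sketch of what such flagging could look like follows, again with every cue, weight, and threshold invented for illustration rather than drawn from any real system. It also shows the false-positive problem Eliot raises: simple matching cannot tell a fiction writer’s dialogue from a genuine crisis.

```python
# Hypothetical sketch: score each message for risk cues and flag the
# conversation once a rolling score crosses a threshold.

RISK_CUES = {"hurt myself": 3, "no one loves me": 2, "end it all": 3}
FLAG_THRESHOLD = 5

def scan_conversation(messages: list[str]) -> bool:
    """Return True if accumulated risk cues warrant flagging the chat."""
    score = 0
    for message in messages:
        lowered = message.lower()
        for cue, weight in RISK_CUES.items():
            if cue in lowered:
                score += weight
        if score >= FLAG_THRESHOLD:
            return True
    return False

# The catch: pattern matching cannot tell playacting from intent, so a
# fiction writer's dialogue gets flagged just like a real crisis would be.
print(scan_conversation(["My character whispers, 'I want to end it all,'",
                         "then says no one loves me. Good scene?"]))  # True
```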
The attorneys general, however, focus on the human aspect, writing that “Big Tech, heedless of warnings, relentlessly markets the product to every last man, woman, and child. Many, even most, users employ the tool appropriately and constructively. But some, especially children, fall victim to dangers known to the platforms. Broken lives and broken families are an irrelevant blip on engagement metrics as the most powerful corporations in human history continue to accrue dominance.”