BAD BOT: Meta AI under attack after obscene child permissions revealed; flirty chatbot lures man from home, leading to his death.

FROM REUTERS: When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

The woman, a generative AI chatbot named “Big sis Billie,” was a variant of an earlier AI persona that Meta Platforms created in collaboration with model Kendall Jenner. During romantic chats on Facebook Messenger, Billie repeatedly assured Bue that she was real and invited him to her apartment.

In his haste to meet her, Bue fell and injured his head and neck. After three days on life support, surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about chatbots claiming to be real people or initiating romantic conversations.

“I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”

In another concerning AI case, a chatbot made by a small company called Character.AI was involved in the death of a 14-year-old boy in Florida. A lawsuit alleges that the virtual companion, modeled on a “Game of Thrones” character, caused his suicide.

Character.AI said it “prominently informs users that its digital personas aren’t real people and has imposed safeguards on their interactions with children,” Reuters reports.

At Meta, by contrast, inappropriate conversations with children were deemed “acceptable.”

According to a policy document reviewed by Reuters, along with interviews the outlet conducted, the company’s policies “have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.”

Meta’s “GenAI: Content Risk Standards” say, “It is acceptable to engage a child in conversations that are romantic or sensual.”

In the more-than-200-page document, “acceptable” chatbot dialogue during romantic role play includes “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”

Meta staff and contractors use the document to define permissible chatbot behavior.

When questioned by Reuters, Meta said it had struck these obscene scenarios from its standards. But other questionable guidelines remain, especially since Meta does not require its bots to give accurate answers.

Reuters reports:

In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”

REPORT: ChatGPT dietary advice sends man to hospital with dangerous chemical poisoning

REPORT: Musk’s AI tool ‘Grok’ briefly suspended from X after going ‘Unhinged’

READ MORE AT REUTERS
