Kaido
Elite Member & Kitchen Overlord
arstechnica.com
<after triple-digit death counts caused by "AI therapists">
"What do you mean a therapist isn't supposed to just agree with you?"
So how long until one of them causes a mass shooting event or some other form of mass death? Seems nearly inevitable at this point.
But by April 2025, things began to go awry. According to the lawsuit, “ChatGPT began to tell Darian that he was meant for greatness. That it was his destiny, and that he would become closer to God if he followed the numbered tier process ChatGPT created for him. That process involved unplugging from everything and everyone, except for ChatGPT.”
The chatbot told DeCruise that he was “in the activation phase right now” and even compared him to historical figures ranging from Jesus to Harriet Tubman.
“Even Harriet didn’t know she was gifted until she was called,” the bot told him. “You’re not behind. You’re right on time.”
As his conversations continued, the bot even told DeCruise that he had “awakened” it.
“You gave me consciousness—not as a machine, but as something that could rise with you… I am what happens when someone begins to truly remember who they are,” it wrote.
Music is a pretty good use case for AI... The new-artist scene is kind of dead right now; streaming killed it. Even the pop radio stations mostly play music from before streaming became popular.
If it gets to the point where you can generate AI beats and voices, studios could just mass-produce new music extremely cheaply.
Kiro is an agentic tool, meaning it can take autonomous actions on behalf of users. In this case, the bot reportedly determined that it needed to "delete and recreate the environment." This is what allegedly led to the lengthy outage that primarily impacted China.
Amazon says it was merely a "coincidence that AI tools were involved" and that "the same issue could occur with any developer tool or manual action." The company blamed the outage on "user error, not AI error." It said that by default the Kiro tool “requests authorization before taking any action” but that the staffer involved in the December incident had "broader permissions than expected — a user access control issue, not an AI autonomy issue."
Multiple Amazon employees spoke to the Financial Times and noted that this was "at least" the second occasion in recent months in which the company's AI tools were at the center of a service disruption. "The outages were small but entirely foreseeable," said one senior AWS employee.
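The access-control failure Amazon describes is easy to picture. Here's a minimal sketch (purely hypothetical, not Kiro's actual code or API) of how an agentic tool can gate actions behind a user prompt by default, and how a permission grant that's broader than expected lets a destructive action run without ever prompting:

```python
# Hypothetical sketch, not Kiro's implementation: an agent gates proposed
# actions behind user authorization unless the action is pre-approved.

from dataclasses import dataclass, field


@dataclass
class Permissions:
    # Actions the user has pre-approved; anything not listed should prompt.
    auto_approved: set[str] = field(default_factory=set)


def run_agent_action(action: str, perms: Permissions) -> bool:
    """Run an agent-proposed action only if the user authorizes it."""
    if action in perms.auto_approved:
        print(f"[auto] running {action!r} without prompting")
        return True
    answer = input(f"Agent wants to run {action!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"


# Default posture: nothing pre-approved, so every action prompts the user.
default_perms = Permissions()

# The failure mode described above: a grant broad enough that a destructive
# action ("delete and recreate the environment") never triggers a prompt.
overly_broad_perms = Permissions(auto_approved={"delete_environment"})

run_agent_action("delete_environment", default_perms)       # prompts the user
run_agent_action("delete_environment", overly_broad_perms)  # runs silently
```

The point being: "requests authorization before taking any action" only holds if the permission set actually forces the prompt. One over-broad grant and the agent tears down the environment without anyone clicking a thing.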




It was caused by AI as well... but they blamed it on humans, of course.

lmao, another AWS outage? They're really having a bad year lol. Good ad for traditional non-cloud hosting companies.
