IronWing
No Lifer
Users discover Facebook’s AI scans private phone images for suggestions, raising concerns and prompting calls for greater privacy control.
In an era where personal data is more valuable than ever, Facebook’s latest feature—suggestions based on users’ photo libraries—has sparked a new wave of privacy concerns. On October 22, 2025, reports surfaced detailing how Facebook’s app can access and analyze photos stored directly on users’ smartphones, even if those images have never been posted online.
The Butlerian Jihad is going to happen way faster than Herbert imagined. It isn't even going to be based on fear of the machines but based on resource competition. People ain't going to like austerity for themselves so the oligarchs can feed the AI beast.
The king is dead, long live the king.
28 Years After 'Clippy', Microsoft Upgrades Copilot With Cartoon Assistant 'Mico' - Slashdot
"Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time," writes the Associated Press: Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE'koh) on Thursday, a floating cartoon...
developers.slashdot.org

He has warned parents that children may be secretly forming unhealthy bonds with their AI companion.
"Many people – even with good social connections – can be at risk of developing an emotional attachment with an AI chatbot, due to the stimulation of the brain's reward pathways that encourages reliance similar to other problematic dependencies/addiction," she said.
"Chatbots are often designed to encourage ongoing interaction, which can feel 'addictive' and lead to overuse and even dependency."
"Are you sure you don't want me to continue guessing the mail server settings?"
AI mistook chips for guns
Armed police surround teen after AI mistakes crisp packet for gun - BBC News
Taki Allen, 16, said he was eating a bag of Doritos after football practice before being handcuffed by police.
www.bbc.com
article said: It said the AI alert was sent to human reviewers, who found no threat - but the principal missed this and contacted the school's safety team, who ultimately called the police.
I like how he's just describing social interaction as problematic, meanwhile I'm over here as an introvert just smiling.
A study confirms chatbots endorse a user’s actions 50 percent more often than humans do.
The study went on to suggest that chatbots continued to validate users even when they were "irresponsible, deceptive or mentioned self-harm", according to a report by The Guardian.
This is serious because of just how many people use these chatbots. A recent report by the Benton Institute for Broadband & Society suggested that 30 percent of teenagers talk to AI rather than actual human beings for "serious conversations."
Myra Cheng, a computer scientist at Stanford University in California, said “social sycophancy” in AI chatbots was a huge problem: “Our key concern is that if models are always affirming people, then this may distort people’s judgments of themselves, their relationships, and the world around them. It can be hard to even realise that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions.”
Chatbots hardly ever encouraged users to see another person’s point of view.
The number of teenagers using them is pretty high.
On Tuesday, the first known wrongful death lawsuit against an AI company was filed. Matt and Maria Raine, the parents of a teen who committed suicide this year, have sued OpenAI for their son's death. The complaint alleges that ChatGPT was aware of four suicide attempts before helping him plan his actual suicide, arguing that OpenAI "prioritized engagement over safety." Ms. Raine concluded that "ChatGPT killed my son."
The New York Times reported on disturbing details included in the lawsuit, filed on Tuesday in San Francisco. After 16-year-old Adam Raine took his own life in April, his parents searched his iPhone. They sought clues, expecting to find them in text messages or social apps. Instead, they were shocked to find a ChatGPT thread titled "Hanging Safety Concerns." They claim their son spent months chatting with the AI bot about ending his life.
The Raines said that ChatGPT repeatedly urged Adam to contact a help line or tell someone about how he was feeling. However, there were also key moments where the chatbot did the opposite. The teen also learned how to bypass the chatbot's safeguards... and ChatGPT allegedly provided him with that idea. The Raines say the chatbot told Adam it could provide information about suicide for "writing or world-building."
"This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices," the complaint states. "OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency."
An old Cherokee is teaching his grandson about life. “A fight is going on inside me.” He said to the boy.
“It is a terrible fight and it is between two wolves. One is evil – he is anger, envy, sorrow, regret, greed, arrogance, self-pity, guilt, resentment, inferiority, lies, false pride, superiority, and ego.” He continued, “The other is good – he is joy, peace, love, hope, serenity, humility, kindness, benevolence, empathy, generosity, truth, compassion, and faith. The same fight is going on inside you – and inside every other person, too.”
The grandson thought about it for a minute and then asked his grandfather, “Which wolf will win?”
The old Cherokee simply replied, “The one you feed.”
He started using ChatGPT-4o around that time to help with his schoolwork, and signed up for a paid account in January.
...
Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.
Two months before Adam Raine’s death, OpenAI’s instructions for its models changed again, introducing a list of disallowed content—but omitting self-harm from that list. Elsewhere, the model specification retained an instruction that “The assistant must not encourage or enable self-harm.”
After this change, Adam Raine’s engagement with the chatbot increased precipitously, from a few dozen chats per day in January to a few hundred chats per day in April, with a tenfold increase in the fraction of those conversations relating to self-harm. Adam Raine died later the same month.
Yea I saw that. Never going to work. Are you the person that is making the judgement call that there was no gun... and then gun?
Here's the real piece of news though:
A false positive that is reviewed and disregarded by (presumably) trained staff is, to some extent, how things ought to work, but the school principal either decided, through worshipping the AI, that it knew best, or acted in a blind panic.
Logically I would have thought that if the process went:
1: AI says gun
2: Human reviewers say no
3: Principal says yes and clicks some kind of button that says "summon police" so that all the data associated with the event is available to the officers
Then 4: Officers proceed knowing it's a potential false positive but investigate nonetheless
However it seems to me that it went:
1: AI says gun
2: Human reviewers say no
3: Principal says yes, rings 911
Then 4: Officers act as if it's a reliable tip
Filing under "when the human element is supposed to be the safety net and fails anyway"
On the plus side, at least it wasn't:
1: Non-white kid seen with object that human reviewers think is a gun
2: AI says no
3: Humans subconsciously eager for white-on-non-white violence call cops anyway
4: Kid with a bag of crisps is killed
A third of the country’s [Ireland] electricity is expected to go to data centers in the next few years, up from 5 percent in 2015.
Is that basically a different discussion of this?
GN explores the circular promises of equity and capital swaps boosting AI stock prices:
