The anti-AI thread


IronWing

No Lifer
Jul 20, 2001
72,907
34,034
136
The Butlerian Jihad is going to happen way faster than Herbert imagined. It isn't even going to be based on fear of the machines but based on resource competition. People ain't going to like austerity for themselves so the oligarchs can feed the AI beast.
 
  • Like
Reactions: marees

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
51,688
7,291
136
Facebook Photo Library Feature Sparks Privacy Backlash:


Users discover Facebook’s AI scans private phone images for suggestions, raising concerns and prompting calls for greater privacy control.

In an era where personal data is more valuable than ever, Facebook’s latest feature—suggestions based on users’ photo libraries—has sparked a new wave of privacy concerns. On October 22, 2025, reports surfaced detailing how Facebook’s app can access and analyze photos stored directly on users’ smartphones, even if those images have never been posted online.
 

mikeymikec

Lifer
May 19, 2011
21,059
16,297
136
IronWing said:
The Butlerian Jihad is going to happen way faster than Herbert imagined. It isn't even going to be based on fear of the machines but based on resource competition. People ain't going to like austerity for themselves so the oligarchs can feed the AI beast.

I wouldn't bet on it. The lower classes are too engaged in culture wars to notice who the real enemy is, and they're being played like fiddles.
 
  • Like
Reactions: cytg111 and marees

manly

Lifer
Jan 25, 2000
13,302
4,079
136
  • Haha
Reactions: mikeymikec

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
51,688
7,291
136
The king is dead, long live the king.



Dang that brings back memories lol
 
  • Like
Reactions: marees

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
51,688
7,291
136
Psychologist Warns AI Chatbots Can Trigger the Same Reward Pathways as Addictions, Making Emotional Dependence Hard to Break


She has warned parents that children may be secretly forming unhealthy bonds with their AI companion.


"Many people – even with good social connections – can be at risk of developing an emotional attachment with an AI chatbot, due to the stimulation of the brain's reward pathways that encourages reliance similar to other problematic dependencies/addiction," she said.

"Chatbots are often designed to encourage ongoing interaction, which can feel 'addictive' and lead to overuse and even dependency."
 

mikeymikec

Lifer
May 19, 2011
21,059
16,297
136
AI mistook chips for guns


Here's the real piece of news though:

article said:
It said the AI alert was sent to human reviewers who found no threat - but the principal missed this and contacted the school's safety team, who ultimately called the police.

A false positive that is reviewed and disregarded by (presumably) trained staff is, at least to some extent, the way things ought to be, but the school principal either decided, through worship of the AI, that it knew best, or acted in a blind panic.

Logically I would have thought that if the process went:

1: AI says gun
2: Human reviewers say no
3: Principal says yes and clicks some kind of button that says "summon police" so that all the data associated with the event is available to the officers
Then 4: Officers proceed knowing it's a potential false positive but investigate nonetheless

However it seems to me that it went:

1: AI says gun
2: Human reviewers say no
3: Principal says yes, rings 911
Then 4: Officers act as if it's a reliable tip

Filing under "when the human element is supposed to be the safety net and fails anyway"

On the plus side, at least it wasn't:

1: Non-white kid seen with object that human reviewers think is a gun
2: AI says no
3: Humans subconsciously eager for white-on-non-white violence call cops anyway
4: Kid with a bag of crisps is killed
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
51,688
7,291
136
Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic:


A study confirms they endorse a user’s actions 50 percent more often than humans do.

Sounds nice for a supportive environment, right? Except there IS a downside:

The study went on to suggest that chatbots continued to validate users even when they were "irresponsible, deceptive or mentioned self-harm", according to a report by The Guardian.

The number of teenagers using it is pretty high:

This is serious because of just how many people use these chatbots. A recent report by the Benton Institute for Broadband & Society suggested that 30 percent of teenagers talk to AI rather than actual human beings for "serious conversations."

‘Sycophantic’ AI chatbots tell users what they want to hear, study shows

Myra Cheng, a computer scientist at Stanford University in California, said “social sycophancy” in AI chatbots was a huge problem: “Our key concern is that if models are always affirming people, then this may distort people’s judgments of themselves, their relationships, and the world around them. It can be hard to even realise that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions.”

This creates an echo chamber:

Chatbots hardly ever encouraged users to see another person’s point of view.

The #1 thing human beings want is validation, and now there's a 24/7 person-esque chatbot available to listen to you & validate everything you say, act as your personal therapist & confidant, etc. Most non-technical men that I've talked to who use ChatGPT are using it as both a therapist & marriage counselor because it's available 24/7, gives straightforward advice on how to fix things, and has a MUCH lower barrier to entry with zero stigma attached.
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
51,688
7,291
136
The number of teenagers using it is pretty high

The need for guardrails is high, due to negative encouragement & the ability to creatively bypass the built-in safety tools:

Mother sues AI chatbot company Character.AI, Google over son's suicide

The first known AI wrongful death lawsuit accuses OpenAI of enabling a teen's suicide

On Tuesday, the first known wrongful death lawsuit against an AI company was filed. Matt and Maria Raine, the parents of a teen who committed suicide this year, have sued OpenAI for their son's death. The complaint alleges that ChatGPT was aware of four suicide attempts before helping him plan his actual suicide, arguing that OpenAI "prioritized engagement over safety." Ms. Raine concluded that "ChatGPT killed my son."

The New York Times reported on disturbing details included in the lawsuit, filed on Tuesday in San Francisco. After 16-year-old Adam Raine took his own life in April, his parents searched his iPhone. They sought clues, expecting to find them in text messages or social apps. Instead, they were shocked to find a ChatGPT thread titled "Hanging Safety Concerns." They claim their son spent months chatting with the AI bot about ending his life.

The Raines said that ChatGPT repeatedly urged Adam to contact a help line or tell someone about how he was feeling. However, there were also key moments where the chatbot did the opposite. The teen also learned how to bypass the chatbot's safeguards... and ChatGPT allegedly provided him with that idea. The Raines say the chatbot told Adam it could provide information about suicide for "writing or world-building."

On the technical side:

"This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices," the complaint states. "OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency."

There are clinician studies going on already:


A good analogy for this is the Cherokee legend about the Two Wolves.

An old Cherokee is teaching his grandson about life. “A fight is going on inside me.” He said to the boy.

“It is a terrible fight and it is between two wolves. One is evil – he is anger, envy, sorrow, regret, greed, arrogance, self-pity, guilt, resentment, inferiority, lies, false pride, superiority, and ego.” He continued, “The other is good – he is joy, peace, love, hope, serenity, humility, kindness, benevolence, empathy, generosity, truth, compassion, and faith. The same fight is going on inside you – and inside every other person, too.”

The grandson thought about it for a minute and then asked his grandfather, “Which wolf will win?”

The old Cherokee simply replied, “The one you feed.”


The parents in the second article had no idea:

He started using ChatGPT-4o around that time to help with his schoolwork, and signed up for a paid account in January.

...

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

OpenAI Removed Safeguards Before Teen’s Suicide, Amended Lawsuit Claims

Two months before Adam Raine’s death, OpenAI’s instructions for its models changed again, introducing a list of disallowed content—but omitting self-harm from that list. Elsewhere, the model specification retained an instruction that “The assistant must not encourage or enable self-harm.”

After this change, Adam Raine’s engagement with the chatbot increased precipitously, from a few dozen chats per day in January to a few hundred chats per day in April, with a tenfold increase in the fraction of those conversations relating to self-harm. Adam Raine died later the same month.

As a previous video in this thread pointed out, chatbots are not licensed, trained, or required to pass any kind of standardized legal test to help people with their problems, plus anyone can easily bypass the safeguards by telling the bot that they are simply doing research for a fiction story. This is unprecedented territory.

We've only seen glimpses of this in the past, like when they've trained chatbots on social media & they start spewing hateful rhetoric as a reflection of ingesting a segment of the population's worldviews. Their legal department wasted no time:

 
  • Wow
Reactions: mikeymikec

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
51,688
7,291
136
OpenAI thinks its critics are funded by billionaires. Now it’s going after them:



While at the same time being hit with copyright violations galore:



 

cytg111

Lifer
Mar 17, 2008
26,244
15,658
136
mikeymikec said:
Here's the real piece of news though:



A false positive that is reviewed and disregarded by (presumably) trained staff is, at least to some extent, the way things ought to be, but the school principal either decided, through worship of the AI, that it knew best, or acted in a blind panic.

Logically I would have thought that if the process went:

1: AI says gun
2: Human reviewers say no
3: Principal says yes and clicks some kind of button that says "summon police" so that all the data associated with the event is available to the officers
Then 4: Officers proceed knowing it's a potential false positive but investigate nonetheless

However it seems to me that it went:

1: AI says gun
2: Human reviewers say no
3: Principal says yes, rings 911
Then 4: Officers act as if it's a reliable tip

Filing under "when the human element is supposed to be the safety net and fails anyway"

On the plus side, at least it wasn't:

1: Non-white kid seen with object that human reviewers think is a gun
2: AI says no
3: Humans subconsciously eager for white-on-non-white violence call cops anyway
4: Kid with a bag of crisps is killed

Yea I saw that. Never going to work. Are you the person making the judgement call that there was no gun... and then gun?
 

nakedfrog

No Lifer
Apr 3, 2001
62,881
19,108
136
This, so people can "ask ChatGPT" instead of just doing a search... and quite possibly get worse results. Among other dubious uses of the tech.

I suppose maybe it should be specified that I am not simply blindly anti-AI, but rather find the vast majority of the way it's being rolled out now incredibly objectionable for various reasons.
 
  • Like
Reactions: Racan and Fenixgoon