The anti-AI thread

Page 22

IronWing

No Lifer
Jul 20, 2001
73,516
35,206
136
I don't mind AI generated stuff that is presented as such. As of last week my birder group has been inundated with bots posting AI bird photos. The noise destroys communication, which is the intent.
 
  • Like
Reactions: Jon-T and marees

mikeymikec

Lifer
May 19, 2011
21,470
16,698
136
I hate that it also says "monitors". That was my fear, that they start bringing "smart TV" garbage to monitors too. Ugh. I just want my screen to be a screen.

How are they going to give a monitor an Internet connection in a way that isn't easily avoidable?

I suspect this is simply going to be an opt-in for "technology enthusiasts".

On a separate note - I did a file search in Win11 recently (on a customer's PC) and the Explorer status bar informed me that my search was being powered by AI which may lead to incorrect results being delivered, along with a feedback link. Has anyone else seen this?
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
52,214
7,550
136

Per Microsoft, Copilot Checkout will allow users to have a conversation with Copilot about products and make a decision to purchase right from the chat. When they decide to buy, an AI agent powered by Copilot navigates to the retailer and completes the transaction for them. Copilot Checkout, which will be available through Copilot.com, will start with partnerships with Etsy, PayPal, Shopify, and Stripe. Microsoft says it’ll ramp up available retailers this month.

Retailers who operate using PayPal or Stripe can apply to become an eligible Copilot Checkout merchant, but Shopify sellers will automatically be enrolled in the program and have to manually opt out if they don't want to participate.

 
  • Wow
Reactions: Red Squirrel

mikeymikec

Lifer
May 19, 2011
21,470
16,698
136
NVIDIA CEO has the sads because people are saying mean things about AI and therefore denying him the chance to build another 27 swimming pools. Also, it'll be the naysayers' fault when the AI bubble pops, all this poor little puppy needed to survive was a few GDPs, was that too much to ask.

 
Last edited:

nakedfrog

No Lifer
Apr 3, 2001
63,369
19,748
136
NVIDIA CEO has the sads because people are saying mean things about AI and therefore denying him the chance to build another 27 swimming pools. Also, it'll be the naysayers' fault when the AI bubble pops, all this poor little puppy needed to survive was a few GDPs, was that too much to ask.

"I think we're scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society"
MF, people invested over $200 billion in AI last year. How about you invest in a bit of GFY?
 

IronWing

No Lifer
Jul 20, 2001
73,516
35,206
136
My company just pushed another MS Office update that keeps us from turning off Copilot. I'm tempted to use the shit out of it. "You want shit products? Here's your shit products."

There is an option to tell Copilot to only be annoying if it thinks something is really important. This tells me that Copilot is analyzing everything I type in real time. I wonder how the C-suite folks feel about MS having access to everything every employee types, as they type it?
 
  • Like
Reactions: marees

nakedfrog

No Lifer
Apr 3, 2001
63,369
19,748
136
My company just pushed another MS Office update that keeps us from turning off Copilot. I'm tempted to use the shit out of it. "You want shit products? Here's your shit products."

There is an option to tell Copilot to only be annoying if it thinks something is really important. This tells me that Copilot is analyzing everything I type in real time. I wonder how the C-suite folks feel about MS having access to everything every employee types, as they type it?
"Well, it's good for my stock portfolio, so I'll allow it"
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
52,214
7,550
136
I guess it's a "fun" preview of what we're in for.

imo AI slop will be the 2026 "pandemic". Wild goose chases, "swatting", false accusations, fabricated evidence, companies pushing data-collection overreach, environmental abuses from datacenters, electricity prices jacked up, computer hardware shortages, etc.


A big part of the problem is simply human nature:

1. People believe what they see

2. People tend to only read headlines

3. People are already predisposed to certain worldviews (left or right politics, conspiracy believers, etc.)

Hard to fight fake news when people WANT to buy into whatever reality is being peddled to them ¯\_(ツ)_/¯

 
Last edited:
  • Like
Reactions: KompuKare

Red Squirrel

No Lifer
May 24, 2003
71,164
14,016
126
www.anyf.ca
I saw that a while back, pretty interesting phenomenon but also not surprising. I always feel like using AI is cheating but do find myself using it and it's 10x faster and better than googling. I don't know what to think about it, I don't necessarily like it but I also can't ignore a resource that can help me.

Will be interesting to see what happens long term, because AI relies on non AI data to train. When people are just posting AI responses and AI trains on that, it will cause some kind of feedback loop where no new data is actually learned. I'm sure this is something even the AI companies are going to have trouble with.
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
52,214
7,550
136
I saw that a while back, pretty interesting phenomenon but also not surprising. I always feel like using AI is cheating but do find myself using it and it's 10x faster and better than googling. I don't know what to think about it, I don't necessarily like it but I also can't ignore a resource that can help me.

Will be interesting to see what happens long term, because AI relies on non AI data to train. When people are just posting AI responses and AI trains on that, it will cause some kind of feedback loop where no new data is actually learned. I'm sure this is something even the AI companies are going to have trouble with.

I was wondering about that too:

1. Gathering existing data

2. Vetting existing data

3. Creating NEW data

Some stuff it can solve (ex. coding) through simulations, but it needs data for health, human emotions, physical effects in the universe, etc. Will be interesting to see how they prevent a data-rot spiral, as no human can validate ALL of the data in the AI databases!
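The feedback-loop worry discussed above (AI training on AI output until "no new data is actually learned") is what researchers call "model collapse", and a toy simulation makes the mechanism concrete. Below is a minimal, hypothetical sketch, not anything from the thread: the "model" just fits a mean and spread to its data, then is retrained purely on its own samples. Because each generation fits a finite sample, the estimated spread drifts and, on average, shrinks, so the distribution slowly collapses toward a point:

```python
import random
import statistics

# Toy "model collapse" simulation: a model that only learns a mean
# and standard deviation, retrained each generation on its own output.
random.seed(42)

def fit(data):
    """'Training': estimate mean and spread from the data."""
    return statistics.mean(data), statistics.pstdev(data)

def generate(mean, std, n):
    """'Inference': sample n synthetic data points from the fitted model."""
    return [random.gauss(mean, std) for _ in range(n)]

N = 20          # small sample per generation exaggerates the effect
GENERATIONS = 400

# Generation 0: "real" data with spread 1.0
data = generate(0.0, 1.0, N)
spreads = []
for gen in range(GENERATIONS):
    mean, std = fit(data)
    spreads.append(std)
    data = generate(mean, std, N)  # retrain only on synthetic output

print(f"spread at gen 0: {spreads[0]:.3f}")
print(f"spread at gen {GENERATIONS - 1}: {spreads[-1]:.4f}")
```

The design choice doing the work here is the small per-generation sample: the fitted spread is a noisy, slightly biased estimate, and without fresh "real" data there is nothing to pull it back, which is a crude stand-in for why a data-rot spiral needs human-generated input to avoid.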
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
52,214
7,550
136
This is pretty cool, but I'm putting it in this thread because, you know...yikes

This is off-the-shelf Nano Banana Pro & Kling 2.6:


 
Last edited:

Steltek

Diamond Member
Mar 29, 2001
3,448
1,190
136
Well, so much for Microslop security:

Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data

What is worse is that the method is likely applicable to most AI chatbots.

And, remember how all the AI companies claimed in court that their products were learning "patterns" and were not storing the actual contents of the utilized training material?

A Stanford research paper that just dropped pretty much proves beyond a doubt that the AI companies are purposely lying through their teeth to the courts in all the legal filings they have made related to the issue. Either that, or they truly don't understand how what they are creating actually works. I don't know which possibility is worse...

The paper's abstract is here:


 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
52,214
7,550
136
And, remember how all the AI companies claimed in court that their products were learning "patterns" and were not storing the actual contents of the utilized training material?

"Harrison Ford as Rambo"

Uh-huh :colbert:

 
  • Love
Reactions: Steltek