
The anti-AI thread


There are a number of things about this story:

1 - It doesn't seem puzzling at all, and I'm a layman; surely an intelligent person could have reached the same conclusion without AI?
2 - Why were (presumably fully qualified and experienced) investigators asking AI questions they should already know the answers to? And if they don't, surely they should be seeking a more definitive source than Google plus one degree of separation?

In conclusion, I wonder if the reason some people are so hot for AI is that, like AI, they're firm believers in "fake it until you make it".
 
I am a software developer and I recently had my first encounter with Claude Code. I didn't use it myself, but the customer I am working with used it in his free time over two weeks to come up with something. I had been working on this application for him for about six months, and it's moving along at a good pace.

Well, in two weeks, this dude, with Claude, made a version of the application with like EVERY SINGLE feature he wanted down the road. And it is pretty damn functional and works well.

I will admit, when I saw this, I was like, oh shit, this is kind of scary. Thankfully I'm a government contractor, so this can't just be used without a lot of integration behind the scenes.

I've been tasked with looking at the code and giving an estimate of how long some integrations would take. He developed it in a nice manner where you can write plugins that are drop-in and it'll find them.

After looking at the code for the past few days, I feel safer about my job as a software developer lol. The code it generates is not very human-maintainable, at least not the way he did it. The UI is just one big file of HTML, JS, and CSS, using raw JS. I am not the most experienced Python developer, but the backend also had quite a few problems as far as human maintainability goes.

I ran into some bugs as well that were very random and tracking down bugs like that would be a chore.

This is a very odd time as a developer lol.
This has been my experience as well. I'm a sysadmin, not a dev, but I periodically have to throw crap together for more advanced scripting and whatnot. AI can make something functional, but it's often really, really hard to maintain. No standards, really, and if anything ever needs to be edited by the AI, you're probably just going to end up having to get it to rewrite the whole thing.
 

There are a number of things about this story:

1 - It doesn't seem puzzling at all, and I'm a layman; surely an intelligent person could have reached the same conclusion without AI?
2 - Why were (presumably fully qualified and experienced) investigators asking AI questions they should already know the answers to? And if they don't, surely they should be seeking a more definitive source than Google plus one degree of separation?

In conclusion, I wonder if the reason some people are so hot for AI is that, like AI, they're firm believers in "fake it until you make it".
If you haven't observed the pattern, many people seem more than happy to abdicate their thinking to an LLM so they don't have to do it themselves. It also seems to be a self-perpetuating cycle, and they then rely on it more and more instead of engaging their own critical thinking skills.
And then there's the assumption that they're fully qualified and experienced...
 

There are a number of things about this story:

1 - It doesn't seem puzzling at all, and I'm a layman; surely an intelligent person could have reached the same conclusion without AI?
2 - Why were (presumably fully qualified and experienced) investigators asking AI questions they should already know the answers to? And if they don't, surely they should be seeking a more definitive source than Google plus one degree of separation?

In conclusion, I wonder if the reason some people are so hot for AI is that, like AI, they're firm believers in "fake it until you make it".
What gets me is that AI is either right or it's wrong. It's a coin toss. Might as well say, "This is what we think happened; let's flip a coin to verify it." You have no more verifiable knowledge than you had before the query. Ask the cat. Cats are free, and prettier.

I could maybe see plugging in a bunch of evidence and seeing if AI came up with something novel that wasn't considered. That might be worth pursuing with actual human eyes, but AI data is garbage.
 
What gets me is that AI is either right or it's wrong. It's a coin toss. Might as well say, "This is what we think happened; let's flip a coin to verify it." You have no more verifiable knowledge than you had before the query. Ask the cat. Cats are free, and prettier.

I could maybe see plugging in a bunch of evidence and seeing if AI came up with something novel that wasn't considered. That might be worth pursuing with actual human eyes, but AI data is garbage.
Eh, with modern LLMs you can ask them for citations, which they will usually provide. It depends a lot on whether the info you're looking for is more specialized (what's the shear strength of granite) or more 'colloquial' (how often should I change my oil).
 
Eh, with modern LLMs you can ask them for citations, which they will usually provide. It depends a lot on whether the info you're looking for is more specialized (what's the shear strength of granite) or more 'colloquial' (how often should I change my oil).
It won't be long now (if we aren't already there) until they're citing papers generated by another LLM, which may contain citations to studies that don't exist...
 
Eh, with modern LLMs you can ask them for citations, which they will usually provide. It depends a lot on whether the info you're looking for is more specialized (what's the shear strength of granite) or more 'colloquial' (how often should I change my oil).
Pretty sure AI is already making up citations. There's a nice list of lawyers having their asses reamed for submitting bullshit to the courts. So you turn on your magic 8-ball to get verification of your results, and then you're supposed to chase down the citations it provides? That's just doing your job with extra steps and energy consumption.

Like I said, looking for novel answers might be worth the effort. Asking it to back up an already accepted conclusion is just jerking off.
 
Pretty sure AI is already making up citations. There's a nice list of lawyers having their asses reamed for submitting bullshit to the courts. So you turn on your magic 8-ball to get verification of your results, and then you're supposed to chase down the citations it provides? That's just doing your job with extra steps and energy consumption.

Like I said, looking for novel answers might be worth the effort. Asking it to back up an already accepted conclusion is just jerking off.
Dunno if 5 is better about it, but yes, 3.5 & 4 were.

 
It's so cool to live in an era where everyone has the equivalent of a top-of-the-line cinematic camera from 20 years ago in their pocket, connected to the entire world and capable of instant digital transmission, and yet it is still functionally impossible to know what the fuck is going on at even a fairly surface level.

This is one of the main purposes of AI and why it is being pushed down our throats so hard.
 
If you haven't observed the pattern, many people seem more than happy to abdicate their thinking to an LLM so they don't have to do it themselves. It also seems to be a self-perpetuating cycle, and they then rely on it more and more instead of engaging their own critical thinking skills.
And then there's the assumption that they're fully qualified and experienced...

My daughter started college last September and she sees this a lot in her classmates. She's currently in a composition class where the students have to review and critique each other's work, and she says you definitely see a lot of writing patterns similar to what AI churns out. And if she notices, then a professor who's been watching it build up for a few years will definitely notice.

Plus, why are you even in a class to learn writing techniques if you're just gonna have ChatGPT do it for you? If you need a prereq, just pick another one you'll actually get something out of.
 
Pretty sure AI is already making up citations. There's a nice list of lawyers having their asses reamed for submitting bullshit to the courts. So you turn on your magic 8-ball to get verification of your results, and then you're supposed to chase down the citations it provides? That's just doing your job with extra steps and energy consumption.

Like I said, looking for novel answers might be worth the effort. Asking it to back up an already accepted conclusion is just jerking off.
Well, yeah, you still have to check the fucking citations, just as you're supposed to for anything else. Do you just google stuff and assume the first few links are accurate, or do you actually check, assuming it's important?
 
Well, yeah, you still have to check the fucking citations, just as you're supposed to for anything else. Do you just google stuff and assume the first few links are accurate, or do you actually check, assuming it's important?
That's the paralegal's job. I'd literally punch my assistant if I gave them research to do and they outright made up citations. The paralegal may be good or they may be bad, but I doubt there are many that make up data.
 
That's the paralegal's job. I'd literally punch my assistant if I gave them research to do and they outright made up citations. The paralegal may be good or they may be bad, but I doubt there are many that make up data.
And if a law firm is using an LLM as a paralegal, they deserve to get their asses handed to them by the board, in the same way that if I used only an LLM to do my job, I'd deserve it. It's a tool like any other; misuse will lead to expected consequences.
 
It's a tool like any other
Most tools aren't random. You get exact results when they're used as directed. If a fact as basic as a paper reference has to be checked, it isn't fit for purpose.

The one use I proposed for AI was in the hypothesis stage, where it could have given a solution that wasn't dirty ice. If it passes the initial smell test, you could then have humans follow up on it. That's dubious utility, but maybe.
 
Saw a discussion in a Suno group the other day: people talking about how they got the best results using ChatGPT to generate prompts to feed into Suno, and then talking about how they were worried someone might rip off their music. It's ridiculous.
 
If you type your password in Notepad, it will warn you that it's an insecure thing to do. This means your password is somewhere in memory and everything you type is constantly being checked against it, or I suppose it might be checking what you type against a hash of your password. Either way, I feel like whatever is happening to make that work could potentially be the source of a major vulnerability at some point. Will be fun to see if I end up right.
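If it really is hash-checking, the mechanism could be as simple as hashing a sliding window of recent keystrokes and comparing it against a stored hash of the credential. This is purely a guess at how such a feature might work, sketched in Python; it is not Notepad's actual implementation, and a real one would presumably use a salted hash and a constant-time compare:

```python
import hashlib

def make_detector(password):
    """Guessed mechanism: store only a hash, watch a rolling keystroke window."""
    target = hashlib.sha256(password.encode()).hexdigest()
    length = len(password)
    buffer = []

    def on_keystroke(ch):
        """Feed one typed character; return True if the password just appeared."""
        buffer.append(ch)
        if len(buffer) > length:          # keep only the last len(password) keys
            buffer.pop(0)
        window = "".join(buffer)
        return hashlib.sha256(window.encode()).hexdigest() == target

    return on_keystroke
```

Note that even in this scheme the plaintext password never needs to sit in the checker's state, only its hash, though the rolling keystroke buffer itself would still be sensitive data in memory.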

I would also be curious whether it's possible to coax Copilot into telling you your password or other company-sensitive info; that could be very exploitable too. Don't really want to mess with that at work, though.
 