
Skynet goes online today, starts its attack on humanity April 21

I approve of your comments sir 🙂

I think we are far, FAR away from having the computing power for a genuine sentient AI. Our best supercomputers right now require several months to emulate 10% of a mouse brain for a couple of seconds.

More importantly, no matter how fast CPUs get, it does no good when our buses and CPUs can consume and process hundreds of GB of data per second while our shitty storage devices barely manage a couple of MB/sec on random access. We are going to have to come up with some non-volatile random-access storage that is 10,000x faster and higher capacity than DRAM before we can even HOPE to have decent computers... spinning platters and flash memory aren't cutting it.

Until we can move and copy 500 GB of random files from C: to E: in 2 seconds with a 100% spike in CPU load, we aren't going to have anything remotely comparable to a human brain. Our non-volatile storage media are primitive.
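To put rough numbers on that (a back-of-envelope sketch only; the drive and block-size figures below are era-typical assumptions, not measurements):

```python
# Rough arithmetic for the "500 GB in 2 seconds" target versus a hard drive
# doing small random reads. All figures are illustrative assumptions.

TARGET_BYTES = 500 * 10**9          # 500 GB of random files
TARGET_SECONDS = 2                  # desired copy time

required_bw = TARGET_BYTES / TARGET_SECONDS
print(f"Required throughput: {required_bw / 10**9:.0f} GB/s")            # 250 GB/s

# Assumed spinning-disk behaviour: ~10 ms per seek, 4 KB read per small file.
SEEK_TIME_S = 0.010
BLOCK_BYTES = 4 * 1024
hdd_random_bw = BLOCK_BYTES / SEEK_TIME_S
print(f"HDD small-random throughput: {hdd_random_bw / 10**6:.1f} MB/s")  # ~0.4 MB/s

print(f"Shortfall: roughly {required_bw / hdd_random_bw:,.0f}x")
```

In other words, the target works out to hundreds of GB/s, while a disk grinding through small random files delivers well under a MB/s: a gap of roughly six orders of magnitude.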

If only people who say these things about AI taking over the world soon knew just how impossible something as "simple" as instant, all-weather, all-lighting, any-angle, visible-spectrum recognition of a moving, obstructed, distant, and shadowed object against a noisy moving background is in computer vision... regardless of computing power. Shit, it's demanding enough with multi-million-dollar liquid-cooled IR detectors, no backdrop, and specific polygonal object outlines and frequency-domain spectra to look for in greyscale, let alone "seeing" and "recognizing" and "thinking" about any arbitrary object on some shitty low-res visible-spectrum RGB webcam roughly equal to the human eye, down on the ground. Getting a computer to see objects like "circles", "tire", "car", "cat", "red car from a 3/4 angle behind a bush" the way people do, and not see in pixels, is damn near impossible even at resolutions equal to or exceeding the eye. Your eye has limited "pixels", but you do not see in "pixels".
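To make "seeing in pixels" concrete, here is a minimal, purely illustrative sketch (not any real vision system) of brute-force template matching: it only finds a stored pattern when that pattern reappears at essentially the same scale, angle, and brightness, which is exactly why arbitrary-angle, arbitrary-lighting recognition is so hard this way:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Slide the template over the image and return the (row, col) with the best
    normalized cross-correlation score. Purely pixel-level: any change in scale,
    rotation, or lighting breaks it."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r+th, c:c+tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum()) or 1.0
            score = (p * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy demo: a small plus-shaped "object" hidden in a noisy 20x20 greyscale image.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (20, 20))
obj = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
img[12:15, 5:8] += obj
print(match_template(img, obj))   # expected (12, 5) with this seed
```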

Yet a 2-year-old human child can instantly glance at a photograph on the page of a magazine across the aisle of a plane, in .001 seconds, and exclaim "kitty!" while pointing to some blob of dithered pixels in the leaves of a tree; something a trillion-dollar computer the size of a house cannot do.

We are decades from AI, if not centuries. I'm talking real sentient AI, not just really well-programmed unmanned weapons and autopilot.
 
The power of the human brain never ceases to amaze me. I deal on the perimeter of optics in an optics laboratory (I'm not an optical engineer or physicist, but we have a lot of them), and our ability to both see and identify imagery is leaps and bounds beyond what AI can do. There are many optical systems that can do one thing very well, say seeing in IR, or in the dark, or at long distances, but none of them can do all of the things our eye does as well as our eye does them, at anywhere close to the same size. Look at an SLR camera: you need a 3" x 3" lens made of heavy glass to capture the same image that your eyeball can (quality depending).

But don't forget the pace of innovation. I remember the first DARPA-sponsored autonomous vehicle challenge, back in 2004. The goal was to have a car navigate a desert course fully uncommanded, using only its own sensors and navigation. That first year, not a single competitor finished; the best vehicle managed only about seven miles of a course well over a hundred miles long. At the very next running, in 2005, you had multiple competitors completing the entire course. In barely a year and a half!
 
If I remember correctly, that's the date used in the Terminator: The Sarah Connor Chronicles. I started watching it fairly recently and noticed the date was coming up 😱.

Yet a 2-year-old human child can instantly glance at a photograph on the page of a magazine across the aisle of a plane, in .001 seconds, and exclaim "kitty!" while pointing to some blob of dithered pixels in the leaves of a tree; something a trillion-dollar computer the size of a house cannot do.

It's kind of odd, but I rather enjoy thinking about those aspects of AI.

Looking at your example, I ask myself the usual question... "how does the child make that observation?" What process does our mind go through when we see the picture? I'd say the biggest thing we do as humans is ask "what's different?", or simply compare and contrast things. If you look at an apple tree (without knowing that it's actually an apple tree), the first thing you'll probably notice are the red apples scattered throughout the green leaves.

Our minds are essentially designed to notice differences. We talked about this in the thread about whether using "the big black guy" was a racist remark.

Now, the next thing comes... how does the child know that it's a kitty in the tree? I'm guessing shape recognition or simply certain features. You could probably replace that domestic house cat with a panther cub, and the child would probably still call it a kitty.

The one thing I've never come close to figuring out is the one you mention... data. I have no idea how I would go about noticing certain features and automatically pinpointing certain data "snippets" about them. I can store data, but how do I organize it for fast retrieval? And then there's deciding what to store in the first place. As an example, I've noticed that if someone tells me their name and I don't use it at least twice in a short period of time, I won't remember it.
 
It was the Sarah Connor Chronicles that put the date to today. The movies stated Aug 29th, 1997. Already happened and no one noticed.
 
terminator.jpg
 

It's because the brain doesn't store and process things as pixels, use image processing kernels to approximate line edges, correlate them with a database of pre-recorded line sets, guess potential objects, etc. You know this, for example, because you are aware that you don't see lines of resolution or pixels; what you consciously experience seems to be infinite resolution, interpolated into "reality vision" where you see and store objects instantly.

Despite the eye being built of "pixels", the brain does not see and store pixels or lines; it sees abstract, discrete objects of seemingly infinite resolution. That step from the optical sensor's pixels to "reality vision" escapes our current technology and will continue to do so for decades.
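For the curious, "image processing kernels to approximate line edges" means something like the following minimal sketch (standard Sobel kernels; the naive loop and toy image are just for illustration):

```python
import numpy as np

# Standard Sobel kernels: approximate horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' sliding-window filter, enough for illustration."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (img[r:r+kh, c:c+kw] * kernel).sum()
    return out

def edge_magnitude(img: np.ndarray) -> np.ndarray:
    """Combine the two gradient responses into an edge-strength map."""
    gx = filter2d(img, SOBEL_X)
    gy = filter2d(img, SOBEL_Y)
    return np.hypot(gx, gy)

# Demo: a dark square on a bright background lights up only along its outline.
img = np.ones((12, 12))
img[4:8, 4:8] = 0.0
edges = edge_magnitude(img)
print((edges > 1.0).astype(int))   # the 1s trace the square's border
```

Everything in that sketch is arithmetic on pixel neighborhoods; nothing in it "knows" the square is an object, which is the gap being described here.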
 

I think you're interpreting what I'm saying improperly. I'm not suggesting that the brain stores images of any type... I actually said quite the opposite. I stated that as humans, we tend to notice differences, and I'll also go out on a limb and say that we notice shapes over most other things (and the differences between shapes).

My thought is that things shouldn't be stored in the typical movie-frame fashion. Data is better suited to being stored as a set of attributes that reflect the nature of a perceived object. Attributes can reflect the emotional or physical nature of an object, such as whether or not you actually like said object, or help describe what said object is like.

Let's look back at the example of the apple tree. Let's say I look at the tree and see these weird, fairly round (some parts of the edges could be obscured by the leaves), red objects in it. If I've learned of an apple, chances are I would have an association of things like "red, tree, round". Of course, this is a very elementary set of associative properties... we've learned more and know that apples can vary in color from green to shades of yellow. What would interest me is how I would discern an apple tree from a peach tree with more advanced knowledge. I know that some types of apples look kind of like peaches, so given that depiction in a picture... how do I discern from the set of data that I have?
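A hedged sketch of that attribute idea (the concepts and tags below are invented for illustration): store each known thing as a set of descriptive attributes and rank candidates by overlap with what was observed. The apple/peach ambiguity then shows up as a literal tie until a finer attribute is added:

```python
# Toy "attribute memory": each concept is just a set of descriptive tags.
KNOWN_OBJECTS = {
    "apple":  {"red", "round", "tree", "smooth", "stem"},
    "peach":  {"red", "round", "tree", "fuzzy", "stem"},
    "cherry": {"red", "round", "tree", "small", "stem"},
    "kitty":  {"furry", "whiskers", "tail", "small"},
}

def rank_matches(observed: set[str]) -> list[tuple[str, float]]:
    """Score every known concept by its Jaccard overlap with the observed tags."""
    scores = []
    for name, attrs in KNOWN_OBJECTS.items():
        overlap = len(observed & attrs) / len(observed | attrs)
        scores.append((name, round(overlap, 2)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# A quick glance at the tree only reveals coarse attributes...
print(rank_matches({"red", "round", "tree"}))
# ...so apple, peach, and cherry tie. A finer observation breaks the tie:
print(rank_matches({"red", "round", "tree", "fuzzy"}))
```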

I'm probably ramblin' on a bit by this point... any of that make sense?
 
I was gonna say... I distinctly recall that date passing in 1997. Did not realize an alternate universe with different skynet dates was dreamt up.

Well, it had obviously moved past that because of the events of T2--the film left it open, anyway, but the possibility of the same date was much less probable. T3 introduces the more logical notion that the date is more or less inevitable; it can only be delayed. (Let's be real--one man in the modern world isn't going to do a whole lot to alter the progression, the desires, the money and power behind military wants.)

all that man can do is survive the threats on his life and in the new world, maybe, become that one man that makes a difference. T3 is about ensuring that that one man survives the inevitable--Judgment Day (which has already passed 1997 by a few years).

I guess the TV show--which was excellent--pushes it back further? I don't remember the specifics of that.
 
Hmm. I'm not really so sure. We're nothing more than biological machines anyway, carbon-based and electrochemically powered. Complex for sure, but I think obsolescence is unavoidable, even if it's by our own hands. Everything is converging in this area, and if Kurzweil is essentially correct, a lot of the early work will be with cybernetics and human enhancement.

I think the real tipping point is when an AI is created with the capability of self-correction and autonomous self-creation (given access to contemporary tools, materials, elements, etc., with the ability to add to and modify itself at will). Such a creation, with the capacity to self-analyze and hold its entire knowledge in context simultaneously while processing many hundreds of trillions of operations per second, would probably evolve faster than you could watch and understand it. The early days shall be, and always have been, filled with half-baked fumbling failures though. Computer processing power, though it is almost immeasurably greater than even a decade ago, is still quite weak, and the fundamentals of writing artificial intelligence are also in their relative infancy.

What a difference a few decades make though. Think ~1940-~1970, from no nuclear power to moonwalks and thermonuclear weapons. Or ~1970-~2000, from basically zero public computing devices to a worldwide global network that spreads information ceaselessly in volumes that would have seemed laughable to anyone who hadn't seen it in person. Hell, we carry cell phones that have considerably more computational and storage power than NASA had in totality during the 1960s, and it's commonplace. Now we are on the heels of the Higgs boson, generating artificially created organs, able to create simple organic computing devices, and generally on the precipice of a wide range of somewhat daunting possibilities.

It's safe to say that over the next 20-30 years, we're gonna see some serious shit 😀

maybe, but give it about 1k more years....maybe

the current processing power of the grandest supercomputer that we can conceive of is about 1/32000 the "processing power" of our single noodle.

it simply ain't gonna happen in our lifetime, our grandkids' lifetime, and several generations hence.


and who knows...we just might get very close to this sooner than we think, and really not like what we see, and kill it before we hit that point.

many things are possible.
 
And exon replacement/resequencing therapy to fix... well... anything.

Explain this, please, it makes no sense to me.


mainly, we do (what I think this means) naturally. also, wtf would anyone gain from "exon replacement?"

this sounds like a bunch of new age BS if you ask me. like...people talking about vibrations.

replication already occurs at an incredibly efficient rate, and frankly--mistakes are essential, but in the end almost meaningless to a species with our potential lifespan on the geologic scale.

exon replacement? what are you implying by "replacing genes"? It's not like they ever disappear, unless we're talking about the Y chromosome (mostly useless, anyway).

you mean transgenics? taking out what you don't want, shoving something in?

I'd say we're still a good 2-3 decades from doing that successfully in humans; and not just for political reasons.

(Well, money would be the biggest block, so yeah, could be done faster if more wanted to create a transgenic human. But I think that you'll find the real life desire for such a thing does not follow the science fiction assumptions of such a thing. ...There is very little reason to do this.)
 
dun dun dun dun dun

dun dun dun dun dun

dooo dooo dooooo, doooo doooo doooooooo

dooo dooo dooooo, doooo doooo doooooooo

dun dun dun dun dun

dun dun dun dun dun


Go ahead and admit it - I got you singing the theme in your head
 
I think we are far, FAR away from having the computing power for a genuine sentient AI. Our best supercomputers right now require several months to emulate 10% of a mouse brain for a couple of seconds.

More importantly, no matter how fast CPUs get, it does no good when our buses and CPUs can consume and process hundreds of GB of data per second while our shitty storage devices barely manage a couple of MB/sec on random access. We are going to have to come up with some non-volatile random-access storage that is 10,000x faster and higher capacity than DRAM before we can even HOPE to have decent computers... spinning platters and flash memory aren't cutting it.

Until we can move and copy 500 GB of random files from C: to E: in 2 seconds with a 100% spike in CPU load, we aren't going to have anything remotely comparable to a human brain. Our non-volatile storage media are primitive.

If only people who say these things about AI taking over the world soon knew just how impossible something as "simple" as instant, all-weather, all-lighting, any-angle, visible-spectrum recognition of a moving, obstructed, distant, and shadowed object against a noisy moving background is in computer vision... regardless of computing power. Shit, it's demanding enough with multi-million-dollar liquid-cooled IR detectors, no backdrop, and specific polygonal object outlines and frequency-domain spectra to look for in greyscale, let alone "seeing" and "recognizing" and "thinking" about any arbitrary object on some shitty low-res visible-spectrum RGB webcam roughly equal to the human eye, down on the ground. Getting a computer to see objects like "circles", "tire", "car", "cat", "red car from a 3/4 angle behind a bush" the way people do, and not see in pixels, is damn near impossible even at resolutions equal to or exceeding the eye. Your eye has limited "pixels", but you do not see in "pixels".

Yet a 2-year-old human child can instantly glance at a photograph on the page of a magazine across the aisle of a plane, in .001 seconds, and exclaim "kitty!" while pointing to some blob of dithered pixels in the leaves of a tree; something a trillion-dollar computer the size of a house cannot do.

We are decades from AI, if not centuries. I'm talking real sentient AI, not just really well-programmed unmanned weapons and autopilot.

OK, yeah, pretty much this.
:thumbsup:

I think we are many, many centuries away from Terminator-like AI. At least, from something that can actually go out and effectively take on a squad of trained humans.

It's funny how so many uber sci-fi geeks and dreamers love to speculate about such things, and tend to hobby themselves with science, yet have so little respect for and give such tiny credit to "simple" human ability.

🙂


And today, it's easy to look at the startling progress we've made in the previous decades--truly startling, no doubt. But I can't help but see that when it comes to computing power, we are quickly approaching a wall. I think we need a MAJOR paradigm shift to overcome that wall, soon, or we'll stagnate in this kind of development for some time. I mean, most people tend to agree now that Moore's law has already outlived its lifetime.

Sure, money helps, and the tragic fear and hatred of science that has pervaded this country over the previous decade does us no service. The type of money, focus, and BALLS that existed in the 60s-80s--no questions asked--simply doesn't exist anymore.

I wish that were not true.
 
There's a reason no one noticed. Terminator 2 happened...........

The TV show was damn good. You should watch it. The Christian Bale BS was a steaming pile of turdburgers. Don't waste your time.

T3 had corny silliness in parts, but it wasn't horrible. Dude was a pretty damn good John Connor for that situation. Ignore the fools. 😉
 

What I meant was that nobody noticed anything happening on August 29th, 1997, because the lab at Cyberdyne Systems was destroyed and all of the leftover parts were thrown into molten steel, thus disrupting the timeline. Terminator 2 was not just entertainment; it really happened.
 
Explain this, please, it makes no sense to me.


mainly, we do (what I think this means) naturally. also, wtf would anyone gain from "exon replacement?"

"Omega-1 restarting, Alpha-1 genome restructuring. Confirming exon replacement. Basecode 85 million... 100 million! It's speed is overwhelming! Alpha-1 to Raziel Central, access confirmed!"


Speaking of AI and genetic programming...

http://www.youtube.com/watch?v=KyVRSim-fQI

😀

"exon replacement" is just my fancy term for reprogramming ourselves on the fly. Eliminating cancers, regrowing tissues, etc. Biology becomes limitless once we can fully manipulate the machine code. Similar to how sci fi "transporters" can detect certain patterns like viruses or defective genes and filter them out during reconstruction, etc.
 