I have always wondered this since my teens:

Zeze

Lifer
Mar 4, 2011
11,395
1,189
126
I love pondering stuff about the future.

Now, the human mind and behavior are impossibly complex and sophisticated, right? But isn't something only complex until we have the capacity to map it out?

Just like we have fully mapped the human genome, and massive data centers already archive the tremendous data they have on us (Gmail as a crude example), isn't it only a matter of time until we completely PREDICT human behavior?

Since birth, through a combination of nature and nurture, who you are is the product of quadrillions of internal and external factors.

Isn't it only a matter of time before we map out every possible factor of your daily life and physiological workings, then document and calculate them in real time, allowing a system to PREDICT your next action?

Sure, it will be rough fuzzy logic at first. But as with anything, in time we will be able to predict everything you do with great accuracy. They'll know what you'll have for lunch, what you'll say, what fight you'll have, what 'unpredictable' thing you'll try to do - they'll even call that.

What are your thoughts?
 

dank69

Lifer
Oct 6, 2009
37,375
33,021
136
I love pondering stuff about the future.

Now, the human mind and behavior are impossibly complex and sophisticated, right? But isn't something only complex until we have the capacity to map it out?

Just like we have fully mapped the human genome, and massive data centers already archive the tremendous data they have on us (Gmail as a crude example), isn't it only a matter of time until we completely PREDICT human behavior?

Since birth, through a combination of nature and nurture, who you are is the product of quadrillions of internal and external factors.

Isn't it only a matter of time before we map out every possible factor of your daily life and physiological workings, then document and calculate them in real time, allowing a system to PREDICT your next action?

Sure, it will be rough fuzzy logic at first. But as with anything, in time we will be able to predict everything you do with great accuracy. They'll know what you'll have for lunch, what you'll say, what fight you'll have, what 'unpredictable' thing you'll try to do - they'll even call that.

What are your thoughts?
Still can't predict the weather 100%, and human behavior is much more complex.
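For what it's worth, the weather comparison is really a point about chaos: in a chaotic system, a microscopic measurement error grows exponentially until the forecast is worthless. A minimal sketch with the logistic map (a standard toy chaotic system, not anything from the thread):

```python
# Two trajectories of the logistic map that start almost identically
# diverge completely within a few dozen steps. This is the mechanism
# behind why weather forecasts degrade so quickly.

def logistic_map(x, r=4.0, steps=50):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    traj = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = logistic_map(0.40000000)
b = logistic_map(0.40000001)  # initial condition off by only 1e-8

print(abs(a[5] - b[5]))    # early on: still tiny, trajectories agree
print(abs(a[40] - b[40]))  # later: typically order 0.1 or more - prediction has broken down
```

The same exponential error growth applies to any measurement-driven prediction of a sufficiently sensitive system.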
 

lxskllr

No Lifer
Nov 30, 2004
60,123
10,585
126
Yes, and that's why it's dangerous to feed the data machine. Prediction is currently primitive, but it'll get better over time, and people who shouldn't have the data in the first place will know everything about you, and will be able to predict future actions.
 

biostud

Lifer
Feb 27, 2003
19,940
7,044
136
When you measure this kind of data, the data can change because it's being measured.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
No, the number of connections and permutations for neural paths is practically infinite. Then throw in that the brain is an analog system...
 

SMOGZINN

Lifer
Jun 17, 2005
14,359
4,640
136
We can already predict human behavior with a fairly large degree of accuracy, it is why marketing has become so powerful.
We are a lot less accurate with individual behavior, but we are still pretty good. I think given time we will improve on that quite a bit as well. I don't know if we will ever get perfect, simply because there are too many variables. But I think it is fair to say that we will get to a level of prediction that will pretty well approximate telepathy.

I also think this will happen so slowly that we won't even notice, and it will happen in the next 20 years or so.
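The kind of statistical prediction SMOGZINN describes can be sketched as a first-order Markov model: count which action tends to follow which, then predict the most frequent follow-up. The action log below is invented purely for illustration:

```python
from collections import Counter, defaultdict

def train(history):
    """Count action -> next-action transitions in an observed history."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(history, history[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict(transitions, current):
    """Return the most frequently observed follow-up to `current`, or None."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# A toy "daily log" of one person's actions (made up for this example).
log = ["wake", "coffee", "email", "coffee", "email", "lunch",
       "email", "coffee", "email", "lunch", "walk", "email"]

model = train(log)
print(predict(model, "coffee"))  # "email" - the most common follow-up in this log
```

Real marketing models are vastly more elaborate, but the principle is the same: aggregate past behavior, bet on the most likely continuation.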
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
I love pondering stuff about the future.

Now, the human mind and behavior are impossibly complex and sophisticated, right? But isn't something only complex until we have the capacity to map it out?

Just like we have fully mapped the human genome, and massive data centers already archive the tremendous data they have on us (Gmail as a crude example), isn't it only a matter of time until we completely PREDICT human behavior?

Since birth, through a combination of nature and nurture, who you are is the product of quadrillions of internal and external factors.

Isn't it only a matter of time before we map out every possible factor of your daily life and physiological workings, then document and calculate them in real time, allowing a system to PREDICT your next action?

Sure, it will be rough fuzzy logic at first. But as with anything, in time we will be able to predict everything you do with great accuracy. They'll know what you'll have for lunch, what you'll say, what fight you'll have, what 'unpredictable' thing you'll try to do - they'll even call that.

What are your thoughts?

The day they announce that they can do this I will answer every question with the word potato until the machine explodes.
 

GrumpyMan

Diamond Member
May 14, 2001
5,780
266
136
If we could predict human behavior at all, then Aurora and Newtown wouldn't have happened. Humans can just snap after years of normal behavior.
 

lxskllr

No Lifer
Nov 30, 2004
60,123
10,585
126
If we could predict human behavior at all, then Aurora and Newtown wouldn't have happened. Humans can just snap after years of normal behavior.

That makes it all the more dangerous. A statistical analysis will predict actions that are accurate for large groups, but won't be accurate for individuals. Proactive intervention will violate privacy and rights, and instill in people the fear of thought.
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
"Complex" is a matter of perspective. :)

A plastic bottle cap is simple, right? Until you decide that you're going to need a billion of them a year, and they need to be entirely fluid-tight, be pressure-resistant, be easy to unscrew by hand, be food-grade-safe, have a very low failure rate, and be very cheap to injection mold.
Companies will invest a fair amount of time to shave a tiny fraction of a second off of the cycle time for something like that. "I need you to reduce the cycle time to make each bottlecap by 50 milliseconds."
Sounds ridiculous?
0.05 seconds * 1 billion = 578.7 days saved. That's 578.7 days of machine and worker time that you could be using to make something else that's profitable.
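That arithmetic checks out; spelled out as a quick calculation:

```python
# The bottlecap math: shaving 50 ms off each of a billion yearly cycles
# frees up hundreds of days of machine and worker time.
saved_per_cap = 0.05            # seconds shaved per cap
caps_per_year = 1_000_000_000   # annual production volume
seconds_saved = saved_per_cap * caps_per_year
days_saved = seconds_saved / 86_400   # 86,400 seconds in a day
print(round(days_saved, 1))     # about 578.7 days
```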

Now suddenly your simple little bottlecap has an entire team of engineers supporting it, with all kinds of sophisticated computer models and statistical analysis going into it.

Or, it's just a little piece of plastic that keeps soda in a bottle. :)


Human brain: Same thing. It's complex now because it's got a whole lot of neurons, each of which can form multiple connections, at a scale where the action of small clusters of molecules can make a big difference. It's just a lot of moving parts to account for. Get a computer that can do that accounting, and you've got something that can simulate a human brain.

Fun philosophical stuff then: Suppose you do go and simulate a fully-sentient human brain. Is it ethical to turn off the simulation? Is it murder? :hmm:
"It's just a simulation!"
The simulated mind might disagree with that assessment.
 

SphinxnihpS

Diamond Member
Feb 17, 2005
8,368
25
91
I love pondering stuff about the future.

Now, the human mind and behavior are impossibly complex and sophisticated, right? But isn't something only complex until we have the capacity to map it out?

Just like we have fully mapped the human genome, and massive data centers already archive the tremendous data they have on us (Gmail as a crude example), isn't it only a matter of time until we completely PREDICT human behavior?

Since birth, through a combination of nature and nurture, who you are is the product of quadrillions of internal and external factors.

Isn't it only a matter of time before we map out every possible factor of your daily life and physiological workings, then document and calculate them in real time, allowing a system to PREDICT your next action?

Sure, it will be rough fuzzy logic at first. But as with anything, in time we will be able to predict everything you do with great accuracy. They'll know what you'll have for lunch, what you'll say, what fight you'll have, what 'unpredictable' thing you'll try to do - they'll even call that.

What are your thoughts?

Think of the universe as a computer, and it is computing reality. If it has taken 14B years to calculate us, then it is reasonable to assume that we are too complex to reliably predict using a man-made, and therefore inferior, computer in any less time. If the universe itself is a quantum computer, then it would take at least another universe (and all the time) to make the calculation for one iteration of everyone's lives. For each variable, you would need another universe and all the time again. This quickly approaches infinity (as if it already weren't practically infinite).

That's if we can't make a computer that's better than the universe.

Now, isolating a single human mind to predict certainly pares down the number of variables that need to be accounted for, but the difference to me seems to be the difference between infinity and something just south of infinity, which is still too much.

We can't even predict markets. We can't even predict the shape a protein will take inside a cell. We can't predict weather. I can't even predict what I will have for lunch.
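The "just south of infinity" intuition can be made concrete with a toy count. With n independent binary variables, a system has 2**n distinct states; even a few hundred variables swamps any conceivable computer. (The atom figure below is a rough conventional order-of-magnitude estimate, not from the thread.)

```python
# Combinatorial explosion in miniature: state count doubles with each
# added binary variable, so a few hundred variables already exceeds the
# estimated number of atoms in the observable universe.
ATOMS_IN_UNIVERSE = 10**80  # common order-of-magnitude estimate

for n in (10, 100, 300):
    states = 2**n
    print(f"{n} variables -> {states:.3e} states, "
          f"exceeds atom count: {states > ATOMS_IN_UNIVERSE}")
```

A brain, with billions of neurons and trillions of synapses, sits far beyond even the 300-variable line.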
 

SphinxnihpS

Diamond Member
Feb 17, 2005
8,368
25
91
Fun philosophical stuff then: Suppose you do go and simulate a fully-sentient human brain. Is it ethical to turn off the simulation? Is it murder? :hmm:
"It's just a simulation!"
The simulated mind might disagree with that assessment.

I'm a huge fan of the thought experiment. Here's one of my favorites on the subject.

Mindless Thought Experiments

(A Critique of Machine Intelligence)



by Jaron Lanier

Thought Experiment #1: Your Brain in Silicon

Since there isn't a computer that seems conscious at this time, the idea of machine consciousness is supported by thought experiments. Here's one old chestnut: "What if you replaced your neurons one by one with neuron-sized and shaped substitutes made of silicon chips that perfectly mimicked the chemical and electric functions of the originals? If you just replaced one single neuron, surely you'd feel the same. As you proceed, as more and more neurons are replaced, you'd stay conscious. Why wouldn't you still be conscious at the end of the process, when you'd reside in a brain-shaped glob of silicon? And why couldn't the resulting replacement brain have been manufactured by some other means?"

OK, let's take this thought experiment even further. Instead of physical neuron replacements, what if you used software? Every time you plucked a neuron out of your brain you'd put in a radio transceiver that talked to a nearby computer that is running neuron simulations. When enough neurons had been transferred to software they could start talking to each other directly in the computer and you could start throwing away the radio links. When you're done your brain would be entirely on the computer.

If you think consciousness doesn't travel into software you've got a problem. What is so special about physical neuron replacement parts? After all, the computer is made of matter too, and it's performing the same computation. If you think software lacks some needed essence, you might as well believe that authentic, original, brand name human neurons from your very own head are the only source of that essence. In that case, you've made up your mind: You don't believe in AI. But let's assume that software is a legitimate medium for consciousness and move on.

So now your consciousness exists as a series of numbers in a computer; that is all a computer program is, after all. Let's go a little further with this. Let's suppose you have a marvelous new sensor that can read the positions of every raindrop in a storm. Gather all those raindrop positions as a list of numbers and pretend those numbers are a computer program. Now start searching through all the possible computers that could exist up to a certain very large size until you find one that treats the raindrop positions as a program that is exactly equivalent to your brain. Yes, it can be done: The list of possible computers of any particular size is large but finite, and so is your brain, according to the earlier steps in the thought experiment, anyway.

OK, so is the rainstorm conscious? Is it conscious as being specifically you, since it implements you? Or are you going to bring up an essence argument again? You say the rainstorm isn't really doing computation- it's just sitting there as a passive program- so it doesn't count? Fine, then we'll measure a larger rainstorm and search for a new computer that treats a larger collection of raindrops as implementing BOTH the computer we found before that runs your brain as raindrops AS WELL AS your brain in raindrops. Now the raindrops are doing the computation. Maybe you're still not happy with this because it seems the raindrops are only equivalent to a computer that is never turned on.

We can go further. The thought experiment supply store can ship us an even better sensor that can measure the motions, not merely the instant positions, of all the raindrops in a storm over a period of time. Now we'll look for a computer that treats the numerical readings of those motions as an implementation of your brain changing over time. Once we've found it, we can say that the raindrops are doing the same work of computation as your brain for at least a specified amount of time. The rainstorm computer has been turned on. The raindrops won't cohere forever, but no computer lasts forever. Every computer is gradually straying into entropy, just like our thunderstorm. During a few minutes, a rainstorm might implement millions of minds; a whole civilization might rise and fall before the water hits the dirt.

And further still: You might object that the raindrops are not influencing each other, so they are still passive, as far as computing your brain is concerned. Let's switch instead, then, to a large swarm of asteroids hurtling through space. They all exert gravitational pull on each other. Now we'll use a sensor for asteroid swarm internal motion and use it to get data that will be matched to an appropriate computer to implement your brain. Now you have a physical system whose internal interactions perform the computation of your mind.

But we're not done. You should realize by now that your brain is simultaneously implemented everywhere. It's in a thunderstorm, in the birth rate statistics, in the dimples of gummy bears.

Enough! I hope the reader can see that my game can be played ad infinitum. I can always make up a new kind of sensor from the supply store that will give me data from some part of the physical universe that is related to itself in the same way that your neurons are related to each other by a given AI proponent.

AI proponents usually seize on some specific stage in my reductio ad absurdum to locate the point where I've gone too far. But the chosen stage varies widely from proponent to proponent. Some concoct finicky rules for what matter has to do to be conscious; it must be the minimum physical system isomorphic to a conscious algorithm, for instance. The problem with such rules is that they have to race ahead of my absurdifying thought experiments, so they become stringent to the point that they no longer allow the brain itself to be conscious. The brain is almost certainly not the minimum physical system isomorphic to its thought processes, for instance.

A few DO take the bait and choose to believe there are a myriad of consciousnesses everywhere. This has got to be the least elegant position ever taken on any subject in the history of science. It would mean that there is a vast superset of consciousnesses sort of like you, for instance the one that includes both your brain plus the plate of pasta you're eating.

Some others object that an asteroid swarm doesn't DO anything, while a mind acts in the world in a way that we can understand. I would respond that to the right alien, it might appear that people do nothing, and asteroid swarms are acting consciously. Even on Earth we can see enough variation in organisms to doubt the universality of the human perspective. How easy would it be for an intelligent bacterium to notice people as integral entities? We might appear more as slow storms moving into the bacterial environment. If we are relying solely on the human perspective to validate machine consciousness, we're really only putting human-ness on an even higher pedestal than it might have been at the start of our thought experiment.

The variation among responses from AI proponents is what should be taken as the meaningful product of my flight of fancy. I don't claim to know for certain where consciousness is or isn't, but I hope I've at least shown that there is a real problem.

Thought Experiment #2: The Turing Test

Sometimes the idea of machine intelligence is framed in moral terms: Would you deny equal rights to a machine that seemed conscious? This question will serve to introduce the mother of all AI thought experiments, the Turing Test. Before I go on, a note on terminology: In the following discussion, I'll let the terms "smart" and "conscious" blur together, even though I profoundly disagree that they are interchangeable. This is the claim of machine intelligence, however; that consciousness "emerges" from intelligence. To constantly point out my objection would make the tale too tedious to tell. That is a danger in thought experiments: You might find yourself adopting some of the preliminary thoughts while you're distracted by the rest of the experiment.

At any rate, Alan Turing proposed a test in which a computer and a person are placed in isolation booths and are only allowed to communicate via media that conceal their identities, such as typed emails. A human subject is then asked to determine which isolation booth holds a fellow human, and which holds a machine. Turing's interpretation was that if the test subject cannot tell the human and machine apart, then it would be improper to impose a distinction between them when the true identities are revealed. It would be time to give the computer "equal rights".

I have long proposed that Turing misinterpreted his thought experiment. If a person cannot tell which is machine and which is human, it does not necessarily mean that the computer has become more human-like. The other possibility is that the human has become more computer-like. This is not just a hypothetical point of argument, but a serious concern in software engineering.

Part 3: Pragmatic opposition to machine intelligence

When a piece of software is deemed autonomous to some degree, the only test of its status is whether users believe it. AI developers would certainly agree that humans are more mentally agile than any existing software today, so today it's more likely than not that a person is changing in order to make the software seem smart. Ironically, the harder a problem area is, the easier it can be for humans to believe that a computer is smart at it.

An AI program that attempts to make decisions about something we understand easily, like basic home finance, is booted out the door immediately, because it is perceived as ridiculous, or even dangerous. Microsoft's "Bob" program was an example of the ridiculous. But an AI program that teaches children is acceptable because we don't know much about how children learn, or how teachers teach. Furthermore, children will adapt to the program, making it seem successful. Such programs can already be found in many homes and schools. The less we understand a problem, the more ready we are to suspend disbelief.

There is no functional gain in making a program "intelligent". Exactly the same capabilities as are found in an "intelligent" or "autonomous" program (such as the ability to recognize a face) could just as well be inclusively packaged within a "non-autonomous" user interface. The only real difference between the two approaches is that if users are told a computer is autonomous, then they are more likely to change themselves to adapt to the computer.

This means that software packaged as being "non-intelligent" is more likely to improve, because the designers will receive better critical feedback from users. The idea of intelligence removes some of the "evolutionary pressure" from software, by subtly indicating to users it is they, rather than the software, that should be changing.

As it happens, machine decision making is already running our household finances to a scary degree, but it's doing so with a Wizard of Oz-like remote authority that keeps us from questioning it. I'm referring to the machines that calculate our credit ratings. Most of us have decided to change our habits in order to appeal to these machines. We have simplified ourselves in order to be comprehensible to simplistic data-bases, making them look smart and authoritative. Our demonstrated willingness to accommodate machines in this way is ample reason to adopt a standing bias against the idea of artificial intelligence.

Inserting a judgment-making machine into a system allows individual humans to avoid responsibility. If a trustworthy, gainfully employed person is denied a loan, it's because of the algorithm, not because of another specific person. The loss of personal responsibility can be seen most clearly in the military's continued fascination with intelligent machines. AI has been one of the most funded, and least bountiful, areas of scientific inquiry in the second half of the twentieth century. It keeps on failing and bouncing back with a different name, only to be over-funded once again. The most recent marketing moniker was "Intelligent Agents". Before that were "Expert Systems". The lemming-like funding charge is always led by the defense establishment. AI is perfect research for the military to fund. It lets strategists imagine less gruesome warfare and avoid personal responsibility at the same time.

AI proponents object that a Turing Test-passing computer would be spectacularly, obviously intelligent and conscious, and that my arguments only apply to present day, crude computers. The argument I'm presenting relates to the way computers change, however. The AI fantasy causes people to change more than computers; therefore it impedes the progress of computers. If there IS a potential for conscious computers, I wouldn't be surprised if the idea of AI is what turns out to prevent them from appearing.

AI boosters believe that computers are getting better so quickly that we will inevitably see qualitative changes in them, including consciousness, before we know it. I'm concerned by the attitude implied in this position: that machines are essentially improving on their own. This is a "trickle-down" version of the retreat from responsibility implied by AI. I think we in the computer science community need to take more responsibility than that. Even though we're used to seeing spectacular progress in the hardware capabilities of computers, software improves much more slowly, and sometimes not at all. I saw a novice user the other day complain that she missed her old text-only computer because it felt faster than her new Pentium machine at word processing. Software awkwardness will always be able to outpace gains in hardware speed and capacity, however spectacular they may be. Once again, emphasizing human responsibility instead of machine capability is much more likely to create better machines.

Even strong AI enthusiasts worry that humans might not agree on whether the Turing Test is passed by a future machine. Some of them bring up the moral "equal rights" argument for the machine's benefit. After the thought experiments fail to turn in definitive results, the machine is favored anyway, and its rights are defended.

This is where AI crosses a boundary and turns into a religion. A new form of mysterious essence is being proposed for the benefit of machines. When I say religion, I mean it. The culture of machine consciousness enthusiasts often includes the expressed hope that human death will be avoidable by actually enacting the first thought experiment above, of transferring the human brain into a machine. Hans Moravec is one researcher who explicitly hopes for this eventuality. If we can become machines we don't have to die, but only if we believe in machine consciousness. I don't think it's productive to argue about religion in the same way we argue about philosophy or science, but it is important to understand when religion is what we are talking about.

I will not argue religion here, but I will restate the heart of my objection to the idea of machine intelligence. The attraction, and the danger, of the idea is that it lets us avoid admitting how little we understand certain hard problems. By creating an umbrella category for "everything brains do", it's possible to feel we are making progress on problems we don't even know how to frame yet.

Even though the question of machine consciousness is both undecidable and lacking in consequence until some hypothesized future time when an artificial intelligence appears, attitudes towards the question today nonetheless have a tangible effect. We are vulnerable to making ourselves stupid in order to make possibly smart machines seem smart.

Artificial Intelligence enthusiasts like to characterize their opponents as inventing a problem of consciousness where there needn't be one in order to preserve a special place for people in the universe. They often invoke the shameful history of hostile receptions to Galileo and Darwin in order to dramatize their plight as shunned visionaries. In their view, AI is resisted only because it threatens humanity's desire to be special in the same way the ideas of these hallowed scientists once did. This "spin" on opponents was first invented, with heroic immodesty, by Freud. While Freud was undeniably a decisive, original thinker, his ideas have not held up as well as Darwin's or Galileo's. In retrospect he doesn't seem to have been a particularly objective scientist, if he was a scientist at all. It's hard not to wonder if his self-inflation contributed to his failings.

Machine consciousness believers should take Freud's case as a cautionary tale. Believing in Freud profoundly changed generations of doctors, educators, artists, and parents. Similarly, belief in the possibility of AI is beginning to change present day practices both in areas I have touched on- software engineering, education, and military planning- and in many other fields, including aspects of biology, economics, and social policy. The idea of AI is already changing the world, and it is important for everyone who is influenced by it to realize that its foundations are every bit as subjective and elusive as those of non-believers.

From: http://www.jaronlanier.com/aichapter.html
 

Jaskalas

Lifer
Jun 23, 2004
35,823
10,120
136
This is not possible.

The computer system will attempt to be perfect, but the brain is anything but perfect. It is flawed, and these flaws are unique and random. You cannot predict them.
 

clamum

Lifer
Feb 13, 2003
26,256
406
126
I'm a huge fan of the thought experiment. Here's one of my favorites on the subject.
From: http://www.jaronlanier.com/aichapter.html
I don't agree with his first premise, about replacing neurons with chips. While I think the end result of the replacement might be the same, I don't agree with his question "why couldn't the resulting replacement brain have been manufactured by some other means?". They are two completely different things. In replacing neurons in a brain with chips, you're starting out with the complete "system" (a functioning, conscious brain); in manufacturing it, you build from scratch. To me that's a ridiculous comparison.
 

JTsyo

Lifer
Nov 18, 2007
12,035
1,134
126
You would only be able to predict at the time the human mind was making the decision. Things like how well you slept and whether you're hungry might affect decisions you make.
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
This is not possible.

The computer system will attempt to be perfect, but the brain is anything but perfect. It is flawed, and these flaws are unique and random. You cannot predict them.
"Random" and "flaw" are as subjective as well. (I guess "random" does have some good mathematical basis though, but it is commonly used in a subjective way.)

Random flaw in the brain: It may merely appear to be random because you don't have all the information.
"Wow, that was unexpected! What a random event!"
It may only have been random to you because you didn't see what preceded it.

A sufficiently complex :awe: computer system may indeed be limited to binary at the most basic level, but even simple programs can behave in unexpected ways. And at the most basic level, we're limited by what our neurons are capable of, and their feature set currently includes the ability to screw up due to abnormalities in their DNA, or something down at the molecular level. So both computer systems - something we create artificially out of silicon or optical pathways or who knows what else, or a squishy human brain - have their basic hardware limitations and features. Either one can do unexpected and seemingly random things. They're only random or unexpected for as long as the system remains complex, in the sense that its full workings remain beyond our understanding.

If you want, you could inject random data into the processing. We've built true random number generators already. Intel's got something that they integrate into some (or maybe all) chipsets, which uses thermal noise to generate random numbers. For faster number generation, there's a hardware module that fires photons at a semi-silvered mirror. Some pass through, some get reflected. From that, you can get up to 4 million random bits per second.

Or just fire up a Tesla Coil next to it. The EM noise should induce all kinds of randomness into it. :D
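No Tesla coil required, as it happens: most operating systems already expose a kernel entropy pool (which, on recent Intel chips, can be fed by the on-die thermal-noise generator mentioned above), and Python can draw from it directly. A minimal sketch:

```python
import os
import secrets

# Draw entropy from the OS pool, which may be seeded by a hardware
# noise source (e.g. Intel's on-chip thermal-noise generator).
raw = os.urandom(16)          # 16 unpredictable bytes
coin = secrets.randbelow(2)   # a single unpredictable bit, 0 or 1

print(len(raw), coin)
```

Injecting bits like these into a simulation is exactly the "random data in the processing" idea: the outputs stop being reproducible from the program state alone.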
 
Last edited:

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
I love pondering stuff about the future.

Now, the human mind and behavior are impossibly complex and sophisticated, right? But isn't something only complex until we have the capacity to map it out?

Just like we have fully mapped the human genome, and massive data centers already archive the tremendous data they have on us (Gmail as a crude example), isn't it only a matter of time until we completely PREDICT human behavior?

Since birth, through a combination of nature and nurture, who you are is the product of quadrillions of internal and external factors.

Isn't it only a matter of time before we map out every possible factor of your daily life and physiological workings, then document and calculate them in real time, allowing a system to PREDICT your next action?

Sure, it will be rough fuzzy logic at first. But as with anything, in time we will be able to predict everything you do with great accuracy. They'll know what you'll have for lunch, what you'll say, what fight you'll have, what 'unpredictable' thing you'll try to do - they'll even call that.

What are your thoughts?
This basically sums up my view of the situation. However, I also believe that, as with current psychology, it will be easy for us to predict the brain in general - but predicting a particular brain will be significantly harder, because, as you point out, there are a multitude of probably largely unknown and unquantifiable factors to consider for any single person.

Still can't predict the weather 100%, and human behavior is much more complex.
Weather is a much, much more complex system, since it is itself dependent on the actions of every single human being on Earth, among other things.

Yes, and that's why it's dangerous to feed the data machine. Prediction is currently primitive, but it'll get better over time, and people who shouldn't have the data in the first place will know everything about you, and will be able to predict future actions.
This is unrealistic and naively paranoid.
 

lxskllr

No Lifer
Nov 30, 2004
60,123
10,585
126
This is unrealistic and naively paranoid.

How do you figure? It's already being done. You don't have to do anything other than open your browser to see it in action. Technology isn't stagnant. Today's impossible is next year's routine. Who knew when I was booting my first computer off a floppy drive, I would some day be able to store GBs of data in a chip smaller than my pinky nail?

Edit:
I've posted this video a few times now, but it's worth posting again, and should be watched by anyone that uses technology...

https://www.youtube.com/watch?v=sKOk4Y4inVY
 
Last edited:

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
How do you figure? It's already being done. You don't have to do anything other than open your browser to see it in action. Technology isn't stagnant. Today's impossible is next year's routine. Who knew when I was booting my first computer off a floppy drive, I would some day be able to store GBs of data in a chip smaller than my pinky nail?

Edit:
I've posted this video a few times now, but it's worth posting again, and should be watched by anyone that uses technology...

https://www.youtube.com/watch?v=sKOk4Y4inVY

Who currently "know everything about you, and [is] able to predict future actions"?

Bear in mind, by everything, I mean everything.

And also, Moore's law makes your analogy a false one because of argument from personal incredulity.
 

lxskllr

No Lifer
Nov 30, 2004
60,123
10,585
126
Who currently "know everything about you, and [is] able to predict future actions"?

Bear in mind, by everything, I mean everything.

And also, Moore's law makes your analogy a false one because of argument from personal incredulity.


Must be nice living in your magical world. I guess in NZ, cars appeared with stereos, fuel injection, airbags, and horsepower rated in the hundreds. Here in America, our cars started with no stereo, a carburetor, and you sat on top like a carriage. The safety equipment consisted of 'jump quick if the shit hits the fan', which wasn't entirely unreasonable since they only moved at human running speed.

All this is to say that technology moves in stages, and you don't get advance notice. Btw, Moore's law isn't a law; it's just an interesting observation that holds more often than not. You might as well cite Murphy's law for your proof ;^)