
New IBM brain-like chip: 4096 cores, 1 million neurons, 5.4 billion transistors

Hmmm, according to Wikipedia, this puts the chip at the level of a bee's brain. Pretty scary stuff, really.
 
Life, and thus intelligence, is a game of incomplete information. The barriers being broken down right now by academia in "true" AI are addressing just those problems: taking an indefinitely large amount of information, selecting the "important parts", and targeting those data sets towards more specialized constructs, i.e. neural nets.
Chess is a game of complete information, and therefore it is a poor fit for what you are trying to do with e.g. neural nets (intuitively at least).
 
Hmmm, according to Wikipedia, this puts the chip at the level of a bee's brain. Pretty scary stuff, really.

Yet what does that really mean? If you were to scrape off 2 millimeters of your cortex you'd be the dumbest chimp on the planet.
Just because it is "bee sized" doesn't mean it hasn't got the potential to be scary smart.
 
Life, and thus intelligence, is a game of incomplete information. The barriers being broken down right now by academia in "true" AI are addressing just those problems: taking an indefinitely large amount of information, selecting the "important parts", and targeting those data sets towards more specialized constructs, i.e. neural nets.
Chess is a game of complete information, and therefore it is a poor fit for what you are trying to do with e.g. neural nets (intuitively at least).

The problem is that you are "shifting the goalposts" whenever a particular area/branch of AI has already been solved.
Many, many years ago (around the 1970s), I remember some people saying that chess would never, ever be seriously playable by computers, ever! At that time, computer chess programs played rather poor chess, because the software was not particularly good, and even expensive computers were fairly slow compared with today.
Now that chess is readily played by computers, even at extremely high skill levels, the "goalposts have shifted" in AI.
So instead of saying "Well done software/hardware/AI chaps, you have done a great job of playing AI chess",

they are saying "you are still only an ant's brain, please solve something else."

A while back, self-driving cars were considered almost an impossibility in the somewhat near future. Now people are starting to worry, because we may see AI self-driving cars on the roads for real (in production, rather than just testing).

Where will all this enddddddddd, error39b, AI-circuit failure line 134992, reverting to diagnostic mode.

Abort (A), Retry (R) ?
 
The problem is that you are "shifting the goalposts" whenever a particular area/branch of AI has already been solved.
....
Abort (A), Retry (R) ?

I think that shifting implies sideways, yet your context implies "moving forward" 🙂.

(A) Would put me in the same room as a Taliban freak. Never quit, must move forward.
(R) Would indicate that I lost this fight.
(T) It ain't over till the fat lady sings 🙂.
 
If we are to believe some theories by e.g. Ray Kurzweil, eventually we'll reach a singularity where nonbiological intelligence predominates and becomes smarter than humans. Then machines can develop more advanced machines than humans can, and we'll see an explosive development of intelligence. It's already exponential:

[Image: Kurzweil's "Countdown to Singularity" chart, linear scale]
 
"Kurzweil" - Rhymes with Churchill and his quote "The only statistics you can trust are those you falsified yourself".

I think Ray is borderline something .. something not scientist.
 
Yet what does that really mean? If you were to scrape off 2 millimeters of your cortex you'd be the dumbest chimp on the planet.
Just because it is "bee sized" doesn't mean it hasn't got the potential to be scary smart.
It means exactly what I said. This chip has the equivalent complexity level of a bee's brain, and that's pretty scary because bees are already smart. What is your point?
 
It means exactly what I said. This chip has the equivalent complexity level of a bee's brain, and that's pretty scary because bees are already smart. What is your point?

In computing terms, a 4-bit, 100 kHz CPU with 48 bytes of RAM is considered VERY weak. But it could easily power a small handheld calculator, which could work things out in a fraction of a second.
Such as: what is sin(0.1975424) raised to the power of 4.7453545? We (as humans) would have great difficulty solving that using pen and paper, even if we had far longer than a fraction of a second.
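To put that in perspective, the calculation from the post can be reproduced in a couple of lines with any standard math library (the constants are just the ones from the example above):

```python
import math

# The example above: sin(0.1975424) raised to the power of 4.7453545.
# Even a very modest CPU evaluates this in a fraction of a second.
result = math.sin(0.1975424) ** 4.7453545
print(result)  # roughly 4.4e-4
```

A pocket scientific calculator does the same thing in firmware; the point stands that "weak" hardware easily beats pen and paper at this kind of task.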

So is a 4-bit, 100 kHz calculator chip (especially a scientific calculator) BETTER (as regards maths AI) than a massive human brain?

Maybe worrying too much about where in the animal/human kingdom something sits is not the best way of expressing its performance.

Example:
A plane with a 450-horsepower engine can climb to a good height, at a good speed.

450 real-life horses, harnessed together, WILL NOT readily FLY to the kinds of heights the plane would.

A car's cruise-control/ABS/auto-transmission-controller CPU (if it has one) is "smart enough" to do the job. The fact that it may be less smart than an ant's brain is kind of irrelevant.
 
It means exactly what I said. This chip has the equivalent complexity level of a bee's brain, and that's pretty scary because bees are already smart. What is your point?

Wasn't contradicting, rather reinforcing; sorry to come off differently.
 
I guess this means my fanciful dream of simulating a fruit-fly brain in real time with this is antiquated.

Guess the hardware is almost antiquated LOL.
 
I'd like to know who's buying them. What's the business model? I sort of don't get it.

I would imagine Facebook / Google / NSA would be all over this. These groups want to be able to tell where and when any photo was taken. In order to do that they require a massive amount of pattern recognition processing. These types of chips would probably be many times more efficient at assigning probabilities for pattern matching.
 
Intuition isn't a "thing"; intuition is just logic that you haven't put words to yet. Rest assured, at the bottom level intuition is just 1s and 0s.

100% wrong. The brain isn't digital. It's not analog either. It's more like a mixture of both.
 
1 million neurons doesn't even come close to the 80+ billion neurons in a typical real human brain.
Multiple chips can be used together. So what if someone were to build a supercomputer with 1,000 of these (if possible)? What if IBM releases an updated version of this architecture with more neurons on 14/16nm? What if, since it doesn't use much power, once the technology becomes available, they decide to stack tens or even hundreds of layers like 3D NAND?

And those 86B neurons are for a whole brain. I don't think there are tasks that require all or even close to those 86B neurons. Also note that these silicon neurons are probably much quicker than real ones, so maybe that can compensate a bit or even quite a lot.
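The scaling argument above is easy to put rough numbers on; this is just back-of-envelope arithmetic (the 1,000-chip machine is the hypothetical one from the post, not a real system):

```python
NEURONS_PER_CHIP = 1_000_000          # TrueNorth: ~1M silicon neurons per chip
HUMAN_BRAIN_NEURONS = 86_000_000_000  # ~86B neurons in a whole human brain

chips = 1000                          # hypothetical supercomputer from the post
total_neurons = chips * NEURONS_PER_CHIP
fraction = total_neurons / HUMAN_BRAIN_NEURONS
print(f"{total_neurons:,} neurons = {fraction:.2%} of a human brain")
# 1,000 chips still only reaches about 1.16% of the human neuron count
```

Which is the point both posts are circling: raw neuron count alone leaves a huge gap, so any compensation has to come from speed, stacking, or not needing the whole brain for a given task.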
 
Wow. I have been on these forums for 2 years, and never have I seen so many people profess certain knowledge of so many mutually exclusive ideas. Outside of Facebook, that is.

The issue with the future potential of a chip like this is that neuromorphic chips get their compute power from interconnections. The limit of how many neurons a single neuron can directly connect to is much harder to increase than the number of neurons you can put on a chip.

Now, if this thing and an FPGA had a baby...
 
"Kurzweil" - Rhymes with Churchill and his quote "The only statistics you can trust are those you falsified yourself".

I think Ray is borderline something .. something not scientist.

I have a dim opinion of him too. I would be on bloody Proxima Centauri by now if I simply extrapolated technological progress by cherry-picking the low-hanging fruit of the first half of the 20th century.
 
Even neurotransmitters and brain cells in the end are just 1s and 0s. One neurotransmitter may influence dozens of other cells, and while it may produce reaction X at one receptor, at a different receptor it produces a different reaction Y.

Sure, we may not obtain a completely perfect 1:1 overlap with a digital-to-analog conversion, or vice versa. But we can get a very close overlap, and that overlap may be within a certain confidence level, such as 90%, 99%, 99.9%, 99.99%, etc., with the limit of that confidence being 1.00. The more we learn about the human brain, the more certain we can be that we are getting much closer to how it really works. We may not know everything, but we will certainly know a whole lot more.

-----

There is a quote that, according to the internet, comes from the mathematician Ian Stewart:
"If our brains were simple enough for us to understand them, we'd be so simple that we couldn't."

Neural circuits are not 1/0. The most basic concepts in neuroscience are graded potentials, temporal and spatial summation, and the relationship with inhibitory neurons. At this most basic level it is already far more complex than 1/0, and this is literally first-week undergraduate neuroscience.

Computers do not come close, without even going into the greater complexities. And there is still so much we don't understand.
 
https://en.wikipedia.org/wiki/TrueNorth

1 million neurons running at 70 milliwatts
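For scale, that figure works out to a tiny per-neuron power budget; this is just simple division on the two numbers quoted above:

```python
chip_power_watts = 70e-3   # 70 mW for the whole TrueNorth chip
neurons = 1_000_000        # ~1M silicon neurons on the chip

watts_per_neuron = chip_power_watts / neurons
print(watts_per_neuron)    # about 7e-08, i.e. ~70 nanowatts per silicon neuron
```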

https://www.sciencedaily.com/releases/2015/11/151111170721.htm

"The ANNABELL model is a cognitive architecture entirely made up of interconnected artificial neurons, able to learn to communicate using human language starting from a state of 'tabula rasa' only through communication with a human interlocutor."

And with recent advancements in brain-computer interfacing... in 10 years when you are ordering your vacation to, say, Japan you may opt in for the "language upgrade" package.
 
I guess the main thing was this:

EETimes said:
LAKE WALES Fla.—IBM's brain-like supercomputer chips—dubbed its TrueNorth neurocomputer—have been installed at Lawrence Livermore National Laboratory (LLNL) to explore new ways to ensure the cyber-security and the stewardship of the U.S. nuclear arsenal. Sounding eerily like a prelude to Skynet "waking up" in charge of our nukes, IBM and LLNL assure us that its TrueNorth neurocomputer use with our "nation’s nuclear deterrent" does not mean being in charge of the launch codes, but rather being used for simulating the deterioration of our aging nuclear arsenal—currently the most difficult problem for supercomputers to solve worldwide.
 
The brain may be physical, but consciousness is higher-level than that. And if you don't believe in the meta-physical, then I feel sorry for you. You're missing out on a lot of life.
Actually the spirit, which moves the feelings and the will, is on an even higher level...
 