Me must buy Intel related products. Me must hate AMD. Me must have brains. *drool* Didn't Intel also "promise" to implant chips in the human brain by 2020?
Maybe a software company will randomly switch to hardware and miraculously perform 20 years' worth of R&D by tomorrow so I can get more gigahertz to play RuneScape? lol
Corporations have to milk every mini improvement in tech. They are actually anti-innovation. That's why all this future talk will be delayed 10x longer than it really needs to be. It would be nice to have real competition where one company goes straight for graphene, nanotubes, etc. Maybe Google, Facebook or Apple will run with it, huh? lol
I've got a friend who swears it's going into the hand or forehead. I loved your response so much I'll pay him a visit today to tell him how stupid he is for thinking the devil is gonna do what everyone knows he will do.
Love messing with this guy, and you're my new best friend.
LOL, Intel also predicted ten years ago we'd have 10GHz chips by now.
Yes, on Netburst. If they gave us a single core Netburst chip now, it probably would have that sort of speed.
If that was the case, it would be written as such. But it isn't written like that. Revelation is a spiritual book. We got sidetracked when we interpreted it literally. It should be interpreted spiritually. The mark of the beast is already here alive and well; the beast is greed.
Yes, on Netburst. If they gave us a single core Netburst chip now, it probably would have that sort of speed.
I thought 11nm was the limit for silicon ...
Quantum computers aren't some kind of magic bullet that will replace all other computers. That's a Star Trek-type fantasy. They are unbeatable for processing some types of programs, and slow for others.
Theoretically, spintronics can combine the best of both worlds in hybrid computers using both classical and quantum circuitry. Instead of pushing current through circuits, they use the spin of the electron. In this way they can approach the speed of light using very small amounts of energy and without frying everything in sight, or the spins can be entangled for quantum computing.
The trick is engineering precision. Just as shrinking circuits or making them faster requires greater precision, so does designing spintronic devices. Modern hard disk drives are spintronic devices, but the ability to make complex circuits is still some years away. Room temperature superconductors, carbon nanotubes, graphene, plasmonics, etc. all have potential, but they have theoretical limitations of their own that spintronics doesn't.
I remember Gordon Moore once gave a speech in which he said that at any point in his career he would look forward 10 years and think, "I don't see how it's possible for scaling to continue," and then 10 years later we would find a way forward and things would move on. When he gave one of his early "Moore's Law" speeches, he used to say that he thought we could keep doubling for 10 years, plus or minus two years, which would have meant the end of Moore's Law in 1975 or thereabouts. At some point, quite a few people in the industry thought it was impossible to make transistors with a minimum feature size smaller than 1µm.
Clearly we can't stay on the path of shrinking transistor sizes forever, but as I see it, we have a path 10 years forward, which is normal. Once we hit the scaling limit, I have no doubt things will continue, either with new materials or by moving to 3D with TSVs or something else. If it all goes horribly wrong, I'm pretty good at gardening... which is always a useful skill.
Given the parametric transistor improvements Intel has made on each successive node since 90nm, I would not be surprised at all if a single-core Netburst CPU could be clocked at 10GHz (or more) on 32nm.
If room temperature superconductors are ever created, then heat should cease to be a critical limiting parameter for ultra high end computing platforms. What will be the next limiting factor? Are we going to 60 THz optical computers?
I'm pretty sure the lack of scaling won't put engineers totally out of a job (I hope it doesn't, at least). I would predict that if we hit a wall with scaling, the future of engineering work would be implementing application logic in hardware. In other words, we already have MPEG-4 decoders in hardware, and I could see us starting to take that to the extreme: for example, an email hardware component and an HTML5 translating component.
That would give us jobs forever.
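To make that concrete, here's a minimal sketch of what the software side might look like if, say, the email example above had a dedicated block. The HwEmailDecoder interface is entirely hypothetical, invented just to illustrate the offload-with-software-fallback pattern that hardware video decoders already use today:

```cpp
#include <cstdio>
#include <string>

// Hypothetical handle to a fixed-function "email decoder" block,
// invented purely to illustrate the idea above.
struct HwEmailDecoder {
    bool present() const { return false; }               // pretend-probe for the block
    std::string decode(const std::string& raw) const {   // would hand raw bytes to hardware
        return raw;                                       // stub
    }
};

std::string decode_email(const std::string& raw) {
    static HwEmailDecoder hw;
    if (hw.present()) {
        return hw.decode(raw);  // offload to the fixed-function block
    }
    // Software fallback: today this is where all the work happens.
    const auto body = raw.find("\r\n\r\n");               // naive "strip the headers"
    return body == std::string::npos ? raw : raw.substr(body + 4);
}

int main() {
    std::printf("%s\n", decode_email("Subject: hi\r\n\r\nbody text").c_str());
    return 0;
}
```

The shape is the same one drivers use for hardware video decode already: probe for the block, use it if it's there, fall back to software if it isn't.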
The big problem is how you would keep it from burning a hole down to the center of the earth with the heat it would generate. While Netburst wasn't a great performance processor, the thing that really got me with it was the heat it generated. So thank god for the Intel® Core 2 Duo processors.
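Just to put rough numbers on the heat problem: dynamic CMOS power scales roughly as P ≈ α·C·V²·f, and hitting a higher clock usually needs more voltage too. A minimal back-of-the-envelope sketch, where the ~115 W figure for a 3.8 GHz Prescott is from memory and the voltages are pure assumptions:

```cpp
#include <cmath>
#include <cstdio>

// Rough dynamic-power scaling: P ~ alpha * C * V^2 * f.
// Holding alpha*C fixed, the ratio between two operating points is
// (V2/V1)^2 * (f2/f1). All numbers below are ballpark assumptions,
// not measured data.
int main() {
    const double p_base   = 115.0; // W, roughly a 3.8 GHz Prescott TDP (from memory)
    const double f_base   = 3.8;   // GHz
    const double f_target = 10.0;  // GHz, the hypothetical Netburst clock
    const double v_base   = 1.4;   // V, assumed
    const double v_target = 1.7;   // V, assumed bump needed to hit that clock

    double scale = std::pow(v_target / v_base, 2.0) * (f_target / f_base);
    std::printf("A same-process 10 GHz Netburst would dissipate roughly %.0f W\n",
                p_base * scale);
    return 0;
}
```

That lands in the several-hundred-watt range before leakage is even counted, which is why the hole-to-the-center-of-the-earth joke isn't far off; the parametric improvements mentioned above would claw some of it back, but not nearly all.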
From a macro-economic perspective this position certainly holds true, but at the micro-economic level (think Detroit) or the individual level (the laid-off CMOS process engineer) it provides no job security or comfort that a changing future will hold a place for them in the job market.
This is the downside to having specialized skills and industries without much in the way of a social safety net. Look at the auto industry, or, perhaps even more poignantly, NASA's Space Shuttle program and its ex-employees.
A few percent of them will transition well into the private sector, but how many thousands of refueling technicians and launch-pad specialists does the private sector really need?
I doubt that; they are planning to release their last silicon chip by 2025.
If room temperature superconductors are ever created, then heat should cease to be a critical limiting parameter for ultra high end computing platforms. What will be the next limiting factor? Are we going to 60 THz optical computers?
The biggest technical problem with room temperature superconductors is that they don't exist, and no one knows if it is possible to make them at all, much less turn them into viable commercial products with billions of complex parts crammed into a few millimeters of space. Likewise, there are other ways around heat issues, so it isn't an advantage exclusive to room temperature superconductors.
Circuits that approach the speed of light and quantum computing are already being developed by countless laboratories around the world, and it is inevitable that we will have computers that use them at some point. Exactly what might be the next big limiting factor, I have no clue. It's a bit like asking who will win the World Series in 2040.
Right now the biggest limiting factor and darkest horse in this race is heterogeneous computing. The traditional CPU is being replaced with something more like the human brain, with all its specialized parts and yet massive parallel processing capacity. The architectural theory for the CPU goes back to WWII, but we don't even have a basic theory for heterogeneous computing yet. People are rushing to develop all the specialized parts and parallel processors required, but are otherwise blindly feeling their way forward as to how to combine them all in the most effective and efficient manner.
Thankfully the current research into the brain is progressing by leaps and bounds, but the mathematics behind this kind of computing are mind-numbingly complex and related to physical theories of everything. I'm reminded of Einstein drawing the first diagram of a laser on a napkin and then telling his dinner companion it would be another fifty years before it could be built. We don't even have the theory yet, much less any way of telling how long it might take to implement.
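To put a toy shape on the "blindly feeling their way forward" part: today, deciding where work runs across all those specialized parts mostly comes down to ad hoc heuristics like the one sketched below. Every name and threshold here is invented for illustration; the point is that nothing in the sketch rests on an underlying theory, which is exactly the gap being described.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// A crude illustration of the "how do we combine the parts?" problem:
// every task is tagged with a guess about which unit suits it best,
// and a trivial heuristic does the placement. All names are invented.
enum class Engine { LatencyCore, ThroughputArray, FixedFunction };

struct Task {
    std::string name;
    double parallel_fraction;   // rough guess: how much of it can run in parallel
    bool has_fixed_function;    // is there a dedicated block for this workload?
};

Engine place(const Task& t) {
    if (t.has_fixed_function)      return Engine::FixedFunction;   // e.g. video decode
    if (t.parallel_fraction > 0.9) return Engine::ThroughputArray; // e.g. physics, particle swarms
    return Engine::LatencyCore;                                    // branchy, serial code
}

int main() {
    std::vector<Task> tasks = {
        {"game logic",   0.20, false},
        {"particle sim", 0.99, false},
        {"video decode", 0.95, true},
    };
    static const char* names[] = {"latency core", "throughput array", "fixed function"};
    for (const auto& t : tasks) {
        std::printf("%-12s -> %s\n", t.name.c_str(), names[static_cast<int>(place(t))]);
    }
    return 0;
}
```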
I might be alone on this, but I don't see heterogeneous computing as being any more complex or challenging than the circuit-balancing efforts that already go into managing microarchitectures with >700 instructions in the ISA.
Implementing the fetch, decoders, schedulers, pipeline, etc. as needed to manage the fact that code can request the CPU to issue any number of the instructions in the ISA within a given core is, itself, a rather heterogeneous computing environment.
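To caricature that point in a few lines (the opcodes and port assignments below are made up, not real x86 encodings): every incoming instruction has to be steered to whichever kind of unit can actually execute it, which is already a small heterogeneous scheduling problem.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Toy model of decode-and-dispatch: opcodes and unit assignments are made up.
enum class Unit { ALU, FPU, LoadStore, Branch };

// A real decoder maps hundreds of opcodes onto a handful of execution ports;
// this table is a tiny stand-in for that.
static const std::unordered_map<uint16_t, Unit> kDecodeTable = {
    {0x01, Unit::ALU}, {0x58, Unit::FPU}, {0x8B, Unit::LoadStore}, {0x75, Unit::Branch},
};

int main() {
    std::vector<uint16_t> instruction_stream = {0x8B, 0x01, 0x58, 0x75};
    int issued[4] = {0, 0, 0, 0};   // per-unit issue counters, i.e. the "balancing" part

    for (uint16_t opcode : instruction_stream) {
        auto it = kDecodeTable.find(opcode);
        if (it == kDecodeTable.end()) continue;    // unknown opcode: a fault in reality
        ++issued[static_cast<int>(it->second)];    // route to the matching unit
    }
    std::printf("ALU %d, FPU %d, Load/Store %d, Branch %d\n",
                issued[0], issued[1], issued[2], issued[3]);
    return 0;
}
```

Scale that table up to hundreds of opcodes and add real balancing across the ports, and you have the juggling act described above.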
The same goes for the programmers and the compilers; they somehow already manage a world that offers the flexibility of these ridiculously large instruction sets.
The challenge I see facing the industry is not "how do we do heterogeneous computing?" but rather "how do we convince people to want to pay for our heterogeneous computing solution?"
I might be alone on this, but I don't see heterogeneous computing as being any more complex or challenging than the circuit-balancing efforts that already go into managing microarchitectures with >700 instructions in the ISA.
Implementing the fetch, decoders, schedulers, pipeline, etc. as needed to manage the fact that code can request the CPU to issue any number of the instructions in the ISA within a given core is, itself, a rather heterogeneous computing environment.
The same goes for the programmers and the compilers; they somehow already manage a world that offers the flexibility of these ridiculously large instruction sets.
The challenge I see facing the industry is not "how do we do heterogeneous computing?" but rather "how do we convince people to want to pay for our heterogeneous computing solution?"
That's the challenge for any new industry. When the laser was finally developed, people said, "That's nice, but what can you do with it?" It took decades to figure out widespread commercial applications, but now they are ubiquitous and we are still rapidly finding new uses for the damned things.
The first widespread commercial applications for heterogeneous computing are likely to be in video games and other consumer products. The ability to run physics and AI on the CPU without bogging down the graphics card is one possible application. Gamers want that extra eye candy, and if they don't have to pay an arm and a leg to get it they'll demand it. It's only been in the last few weeks that MS released its new C++ AMP, so the idea that multicore computing and parallel-processing programming are somehow mature sciences is laughable. So far the question has been how to justify developing the technology for specific applications, but soon it may be whether we can justify ignoring the possibilities.
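Since C++ AMP came up, here's roughly what offloading a bit of game physics looks like with it: a trivial per-particle position update, written as a sketch against the library as shipped (builds with Visual C++), not as production code.

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Advance particle positions by one timestep on whatever accelerator
// (typically the GPU) C++ AMP picks by default.
void step_positions(std::vector<float>& pos, const std::vector<float>& vel, float dt) {
    const int n = static_cast<int>(pos.size());
    array_view<float, 1> p(n, pos);
    array_view<const float, 1> v(n, vel);

    parallel_for_each(p.extent, [=](index<1> i) restrict(amp) {
        p[i] += v[i] * dt;          // the per-particle "physics"
    });
    p.synchronize();                // copy results back to the host vector
}
```

The offload itself is already just a library call; the immaturity being pointed at is everything around it, i.e. deciding what to offload, when it pays off, and how to move the data without the copies eating the gain.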
FTFY
