Building Technology the Wrong Way


waggy

No Lifer
Dec 14, 2000
68,143
10
81
In the '90s and early 2000s, hardware performance improved very rapidly. People used to say computers were obsolete as soon as they left the factory. In the mid-2000s, they hit a "good enough" plateau: the point when low- and middle-end computers were capable of doing what most people used them for without choking. Checking email, doing homework, surfing the web, watching videos, casual online gaming. Those computers started lasting longer without noticeable performance impacts.

There is still rapid hardware development. It's just taking place in the mobile world. Five years ago, if someone had shown me an iPad and said it would replace desktops and laptops, I'd have laughed. Mobile hardware was a joke at the time. Now I can actually edit video and photos on my phone. That was unheard of not too long ago.

In the '90s and such, they ramped up the speed. It was all about that.

Now? Speed is not so much an issue. The CPUs do so much more. You really can't compare them to now.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Why are we still building giant video cards and hardware machines to run our games and software on? For example, Battlefield 3 needs a pretty new and modern video card to run decently. Shouldn't it be the opposite? Shouldn't programmers be making awesome games with spectacular graphics that are able to run on really old hardware?
They do, just not games like BF3. It's like complaining that music sucks, today, because 99% of pop is pablum.

Wouldn't that be more ideal and environmentally friendly? It seems to me that not enough effort is put into this; instead, economics trumps efficiency, and the urge to make a buck keeps us needing to buy bigger and better hardware to run overbloated games. It's kind of a paradox, because the computer hardware business also funds the innovation of better technology, but at the same time you would think there would have been some sort of technological breakthrough, kind of like computing and hardware's version of cold fusion, yet nothing. Maybe technology is being held back, I don't know.
Programming is holding a lot back, including the hardware, but good programmers don't get to make those decisions, so :| c'est la vie. Another problem there is that good ways to serve smaller devs, letting them get more features and better artwork for a given price and time, might not be the best fit for the big guys with Hollywood-sized budgets, yet it's the latter who drive the creation and improvements in development tools and hardware.

The closest effort I've seen to the correct computing model, as I see it in the vision of a naturally advanced society, would be the http://www.raspberrypi.org/ project, and that group is not-for-profit. So is a not-for-profit organization the only way for technology to evolve naturally and efficiently with minimal resources?
No. There were already the Stamps, the PICs, then the Atmels. On the side, there is the Sheeva, Beagleboard, and Pandaboard. The RPi pretty much blows an Arduino out of the water, but what needed a non-profit was the fact that it's an answer (high SoC inventory? :)) in search of a question. There aren't tons of people who want a product like a Raspberry Pi. There are tons of tinkerers who look at it and go, "that would be damn cool to do something with...but what?" The RPi was not made in a vacuum. What makes it special is that it costs what an Arduino, PIC system, etc., does, while having nearly the capabilities (mainly lacking AIB options) of industrial SBCs.

It seems to me that capital is holding innovation back more than it's helping, mimicking all aspects of our wasteful, inefficient society as a whole. It seems to me that big business does not want efficient computing, because they would not be able to make any money off of it, and also because it would probably evolve on its own without the need for the newest and fastest hardware around.
That much is true, but it's less about efficient computing than about efficient technology in general. Most software gets way too complex, and there is a massive sub-industry in IT to help maintain and increase that complexity by leaps and bounds in the name of fixing very minor problems that are best worked around each time they are encountered (ORM and persistence frameworks being poster children for this phenomenon).

I've long thought the way that the big hardware companies slowly dole out the new technology is suspect. What is stopping Intel from making much, much faster chips tomorrow other than them having an effective duopoly in the market?
x86's success. Seriously, a better RISC could be made, and Intel could make it faster than any x86 (make the RISC good at everything x86 is good at, then use the new ISA to take care of x86's weaknesses). But they'd have to make a virtual x86 layer that was 100% compatible, and the design effort for such a RISC CPU actually faster than their best x86 CPUs would cost more than the next generation of improvements on x86 CPUs...so why not just make better native x86 CPUs?

So it's because it costs too much and therefore has to be done in steps? I dunno. I just don't buy it.
Then become an academic researcher, where you can hand-wave away the fact that people have to do imperfect work to make ideas happen.

A perfect design from the start, in a field that is full of unknowns, is not going to happen. CPU and GPU development from the likes of ARM, AMD, IBM, and Intel is cutting-edge stuff. They are trying combinations of features in each major generation that haven't been achieved before in the kinds of contexts they are used in, only predicted to be possible (though, in some cases, like Pentium Pro's effective OOOE, thought too difficult to achieve!). Every time, there will be something less than optimal, and fixing that takes time and effort, along with adding new features.

They can't make perfect CPUs in one shot any more than perfect automobiles. New innovations keep coming, and every design needs some tweaks every year.

Major innovations, these days, come in the form of shrinking parts of processors, gluing them and their peripherals together more efficiently, and running them at lower power levels.

Don't you agree that there is a huge disincentive for development of powerful, cheap, ubiquitous hardware, seeing as how that would put the entire industry out of business? I see no reason, other than the planned obsolescence that I'm talking about, that we shouldn't have real-time photo-realistic 3d rendering on cheap hardware. Is there some sort of barrier in the physics of it, in the same way that we can't have free energy?
More or less. There is a barrier in mathematics. It's not something that is easy to do. Chasing photo-realism is trying to win a race by running half the distance left each generation. You'll eventually get close enough, but once you pass the 50% mark, each generation closes less of the remaining gap than the one before it, so the number of generations still needed keeps growing.
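To put rough numbers on that halving-the-distance picture (a toy model with made-up targets, not anything from the thread): the first 50% of the way to photo-realism costs one generation, while each extra sliver of realism after that costs several more.

```c
/* Toy model of chasing photo-realism by halving the remaining gap
 * each hardware generation. The target percentages are arbitrary,
 * purely illustrative values. */
#include <stdio.h>

int main(void) {
    double targets[] = { 0.50, 0.90, 0.99, 0.999 };
    for (int i = 0; i < 4; i++) {
        double remaining = 1.0;   /* the whole gap to "photo-real" */
        int generations = 0;
        while (remaining > 1.0 - targets[i]) {
            remaining /= 2.0;     /* each generation halves what's left */
            generations++;
        }
        printf("%5.1f%% of the way there: %2d generation(s)\n",
               targets[i] * 100.0, generations);
    }
    return 0;
}
```

Under this model the first half takes one generation, 99% takes seven, and 99.9% takes ten.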

I know that's the general feeling, but there's no reason I can see that it should be that way. What is the physical constraint on more raw processing power? Technology is amazing these days... you'd think we would have something.
We do. Nobody in the mass market wants it, though, because it doesn't suit their needs. Disruptive technology is not always better at enough of what it does to displace the tried-and-true technology.

Tilera and Renesas (68K and SuperH had a baby, they named it RX, and sent it to the finest school in town), for instance, have some outstanding hardware that can whoop some ass...for relative values of whooping :).

But, really, the constraint is that memory is slow. A modern CPU with an IMC generally needs 50-200 cycles, depending, to get something from RAM. That can be a lot of time where nothing gets done. But, if you slow the CPU down so it's not so many cycles, you can't get easy ALU-bound or easy LSU-bound code to run fast. So, you need a CPU to be able to stay many cycles ahead of memory for most problems, which have fairly limited concurrency. There's no free lunch to be had. I do think a new dense ISA (16-, 21-*, 24-bit, no MIPS-like purity) made for superscalar OOO from the beginning, with some form of micro-threads on top of that (using them to replace the ILP that designers keep trying and failing to extract), could exhibit some serious efficiency and scalability...but that would be a major undertaking to try to come up with, and a big economic risk, even if its technical merits all worked out.

If main memory could be brought down to, say, 20-50 cycles at 3GHz, we would see massive CPU improvements resulting from it.
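As a rough illustration of that latency wall (a toy benchmark of my own, not something from the post; the node count, layout, and timing harness are all assumed), a pointer chase that misses cache on nearly every step can be put next to an arithmetic loop with the same trip count:

```c
/* Rough sketch: a pointer chase through a shuffled linked list stalls
 * on a trip to RAM almost every step, while an arithmetic loop with
 * the same iteration count stays in the core. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 20)                    /* ~1M nodes, far bigger than cache */

struct node { struct node *next; long pad[7]; };   /* ~64 bytes on 64-bit */

/* small xorshift PRNG, just to shuffle without rand()'s limited range */
static unsigned long long rng = 88172645463325252ULL;
static size_t xrand(void) {
    rng ^= rng << 13; rng ^= rng >> 7; rng ^= rng << 17;
    return (size_t)rng;
}

int main(void) {
    struct node *nodes = malloc(sizeof *nodes * N);
    size_t *order = malloc(sizeof *order * N);
    if (!nodes || !order) return 1;

    /* Fisher-Yates shuffle, so the chase order defeats the prefetcher. */
    for (size_t i = 0; i < N; i++) order[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = xrand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < N; i++)
        nodes[order[i]].next = &nodes[order[(i + 1) % N]];

    /* Latency-bound: each iteration waits on a load from RAM. */
    clock_t t0 = clock();
    struct node *p = &nodes[order[0]];
    for (size_t i = 0; i < N; i++) p = p->next;
    clock_t t1 = clock();

    /* ALU-bound: same iteration count, just a multiply and add per step
     * (volatile only keeps the compiler from deleting the loop). */
    volatile long acc = 1;
    for (size_t i = 0; i < N; i++) acc = acc * 3 + 1;
    clock_t t2 = clock();

    printf("pointer chase: %.3f s   arithmetic: %.3f s   (ended at %p)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, (void *)p);
    free(nodes);
    free(order);
    return 0;
}
```

Compiled with something like gcc -O2, the chase side tends to come out far slower on typical desktop hardware even though both loops run the same number of iterations; hiding that gap is exactly what caches, prefetchers, and deep OOO machinery are for.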

GPUs and other parallel-above-all-else processors don't have that problem, but they don't have tons of cache, or powerful schedulers, either.

* 3 in a 64-bit word, using 1 bit for something else, like maybe whether it's a scalar or vector bundle.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
I think multiple people in this thread believe that if they don't understand the problem the solution must be easy.

Nonono, it is. Chip design is incredibly easy.

What, only 5-6 years from planning to implementation of a new architecture? That's pretty much tomorrow.

5-10 billion dollaroos for a 20nm fab? Pocket change.

A year + of joint work with the fab to develop the mask and work out the kinks? Pretty much done before you begin.

Easy.
 

PlasmaBomb

Lifer
Nov 19, 2004
11,636
2
81
So it's because it costs too much and therefore has to be done in steps? I dunno. I just don't buy it.

Don't you agree that there is a huge disincentive for development of powerful, cheap, ubiquitous hardware, seeing as how that would put the entire industry out of business? I see no reason, other than the planned obsolescence that I'm talking about, that we shouldn't have real-time photo-realistic 3d rendering on cheap hardware. Is there some sort of barrier in the physics of it, in the same way that we can't have free energy? I suppose such rendering would require very powerful information processing, but isn't that possible with sufficiently advanced engineering?

This is a lot of conjecture, I admit, and I'm not 100% on it, but I just don't think the slow technological advancement we see is really the cutting edge.

I can see how the development of computer graphics in the past may have been constrained by the huge amounts of research necessary, since it's a big deal to build 2d graphics or 3d graphics from scratch. But now we are at the point where 3d graphics are already very realistic, so it doesn't seem to me to be that revolutionary to just crank up the processing power (or make the coding more efficient). It's just a matter of degree, not a whole different concept needing to be invented.

Let me know when you figure out how to build a cutting edge fab for less than $6-10 billion...
 

Bignate603

Lifer
Sep 5, 2000
13,897
1
0
Let me know when you figure out how to build a cutting edge fab for less than $6-10 billion...

Or figure out how to make massive gains in performance without a complete change in how our processors are currently made. They're running into very real physical limits on what they can do to increase the speed of a single core.
 

Red Squirrel

No Lifer
May 24, 2003
70,573
13,804
126
www.anyf.ca
I will never give up having a full-blown workstation computer. Though I do agree it seems some of the components are getting bigger and bigger and less efficient. Computers use way more power than they ever did, video cards take up 2 or even 3 slots, heat sinks are ridiculously huge, etc... but I guess you just can't get away from that for more performance.

Software, on the other hand, is being written more and more sloppily, because it's all about faster deployment for more money and fixing it later with tons of updates that are also rushed.
 

sdifox

No Lifer
Sep 30, 2005
100,249
17,895
126
The more important question is really why there isn't a video card that acts as the motherboard.
 

sdifox

No Lifer
Sep 30, 2005
100,249
17,895
126
How would that be any different from a motherboard with integrated graphics?

Proportion. Those have the video logic baked into the motherboard chipset. This would be your mammoth video card with some sockets for CPU and RAM :biggrin:
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Proportion. Those have the video logic baked into the motherboard chipset. This would be your mammoth video card with some sockets for CPU and RAM :biggrin:

Build it into an enclosure and add a PSU and you've got your system :D

The reason some components are getting bigger has to do with heat dissipation and the sheer number of transistors. While the dies are getting smaller (or at least more densely packed per mm^2), the cooling required increases. A new CPU from Intel/AMD packs over 1 billion transistors on a die that's around 130mm^2. Considering the moderate TDP and the high frequencies, you need a large and efficient HSF to be able to dissipate all of that heat. The same goes for GPUs, but they've also got vRAM and VRMs to worry about. The heat has gotten so bad that chip designers increasingly have to plan for dark silicon (unused portions of the chip that are powered down during certain tasks) in order to keep that heat and power in check.
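To put a rough number on that (the TDP below is an assumed, illustrative figure, since the post doesn't give one; only the ~130mm^2 die area comes from above), the back-of-the-envelope power density looks like this:

```c
/* Back-of-the-envelope power density for a modern CPU die.
 * The 95 W TDP is an assumed, illustrative figure; the ~130 mm^2
 * die area is the number mentioned above. */
#include <stdio.h>

int main(void) {
    double tdp_watts = 95.0;                 /* assumed desktop-class TDP */
    double die_mm2   = 130.0;                /* die area from the post    */
    double die_cm2   = die_mm2 / 100.0;      /* 100 mm^2 = 1 cm^2         */
    printf("%.0f W over %.0f mm^2  ->  about %.0f W/cm^2\n",
           tdp_watts, die_mm2, tdp_watts / die_cm2);
    return 0;
}
```

With those assumed numbers it works out to roughly 70 W per square centimetre, sustained, which is why the heatsinks keep growing and why powering down idle blocks starts to look attractive.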
 

sdifox

No Lifer
Sep 30, 2005
100,249
17,895
126
Build it into an enclosure and add a PSU and you've got your system :D

The reason some components are getting bigger has to do with heat dissipation and the sheer number of transistors. While the dies are getting smaller (or at least more densely packed per mm^2), the cooling required increases. A new CPU from Intel/AMD packs over 1 billion transistors on a die that's around 130mm^2. Considering the moderate TDP and the high frequencies, you need a large and efficient HSF to be able to dissipate all of that heat. The same goes for GPUs, but they've also got vRAM and VRMs to worry about. The heat has gotten so bad that chip designers increasingly have to plan for dark silicon (unused portions of the chip that are powered down during certain tasks) in order to keep that heat and power in check.

You would think they'd be able to bury nanotubes in there by now to wick away the heat.
 

xSkyDrAx

Diamond Member
Sep 14, 2003
7,706
1
0
I like it when people think technology runs on magic and fairy dust. Of course, things like that aren't bound by the rules of reality. Want faster? Just make it happen! It's a computer, after all.
 

CountZero

Golden Member
Jul 10, 2001
1,796
36
86
So it's because it costs too much and therefore has to be done in steps? I dunno. I just don't buy it.

Don't you agree that there is a huge disincentive for development of powerful, cheap, ubiquitous hardware, seeing as how that would put the entire industry out of business? I see no reason, other than the planned obsolescence that I'm talking about, that we shouldn't have real-time photo-realistic 3d rendering on cheap hardware. Is there some sort of barrier in the physics of it, in the same way that we can't have free energy? I suppose such rendering would require very powerful information processing, but isn't that possible with sufficiently advanced engineering?

This is a lot of conjecture, I admit, and I'm not 100% on it, but I just don't think the slow technological advancement we see is really the cutting edge.

I can see how the development of computer graphics in the past may have been constrained by the huge amounts of research necessary, since it's a big deal to build 2d graphics or 3d graphics from scratch. But now we are at the point where 3d graphics are already very realistic, so it doesn't seem to me to be that revolutionary to just crank up the processing power (or make the coding more efficient). It's just a matter of degree, not a whole different concept needing to be invented.

Intel could theoretically make a CPU that was, say, 10x faster than what they are doing now, but it would cost 100x as much. It would have a much higher engineering/design cost, it would likely be on a cutting-edge, immature process and be very large, so material costs would be extremely high, not to mention the cost of building a cutting-edge fab, and it would draw more power and so require more esoteric cooling. No one would pay 100x more; for that matter, few would pay 10x more, and ultimately Intel has to make products that people will buy, not just products that are the absolute best that is feasible. If no one buys the product, then Intel is out of business.

Secondly, if you think the cadence of releases for CPUs and GPUs reflects the design time, you are sorely mistaken. I guarantee you that nearly every fabless semiconductor company has a roadmap that looks out at least five years, with multiple projects going in parallel at different stages of development. For a company like Intel, which does process research as well, it might look as far ahead as 10 years. These designs aren't turnkey; you don't just hit a button that makes them faster and spits out a design.

What you are asking is like asking why you can't have a car that gets 150 MPG, does 0-60 in 1 second, guarantees the complete safety of all passengers no matter the severity of the accident, and goes 500k miles before having any breakdowns or maintenance. Oh, and it has to be cheap.