Why are we still building giant video cards and hardware machines to run our games and software on? For example, Battlefield 3 needs a pretty new and modern video card to run decently. Shouldn't it be the opposite? Shouldn't programmers be making awesome games with spectacular graphics that are able to run on really old hardware?
They do, just not games like BF3. It's like complaining that music sucks, today, because 99% of pop is pablum.
Wouldn't that be more ideal and environmentally friendly? It seems to me that not enough effort is put into this; instead, economics trumps efficiency, and the urge to make a buck keeps us needing to buy bigger and better hardware to run over-bloated games. It's kind of a paradox, because the computer hardware business also funds innovation of better technology, but at the same time you would think there would have been some sort of technological breakthrough by now, kind of like computing and hardware's version of cold fusion, yet nothing. Maybe technology is being held back, I don't know.
Programming is holding a lot back, including the hardware, but good programmers don't get to make those decisions, so :| c'est la vie. Another problem there is that good ways to serve smaller devs, letting them get more features and better artwork for a given price and time, might not be the best fit for the big guys with Hollywood-sized budgets, yet it's the latter who drive the creation and improvements in development tools and hardware.
The closest effort I've seen to the correct computing model, as I see it in the vision of a naturally advanced society, would be the http://www.raspberrypi.org/ project, and that group is not-for-profit. So is a not-for-profit organization the only way for technology to evolve naturally and efficiently with minimal resources?
No. There were already the stamps, the PICs, then the Atmels. On the side, there are the Sheeva, Beagleboard, and Pandaboard. The RPi pretty much blows an Arduino out of the water, but what needed a non-profit was the fact that it's an answer (high SoC inventory?) in search of a question. There aren't tons of people who want a product like a Raspberry Pi. There are tons of tinkerers who look at it and go, "that would be damn cool to do something with...but what?" The RPi was not made in a vacuum. What makes it special is that it costs what an Arduino, PIC system, etc., does, while having nearly the capabilities (mainly lacking AIB options) of industrial SBCs.
It seems to me that capital is holding innovation back more than it's helping, mimicking all aspects of our wasteful, inefficient society as a whole. It seems to me that big business does not want efficient computing, because they would not be able to make any money off of it, and also because it would probably evolve on its own without the need for the newest and fastest hardware around.
That much is true, but it's less about efficient computing than efficient technology in general. Most software gets way too complex, and there is a massive sub-industry in IT to help maintain and increase that complexity by leaps and bounds in the name of fixing very minor problems that are best worked around each time they're encountered (ORM and persistence frameworks being poster children for this phenomenon).
I've long thought the way that the big hardware companies slowly dole out the new technology is suspect. What is stopping Intel from making much, much faster chips tomorrow other than them having an effective duopoly in the market?
x86's success. Seriously, a better RISC could be made, and Intel could make it faster than any x86 (make the RISC good at everything x86 is good at, then use the new ISA to take care of x86's weaknesses). But they'd have to make a virtual x86 layer that was 100% compatible, and the design effort for such a RISC CPU to actually be faster than their best x86 CPUs would cost more than the next generation of improvements on x86 CPUs...so why not just make better native x86 CPUs?
So it's because it costs too much and therefore has to be done in steps? I dunno. I just don't buy it.
Then become an academic researcher, where you can hand-wave away the fact that people have to do imperfect work to make ideas happen.
A perfect design from the start, in a field that is full of unknowns, is not going to happen. CPU and GPU development from the likes of ARM, AMD, IBM, and Intel is cutting-edge stuff. In each major generation they are trying combinations of features that haven't been achieved before in the kinds of contexts they're using them in, only predicted possible (though in some cases, like the Pentium Pro's effective OOOE, thought too difficult to achieve!). Every time, there will be something less than optimal, and fixing that takes time and effort, along with adding new features.
They can't make perfect CPUs in one shot any more than perfect automobiles. New innovations keep coming, and every design needs some tweaks every year.
Major innovations, these days, come in the form of shrinking parts of processors, gluing them and their peripherals together more efficiently, and running them at lower power levels.
Don't you agree that there is a huge disincentive for development of powerful, cheap, ubiquitous hardware, seeing as how that would put the entire industry out of business? I see no reason, other than the planned obsolescence that I'm talking about, that we shouldn't have real-time photo-realistic 3d rendering on cheap hardware. Is there some sort of barrier in the physics of it, in the same way that we can't have free energy?
More or less. There is a barrier in mathematics: it's not something that is easy to do. Chasing photo-realism is like trying to win a race by running half the remaining distance each generation. You'll eventually get close enough, but once you reach the 50% mark, the generations still needed far outnumber the ones it took to get there.
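To put rough numbers on that halving picture (the figures are purely illustrative, not measured from any real hardware), here's a tiny loop counting how many generations it takes to close the remaining gap if each one only halves it:

#include <stdio.h>

/* Illustrative only: if each hardware generation closes half of the
 * remaining gap to "photo-realism", count how many generations it
 * takes to get within 1% of the goal. */
int main(void)
{
    double remaining = 1.0;   /* normalized distance still to cover */
    int generation = 0;

    while (remaining > 0.01) {
        remaining /= 2.0;     /* each generation halves what's left */
        generation++;
        printf("generation %2d: %.2f%% of the gap still open\n",
               generation, remaining * 100.0);
    }
    return 0;
}

Getting to the 50% mark takes one generation; getting from there to 99% takes six more, which is why the chase feels like it slows to a crawl even though every generation does the same relative amount of work.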
I know that's the general feeling, but there's no reason I can see that it should be that way. What is the physical constraint on more raw processing power? Technology is amazing these days... you'd think we would have something.
We do. Nobody in the mass market wants it, though, because it doesn't suit their needs. Disruptive technology is not always better at enough of what it does to displace the tried-and-true technology.
Tilera and Renesas (68K and SuperH had a baby, they named it RX, and sent it to the finest school in town), for instance, have some outstanding hardware that can whoop some ass...for relative values of whooping.
But, really, the constraint is that memory is slow. A modern CPU with an IMC generally needs 50-200 cycles, depending, to get something from RAM. That can be a lot of time where nothing gets done. But if you slow the CPU down so that it's not so many cycles, you can't get easy ALU-bound or easy LSU-bound code to run fast. So you need a CPU that can stay many cycles ahead of memory for most problems, which have fairly limited concurrency. There's no free lunch to be had.

I do think a new dense ISA (16-, 21-*, or 24-bit instructions, no MIPS-like purity) made for superscalar OOO from the beginning, with some form of micro-threads on top of that (using them to replace the ILP extraction that keeps being tried and failed at), could exhibit some serious efficiency and scalability...but that'd be a major undertaking to try to come up with, and a big economic risk, even if its technical merits all worked out.
If main memory could be brought down to say, 20-50 cycles at 3GHz, we would see massive CPU improvements resulting from it.
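To put a rough number on that: 20-50 cycles at 3 GHz is only about 7-17 ns, while a round trip to DRAM on a typical desktop is several times longer. One crude way to see the latency wall yourself is a pointer-chasing loop, where every load depends on the previous one and the working set is much larger than the caches. This is only a sketch (the buffer size, stride, and iteration count are arbitrary, and the numbers vary a lot by machine), not a rigorous benchmark:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Rough sketch of a pointer-chasing latency test: each load depends on
 * the previous one, so the CPU can't overlap them, and with a buffer
 * much larger than the caches most loads go all the way to RAM. */
#define N (64 * 1024 * 1024 / sizeof(size_t))   /* ~64 MiB of indices */
#define STEPS 10000000L

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Build one big cycle by striding with an odd constant so the chain
     * wraps across pages; a randomly shuffled permutation would defeat
     * the prefetchers even more thoroughly. */
    size_t stride = 4099;   /* odd, so co-prime with the power-of-two N */
    size_t idx = 0;
    for (size_t i = 0; i < N; i++) {
        size_t nxt = (idx + stride) % N;
        next[idx] = nxt;
        idx = nxt;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    idx = 0;
    for (long i = 0; i < STEPS; i++)
        idx = next[idx];    /* dependent load chain: no overlap possible */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (idx=%zu)\n", ns / STEPS, idx);
    free(next);
    return 0;
}

On most desktops this prints something on the order of tens of nanoseconds per load; shrink the buffer so it fits in L1 or L2 and the same loop drops to a few nanoseconds, which is exactly the gap all that out-of-order machinery exists to hide.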
GPUs and other parallel-above-all-else processors don't have that problem, but they don't have tons of cache, or powerful schedulers, either.
* 3 in a 64-bit word, using 1 bit for something else, like maybe whether it's a scalar or vector bundle.
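To make that footnote concrete (the field layout below is purely hypothetical, just to show the bits add up): three 21-bit slots take 63 bits of a 64-bit word, leaving one bit free for a tag such as the scalar/vector distinction:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration of the footnote's packing: three 21-bit
 * instruction slots plus 1 spare tag bit fit exactly in 64 bits.
 * The layout is made up purely to show the arithmetic. */
#define SLOT_BITS 21
#define SLOT_MASK ((1ul << SLOT_BITS) - 1ul)   /* 0x1FFFFF */

static uint64_t pack(uint32_t a, uint32_t b, uint32_t c, int tag)
{
    return ((uint64_t)(tag & 1) << 63)
         | ((uint64_t)(c & SLOT_MASK) << 42)
         | ((uint64_t)(b & SLOT_MASK) << 21)
         |  (uint64_t)(a & SLOT_MASK);
}

int main(void)
{
    uint64_t bundle = pack(0x12345, 0x0ABCD, 0x1FFFF, 1);
    printf("bundle = 0x%016llx\n", (unsigned long long)bundle);
    printf("slot0 = 0x%05llx, slot1 = 0x%05llx, slot2 = 0x%05llx, tag = %llu\n",
           (unsigned long long)(bundle & SLOT_MASK),
           (unsigned long long)((bundle >> 21) & SLOT_MASK),
           (unsigned long long)((bundle >> 42) & SLOT_MASK),
           (unsigned long long)(bundle >> 63));
    return 0;
}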