Originally posted by: ronnn
Back to the OP, Beyond3d seems to suggest that the R520 may be a bit disappointing.
Link
Originally posted by: MisterChief
Since when has Beyond3d had any accurate info? JK!
Originally posted by: Rollo
Errr, it's nice you wrote all that, RussianSensation, but I was talking about buying two 6800GTs or 6800 Ultras now.
What performance upgrade offers more framerate increase to a high end machine?
Originally posted by: ribbon13
HDMI to VGA adapters would be prohibitively expensive. That would actually require a power supply, a DSP, and DACs. DVI-I still carries an analog video signal; HDMI has none.
:thumbsup:
Originally posted by: sharkeeper
Too bad this thread has become another SLI vs. non-SLI testosterone match. :|
SLI has its merits. Of course, when GPU cycles pick up to a six-month update cadence and parts become plentiful once again (if ever!), SLI may not be as attractive as it is now. GPUs will go dual core, and boards will carry more than one, etc. Things will only get better and faster. Let's hope the game developers can keep up and actually use this technology!
Multiple GPUs is easy, as what GPUs do is basically the same operations over and over, so it is basically infinitely parallelisable, whereas CPU code is different.
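A minimal C++ sketch of that point (the shade() function and frame size are made up for illustration): every pixel's result depends only on its own input, so a frame splits cleanly across any number of workers or GPUs, while typical game logic is a chain where each step needs the previous step's result.
[code]
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical per-pixel operation: output depends only on its own input.
static uint32_t shade(uint32_t texel) { return texel ^ 0x00FF00FFu; }

int main() {
    std::vector<uint32_t> frame(1024 * 768, 0x12345678u);

    // GPU-style work: no pixel depends on another, so the frame can be cut
    // into as many slices as there are workers, with no coordination needed.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;
    std::vector<std::thread> pool;
    size_t slice = frame.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * slice;
        size_t end = (w + 1 == workers) ? frame.size() : begin + slice;
        pool.emplace_back([&frame, begin, end] {
            for (size_t i = begin; i < end; ++i)
                frame[i] = shade(frame[i]);
        });
    }
    for (auto& t : pool) t.join();

    // CPU-style logic, by contrast, is often a dependent chain: step N
    // cannot start until step N-1 has finished, so it resists slicing.
    long state = 0;
    for (int step = 0; step < 100; ++step)
        state = state * 31 + step;
    (void)state;
    return 0;
}
[/code]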
Originally posted by: IamTHEsnake
I keep hearing about the hassle of developing software for multiple cores, whether it be for GPUs or CPUs. What makes it so hard?
What in the world are you talking about? You're totally confusing a set of high-level API specs and features with low-level shader pipeline features. Some of the more important hardware features that are going to be required for DXNext are fast GPU state save/restore/reset commands, as well as "fence" opcodes. Kind of hard to explain, but it's almost like read/write fence opcodes for disk cache coherence, except that in this case (AFAIK thus far), they will be used to intermingle software rendering operations with GPU rendering operations, esp. for the 3D layered, alpha-blended desktop UI in Longhorn.
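For what a "fence" buys you, here is a minimal C++ analogy (the GPU-side opcodes described above aren't publicly documented, so this only shows the CPU-side concept): a release/acquire fence pair guarantees that everything written before the fence is visible to the other side before anything after it.
[code]
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                  // plain data being handed off
std::atomic<bool> ready{false};   // flag the fences are paired around

void producer() {
    payload = 42;                                         // write the data
    std::atomic_thread_fence(std::memory_order_release);  // no earlier write
                                                          // may drift past this
    ready.store(true, std::memory_order_relaxed);
}

void consumer() {
    while (!ready.load(std::memory_order_relaxed)) {}     // wait for the flag
    std::atomic_thread_fence(std::memory_order_acquire);  // pairs with release
    assert(payload == 42);  // guaranteed visible once both fences have run
}

int main() {
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
    return 0;
}
[/code]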
Originally posted by: housecat
This product is too expensive and has too much advanced technology.
Why DXNext support when we barely have DX9C technology??
You are paying for technology that will be very slow when DXNext games are released!
And there probably won't be any visual difference from DX9C to DXNext, and anything that can be done under DXNext can be done under DX9C with more passes.
Are you suggesting that ATI is totally stupid, and cannot learn from NV's first-gen SM3.0-capable GPU parts, such that when ATI's SM3.0 parts are finally released, they somehow will not be better than NV's first-gen SM3.0 parts? I doubt that; ATI generally plans its hardware features carefully for maximal impact while minimizing additional costs. (One reason they chose the 24-bit FP mode instead of NV's 32-bit is that it was cheaper, faster, and entirely appropriate at the time for the games on the market. NV wanted to push for OpenEXR and HDR rendering instead, but it cost them dearly: they had to incur a negative performance/IQ tradeoff either way, because they only implemented 16- and 32-bit FP mode support.) ATI is smarter than that.
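Rough numbers behind the 24-bit-versus-32-bit point: a float format's relative precision is set by its mantissa width, roughly 2^-(mantissa bits) per rounding. Assuming the commonly cited layouts (FP16 with a 10-bit mantissa, ATI's FP24 with 16, FP32 with 23), a quick C++ sketch of the arithmetic:
[code]
#include <cmath>
#include <cstdio>

// Machine epsilon for a binary float format: 2^-(mantissa bits).
static double eps(int mantissa_bits) { return std::ldexp(1.0, -mantissa_bits); }

int main() {
    std::printf("FP16 (10-bit mantissa): eps = %.3g\n", eps(10));  // ~9.8e-4
    std::printf("FP24 (16-bit mantissa): eps = %.3g\n", eps(16));  // ~1.5e-5
    std::printf("FP32 (23-bit mantissa): eps = %.3g\n", eps(23));  // ~1.2e-7
    // FP24 is ~64x tighter than FP16 but ~128x looser than FP32;
    // plenty for DX9-era shaders, at a fraction of FP32's transistor cost.
    return 0;
}
[/code]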
Originally posted by: housecat
I'd suggest sticking with older technology, because when this card can use its features, it will be too slow.
Actually, the logical programming model is no different; the difference is in the performance/space tradeoff of the CPU die itself, whether or not the CPU implements in-order or OOO execution pipelines. But programming for SMT/SMP/multi-core is definitely going to be a completely different world for some game devs. Early first-gen titles will likely be ports that only utilize a single thread/core (and perhaps one more for handling I/O and the system libraries). It would be similar to many games for the ill-fated, heavily multi-CPU Atari Jaguar system. Although it contained several 64-bit pieces of hardware, a pair of 32-bit RISC DSP chips, and a standard boring old 68000 "control" CPU, most of the games released for it were ports of Amiga games, and the game code ran entirely on the 68000, the slowest CPU in the machine, only because it was the easiest way to program games for the system.
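A bare-bones C++ sketch of that "first-gen port" pattern (update() and pumpIO() are stand-ins, not any real engine's API): all game logic stays on one thread, a second thread does nothing but I/O, and any remaining cores sit idle.
[code]
#include <atomic>
#include <thread>

std::atomic<bool> running{true};

void update(int frame) { /* all game logic for one frame */ (void)frame; }
void pumpIO() { /* poll input, stream assets, touch system libraries */ }

int main() {
    // One sidecar thread for I/O; everything else is single-threaded.
    std::thread io([] { while (running.load()) pumpIO(); });

    for (int frame = 0; frame < 1000; ++frame)
        update(frame);   // the main loop never touches the other cores

    running.store(false);
    io.join();
    return 0;
}
[/code]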
Originally posted by: Lonyo
Not a walk in the park?
Understatement of the century.
Not only do they have to code for multiple cores with the next-gen consoles, they also have to do in-order programming, which will be a pain compared to out-of-order (on PCs).
Originally posted by: MisterChief
Originally posted by: ronnn
Back to the OP, Beyond3d seems to suggest that the R520 may be a bit disappointing.
Link
Since when has Beyond3d had any accurate info? JK!
Originally posted by: Regs
Maybe then HL2 can release the SM3 patch.
You thought HL2 wasn't really taxing? Hah, just wait.
Originally posted by: housecat
This product is too expensive and has too much advanced technology.
Why DXNext support when we barely have DX9C technology??
You are paying for technology that will be very slow when DXNext games are released!
And there probably won't be any visual difference from DX9C to DXNext, and anything that can be done under DXNext can be done under DX9C with more passes.
I'd suggest sticking with older technology, because when this card can use its features, it will be too slow.
/end play on modern-day braindead ATI fanboy rant on superior technology