Was this reported anywhere in the press? I know at least one website used a GTX 960 but they didn't report multiple crashes.
Here is AMD's GCN (1.0) whitepaper.
Is GCN 1.2 the same design as GCN 1.0?
According to your curious methodology, we could use Kepler's whitepapers to explain how Maxwell works, or any previous uarch to explain the current uarch iteration.
The pipeline is (basically) the same. There may be tweaks, but at a high level it's the same functionality (GCN 1.2 would have enhanced certain things but would not have changed major properties).
Changing the pipeline is a major, major arch change.
GCN 1.0 -> 1.2: minor evolution
Kepler -> Maxwell: major change, though I would bet there are a lot of similarities in how data is managed. Nvidia is very tightlipped about this so I really don't know.
Out-of-order execution for a GPU pipeline simply doesn't make sense. It's very power hungry, and thus of limited use in a device that scales well with die size. GPUs are designed for throughput, so it makes sense to spend the budget on more execution units rather than on die- and power-hungry OoO resources.
But what is the evolution? Isn't it precisely the ACEs?
Ultimately the differences between GCN 1.0 and GCN 1.1 are extremely minor, but they are real.
So they published nothing, but you're still saying that there's a big change. I say that the big change is GPU frequency, which is proven, and that the rest is minor.
I'm not a GPU freak, but from what I read from Zlatan it's not the pipeline that is out of order but the queue management; indeed I wouldn't expect you to be accurate...
http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/5
http://www.anandtech.com/show/7457/the-radeon-r9-290x-review/2
What changes were made to that structure? Not much, it seems, besides the hard limit.
One effect of having the ACEs is that GCN has a limited ability to execute tasks out of order. As we mentioned previously GCN is an in-order architecture, and the instruction stream on a wavefront cannot be reordered. However the ACEs can prioritize and reprioritize tasks, allowing tasks to be completed in a different order than they’re received. This allows GCN to free up the resources those tasks were using as early as possible rather than having the task consuming resources for an extended period of time in a nearly-finished state. This is not significantly different from how modern in-order CPUs (Atom, ARM A8, etc) handle multi-tasking.
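To make the distinction in that quote concrete, here is a minimal toy sketch (my own illustration, not AMD code; all names are hypothetical) of tasks arriving in submission order but retiring out of order, because a prioritizing scheduler lets a nearly-finished task complete and free its resources before an earlier, longer-running one:

```python
import heapq

def retire_order(tasks):
    """tasks: list of (name, remaining_work) in submission order.
    Returns names in the order a prioritizing scheduler retires them."""
    # Prioritize tasks closest to completion, so their resources free up early.
    heap = [(work, i, name) for i, (name, work) in enumerate(tasks)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# "A" was submitted first but has the most work left, so it retires last:
submitted = [("A", 5), ("B", 1), ("C", 3)]
print(retire_order(submitted))  # ['B', 'C', 'A'] -- not submission order
```

Within each task the instruction stream stays in order; only the completion order of whole tasks differs from the arrival order, which is the limited "out of order" behavior the article describes.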
It’s no surprise then that one of the first things we find on AMD’s list of features for the GCN 1.2 instruction set is “improved compute task scheduling”.
But what is the evolution? Isn't it precisely the ACEs?
Why use Anandtech's GCN 1.1 review when you can use Anandtech's GCN 1.2 review instead?
http://www.anandtech.com/show/8460/amd-radeon-r9-285-review/2
I hope this was a mistake and not a deliberate attempt at misinformation.
I should have linked that. There are changes; however, these are small changes compared to Kepler -> Maxwell or VLIW4 -> GCN.
Ultimately, though, that is about as informative as Intel's continual "branch prediction improvements" for each iteration of their CPUs. It likely refers to the fact that they went from 2 ACE units to 8 (which I think everyone is aware of).
They do have programming guides for all their uarchs. My above comment wasn't aimed at you in particular.
I agree that they aren't giving out a lot of information. We do know that there are other changes in preemption though from what Zlatan has said previously. GCN 1.2 supports finer grained preemption than previous GCN, and Nvidia has stated that Pascal will have fine-grained preemption as well. I have seen that personally on a slide and I have no reason to doubt what Zlatan says about it.
They do have programming guides for all their uarchs.
Here is AMD's GCN (1.0) whitepaper.
The CU front-end can decode and issue seven different types of instructions: branches, scalar ALU or memory, vector ALU, vector memory, local data share, global data share or export, and special instructions. Only one instruction of each type can be issued at a time per SIMD, to avoid oversubscribing the execution pipelines. To preserve in-order execution, each instruction must also come from a different wavefront; with 10 wavefronts for each SIMD, there are typically many available to choose from. Beyond these two restrictions, any mix is allowed, giving the compiler plenty of freedom to issue instructions for execution.
https://www.amd.com/Documents/GCN_Architecture_whitepaper.pdf
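The two issue restrictions quoted above can be sketched as a toy selection function (my own simplified model, not AMD code; names are hypothetical): per cycle a SIMD may issue at most one instruction of each type, and every issued instruction must come from a different wavefront, which keeps each wavefront's own stream in order.

```python
def pick_issue(wavefronts):
    """wavefronts: dict mapping wavefront id -> type of its next (oldest)
    ready instruction. Returns {type: wavefront_id} chosen this cycle."""
    issued = {}
    for wf, itype in sorted(wavefronts.items()):
        # Restriction 1: at most one instruction of each type per cycle.
        # Restriction 2: each issued instruction comes from a different wavefront.
        if itype not in issued and wf not in issued.values():
            issued[itype] = wf
    return issued

# Wavefronts 0 and 2 both want vector ALU, so only one of them issues:
ready = {0: "vector_alu", 1: "scalar", 2: "vector_alu", 3: "branch"}
print(pick_issue(ready))  # {'vector_alu': 0, 'scalar': 1, 'branch': 3}
```

Because only the *oldest* instruction of each wavefront is ever a candidate, execution within a wavefront stays strictly in order, matching the whitepaper's description.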
The pipeline is in-order.
They published lots of things.
Many changes happened. You can read up and educate yourself.
Why should I educate myself?
To post constant bashing of a single brand?
I need no such education, because there's a specialist hanging around here who knows better than me, and better than you too, despite your denials when addressing him. At some point you should apply your own advice; all your previous posts inaccurately quote the info provided by AMD...
Why should I educate myself?
There might be a misunderstanding here. Every multiprocessor uses in-order logic in today's GPUs. GCN just uses out-of-order logic for the compute engines (ACEs).
I mean, I know it's not possible, but I feel a statement like that really should be bannable on a forum like this. Pretty much defeats the purpose. Again, obviously not possible, but... Lol... Wow. He just made it to my ignore list at least. I really just read that... didn't I...