zinfamous
No Lifer
- Jul 12, 2006
- 111,994
- 31,557
- 146
out of stock
well, it was in stock for at least the ~15 minutes that I first checked it, to when I came back to link it in this thread.
All hardware pipelines have triggers and efficiency redirects dependent on data flows to cut down on execution time.
It appears to me he is missing the point big time.
From https://forum.beyond3d.com/posts/1998144/
If a software defined pipeline can be achieved on Vega, that's revolutionary.
The pipeline could be configured dynamically and it could be much more efficient use of the HW.
This extremely different approach requires SW to catch up with the HW.
Thanks, but I'm not interested in those games (listed as $129 MSRP). I wanted the standalone card (for both the 64 and the 56). I just lost interest in picking one up because they're not worth the inflated prices.
Newegg currently still has the 56 w/ Wolfenstein and Prey for $499
https://www.newegg.com/Product/ComboBundleDetails.aspx?ItemList=Combo.3623491
This is because Nvidia's is a software implementation. AMD does not have to give you any documentation on HBCC, because it is HBCC's job to manage all the data for you. With Nvidia's approach you have to program specifically for specific borders within the memory controller. With AMD's approach, you don't have to, because it is the memory controller's job.
I love that people who are supporting Nvidia are liking your post, but that shows your, and their, lack of understanding of what you are commenting on.
You get mixed results with HBCC for two reasons. In the current state of the drivers you are limited to a 64 GB page segment as a maximum. It will be the full 512 TB of data indexed when AMD fixes the drivers and allows the full capability.
The second reason is simple: software is not designed with HBCC in mind. In the current implementation you had to manage memory yourself. HBCC's job, as I have written, is to manage it for you.
I'm sorry, but you are showing a complete lack of understanding of what you are talking about.
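As a toy illustration of what "the controller manages the data for you" means: the sketch below is a generic LRU paging model, not AMD's actual HBCC implementation; the class name, page sizes, and access pattern are all made up for illustration.

```python
from collections import OrderedDict

class ToyCacheController:
    """Toy model of automatic paging: a small fast pool backed by a
    large slow pool, with LRU eviction handled by the controller
    instead of the application. Purely illustrative."""

    def __init__(self, fast_capacity_pages):
        self.capacity = fast_capacity_pages
        self.fast = OrderedDict()   # page id -> data, kept in LRU order
        self.misses = 0

    def access(self, page, slow_storage):
        if page in self.fast:
            self.fast.move_to_end(page)        # mark as recently used
        else:
            self.misses += 1
            if len(self.fast) >= self.capacity:
                self.fast.popitem(last=False)  # evict least recently used
            self.fast[page] = slow_storage[page]
        return self.fast[page]

# The application just calls access(); migration is the controller's job.
slow = {p: f"data-{p}" for p in range(8)}          # pretend: system RAM / disk
ctrl = ToyCacheController(fast_capacity_pages=4)   # pretend: local VRAM
for p in [0, 1, 2, 3, 0, 4, 5, 0, 1]:
    ctrl.access(p, slow)
print(ctrl.misses)  # cold misses plus capacity evictions
```

The point of the sketch is only that the caller never decides what to evict or when; whether that generalizes well to volatile game working sets is exactly what the thread is arguing about.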
Yeah, Newegg is going to be fishing price points all day after they sold 10 RX Vega reference cards for MSRP this morning. They did the same thing with Vega 64. Every hour a new bundle was added to the site that got worse and worse in terms of pricing. Making my rounds on various forums...
well, it was in stock for at least the ~15 minutes that I first checked it, to when I came back to link it in this thread.
There are no lines to be read through until AMD demonstrates it actually functions and performs in any notable fashion on consumer RX Vega. If you've managed to figure out how to use Google, you would know that reviewers have enabled HBCC and have tried to figure out whether it works and how well it performs. Sometimes it results in the same performance, sometimes worse.
1.) Paging memory has existed for a long time, so excuse me for not bending over backwards because the Radeon group is relabeling it with non-industry-standard language. This is what an education does for you: it tempers your outlandish, dream-like assumptions about marketing slides.
2.) Yes, if you've managed to read between the lines of my comments, my particular focus and use case is compute. So if HBCC is only available to the GPU pipeline, it's useless to me. Has Radeon detailed this? Nope, of course not. Also, if you've managed to read between the lines on what this feature is for, it's mainly for compute flows with large, consistent, contiguous, predictable, and pinnable data sets, not for volatile gaming data sets, and especially not in real-time graphics generation pushing north of 60 fps, where the PCI-E latency would cause issues with respect to whatever data you could predict and page across in time. But hey, formal education again: thinking about the engineering details and technicalities. Something you obviously aren't thinking about.
3.) Yeah, read between the lines. That's what pinned memory is for: compute. That's what it's used for on the PRO SSG (8K video editing), where you have huge asset files that benefit from large pages and can fit in consecutive, contiguous memory allocations. Do you have a formal education in this area? It seems you obviously do not.
4.) AMD's RTG group hasn't brought a single thing to the masses on consumer RX Vega, as it hasn't been proven what in the world this is beyond the PRO SSG w/ NVMe storage, which is specifically designed for compute tasks, not gaming. Instead, what this is, is cost-cutting on die cuts: they've included a gimped feature in consumer cards and are literally making up its use case on the fly.
5.) Kobe doesn't suck, but this highlights your distaste for people who are skillful in their craft, including me.
This is my last comment on the matter to you, because you are wasting everybody's time with your lack of understanding about HBCC.
Who do you think you're fooling w/ your commentary? LOL.
HBCC was developed for the PRO SSG, where the data to be migrated are large asset files, managed further by NVMe on the GPU. These are compute flows, not graphics.
Also, making stuff up to make a point (false, but anyway) is beneath any educated person.
All those animations to emphasize statements in these new-member posts remind me too much of the WCCFTECH comments section. Maybe I live in a different world, but no educated person I know behaves that way. Perhaps I should expand my social circles, or maybe not.
I posted this same demo earlier and it was dismissed by the same poster. If accepted it would refute the entire series of posts, so I guess we can't have that, can we.
While it may not have been designed with gaming in mind, they did demo it running Deus Ex with a Vega locked to only 2 GB of VRAM, compared having it on and off, and it was a big improvement.
Also, reviewers had a hard time testing it because they couldn't get any games to actually use up all 8 GB of VRAM on board to make it use the HBCC.
Who do you think you're fooling w/ your commentary? LOL.
HBCC was developed for the PRO SSG, where the data to be migrated are large asset files, managed further by NVMe on the GPU. These are compute flows, not graphics.
Paging already exists in the GPU pipeline. How do you think game data works?
AMD's supposedly going to fully automate this? Sounds like a #@%* show in the making. No comment about it being proprietary either. God help the dev whose code base HBCC takes a dump on because AMD didn't consider their corner case.
Dev : Hey, my game is performing like crap on your latest GPU. Is there any way I can fix it?
AMD : Oh sorry, we can't fix that it's automated.
I need to stop wasting my time in forums w/ uneducated people who feel the need to parrot the same stuff over and over and resort to insulting anyone who's educated and speaks critically, because they lack the ability to have a discussion on their level.
64 GB page limits, as if consumer (non-workstation) motherboards even support more than that.
512TB .....
No discussion of latency.
> mfw a company claims they've invented the second coming: system-wide pageable memory across all of your system resources.
Q : What about the latency of each component?
A : Yeah, about that....
Q : You say it's automated; how does that work?
A : When you're listening to Led Zeppelin MP3s, it will automatically use an NN to determine that you want them on your GPU, evict crucial data from HBM2, and replace it with the full library of Led Zeppelin albums you have on your 3.5" 5400 RPM HD. As there aren't enough DMA controllers to service multiple flows, it will just stall out transfers until it's done.
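On the latency point, a rough back-of-envelope sketch: the bandwidth figures below are commonly quoted ballpark numbers (not measurements), and the 256 MB burst is a hypothetical workload chosen only to show the scale of the problem at 60 fps.

```python
# Back-of-envelope: why paging over PCIe is tight in a 60 fps frame budget.
# Assumed round figures: PCIe 3.0 x16 ~ 16 GB/s theoretical,
# Vega 64 HBM2 ~ 484 GB/s peak; a 60 fps frame budget is ~16.7 ms.
PCIE_GBPS = 16.0
HBM2_GBPS = 484.0
FRAME_BUDGET_MS = 1000.0 / 60.0

def transfer_ms(megabytes, gbps):
    """Time in milliseconds to move `megabytes` at `gbps` GB/s."""
    return megabytes / 1024.0 / gbps * 1000.0

page_in_mb = 256.0  # hypothetical burst of assets paged in mid-frame
pcie_ms = transfer_ms(page_in_mb, PCIE_GBPS)
hbm_ms = transfer_ms(page_in_mb, HBM2_GBPS)
print(f"PCIe: {pcie_ms:.2f} ms, HBM2: {hbm_ms:.3f} ms, budget: {FRAME_BUDGET_MS:.1f} ms")
```

Even at theoretical bandwidth, a 256 MB page-in over PCIe eats nearly the whole frame budget, while the same move within HBM2 is well under a millisecond, which is the gap the prediction/prefetch logic would have to hide.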
You're referring to:
While it may not have been designed with gaming in mind, they did demo it running Deus Ex with a Vega locked to only 2 GB of VRAM, compared having it on and off, and it was a big improvement.
Also, reviewers had a hard time testing it because they couldn't get any games to actually use up all 8 GB of VRAM on board to make it use the HBCC.
Remind other participants to do the same and I will ensure I am not dragged down to the level of insults when I am insulted. That aside, I ensure in every post to speak on technical, constructive issues, which have not been addressed in follow-up commentary that has been insulting and lacking any constructive commentary beyond that.
Okay, here's a message for you, brainlet -- knock it off with the lame animated GIFs, and start acting like an adult in this forum -- which also includes engaging in discussion and debate here constructively and free from the obnoxious, hectoring tone your posts are starting to take.
This will be your only warning.
-- stahlhart
I don't believe anyone in their right mind believes there's going to be a magic driver to increase performance by a lot. We do have confirmation, however, that certain features are still disabled, so from a technical perspective we're definitely still interested in possible gains in certain scenarios from features like primitive shaders combined with DSBR.
There isn't going to be a magical driver that comes out tomorrow that improves performance by a large margin. If something like that were possible around the corner, AMD would have pushed it out for launch. They clocked this thing so high, with so much voltage, it's insane; they had to in order to try to match Nvidia performance. If they could have done that in software with some magical driver release, don't you think they would have done that instead of driving their power usage vs Nvidia into the stratosphere and making the worst first impression possible?
Who do you think you're fooling w/ your commentary? LOL.
HBCC was developed for the PRO SSG, where the data to be migrated are large asset files, managed further by NVMe on the GPU. These are compute flows, not graphics.
Paging already exists in the GPU pipeline. How do you think game data works?
AMD's supposedly going to fully automate this? Sounds like a #@%* show in the making. No comment about it being proprietary either. God help the dev whose code base HBCC takes a dump on because AMD didn't consider their corner case.
Dev : Hey, my game is performing like crap on your latest GPU. Is there any way I can fix it?
AMD : Oh sorry, we can't fix that it's automated.
I need to stop wasting my time in forums w/ uneducated people who feel the need to parrot the same stuff over and over and resort to insults when they lack the ability to discuss things at a higher or technical level.
64 GB page limits, as if the average motherboard even supports more than that.
512TB .....
No discussion of latency.
> mfw a company claims they've invented the second coming: system-wide pageable memory across all of your system resources.
Q : What about the latency of each component?
A : Yeah, about that....
Q : You say it's automated; how does that work?
A : When you're listening to Led Zeppelin MP3s, it will automatically use an NN to determine that you want them on your GPU, evict crucial data from HBM2, and replace it with the full library of Led Zeppelin albums you have on your 3.5" 5400 RPM HD. As there aren't enough DMA controllers to service multiple flows, it will just stall out transfers until it's done.
#GAMECHANGER #BRAINLETS_DECLARE_2nd_COMING and challenge anyone with technical knowledge
Just caught up on this thread as I haven't been on in a few days.
Are we seriously back to the wait-for-AMD-to-fix-the-drivers narrative? Seriously? So we went back in time to 2-3 weeks before launch, when everyone thought magical drivers were going to solve the performance issue?
I expect AMD to improve drivers over time; they always do vs Nvidia. But that's going to be 5-25% over the next 3-4 years, like it always is.
There isn't going to be a magical driver that comes out tomorrow that improves performance by a large margin. If something like that were possible around the corner, AMD would have pushed it out for launch. They clocked this thing so high, with so much voltage, it's insane; they had to in order to try to match Nvidia performance. If they could have done that in software with some magical driver release, don't you think they would have done that instead of driving their power usage vs Nvidia into the stratosphere and making the worst first impression possible?
I don't fully agree. We might get 15% overnight with an enabled driver, together with lower power consumption. The AVFS control system appears to be screwed up as of now. If working properly it should optimize voltages on the fly; we should get the benefits of manual undervolting automatically. A 15% increase in performance simultaneously with a 15% reduction in power transforms both performance and the perf/watt metric.
Just caught up on this thread as I haven't been on in a few days.
Are we seriously back to the wait-for-AMD-to-fix-the-drivers narrative? Seriously? So we went back in time to 2-3 weeks before launch, when everyone thought magical drivers were going to solve the performance issue?
I expect AMD to improve drivers over time; they always do vs Nvidia. But that's going to be 5-25% over the next 3-4 years, like it always is.
There isn't going to be a magical driver that comes out tomorrow that improves performance by a large margin. If something like that were possible around the corner, AMD would have pushed it out for launch. They clocked this thing so high, with so much voltage, it's insane; they had to in order to try to match Nvidia performance. If they could have done that in software with some magical driver release, don't you think they would have done that instead of driving their power usage vs Nvidia into the stratosphere and making the worst first impression possible?
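A quick sanity check of the 15%/15% arithmetic (the percentages are the poster's hypothetical scenario, not measured data): the two changes compound in perf/watt, so the combined swing is larger than either alone.

```python
# Hypothetical scenario from the post: +15% performance at -15% power.
perf_gain = 1.15   # performance multiplier
power_cut = 0.85   # power multiplier
perf_per_watt = perf_gain / power_cut
print(f"perf/watt multiplier: {perf_per_watt:.3f}")  # ~1.353, i.e. ~35% better
```

That compounding is why a simultaneous clock/voltage fix would move the efficiency comparison much more than the raw benchmark numbers suggest.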
It depends on how much the features that are not yet apparent in the drivers are affecting game performance.
I don't fully agree. We might get 15% overnight with an enabled driver, together with lower power consumption. The AVFS control system appears to be screwed up as of now. If working properly it should optimize voltages on the fly; we should get the benefits of manual undervolting automatically. A 15% increase in performance simultaneously with a 15% reduction in power transforms both performance and the perf/watt metric.
To consider:
Ryan Smith: https://forum.beyond3d.com/posts/1997699/
"Quick note on primitive shaders from my end: I had a chat with AMD PR a bit ago to clear up the earlier confusion. Primitive shaders are definitely, absolutely, 100% not enabled in any current public drivers.
The manual developer API is not ready, and the automatic feature to have the driver invoke them on its own is not enabled."
Finally for the HBCC issue, at least one site found this.
https://translate.google.tt/transla...ls/HBCC-Gaming-Benchmark-1236099/&prev=search
"Let's look at the benchmarks. We took on several titles from the current PCGH benchmark course, in some cases with different settings, as for example with The Talos Principle, which we also tried with demanding but fanciful 4x supersample anti-aliasing. This was followed by a test from our old course: Metro Last Light Redux (see benchmark how-to) with our standard settings for the 'The Crossing' benchmark: not the integrated performance test, but a real gameplay scene. This was all the more surprising, as the Radeon RX Vega 64 was able to gain an impressive 11 or 14%, depending on the driver used."
For a non-feature, that sure seems rather good: an existing game tested by a third party showing solid improvement.
To end this: yes, AMD has released without the software fully optimized. If they can extract an additional 5-25% over the next few years, it should be in addition to what I wrote.
Agreed. I put 15% as a not-too-ridiculous number; a possibility of a larger improvement exists depending on specifics.
It depends on how much the features that are not yet apparent in the drivers are affecting game performance.
Currently Vega is using the Native Pipeline. What will happen when it uses the NGG Fast Path?
