
News Intel GPUs - we've given up on B770, where's Celestial already

Previously I speculated that DXVK could help DX9 performance and it appears it does. This video tests it on GTA IV and it runs about 3X faster with DXVK (going from about 30-40 FPS to 120+ FPS).

I think Intel should adopt DXVK as their official solution rather than using the Microsoft DX9 emulator.
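For context, DXVK is typically deployed per game by dropping its d3d9.dll next to the game's executable, so the loader picks it up instead of the system D3D9 runtime. Here is a hedged sketch of that layout; the directory names and the stub DLL are placeholders (in real use the DLL comes from a DXVK release archive and the target is the actual game folder):

```python
# Sketch of per-game DXVK deployment, simulated with placeholder files.
# Real use: extract a DXVK release and copy d3d9.dll beside the game's exe.
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
dxvk = root / "dxvk" / "x32"          # 32-bit build, for a 32-bit game like GTA IV
dxvk.mkdir(parents=True)
game = root / "GTA IV"                # stand-in for the real install folder
game.mkdir()
(dxvk / "d3d9.dll").write_bytes(b"stub")   # stand-in for the real translator DLL

shutil.copy2(dxvk / "d3d9.dll", game / "d3d9.dll")
print((game / "d3d9.dll").exists())   # the game would now load DXVK's d3d9
```

The same approach works for the D3D10/11 DLLs; which bitness folder applies depends on the game's executable.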
 
Previously I speculated that DXVK could help DX9 performance and it appears it does. This video tests it on GTA IV and it runs about 3X faster with DXVK (going from about 30-40 FPS to 120+ FPS).

I think Intel should adopt DXVK as their official solution rather than using the Microsoft DX9 emulator.
Now I'd like to know how DXVK does in DX11 games...
 
It would make sense for Intel to use introductory pricing on these cards, somewhere around $125-150, to remind customers that Intel is new to the discrete graphics card business. That way customers would be more tolerant and forgiving of the growing pains of Intel's graphics cards.
 
Previously I speculated that DXVK could help DX9 performance and it appears it does.

I've been wondering the same thing. It's nice to be vindicated at times.

I think Intel should adopt DXVK as their official solution rather than using the Microsoft DX9 emulator.

I completely agree. Emulation will always be slower than direct translation.

If that causes improvements to DXVK, so much the better. Having Intel-level resources behind it could really make a difference.
 
I think Intel should adopt DXVK as their official solution rather than using the Microsoft DX9 emulator.

Now I am just speculating, but the benefits may not be universal. They could selectively enable DXVK per game as they do game optimizations.

@moinmoin Meteor Lake is pretty much an iGPU version of low-end ARC. Of course they can't fit any higher-end ones in such a mobile form factor.
 
I thought that I had read somewhere that some of the copy-protection and anti-mod/anti-piracy modules like Denuvo have issues with DXVK, and that some games will ban you for using it.
 
I completely agree. Emulation will always be slower than direct translation.

If that causes improvements to DXVK, so much the better. Having Intel-level resources behind it could really make a difference.

I don't know if one is really emulation and the other translation. They are probably doing similar call translation; DXVK just seems faster. The DXVK team is really motivated to deliver good speed, while Microsoft is just offering a fallback.

Having Intel onboard benefits everyone in multiple ways: users get a faster solution, Intel will likely start submitting updates, and if it were the official solution, it could likely be incorporated without triggering anti-cheat code.
 
I thought that I had read somewhere that some of the copy-protection and anti-mod/anti-piracy modules like Denuvo have issues with DXVK, and that some games will ban you for using it.
All the more reason for Intel to pick it up officially so that developers stop doing that. 😛

I don't know if one is really emulation and the other translation. They are probably doing similar call translation; DXVK just seems faster. The DXVK team is really motivated to deliver good speed, while Microsoft is just offering a fallback.

Having Intel onboard benefits everyone in multiple ways: users get a faster solution, Intel will likely start submitting updates, and if it were the official solution, it could likely be incorporated without triggering anti-cheat code.
AFAIK both use essentially the same approach.

Microsoft, knowing its own frameworks, seems to have approached it as a one-and-done job, whereas DXVK, pushed by Valve for its Linux-based Steam Deck, is constantly being updated, extended, tweaked, and optimized for all possible games.

Under Windows the latter is what would need to happen in graphics drivers. Under Linux, graphics drivers don't take game-specific optimizations, so translators like DXVK are where those happen instead. It's a perfect fit for Intel's needs indeed; no idea why they thought Microsoft's stale approach was preferable.
 
But... isn't the guy who had ambitions to become Intel CEO very much upper management?!

Playing politics is a full time job after all.

Fully agree. He led the Radeon group. He was upper management already then, and upper management doesn't make designs; they manage, influence, and pass down their vision (or not). If a manager provides a bad work environment, the product will end up being bad regardless of the team working on it.
 
I thought that I had read somewhere that some of the copy-protection and anti-mod/anti-piracy modules like Denuvo have issues with DXVK, and that some games will ban you for using it.

Big oof on the banning, but maybe they could specifically "blacklist" certain games from the DXVK workaround and force them onto the emulation fallback.

Surely it's better to have a handful of games suck performance-wise than to have everything suck and force users to find workarounds to get your hardware functioning properly.
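The per-game allowlist/blacklist idea sketched above could look something like the following. This is purely illustrative; the function, list names, and game executables are all invented, and real drivers implement per-game profiles very differently:

```python
# Hypothetical per-game backend selection: use DXVK's D3D9->Vulkan translation
# where it is verified safe and fast, fall back to Microsoft's D3D9On12
# emulation layer everywhere else (all names here are made up).
DXVK_ALLOWLIST = {"gtaiv.exe"}            # games tested to run faster and clean
DENYLIST = {"protected_game.exe"}         # e.g. known anti-cheat conflicts

def choose_d3d9_backend(exe_name: str) -> str:
    exe = exe_name.lower()
    if exe in DENYLIST:
        return "d3d9on12"                 # conservative: Microsoft's fallback
    if exe in DXVK_ALLOWLIST:
        return "dxvk"                     # opt-in: the faster translation path
    return "d3d9on12"                     # default until a game is validated

print(choose_d3d9_backend("GTAIV.exe"))       # dxvk
print(choose_d3d9_backend("Unknown.exe"))     # d3d9on12
```

The default-to-fallback choice mirrors how vendors roll out game-specific optimizations: opt games in after validation rather than risking regressions everywhere.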
 
I believe that the reasoning is that using methods like DXVK can introduce pathways for users to cheat without hacking the game internally. Things like making walls semi-transparent and adding visual markers. They are rightly cagey.
 
Reddit users have been saying since day one that supplies of the ARC cards are low.

I don't know why that is. If you compare numbers sold for products that are out of stock versus ones that aren't, the latter consistently sell in higher numbers; I think it was 20-30% higher. Pat said in the earnings call that they don't expect to meet the target sales number for ARC.

So why the low supply of ARC cards? Remember, if customers can't buy ARC, they go for the alternative. That's why products in good supply sell in higher numbers.

Artificially capping supply hurts YOU (the manufacturer). Generally, sales numbers are highest in the first few days after launch, so have enough stock for the initial launch and ramp production down afterward.
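The launch-window argument can be put in numbers. A toy model (all figures invented, purely to illustrate the shape of the claim): demand decays quickly after launch, so any unit you can't supply in the first days is a sale lost to a competitor, not a sale deferred.

```python
# Toy model of launch-window sales under a supply cap. Buyers above the cap
# switch to an alternative product rather than waiting.
demand = [100, 80, 60, 40, 30, 25, 20]   # hypothetical units demanded per day
daily_cap = 50                           # hypothetical constrained supply

uncapped_sales = sum(demand)                           # everyone who wanted one
capped_sales = sum(min(d, daily_cap) for d in demand)  # what the cap allows

print(uncapped_sales, capped_sales)  # 355 265
```

Even with demand below the cap from day four onward, the cap costs a quarter of total sales in this sketch, which is why front-loading supply for launch matters more than steady-state output.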
 
Reddit users have been saying since day one that supplies of the ARC cards are low.

I don't know why that is. If you compare numbers sold for products that are out of stock versus ones that aren't, the latter consistently sell in higher numbers; I think it was 20-30% higher. Pat said in the earnings call that they don't expect to meet the target sales number for ARC.

So why the low supply of ARC cards? Remember, if customers can't buy ARC, they go for the alternative. That's why products in good supply sell in higher numbers.

Artificially capping supply hurts YOU (the manufacturer). Generally, sales numbers are highest in the first few days after launch, so have enough stock for the initial launch and ramp production down afterward.

MLID has said that most of the ARC chips are going into complete OEM systems with OEM ARC GPUs.
 
Yes, but while MLID may have no sources (or no reliable ones), Intel pushing their OEM relationships is hardly a groundbreaking piece of speculation. The alternative is that Intel had yield issues, or had broken silicon and had to respin after receiving a lot of wafers already, or knew a good while ago that Arc was uncompetitive and is saving TSMC 6nm wafers for Battlemage, or messed up in some way.
 
Yes, but while MLID may have no sources (or no reliable ones), Intel pushing their OEM relationships is hardly a groundbreaking piece of speculation. The alternative is that Intel had yield issues, or had broken silicon and had to respin after receiving a lot of wafers already, or knew a good while ago that Arc was uncompetitive and is saving TSMC 6nm wafers for Battlemage, or messed up in some way.

One does not exclude the others. It could well be a combination of things.
 
My A380 runs ~12W lower with the fix. It's important to note that GPU-Z is not reliable here: I measured at the power socket with an Energy Check 3000, so the numbers are for the entire system. While GPU-Z's reported power is basically unchanged, my whole system dropped by about 12W. Tom's Hardware seems to rely only on GPU-Z; they should measure it themselves rather than trusting the sensor. GPU-Z reports 15-16W GPU power on my A380 regardless of whether the idle profile is enabled or disabled, and considering that I only save about 7W by removing the dGPU entirely, that number seems inaccurate.
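The inconsistency in the post above can be shown with a quick back-of-envelope check (numbers taken from the post; wall measurements are approximate):

```python
# Sanity check, all values in watts. A software sensor that reads the same
# before and after a 12 W system-level drop cannot be tracking real board power.
gpuz_before, gpuz_after = 15.5, 15.5   # GPU-Z "GPU power", idle fix off / on
wall_drop_with_fix = 12.0              # measured at the socket, whole system
wall_drop_card_removed = 7.0           # system savings with no dGPU at all

sensor_delta = gpuz_before - gpuz_after
print(sensor_delta, wall_drop_with_fix)  # 0.0 12.0: the sensor missed the change
```

On top of that, a constant 15-16W software reading exceeds the ~7W the whole card adds to system idle draw, so the sensor appears to report something other than actual board power.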
 