
Question: The RTX 4070 was a bust

R7 5700X
DDR4-3200 32GB
NVMe WD Black 1TB
B450M
2560x1440/144Hz LG IPS 32"
I don't think it was bottlenecked.
I just anticipated a bit more from a $550 card vs. the 3060 Ti.
 
VRAM prices are in the toilet. I don't understand why 16GB isn't the standard base VRAM configuration. There should be 32GB VRAM cards at the top level.

The best argument I've seen for that is that cards with more RAM would interfere with the professional market. People might skip the $5k+ pro cards for a $1500 card with 24-32GB of RAM, like they have in the past.
 
Yes, we can't ask them to impede the Quadro market. Nvidia has since rebranded Quadro as its A series of professional cards, with a hefty markup over consumer cards.
 

Monitor your GPU usage to know when you are CPU bound. The RTX 4000 series is kinda meh except for the 4090, and a 5700X is not enough for 144 fps in Starfield.
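A quick way to act on the "monitor your GPU usage" advice: sample utilization while playing (e.g. from `nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits` or an overlay like MSI Afterburner) and see whether the GPU is actually being kept busy. The helper below is just a hypothetical sketch of that check, not an official tool:

```python
# Rough sketch: if GPU utilization averages well below ~100% while frame
# rate is low, the CPU (or the game engine) is the limiter, not the card.

def parse_utilization(nvidia_smi_output: str) -> list[int]:
    """Parse one utilization percentage per line of nvidia-smi CSV output."""
    return [int(line.strip())
            for line in nvidia_smi_output.splitlines() if line.strip()]

def likely_cpu_bound(samples: list[int], threshold: int = 90) -> bool:
    """Average GPU utilization well under the threshold suggests the GPU
    is waiting on the CPU rather than running flat out."""
    return sum(samples) / len(samples) < threshold

# Example: utilization logged while walking around a Starfield city.
samples = parse_utilization("71\n65\n80\n68\n74\n")
print(likely_cpu_bound(samples))  # True -> probably CPU bound
```

The 90% threshold is a rule of thumb, not a hard line; sustained utilization in the 60s-70s at low fps is the classic CPU-bound signature.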
 
There is more to VRAM cost than the chips themselves: a wider memory bus increases die size, and the alternative, clamshell (backside) VRAM, adds board cost. Both cost money.
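The bus-width point is easy to see with back-of-envelope arithmetic: each GDDR6/6X chip occupies a 32-bit slice of the memory bus, and the common chip density today is 2GB (16Gb). A minimal sketch, using those two assumptions:

```python
# Why VRAM capacity is tied to bus width: a 192-bit card carries six
# 32-bit chips, so with 2GB chips that means 12GB. Getting 16GB requires
# a wider 256-bit bus (more die area) or doubling up chips in clamshell
# mode on the back of the board (more board cost).

def vram_options(bus_width_bits: int, chip_gb: int = 2) -> dict:
    chips = bus_width_bits // 32             # one chip per 32-bit channel
    return {
        "chips": chips,
        "normal_gb": chips * chip_gb,        # one chip per channel
        "clamshell_gb": 2 * chips * chip_gb, # two chips share each channel
    }

print(vram_options(192))  # {'chips': 6, 'normal_gb': 12, 'clamshell_gb': 24}
print(vram_options(256))  # {'chips': 8, 'normal_gb': 16, 'clamshell_gb': 32}
```

So a cheap 12GB card's next step up isn't 16GB but 24GB clamshell, unless the die itself grows a wider bus.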
 
TPU has the 4070 performing 45% better than the 3060 Ti across all three resolutions, so it is a solid upgrade. Really the only knock against it is that the 3060 Ti was $400, so at $600 the 4070 isn't adding much value per dollar outside of the added VRAM and DLSS 3 functionality.

Unless the 8 GB of VRAM is an issue, holding on to the 3060 Ti until Blackwell is probably a better option.
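The value-per-dollar point above is simple arithmetic worth spelling out, using the TPU figure and those launch prices:

```python
# A 45% performance gain at 50% higher price works out to slightly
# *worse* performance per dollar than the 3060 Ti at launch pricing.

perf_gain = 1.45          # 4070 vs 3060 Ti, per the TPU averages above
price_ratio = 600 / 400   # $600 vs $400 launch prices

value_ratio = perf_gain / price_ratio
print(round(value_ratio, 3))  # 0.967 -> ~3% less performance per dollar
```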
 
As mentioned, you may be CPU bound, even with your 5700X. Part of the problem is just Starfield, IMO; it is known to have optimization and performance issues. I would try to enjoy the 4070 in other games, or if you don't think it is a big enough upgrade, you could try to return it, as others have mentioned. Then maybe you could get a bigger upgrade down the road.

You may also fare better over time as drivers improve and Starfield receives patches with more optimization. Mods and tweaks from the community might help you as well.
 
In Starfield you'll be CPU limited in most cities. You should be seeing drops into the 50s, especially in New Atlantis.
 
Sorry, I was hitting the sauce pretty hard last night.
One example: Starfield ran like trash on the 3060 Ti, and it ran like leftovers contemplating being trash on the 4070. In the other games I play, I didn't notice substantial improvement.

The 4070 is about a 30% performance upgrade. While significant, it's not an amount I'd bother upgrading for.

I had a rule that I established back around when I had my first "PC" (a 486). Don't upgrade unless it's at least double the performance. That way, I can really notice the difference when I upgrade, and appreciate it.

Double performance for the same price happened relatively often back in the early, good old days of Moore's law, when every generation delivered massive jumps in transistor count and efficiency for practically the same cost, but it's more difficult today with things slowing down.
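For what it's worth, the "wait until it doubles" rule can be put in numbers against the ~30% generational gain mentioned above:

```python
# At ~30% improvement per generation, how many generations until
# cumulative performance doubles? Solve 1.30**n = 2 for n.
import math

per_gen_gain = 1.30
gens_to_double = math.log(2) / math.log(per_gen_gain)
print(round(gens_to_double, 1))  # ~2.6 generations
```

In other words, under that rule a 3060 Ti owner would skip the 40-series entirely and look at Blackwell or later, which matches the advice earlier in the thread.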
 
Does that mean AMD loses money on their cheap 12GB cards? Somehow I think not.
If referring to RDNA2 on its much cheaper 7nm node, then no, they are not necessarily losing money. Margins are smaller (just going by their official figures), but they have more maneuverability on pricing with respect to VRAM, even more than Nvidia's current and last generation. While Ampere used a cheaper Samsung node, most of its GPUs were limited to the 1GB GDDR6X chips that were available for most of the architecture's lifetime.
 