That's just false, because I do criticize AMD tech as well, while you consistently show huge bias, including seeing 'vaseline' blurriness that no one else seems to see.
Who are you to tell me what I see and don't see?
Newsflash: I'm not the only one who saw ghosting, blurriness, and temporal artifacts with FSR. If you look at the comment section on TPU, you will see several forumites noticed more ghosting with FSR, and on the YouTube video several people commented that DLSS looked sharper and clearer while FSR looked terrible. Also, DLAA is supposed to be the best according to many people.
Many people also said the opposite, that DLSS had more artifacts and IQ issues and that they preferred FSR. I've said this many times already: there is a strong subjective component to image quality comparisons.
There is also a thing called confirmation bias, and people are very susceptible to it... including you and me.
I'm going to timestamp the blurry trees in the YouTube video. FSR is in the middle, and it looks noticeably blurrier to me compared to TAA and DLSS. If you can't see it, I'm not going to accuse you of being biased towards AMD just because...
I didn't know that you are part of both the Nvidia and AMD developer teams and have inside information on how many man-hours each company spent on this.
I don't need to be on their development team. I suggest you do some research on ML and see why so many tech companies are using it to improve their products and product experience.
It's not some wacky gimmick. ML and AI are two of the main reasons the Google Pixel phones had such a big lead in still-shot quality for a time, before other phone manufacturers started to get in on it and eventually caught up.
In any case, FSR's quality has also improved by leaps and bounds.
I would definitely agree.
You seem to have this magical belief in ML, but Nvidia's advantage could be explained by them having a substantial head start. Nvidia also has a lead in ray tracing, but you can't attribute that lead to ML.
Machine learning and AI are very important to RT performance because of DLSS and FG. How do you think they are getting such large performance gains with DLSS and FG so that you can "max out" RT settings?
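A back-of-the-envelope sketch of where those gains come from. The per-axis scale factors below are the commonly cited ones for the quality presets; exact ratios are an assumption here and vary per title and upscaler version:

```python
# Why temporal upscalers (DLSS/FSR/XeSS) give big RT performance wins:
# per-pixel shading and ray cost scales with the INTERNAL render
# resolution, not the output resolution.
# Scale factors are assumed typical values, not exact per-game numbers.

target_w, target_h = 3840, 2160  # 4K output
native_pixels = target_w * target_h

presets = {
    "Native":            1.0,
    "Quality":           1 / 1.5,  # render at ~67% per axis
    "Balanced":          1 / 1.7,  # ~59% per axis
    "Performance":       1 / 2.0,  # 50% per axis
    "Ultra Performance": 1 / 3.0,  # ~33% per axis
}

for name, scale in presets.items():
    w, h = round(target_w * scale), round(target_h * scale)
    share = (w * h) / native_pixels
    print(f"{name:>17}: {w}x{h} -> ~{share:.0%} of native pixel work")
```

Performance mode at 4K shades roughly a quarter of the pixels, which is where most of the headroom comes from, and FG interpolates extra frames on top of that. The ML part is what reconstructs the image so the lower-resolution render still looks close to native.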
Of course, the difference between XeSS on Intel & DLSS vs FSR 2 is that the former use dedicated hardware, while AMD provides a solution that works about equally well on all hardware. Nvidia has a habit of leaving owners of previous-gen cards out in the cold, while AMD doesn't.
This is true, but there are pros and cons to each approach. Intel's and Nvidia's approach results in higher performance and quality on their own hardware, though at the cost of being proprietary and exclusive. AMD's approach results in easier adoption and broader accessibility, but at the cost of some quality and performance.