So a study that rejects the best ocean measurement data we have today - ARGO and satellite - and instead uses EIT and water buckets is now the gold standard of climate warming.
Fascinating.
Your post seems to be very disingenuous - created to try to leave a little doubt for people reading the information. Can you show where the scientists "rejected" this data, where they relied exclusively on "EIT and water buckets," and explain why the hell you think their careers wouldn't be destroyed by someone showing they had ignored data that contradicted their research? Are you paid to make these posts? Or is this how you actually think?!
So it doesn't bother you that this supposedly settled science is complete shit at predicting anything and is useful only for concocting reasons why its predictions failed but its formulae are still somehow accurate? Is there a single other discipline where you would accept that performance?
Say a group of mathematicians develops a set of equations for solving some particular real-world problem, but every time their equations are used they get the wrong answer. They spend a year or three analysing the data set and then announce that their equations are in fact spot-on: they just need to add a few new variables to show how they got the correct answer; people just weren't looking in the right place. Over and over this happens. Are these in fact useful mathematicians? And if so, by what mechanism could their equations ever be disproven?
It's very easy to prove that two and three equal four if you are allowed to go back and reduce three to two.
The settled science is that the Earth is warming and that it's caused by man. To keep it at a very basic level for you: you do know how a best fit line works, right? And you know how error bars work, right? Are you really arguing that adding more points to the curve and recalculating the best fit line should produce exactly the same slope as with fewer points? What has been done, effectively, is to decrease the error bars - which doesn't necessarily mean the revised data point still sits at the exact center of the previous error bar.
I'll keep it really simple for you; the numbers are fictional, but they make the point pretty obvious:
Year 1 data: 30 +/- 10 (means the actual is definitely somewhere between 20 and 40)
Year 2 data: 30.1 +/- 10 (means the actual is definitely somewhere between 20.1 and 40.1)
Year 3 data: 29.9 +/- 10 (means the actual is definitely somewhere between 19.9 and 39.9)
Now, looking at those numbers, it would seem that the trend has pretty much leveled off. But suppose that since then, more has been learned that lets the data be correlated and those error bars reduced. So now the data is:
Year 1 data: 28.3 +/- 3
Year 2 data: 32.4 +/- 3
Year 3 data: 34.4 +/- 3
Now, the analysis clearly shows that there's an increase.
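If you want to see it concretely, here's a minimal sketch in plain Python using the fictional numbers above. The slope() helper is just an illustrative ordinary least-squares fit I'm writing out by hand - not anything from a climate dataset or library - and the comments show what it prints.

    # Rough sketch: fit a least-squares line through both fictional series
    # and compare the slopes before and after the error bars shrink.

    def slope(xs, ys):
        # ordinary least-squares slope of ys against xs
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    years = [1, 2, 3]
    old = [30.0, 30.1, 29.9]   # original values, +/- 10 error bars
    new = [28.3, 32.4, 34.4]   # revised values, +/- 3 error bars

    print(slope(years, old))   # -0.05: flat for all practical purposes
    print(slope(years, new))   #  3.05: a clear upward trend

Same fitting procedure both times; the difference in slope comes entirely from the revised values - which, note, still fall inside the old +/- 10 error bars.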
Now, in that light, perhaps you realize why your post is not applicable.