Yes it is.
Pascal went backwards compared to Maxwell. Just look at GP104 compared to GM204: GP104 has 25% more shaders and clocks roughly 45% higher, which works out to about 80% more raw shading throughput. The actual performance improvement in games, however, was only 60-70%.
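To make that arithmetic explicit, here's a rough back-of-the-envelope sketch in Python, using only the figures quoted above (the 25% and 45% increases and the 60-70% observed gain), nothing measured by me:

```python
# Raw shading throughput scales with shader count * clock speed.
shader_increase = 1.25   # GP104 has 25% more shaders than GM204
clock_increase = 1.45    # and clocks roughly 45% higher

raw_gain = shader_increase * clock_increase - 1
print(f"raw throughput gain: {raw_gain:.0%}")        # ~81%, the "80% more" above

for observed in (0.60, 0.70):                        # measured gain in games
    print(f"observed {observed:.0%} -> scaling efficiency {observed / raw_gain:.0%}")
```

In other words, Pascal only turned roughly 74-86% of its raw throughput increase into actual gaming performance.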
Of course this isn't because the performance of individual shaders in GP104 regressed, but because performance never scales linearly, per Amdahl's law (whether that's down to memory bottlenecks, trouble extracting enough parallelism, or something else).
The 2080 Ti FE has 21% more shaders than the 1080 Ti and a 0% increase in clock rate (the 1080 Ti boosts to an average of 1636MHz in real-life usage, which is right where the 2080 Ti FE's advertised boost sits). The 2080 Ti FE may of course boost higher than its advertised boost, but we don't know that yet, and besides, it's irrelevant for estimating the baseline improvement.
Based on what we saw with Pascal and its less-than-linear scaling, the baseline is thus somewhere around 15-20% (the same figure also applies to the 2080 and 2070). It's of course perfectly possible that Turing ends up much more than 15-20% faster, but 15-20% is just the baseline.
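Applying the same back-of-the-envelope approach to Turing, assuming its shader-count-times-clock gain is realised about as efficiently as Pascal's was, gives that baseline:

```python
# Turing's raw gain over the 1080 Ti: 21% more shaders, no clock increase.
raw_gain = 1.21 * 1.00 - 1

# Fraction of the raw gain Pascal actually delivered in games
# (the ~74-86% efficiency from the GP104/GM204 numbers above).
pascal_efficiency = (0.74, 0.86)

low, high = (raw_gain * e for e in pascal_efficiency)
print(f"baseline estimate: {low:.0%} to {high:.0%}")   # ~16% to 18%
```

Which lands right in that 15-20% range.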
And that's not guessing, that's based on historical precedent.