What you are saying, Taejin, is 100% correct. If your posted hypothetical numbers of 1% and 1.4% were in fact correct, the public health stats would show that exact increase, with only minor variations due to chance. And a good study would report not only the numbers but also a confidence interval and significance test stating how likely it is that the reported difference is merely due to random chance rather than an effect of the variable being measured.
In general, unless you have at least 95% confidence that the result is not due to random chance, the study would not even be accepted as something worth publishing. But sometimes the numbers really are there in some initial study, and only much later is it discovered that there was a deliberate or accidental flaw in the methodology. When the flaw is removed, subsequent replication experiments no longer support the initial conclusions. Or, as any heart surgeon will tell you, it's common in medical science to run a small clinical trial as a quick-and-dirty initial test that shows promising numbers, and when a larger study is done the initial results may or may not pan out. Even if the methodology of the pilot trial looks good initially, double-blind designs and much larger numbers are what ultimately decide whether results that passed peer review at the pilot stage hold up.
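To make that last point concrete, here is a rough sketch in Python (the trial sizes are hypothetical, purely for illustration) of why the very same observed rates of 1.0% and 1.4% can look promising but inconclusive in a small pilot and become decisive once the numbers get large. It's a plain pooled two-proportion z-test:

```python
# Two-proportion z-test: same observed rates (1.4% vs 1.0%), different sample sizes.
# Illustrative only; the arm sizes below are hypothetical, not from any real trial.
from math import sqrt, erfc

def two_prop_z_test(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                      # two-sided tail probability
    return z, p_value

# Small pilot: 2,000 people per arm, 1.4% vs 1.0% observed
print(two_prop_z_test(28, 2000, 20, 2000))          # z ~ 1.2, p ~ 0.25 -> not significant
# Larger follow-up: 200,000 per arm, the very same observed rates
print(two_prop_z_test(2800, 200000, 2000, 200000))  # z ~ 11.6, p ~ 0 -> unmistakable
```

Same observed effect in both cases; only the bigger trial has enough statistical muscle to separate it from random chance.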
But even with public health problem rates of 1% and 1.4%, the difference would stick out like a sore thumb in statistical analysis, given the large numbers involved. And when the real-world statistics don't show the predicted increase, despite otherwise good data, it's gotta ring some real alarm bells. And yes, I do have a social science background and know something about these kinds of questions, including why peer-reviewed results don't always pan out. It will in fact take years of replication to test whether this study's numbers are even worth supporting. But when the predicted results ain't happening in the real world, common sense and the long odds say, as Click and Clack put it, it's boooooooogus.
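For a sense of scale (again just a sketch, assuming the conventional 5% significance level and 80% power), here is the standard two-proportion sample-size formula applied to a jump from 1.0% to 1.4%:

```python
# Rough sample-size check: how many people per group does it take to reliably
# spot a jump from 1.0% to 1.4%? Conventional defaults assumed (5% alpha, 80% power).
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided two-proportion test."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
    z_b = nd.inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_group(0.010, 0.014))      # roughly 11,600 people per group
```

Roughly twelve thousand people per group is already enough to detect that difference reliably, and whole-population public health statistics run into the millions, so a real effect of that size would have nowhere to hide.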