Submitted by ILikeNeurons t3_10hrjhr in dataisbeautiful
coffeesharkpie t1_j5bdc9j wrote
Reply to comment by Obvious-Priority-791 in How Covid-19 vaccines succeeded in saving a million US lives, in charts by ILikeNeurons
You realize this is a problem that concerns quite a lot of areas of medical research where you can't simply conduct classical experiments? E.g., what would happen if person X smokes vs. doesn't smoke, takes a certain medication vs. doesn't, stays in their mouldy home vs. moves out, etc. These are cases where you would put people's lives at risk if you withheld treatment or actively harmed them, as with taking drugs or smoking. You get the gist.
For this reason, researchers developed sophisticated statistical methods to get a grip on this, e.g., Rubin's potential outcomes framework, causal mediation analysis, etc. These use, for example, prior information, or they try to find someone who is as similar as possible to person X in all relevant traits (e.g., age, gender, fitness, social background) aside from smoking, and draw inferences from there. Honestly, there are multiple approaches.
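To make the matching idea concrete, here's a toy sketch in Python. The numbers are completely made up and the nearest-neighbour match is deliberately simplified; it's just to show the mechanics, not any specific study's method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observational data: age, fitness score, smoker flag, outcome.
n = 1000
age = rng.uniform(20, 70, n)
fitness = rng.normal(50, 10, n)
smoker = rng.random(n) < 0.3
# Outcome worsens with age and smoking (toy data-generating process).
outcome = 100 - 0.5 * age + 0.2 * fitness - 8 * smoker + rng.normal(0, 5, n)

# Nearest-neighbour matching: for each smoker, find the most similar
# non-smoker on the observed covariates, then compare their outcomes.
covs = np.column_stack([age, fitness])
covs = (covs - covs.mean(axis=0)) / covs.std(axis=0)  # standardise

treated, controls = np.where(smoker)[0], np.where(~smoker)[0]
effects = []
for i in treated:
    dists = np.linalg.norm(covs[controls] - covs[i], axis=1)
    j = controls[np.argmin(dists)]  # closest non-smoking "twin"
    effects.append(outcome[i] - outcome[j])

print(f"Estimated effect of smoking: {np.mean(effects):.2f}")
# With this toy setup it recovers roughly the true effect of -8,
# without ever forcing anyone to smoke.
```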
So, long story short, estimates are not drawn from thin air. They are a product of scientific rigour, commonly used in practically all empirical fields of science (from intelligence tests and personality assessments to climate science and particle physics), and because of this they can be surprisingly accurate. Especially as most of them also come with information on their uncertainty (e.g., standard errors, confidence or credible intervals, etc.).
unhappymedium2 t1_j5cewak wrote
Sure, but those methods often have to make assumptions about significant variables or combine variables with wide tolerance bands. The resulting "estimates" therefore have very low confidence and should really be taken with a grain of salt, but many people see them and seem to think our species has figured out how to predict the future.
coffeesharkpie t1_j5e1g0z wrote
Well, you know, there's a common saying in statistics: "All models are wrong, but some are useful". It means no model will ever capture reality as it is, but we can make sure a model is good enough to be useful for a particular application. This is possible because we can actually quantify the uncertainty in prior information, estimates, and predictions (e.g., through credible or confidence intervals) and make sure models are as exact and as complex as needed.
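A minimal illustration of what "quantifying uncertainty" looks like in practice, a bootstrap confidence interval around an estimate, again with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: an effect measured in 200 people.
sample = rng.normal(loc=-8.0, scale=5.0, size=200)

# Bootstrap: resample with replacement, recompute the estimate many times.
boot_means = [
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
]
lo, hi = np.percentile(boot_means, [2.5, 97.5])

print(f"Estimate: {sample.mean():.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
# The interval is the honest part: it says how far off we might plausibly be.
```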
Funnily enough, we can predict things quite well, especially when it comes to large numbers of people (individuals are the hard part). Like how social background influences educational attainment across a population, how lifestyle influences average health, how climate change may affect the frequency of extreme weather. Even what people are going to type on their smartphones is predicted with these kinds of models.
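That "aggregates are easier than individuals" point is basically the law of large numbers at work. A toy simulation (purely illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each person's outcome is a coin flip weighted by some predictable rate.
true_rate = 0.6
people = rng.random(1_000_000) < true_rate

# Predicting one individual: even the best guess is wrong 40% of the time.
one_person_error = 1 - true_rate

# Predicting the population average: off by a tiny fraction.
population_error = abs(people.mean() - true_rate)

print(f"Best individual prediction is wrong {one_person_error:.0%} of the time")
print(f"Population-level prediction is off by only {population_error:.4%}")
```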
nutsbonkers t1_j5dk9q1 wrote
We can predict the future, with a degree of confidence. The statistical models used in this not-yet-peer-reviewed paper have themselves been peer reviewed. The math they used is sound because it's been peer reviewed and deemed appropriate and accurate enough. I'm sure the paper will be reviewed in due course; Vice or whoever just wants to get a jump on a good article.