Viewing a single comment thread. View all comments

Obvious-Priority-791 t1_j5alll3 wrote

This data shows nothing. The red line is an estimate based on nothing and isn't reliable data.

28

coffeesharkpie t1_j5bdc9j wrote

You realize that this is a problem that concerns quite a lot of areas of medical research where you can't simply conduct classical experiments? E.g. what would happen if Person X smokes vs. doesn't smoke, takes a certain medication vs. doesn't, stays in their mouldy home vs. moves out, etc. Things where you would put people's lives at risk if you withheld treatment or actively harmed them, like with taking drugs/smoking. You get the gist.

For this reason, researchers developed sophisticated statistical methods to get a grip on this. E.g. Rubin's potential outcomes framework, causal mediation analysis, etc. Using, for example, prior information, or finding someone who is as equivalent as possible to Person X in all relevant traits (e.g. age, gender, fitness, social background) aside from smoking, and drawing inferences from there. Honestly, there are multiple approaches.
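To make the matching idea concrete, here's a toy sketch in Python (entirely hypothetical data and effect sizes; real analyses use propensity scores and far more covariates):

```python
# Toy covariate matching: for each smoker, find the most similar
# non-smoker (by age and fitness) and compare outcomes.
# All data simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 70, n)
fitness = rng.normal(50, 10, n)
smoker = rng.random(n) < 0.3
# Hypothetical outcome: worse with age, better with fitness,
# and a built-in smoking penalty of -8 we hope to recover.
health = 100 - 0.5 * age + 0.3 * fitness - 8 * smoker + rng.normal(0, 5, n)

X = np.column_stack([age, fitness])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize covariates

treated, control = np.where(smoker)[0], np.where(~smoker)[0]
# Nearest-neighbour match on the standardized covariates
dists = np.linalg.norm(X[treated, None, :] - X[None, control, :], axis=2)
matches = control[dists.argmin(axis=1)]

effect = (health[treated] - health[matches]).mean()
print(f"Estimated effect of smoking on health score: {effect:.1f}")  # ~ -8
```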

So, long story short, estimates are not drawn from thin air. They are a product of scientific rigour, commonly used in practically all empirical fields of science (from intelligence tests and personality assessments to climate science and particle physics), and because of this they can be surprisingly accurate. Especially as most of them also come with information on uncertainty (e.g. standard errors, confidence or credible intervals, etc.)
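For instance, a minimal sketch of attaching a standard error and a 95% confidence interval to a difference in death rates (made-up numbers, normal approximation):

```python
# 95% confidence interval for a difference in death rates between
# two groups (invented numbers, normal approximation).
import math

deaths_a, n_a = 30, 10_000   # e.g. vaccinated group
deaths_b, n_b = 150, 10_000  # e.g. unvaccinated group

p_a, p_b = deaths_a / n_a, deaths_b / n_b
diff = p_b - p_a
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"Rate difference: {diff:.4f}, 95% CI: [{lo:.4f}, {hi:.4f}]")
```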

28

unhappymedium2 t1_j5cewak wrote

Sure, but those methods often have to make assumptions about significant variables or combine variables with wide tolerance bands. The resulting "estimates" therefore have very low confidence and should really be taken with a grain of salt, but many people see them and seem to think our species has figured out how to predict the future.

−5

coffeesharkpie t1_j5e1g0z wrote

Well, you know, it's a common notion in statistics that "all models are wrong, but some are useful". No model will ever capture reality as is, but we can make sure a model is good enough to be useful for the particular application. That's possible because we can actually quantify uncertainty about prior information, estimates, and predictions (e.g. through credible or confidence intervals) and make sure models are as exact and as complex as needed.

Funnily enough, we can predict things quite well, especially when it comes to large numbers of people (individuals are the hard part). Like how social background influences educational attainment across a population, how lifestyle influences average health, how climate change may affect the frequency of extreme weather; even what people are about to type on their smartphones is predicted with these kinds of models.
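A quick simulation of why aggregates are easier to predict than individuals (purely illustrative):

```python
# Individuals are hard to predict, aggregates are not: each simulated
# person "gets sick" with probability 0.1, which tells you almost
# nothing about any one person but pins down the population count.
import numpy as np

rng = np.random.default_rng(42)
p = 0.1
for n in (10, 1_000, 100_000):
    sick = rng.random(n) < p
    print(f"n={n:>7}: observed rate {sick.mean():.4f} (expected {p})")
# The observed rate converges to 0.1 as n grows (law of large
# numbers), even though any single person's outcome stays a coin flip.
```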

4

nutsbonkers t1_j5dk9q1 wrote

We can predict the future, with a degree of confidence. The statistical models used in this not-yet-peer-reviewed paper have themselves been peer reviewed. The math is sound because it has been reviewed and deemed appropriate and accurate enough. I'm sure the paper will be reviewed in time; Vice or whoever just wants to get a jump on a good article.

1

B-rizzle t1_j5ar8f3 wrote

"Here's how effective it is compared to an estimate of how bad it would have been." Exactly. It's a graph of actual deaths vs an imagined number of deaths in an imaginary scenario in which there was no vaccine.

15

Rugfiend t1_j5atf0z wrote

Unfortunately, absolutely no one died in the entire year prior to the vaccine. Otherwise, you'd sound like a right tit.

−14

B-rizzle t1_j5au7xa wrote

It's referring specifically to a time when there was a vaccine, comparing to if there wasn't. The graph basically starts around where the vaccine was introduced. People died before and after the vaccine.

5

Rugfiend t1_j5aun2b wrote

From the BMJ "A large US study published by The BMJ today finds that fewer people die from covid-19 in better vaccinated communities. 

The findings, based on data across 2,558 counties in 48 US states, show that counties with high vaccine coverage had a more than 80% reduction in death rates compared with largely unvaccinated counties."

5

nrmonty t1_j5eng5q wrote

Accounting for all other variables? Typically there's a massive difference in some pretty significant factors between countries with high vaccination rates and those with lower ones.

2

Rugfiend t1_j5foac3 wrote

That's one reason I specifically picked an analysis of counties within the US to post.

1

stiikkle t1_j5b8ywd wrote

They estimate the number by looking at the death rate in those who don’t take the vaccination vs those that do. They then extrapolate by calculating the number of people who would have died if nobody took the vaccine.

There is some other stuff around transmission but they aren’t just plucking the figures out of thin air.
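Roughly, the arithmetic looks like this (all numbers invented for illustration; the real models also handle transmission, waning, age structure, etc.):

```python
# Back-of-the-envelope counterfactual: apply the unvaccinated death
# rate to the whole population to estimate deaths with no vaccine.
# All numbers invented; real models also account for transmission.
pop = 1_000_000
vaccinated = 700_000
deaths_vax, deaths_unvax = 350, 1_500

rate_unvax = deaths_unvax / (pop - vaccinated)  # 0.005
actual_deaths = deaths_vax + deaths_unvax       # 1,850
no_vaccine_deaths = rate_unvax * pop            # 5,000
print(f"Estimated lives saved: {no_vaccine_deaths - actual_deaths:,.0f}")
```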

11

VelcroSea t1_j5cgxsf wrote

Estimates based on a model vs. actual numbers are a forecast or an estimate... a best-guess scenario.

This data is guessing at how many lives the shots might have saved.

I love a good forecast, but estimating how many people didn't get killed is a bit of an odd thing to measure.

It's also interesting to me that all flu deaths were covid-related for about 2 years.

Always verify and question the validity and methodology of the data collection.

−1

PhysicsCentrism t1_j5bb6mq wrote

The estimate is not based on nothing; it's a model put together by scientists based on a peer-reviewed methodology, according to the article

5

DaRandomStoner t1_j5dhqj9 wrote

The article says the study wasn't peer reviewed...

4

PhysicsCentrism t1_j5dif2w wrote

Read a little further and then look at how I worded my comment

1

DaRandomStoner t1_j5dj29h wrote

Oh... ya you're technically right. They based the study on a peer-reviewed methodology, I guess. I'm honestly not even sure what that means. Would be nice to see something peer reviewed, though; I don't take non-peer-reviewed studies seriously. 😕

2

Terminarch t1_j5do7x7 wrote

The review process is compromised.

−2

DaRandomStoner t1_j5dqn50 wrote

Ya... I know... it's pretty depressing tbh. Even if this study was peer reviewed I'd have to take it with a grain of salt. Getting pretty orwellian around here.

−3

ArchdevilTeemo t1_j5dh4u4 wrote

yes, it's an educated guess. nice.

People do that with weather every day and we know how accurate that is.

−7

PhysicsCentrism t1_j5dhluo wrote

Accurate enough for weather apps to be standard issue on tons of consumer technologies and for many television news stations to employ someone for weather forecasts?

1

ArchdevilTeemo t1_j5dijed wrote

Some information is better than no information. And it's also used to check the current weather in different locations.

Also, forecasts drop in accuracy very fast: the forecast for tomorrow is very accurate, the forecast for next week is very inaccurate.

−1

reality_czech t1_j5am0pa wrote

According to you?

0

Obvious-Priority-791 t1_j5anqq1 wrote

According to the chart. It literally says it's an estimate

−4

junkmailredtree t1_j5ar2te wrote

If you actually read the article it says that the estimate is based on a peer-reviewed model, so it is pretty authoritative.

4

DaRandomStoner t1_j5dhmih wrote

It actually doesn't say that... if you read the article it states pretty clearly the study was not peer reviewed.

−2

Lolleka t1_j5e0rt1 wrote

Yeah, if these were calculations made with a model that had been tuned on an analogous dataset instead of a large number of assumptions and correlations, maybe we could give it more credit. But this? This is wild speculation.

0