Submitted by Jealous-Pop-8997 t3_z4g1ab in science
fasthpst t1_iy03dwx wrote
Reply to comment by fasthpst in Glyphosate associated with lower birth weights by Jealous-Pop-8997
Does anyone find it interesting that the only time animal model studies and high exposures are called into question is when the agent being tested is an agricultural chemical?
Studies are posted in this sub every day involving animal models, cell cultures, in silico work, etc., finding effects of various stressors, but rarely does anyone disparage the methods the way they do with glyphosate.
eng050599 t1_iy1f7d2 wrote
You just don't get it, do you?
It's not the use of animal models that's the issue. It's the overall experimental design of the studies you elect to cite.
It's entirely possible to use the same animal model in multiple studies; it's how those models are used, in terms of the overall power of analysis, that determines the strength of a study and whether it can show causal effects or is limited to correlative associations.
Quite simply, power of analysis reflects the ability of a given method to accurately differentiate between treatment effects and natural background noise at a specific threshold for significance.
The key elements that factor into this are the sample size and the variability within the population.
The problem with the studies you cite is that they universally lack the strength to accomplish this for causal effects. There's simply too much noise in the background for them to accurately manage this.
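To make that trade-off concrete, here's a minimal power-calculation sketch in Python (statsmodels is assumed to be available, and the effect sizes are hypothetical, not taken from any particular study). It shows how the number of animals needed per group balloons as the treatment effect shrinks relative to the background noise:

```python
# Minimal sketch: required group size for a two-sample t-test
# at alpha = 0.05 and 80% power, for shrinking effect sizes.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Cohen's d = (treatment mean - control mean) / pooled standard deviation,
# i.e. the treatment effect expressed in units of background noise.
for effect_size in (0.8, 0.5, 0.2):
    n_per_group = power_calc.solve_power(
        effect_size=effect_size,  # smaller d = effect buried deeper in noise
        alpha=0.05,               # significance threshold
        power=0.80,               # probability of detecting a real effect
    )
    print(f"d = {effect_size}: ~{n_per_group:.0f} animals per group")

# Roughly 26, 64, and 393 animals per group for these settings.
```

The exact numbers aren't the point; the point is that a study sized to detect d = 0.8 is hopeless against an effect a quarter that size.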
This isn't the case for the OECD studies, as they were specifically developed to ensure that researchers would have the statistical power to test for causal effects, and they've been updated MANY times over the years to take into account new methods and overall knowledge relating to toxicology.
Just take a look at the review by Greim et al. (2015, DOI: 10.3109/10408444.2014.1003423).
It goes through a range of carcinogenicity and chronic toxicity studies and details both the studies that were fully compliant and those that fall short.
In that review alone we see successful replication of the OECD methods, with comparable results obtained from different labs, different researchers, and different countries over a period of two decades.
Now look at the studies that you've bet the farm on.
None of them comes close to the statistical power of even one of the compliant studies, let alone is capable of rebutting the full collection of them.
[deleted] t1_iy222dp wrote
[removed]
eng050599 t1_iy63n5j wrote
>Funny all these independent research groups from various facilities around the world are all thumbs (according to you)
>
>Have you ever proposed a project, had it approved by an ethics department, etc., then gained funding? It's a pretty involved process, usually involving a team of people. I'm terrible at statistics, but we have specific experts who tell us in advance how many mice/fish/frogs/flies are needed for each level of results. If you think all those various teams were unaware, then really the onus is on you to prove this incompetence.
>
>The OECD guidelines you keep harping on about are for regulatory application approval and consideration for reviews. They don't usually apply to primary research. Have you ever actually applied for a grant?
Doubling down on idiocy, I see, and now pretending that you actually plan and conduct toxicity testing... how cute.
You keep on forgetting that there are different types of studies, and they all have differing abilities based on their design and statistical power.
The studies you keep harping on about can only show correlation, and for observational studies that's the norm, as only the very largest of these are ever capable of supporting a causal conclusion.
Consider the landmark cancer study of Hammond and Horn. It recruited over 100,000 subjects, and even that wasn't enough to be certain the link was causal. It was only after the follow-up study by Hammond and the American Cancer Society, which followed over 1,000,000 subjects, that the causal link was firmly established.
The reason such numbers are needed is the increased variation in the study population. The greater the variance, the larger the required sample size; on top of that, epidemiological studies don't take place in controlled environments, so the number of confounding and lurking variables makes anything beyond correlative associations next to impossible.
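As a rough illustration of why variance drives those numbers, here's a back-of-the-envelope sketch using the standard normal-approximation sample-size formula (scipy is assumed; the effect and noise values are hypothetical):

```python
# Rough sketch: how required sample size scales with population variance
# for a fixed absolute difference between two groups (normal approximation).
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # ~2.80 for these settings

delta = 1.0  # hypothetical fixed treatment-vs-control difference
for sigma in (1.0, 2.0, 4.0):  # increasingly noisy populations
    n_per_group = 2 * (sigma / delta) ** 2 * z ** 2
    print(f"sigma = {sigma}: ~{n_per_group:.0f} subjects per group")

# Quadrupling the noise (sigma 1.0 -> 4.0) multiplies the required n by ~16.
```

Required n scales with the square of the noise-to-effect ratio, which is why uncontrolled human populations demand cohorts in the hundreds of thousands while controlled animal studies can get by with far fewer.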
The OECD studies do not have such a limitation, as they are designed specifically to have the power of analysis to determine if the effects of a given chemical are causal in nature.
Now, first off, I can see why so many of your posts cannot be seen: they've been removed by the moderators.
Note that it's not just your replies to me that are getting removed, so this might be an instance where you should take the hint and realize that you're fundamentally wrong in your understanding of toxicology.
Case in point: in an earlier, now-deleted post (I still have the link, though: https://libguides.winona.edu/ebptoolkit/Levels-Evidence), you provided a link to the hierarchy of evidence... but you missed which types of studies it relates to, as well as where the OECD methods would fall within that hierarchy.
The page you selected relates to clinical studies of treatment protocols, not to assessing the toxicity of a given chemical.
While toxicity testing is conducted on all pharmaceutical candidates, it doesn't happen in the clinical phase; it's all pre-clinical.
Quite literally, you're not even looking at the right place in the research timeline.
Also, and even more amusingly, we can extrapolate this hierarchy to encompass toxicity assessments by looking at the design of things like the OECD methods.
More specifically, almost ALL of the OECD study designs are double-blinded randomized controlled trials, with the test and control populations randomly assigned.
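For anyone unfamiliar with what that design actually entails, here's a toy sketch of the randomization and blinding steps (the group names, sizes, and coding scheme are made up for illustration, not taken from any OECD guideline):

```python
# Toy sketch of randomized assignment plus blinding codes.
import random

random.seed(42)  # reproducible assignment for the example

animals = [f"animal_{i:03d}" for i in range(200)]
groups = ["control", "low_dose", "mid_dose", "high_dose"]

random.shuffle(animals)  # random assignment removes selection bias
group_size = len(animals) // len(groups)
assignment = {
    g: animals[i * group_size:(i + 1) * group_size]
    for i, g in enumerate(groups)
}

# Blinding: downstream scorers see only coded labels, not group identity.
blind_codes = {g: f"group_{chr(65 + i)}" for i, g in enumerate(groups)}
print({blind_codes[g]: len(members) for g, members in assignment.items()})
```

The point is simply that assignment to dose groups is decided by chance and scoring is done against coded labels, which is what lets these designs support causal claims rather than mere associations.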
Guess what that makes Greim et al. (2015)?
Top of the bloody heap, as it is a systematic review of all the relevant DB-RCT studies on glyphosate.
Finally, the age of a study isn't relevant unless you can show there's an issue with the data collected and/or the methods used. Simply pointing to more recent studies that lack comparable statistical power isn't in any way, shape, or form capable of countering the older ones.
This is why I keep pointing out that you have NOTHING that can counter the compliant studies: literally every single study you choose to cite is orders of magnitude weaker in terms of what it can differentiate.
Hell, even the authors of the study here don't try to claim that they can show causation, and their correlative associations are underpowered.
You don't understand this topic, and you're unwilling to take the time to learn. Unfortunately, this means your only real use in this discussion is as an object lesson in the dangers of the Dunning-Kruger effect and cognitive dissonance.
Edit: Oh, and your comment about the number of publications supporting you: again, it's cute that you think that, but you are very wrong, as you have NO publications that can show causal effects. This is the whole reason we continually see regulatory and scientific agencies reject the banal fear-mongering from the anti-biotech side of things.
Your supporting data isn't even close to equivalent, let alone capable of superseding properly conducted chronic toxicity studies.