relevantmeemayhere t1_jcrotun wrote
Reply to comment by MysteryInc152 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Mm, not really.
Bootstrapping is used to estimate the standard error of an estimator via resampling. From there we can derive tools like confidence intervals or other interval estimates.
Generally speaking, you do not use the bootstrap to tune your model's hyperparameters. You use cross-validation for that.
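For concreteness, here's a minimal sketch of the distinction in Python, assuming scikit-learn and a made-up regression setup (nothing here is from the paper, purely illustrative): the bootstrap gives a standard error and interval for a coefficient, while cross-validation picks a hyperparameter.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=200)

# Bootstrap: resample the data with replacement to approximate the
# sampling distribution of a statistic (here, the first coefficient),
# yielding a standard error and a percentile confidence interval.
boot_coefs = []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))
    boot_coefs.append(Ridge(alpha=1.0).fit(X[idx], y[idx]).coef_[0])
se = np.std(boot_coefs, ddof=1)
ci = np.percentile(boot_coefs, [2.5, 97.5])
print(f"bootstrap SE: {se:.3f}, 95% percentile CI: {ci}")

# Cross-validation: choose a hyperparameter by held-out performance.
cv = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
cv.fit(X, y)
print("CV-selected alpha:", cv.best_params_["alpha"])
```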
relevantmeemayhere t1_jbl3ivt wrote
This really isn’t a surprise: where there’s gender equality, there tends to be more focus on human rights as a whole.
relevantmeemayhere t1_jbl3bxy wrote
Reply to comment by Smart-Rip-3733 in Where there's gender equality, people tend to live longer by LifeTableWithChairs
I mean, there will never be true equality when women don’t have to sign their bodies over to Selective Service or get a bunch of rights stripped away. Or when we over-commoditize the attention, speech, and general financial and social support around women’s rights and empowerment while the education gap, sentencing gap, and the mental health/suicide/substance abuse/homelessness gaps keep widening with women as the “favored” group, which harms men and drives them toward shit like Trumpism.
The solution to your problem is, ironically, empowering men more, as the motherhood gap does have measurable effects on, say, one’s earning ability.
relevantmeemayhere t1_j9kygtx wrote
Reply to comment by Featureless_Bug in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Any problem where you want things like effect estimates, lol. Or error estimates. Or models that give you a joint distribution.
So, literally a ton of them. Which industries don’t want things like that?
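As a toy illustration of what effect and error estimates look like in practice, a sketch with statsmodels on synthetic data (the dataset and true effect are made up, just to show the outputs in question):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)  # true effect of x on y is 2.0

# OLS gives an effect estimate plus a standard error and confidence
# interval for it, which is often what applied work actually needs.
model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.params)      # intercept and slope (effect) estimates
print(model.bse)         # standard errors
print(model.conf_int())  # 95% confidence intervals
```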
relevantmeemayhere t1_j9kin48 wrote
Reply to comment by adventuringraw in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
I agree with you. I was just pointing out that saying they’re the only solution is foolish, as the quote implied.
The quote could have been pulled without much context, though, so grain of salt.
relevantmeemayhere t1_j9kifhu wrote
Reply to comment by VirtualHat in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Linear models are often preferred for the reasons you mentioned. Underfitting is almost always preferred to overfitting.
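One way to see the underfitting/overfitting asymmetry, sketched with scikit-learn on synthetic data (the setup here is an assumption for illustration, not anything from the thread): the underfit model degrades gracefully out of sample, while the overfit one can fail badly.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Compare a deliberately simple (underfit) fit against a deliberately
# flexible (overfit) one; print train and test R^2 for each.
for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(degree, model.score(X_tr, y_tr), model.score(X_te, y_te))
```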
relevantmeemayhere t1_j9ki2x1 wrote
Reply to comment by Featureless_Bug in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Because they are useful for some problems and not others, like every algorithm? Nowhere in my statement did I say they are monolithic in their use across all subdomains of ML.
The statement was that deep learning is the only thing that works at scale. It’s not, lol. Deep learning struggles in a lot of situations.
relevantmeemayhere t1_j9khp8m wrote
Reply to comment by VirtualHat in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
As you mentioned, this is highly dependent on the functional relationship in the data.
You’d need domain knowledge to determine that.
Additionally, nonlinear models tend to have their own drawbacks: lack of interpretability and high variance, to name a couple.
relevantmeemayhere t1_j9ilsax wrote
Reply to comment by [deleted] in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
?
relevantmeemayhere t1_j9ij6cc wrote
Lol. The fact that we’ve been using general linear models in every scientific field for decades should tell you all you need to know about this statement.
relevantmeemayhere t1_jcrp2rr wrote
Reply to comment by Temporary-Warning-34 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Honestly, it really comes off as word salad lol.
I haven’t read the details, but it sounds like resampling in a serial learner?