hadaev t1_jdzcowi wrote
Reply to comment by Seankala in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Well, we usually expect this from people who aren't really data scientists, like biologists applying DS methods and making such a trivial mistake.
It doesn't seem hard to search for matches in text, unlike other data types.
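Something like an n-gram overlap check would already catch most of it. A toy sketch of what I mean (all names and the n-gram size are made up, not any lab's actual pipeline):

```python
# Minimal sketch (hypothetical names): flag benchmark items whose text
# shares long n-grams with the training corpus -- the usual leakage signal.
def ngrams(text: str, n: int) -> set[str]:
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(item: str, train_ngrams: set[str], n: int) -> bool:
    # Any shared n-gram is treated as evidence the item leaked into training.
    return not ngrams(item, n).isdisjoint(train_ngrams)

# Toy usage with a short n-gram size so the overlap is visible:
train_doc = "write a function that reverses a linked list in place"
benchmark = "please write a function that reverses a linked list"
train_ngrams = ngrams(train_doc, n=5)
print(is_contaminated(benchmark, train_ngrams, n=5))  # True -- shared 5-gram
```

In practice you'd use longer n-grams (10+ tokens) and normalize whitespace/case first, but the idea is that simple.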
hadaev t1_jdlym7s wrote
Reply to comment by Crystal-Ammunition in [D] Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
Idk, the internet is big.
hadaev t1_iziyd0p wrote
Reply to comment by Flag_Red in [R] Large language models are not zero-shot communicators by mrx-ai
Don't trust my sample; try it yourself.
hadaev t1_ixnrhn4 wrote
Reply to comment by sam__izdat in [P] Stable Diffusion 2.0 Announcement by hardmaru
Colab.
But yeah, models this big are usually tested at huge scale.
Cherry-picked comparisons with tens of samples show nothing.
hadaev t1_ixnh0ro wrote
Reply to comment by sam__izdat in [P] Stable Diffusion 2.0 Announcement by hardmaru
>so it appears they mostly removed human anatomy, weapons, certain contemporary artists, celebrity faces, etc.
Ah, appears.
How many data samples did you test to reach this conclusion?
hadaev t1_ixn5o10 wrote
Reply to comment by my-sunrise in [P] Stable Diffusion 2.0 Announcement by hardmaru
"accuse of censorship" was about worst artists styles prompts.
And gived how some artists whined about model, some peoples on stable diffusion subbredit started conspiracy about due "legal issues they’re facing" they removed (censored) some artists from data and gave us lobotomized model.
Which probably doesnt happened to my opinion, gived they said they changed text encoder.
hadaev t1_ixn4rus wrote
Reply to comment by Flag_Red in [P] Stable Diffusion 2.0 Announcement by hardmaru
>The model is censored for NSFW content
I mean things not related to porn, like the Greg Rutkowski prompt.
>is because they were included in Clip's training set
Basically what I said.
hadaev t1_ixmajnt wrote
Reply to comment by asdfzzz2 in [D] Transfer Learning of Image Trained Network in Audio Domain by Oceanboi
To add to that, non-random weights might be worse for tiny/simple models.
But modern vision models should be fine with it.
For example, BERT's text weights are a good starting point for image classification.
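For the thread's setting (an image-trained network reused on audio), a minimal sketch of that kind of reuse, assuming torchvision and spectrogram inputs; the class count and shapes are placeholders, not a recommendation:

```python
# Sketch: reuse ImageNet-trained weights as a starting point for an
# audio-spectrogram classifier. Assumes torch/torchvision are installed.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)  # image-pretrained trunk

# Spectrograms are single-channel: either repeat the channel to 3 at load
# time, or swap the stem conv as below (this layer is retrained from scratch).
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

num_audio_classes = 10  # placeholder
model.fc = nn.Linear(model.fc.in_features, num_audio_classes)

# Fine-tune as usual; the pretrained trunk is just a better-than-random init.
dummy_spectrogram = torch.randn(8, 1, 128, 128)  # (batch, 1, mel, time)
logits = model(dummy_spectrogram)
print(logits.shape)  # torch.Size([8, 10])
```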
hadaev t1_ixm0qnw wrote
Reply to [P] Stable Diffusion 2.0 Announcement by hardmaru
I like how the community overreacts because some prompts have reduced quality (probably due to the new text encoder) and cries censorship.
hadaev t1_ixltjo0 wrote
Reply to comment by Ok_Construction470 in [D] Transfer Learning of Image Trained Network in Audio Domain by Oceanboi
Just rescale it to [-1, 1] like people do for images.
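Something like this (a quick sketch, assuming a NumPy array such as a dB-scaled spectrogram):

```python
import numpy as np

def rescale_to_unit_range(x: np.ndarray) -> np.ndarray:
    """Min-max rescale an array (e.g. a spectrogram) into [-1, 1]."""
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min + 1e-8) - 1.0

spec = np.random.rand(128, 400) * 80.0 - 80.0  # fake spectrogram in [-80, 0] dB
scaled = rescale_to_unit_range(spec)
print(scaled.min(), scaled.max())  # ~ -1.0, ~ 1.0
```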
hadaev t1_ixlti0s wrote
> and you may be better off starting from scratch.
Basically you're comparing random weights against well-trained weights. Why should the latter be worse?
hadaev t1_ix5eduw wrote
Reply to comment by yannbouteiller in [R] Tips on training Transformers by parabellum630
Just replace the GRU with a transformer and keep the CNN as the positional encoding.
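Roughly something like this (a sketch with made-up sizes; the conv layer injects local order information in place of sinusoidal encodings, similar in spirit to wav2vec 2.0's convolutional positional embedding):

```python
# Sketch: convolutional "positional encoding" feeding a transformer encoder,
# replacing a CNN+GRU stack. All sizes are arbitrary placeholders.
import torch
import torch.nn as nn

class ConvTransformer(nn.Module):
    def __init__(self, d_model: int = 256, nhead: int = 4, num_layers: int = 4):
        super().__init__()
        # Conv over time gives each frame local position/order information.
        self.pos_conv = nn.Conv1d(d_model, d_model, kernel_size=5, padding=2)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model)
        pos = self.pos_conv(x.transpose(1, 2)).transpose(1, 2)
        return self.encoder(x + pos)

model = ConvTransformer()
out = model(torch.randn(2, 100, 256))
print(out.shape)  # torch.Size([2, 100, 256])
```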
hadaev t1_jedzbbc wrote
Reply to comment by MrFlufypants in [D][N] LAION Launches Petition to Establish an International Publicly Funded Supercomputing Facility for Open Source Large-scale AI Research and its Safety by stringShuffle
I think the main idea is to open-source whatever is trained on this thing.
OpenAI wants to share their datasets and train a new GPT? Well, nice for everyone.