NoLifeGamer2
NoLifeGamer2 t1_je4bxa1 wrote
I love how there are so many GPT models now that we have taken to calling them GPT-n lol
NoLifeGamer2 t1_jdv52o0 wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
This is basically bootstrapping for LLMs, right?
NoLifeGamer2 OP t1_jb559ia wrote
Reply to comment by Philpax in [D] Ethics of minecraft stable diffusion by NoLifeGamer2
Thx! If I do do it, I will probably just use it myself, or submit it anonymously.
NoLifeGamer2 t1_jarju3y wrote
Reply to comment by BitterAd9531 in [D] offline speech to text - trainable by AlexSpace3
Isn't that online?
NoLifeGamer2 t1_jaj9i1b wrote
Reply to comment by visarga in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Gotta love getting those "Model currently busy" errors for only a single request
NoLifeGamer2 t1_jac6dd7 wrote
Reply to Prepare for Carbonated Trouble by ProJYeet
Jesse! We need to coke!
NoLifeGamer2 t1_ja4au30 wrote
But the value of these shells will fall!
NoLifeGamer2 t1_j8hmag2 wrote
To my understanding, if you start from noise, you can generate different images with the same algorithm just by changing the noise. If you start from a blank canvas, there is only one initial starting position (blank), so there would only ever be one output image.
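To make that concrete, here's a minimal sketch (assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint, neither of which the comment names): different noise seeds give different images for the same prompt, while a fixed "blank" (all-zero) starting latent gives the same image every run.

```python
# Sketch only: assumes diffusers + a Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompt = "a castle on a hill"

# Different initial noise -> different images from the same algorithm.
for seed in (0, 1, 2):
    gen = torch.Generator().manual_seed(seed)
    pipe(prompt, generator=gen).images[0].save(f"seed_{seed}.png")

# Fixed "blank" starting latent -> the same image every time.
blank_latents = torch.zeros(1, pipe.unet.config.in_channels, 64, 64)
pipe(prompt, latents=blank_latents).images[0].save("blank_start.png")
```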
NoLifeGamer2 t1_j7gin1l wrote
Reply to comment by sinavski in [D] List of Large Language Models to play with. by sinavski
Honestly wouldn't be surprised lol
NoLifeGamer2 t1_j7geyw5 wrote
I love how BLOOM was just like "F*ck it, let's one-up OpenAI"
NoLifeGamer2 t1_je9gi5u wrote
Reply to [D] Improvements/alternatives to U-net for medical images segmentation? by viertys
I recommend using bootstrapping to create more datapoints: have the current model segment new images, approve the predictions you like, and add them to the dataset. Then retrain on the larger dataset.
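A rough sketch of that loop (self-training / pseudo-labelling), under assumed names: `model`, `unlabelled_images`, `train_set`, `human_ok`, and `train` are hypothetical placeholders, not anything from the original thread, and the binary-segmentation setup is assumed.

```python
# Sketch of one bootstrapping round for binary segmentation (assumptions above).
import torch

def bootstrap_round(model, unlabelled_images, confidence=0.9):
    """Generate candidate masks and keep only the confident ones for human review."""
    model.eval()
    candidates = []
    with torch.no_grad():
        for image in unlabelled_images:
            logits = model(image.unsqueeze(0))      # (1, 1, H, W) for binary segmentation
            probs = torch.sigmoid(logits).squeeze(0)
            mask = (probs > 0.5).float()
            # Only surface predictions the model is fairly sure about;
            # a human still approves or rejects each one before it is added.
            confident = ((probs > confidence) | (probs < 1 - confidence)).float().mean()
            if confident > 0.95:
                candidates.append((image, mask))
    return candidates

# Typical usage: approve candidates manually, grow the dataset, retrain.
# approved = [pair for pair in bootstrap_round(model, pool) if human_ok(pair)]
# train_set.extend(approved)
# train(model, train_set)
```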