Submitted by Vegetable-Skill-9700 t3_121agx4 in deeplearning
BellyDancerUrgot t1_jdpbtyo wrote
Reply to comment by Appropriate_Ant_4629 in Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
Oh, I’m sure it had the data. I tested it on a few different things: OOP, some basic CNN math, some philosophy, some literature reviews, and some paper summarization. The last two were really bad. It made one mistake in the CNN math and one mistake in OOP. For creative things like writing essays, and for technical troubleshooting problems, even niche stuff like how I could shunt a GPU, it managed to answer correctly.
I think people have the idea that I think GPT is shit. On the contrary, I think it’s amazing. Just not the holy angel and elixir of life that AI influencers peddle it as.