Tavrin
Tavrin t1_j9tns0j wrote
Reply to comment by nul9090 in What are the big flaws with LLMs right now? by fangfried
If this is true, the context window of GPT is about to take a big leap forward (a 32k-token context window instead of the usual 4k, or now 8k). Still, I agree with you that current transformers don't feel like the architecture that will take us all the way to AGI (there's still a lot of progress that can be made with them even without more computing power, and I'm sure we'll see them used for more and more crazy and useful stuff)
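Just for a sense of scale, here's a minimal sketch of checking whether a prompt fits a given window, using the tiktoken library; the encoding name and the number of tokens reserved for the reply are my own assumptions for illustration, not anything confirmed about GPT-4:

```python
# Minimal sketch: counting prompt tokens against a context window budget.
# Assumptions: tiktoken's "cl100k_base" encoding and 1,000 tokens reserved for the reply.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, context_window: int = 32_000, reserved_for_reply: int = 1_000) -> bool:
    """Return True if the prompt plus a reserved reply budget fits in the window."""
    return len(enc.encode(prompt)) + reserved_for_reply <= context_window

print(fits_in_context("Summarize this paper: ..."))  # True for a short prompt
```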
Tavrin t1_j9p5467 wrote
Reply to Seriously people, please stop by Bakagami-
Seems like every time a cutting-edge AI technology gets released to the public and becomes mainstream, people just wanna show off about it. We saw it with Stable Diffusion, and now ChatGPT, until the next big thing.
We get it, it's cool and all. But after more than a month of spamming the same things all the time it gets old real fast.
Tavrin t1_j1qlpcr wrote
Reply to Sam Altman Confirms GPT 4 release in 2023 by Neurogence
Can we stop with these clickbait posts about Sam Altman speaking about GPT-4? Until he or someone else at OpenAI says something tangible about it, it's only rumors or, in this case, suppositions/wishful thinking.
Tavrin t1_izickau wrote
Reply to Chat GPT down or overloaded or something? by mistfox69
Everybody and their grandmother is talking about it and trying it, so it's bound to be overloaded as hell
Tavrin t1_iyi19fj wrote
I've seen that it's pretty good at writing unit tests, building a class or a method, or even giving useful tips to improve existing code. It may not get it right the first time, but you can discuss with it to incrementally improve the returned code snippets.
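As a hypothetical illustration (made up by me, not actual model output), this is the kind of small helper plus pytest-style tests it tends to handle well:

```python
# Hypothetical sketch: a tiny helper and the unit tests ChatGPT might write for it.
def normalize_email(address: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    return address.strip().lower()

def test_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_clean_input_unchanged():
    assert normalize_email("bob@example.com") == "bob@example.com"
```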
I've been really impressed by its context memory
It feels like some kind of software Jarvis (minus the quirky personality, that bot's pretty dry)
Tavrin t1_iw0cw27 wrote
Reply to comment by KnewAllTheWords in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by Dr_Singularity
Now it's all about BIG-bench
Tavrin t1_itc17b5 wrote
Reply to When do you expect gpt-4 to come out? by hducug
I can't find the link anymore, but there were rumours it would come out between November and February.
But in reality no one knows
Tavrin t1_itbfbyx wrote
Reply to comment by daltonoreo in 3D meat printing is coming by Shelfrock77
I've got to say, that's too bad. This subreddit is about the singularity, where we all hope that technological advancements will end suffering, poverty, food-related issues, labour, etc.
Can't we also hope that this future and its technological advances will end animal suffering by making it unnecessary (with the creation of fake meat that is as good as or even better than real meat)? Since we all hope that technology will make the human condition better, can't we hope the same for other species?
Tavrin t1_itbewma wrote
Reply to comment by Rebatu in 3D meat printing is coming by Shelfrock77
You can call it propaganda (since the goal is indeed to get people to at least eat less meat), but the imagery is still real and pretty shocking/gruesome. It does an admirable job of showing people the suffering behind the meat industry.
Tavrin t1_itb1jen wrote
Reply to comment by daltonoreo in 3D meat printing is coming by Shelfrock77
I invite you to watch this; it's the most powerful documentary I have ever seen, and it changed me.
Man, I can't wait for the development of slaughter-free, lab-grown meat that tastes exactly like the real deal. It will be a game changer.
Tavrin t1_it9bbov wrote
Reply to U-PaLM 540B by xutw21
There always tend to be a lot more papers this time of year because the NeurIPS conference is just around the corner, which is why we're suddenly seeing a lot of new stuff right now. But it's always nice to see.
And obviously the papers become more and more impressive each year.
I've got to say, right now Google came prepared and is out in full force
Tavrin t1_is2kbjf wrote
Reply to comment by lifebeyondwalls in AIs are now expert-human-level in no-press Diplomacy and Hanabi by Ezekiel_W
It already had some cooperation (https://openai.com/blog/openai-five-defeats-dota-2-world-champions/#cooperativemode), but maybe not on the same level as this new paper
Tavrin t1_is2k87u wrote
Reply to comment by Ezekiel_W in AIs are now expert-human-level in no-press Diplomacy and Hanabi by Ezekiel_W
It did include cooperation with humans: https://openai.com/blog/openai-five-defeats-dota-2-world-champions/#cooperativemode
Tavrin t1_is1nj63 wrote
I may have misunderstood what he meant, but Dota certainly requires a lot of cooperation to win; it's not a two-player zero-sum game.
OpenAI's Dota bots were really crazy at the time and cooperated really well. I would love to see them go at it again with today's scaling and algorithmic advances
Tavrin t1_iqtombh wrote
Reply to comment by Nmanga90 in Large Language Models Can Self-improve by Dr_Singularity
It's anonymous for double-blind peer review (to try to prevent reviewer bias), but like someone said, it's probably PaLM since the model is the same size, so the authors are probably from Google.
Tavrin t1_j9vl37m wrote
Reply to comment by MysteryInc152 in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
Flan-PaLM is 540B, so there's that