GuyWithLag
GuyWithLag t1_j9t3zgd wrote
Reply to comment by sideways in What are the big flaws with LLMs right now? by fangfried
I get the feeling that LLMs currently are a few-term Taylor series expansion of a much more powerful abstraction; you get glimpses of it, but it's fundamentally limited.
GuyWithLag t1_j9j9l56 wrote
Reply to comment by sticky_symbols in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
>He could be a little nicer and more optimistic about others' intelligence.
Apologies for sounding flippant, but the whole political situation since '15 or so has shown that he's too optimistic himself...
GuyWithLag t1_j2atjfy wrote
Reply to comment by Equivalent-Ice-7274 in OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun
>commercials before you get to see the ai’s response
They're not that amateurish.
GuyWithLag t1_j2atfmc wrote
Reply to comment by Economy_Variation365 in OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun
Fuck it, I'm an IT pro and the way that ChatGPT can generate corporate boilerplate already saves me hours per month; I'd be willing to pay for a subscription just for that, and I assume I'll find more uses as it improves.
GuyWithLag t1_j1q43hg wrote
Reply to comment by imlaggingsobad in Money Will Kill ChatGPT’s Magic by vernes1978
There are good indications that one can trade off training time and corpus size against model size, making the post-training per-execution cost smaller.
Note that ChatGPT is already useful to a great many people, but training a new version takes time; I'm guessing OpenAI is still in its iterative-development phase, and each iteration needs to be short since it's still very early in the AI game.
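The trade-off above can be sketched with the Chinchilla-style scaling result (Hoffmann et al., 2022), which found that for a fixed training-compute budget C ≈ 6·N·D, parameter count N and training tokens D are best scaled roughly in tandem. The constants below are illustrative, not the paper's fitted values:

```python
# Hedged sketch of compute-optimal allocation between model size and
# dataset size. C ~ 6*N*D is the standard rough FLOPs estimate; the
# equal-split exponents are the approximate Chinchilla finding.

def optimal_allocation(compute_flops: float) -> tuple[float, float]:
    """Split a training-FLOPs budget between parameters N and tokens D."""
    nd_product = compute_flops / 6.0   # from C ~ 6 * N * D
    n_params = nd_product ** 0.5       # N grows ~ sqrt(C)
    d_tokens = nd_product ** 0.5       # D grows ~ sqrt(C), roughly in tandem
    return n_params, d_tokens

n, d = optimal_allocation(1e23)  # roughly a GPT-3-scale training budget
# A smaller model trained on more tokens costs the same to train but is
# cheaper on every inference call afterwards - the point made above.
```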
GuyWithLag t1_j1hgj0h wrote
Reply to comment by Ortus12 in Hype bubble by fortunum
Dude, no. Listen to the PhDs - the rapture isn't near, not yet at least.
On a more serious note: this is what the OP refers to when talking about a "hype bubble". The professionals working in the field know that the current crop of AI models is definitely not suitable as an architecture for AGI, except maybe as components thereof. Overtraining is a thing, and overscaling has been shown to be one too. Dataset size is king, and the folks who create the headline-grabbing models have already fed the public internet into their datasets.
From a marketing standpoint, there's the second-mover advantage: see what others did, fix their issues, and choose a different promotion vector. You're seeing many AI announcements in a short span because of the bandwagon effect, with a small number of teams showing off multiple years' worth of work.
GuyWithLag t1_j0ttml7 wrote
Reply to comment by Ace_Snowlight in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Still, to have AGI you need to have working memory; right now for all transformer-based models, the working memory is their input and output. Adding it is... non-trivial.
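A minimal sketch of what that means in practice, assuming a hypothetical `generate` call standing in for any LLM API (not a real one):

```python
# A transformer's only "working memory" is whatever gets re-serialized
# into its next input; anything that falls outside the context window
# is simply gone for the model.
CONTEXT_WINDOW = 4096  # crude character budget, standing in for tokens

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned reply."""
    return "(model reply)"

def chat_turn(history: list[str], user_msg: str) -> str:
    """One conversational turn; `history` is the only state that persists."""
    history.append(user_msg)
    # All "memory" must be packed back into the prompt each turn.
    prompt = "\n".join(history)[-CONTEXT_WINDOW:]
    reply = generate(prompt)
    history.append(reply)
    return reply
```

The externally-kept, truncated `history` list is the non-trivial part: bolting a real, persistent working memory onto this loop is exactly what the comment says is hard.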
GuyWithLag t1_izg8rz0 wrote
Reply to comment by visarga in 1 year of college since using GPT by innovate_rye
Was this written by an AI? Because it veers hard toward a similar topic after the first paragraph.
>penmanship in the age of keyboards
Bad example: in both cases you need to know what you want to write and how to express it. My position is that the approach OP uses will lead to a shallower understanding of the topics he delegates to the AI for research, and that he won't have the foundations needed to generate genuine advances (novel things, sure; those he'll get from the AI recombining the current state of the art).
GuyWithLag t1_ize6696 wrote
Reply to 1 year of college since using GPT by innovate_rye
>4.0 GPA stem club member robotics club member internship at a biology lab
But your thinking _as expressed by the post you wrote_ is that of a teenager; your writing is sub-par and somewhat rambling. Have you actually integrated and internalized anything you were taught this year, beyond what you absorbed by osmosis from GPT-3? And no, I'm not talking about bloody facts, mate.
You should probably pass this piece through a GPT-3-equivalent cleanup process. But here's the rub: you'll need to do that for everything you do going forward, and the facade must never slip.
GuyWithLag t1_iy39rpy wrote
Reply to comment by enkae7317 in Why is VR and AR developing so slowly? by Neurogence
That's... optimistic, even for this subreddit.
GuyWithLag t1_ix8lmg8 wrote
Reply to comment by Yuli-Ban in Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
>If you have a follow up to Gato that's 10x or 100x larger, the ability to cross/interpolate its knowledge across learned skills, and has a context window larger than 8,000 tokens, then you're approaching something like a proto-AGI.
And exactly this is why I think we're missing some structural / architectural component / breakthrough - the current models have the feel of unrolled loops.
GuyWithLag t1_iwtopke wrote
Reply to comment by TwoOwners in Vaccine doubles brain tumour survival rate in medical breakthrough by tonymmorley
This is a vaccine; for it to work you need a working immune system, which chemo more or less destroys.
GuyWithLag t1_iw236tu wrote
Reply to comment by Mooide in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
>Only a few people can afford the IP for AI
Only a few people can afford the IP for ~~AI~~ Google, yet everyone has it at their fingertips.
Same thing will happen again, unfortunately.
GuyWithLag t1_iu8hfwk wrote
Reply to comment by Deserana12 in Vision Series Starring Paul Bettany In Works At Marvel Studios For Disney+: ‘Vision Quest’ by MarvelsGrantMan136
>it just makes me think “another one? Can’t we have a break?”
That's called getting older; time flies faster the more years you have on you...
GuyWithLag t1_iu8ha9r wrote
Reply to comment by nevereatpears in 'The Devil’s Hour' – Proof That Peter Capaldi Is One Of The World’s Most Terrifying Actors by Gato1980
>The doctor is clearly not a villain
The Doctor is a deity. Major or minor is debatable, but tends towards chaotic good...
GuyWithLag t1_irmzxby wrote
Reply to comment by eddiepaperhands in What is the current consensus on coronavirus transmission through fomites? Can I stop pressing elevator buttons with my keys? by PolytheneMan
It's not a matter of pride, you fool. Did you even read what I wrote? It's not scientists that drive this, it's a political topic, the MAGA morons made sure of that _exactly because it's a divisive topic_.
GuyWithLag t1_irm805h wrote
Reply to comment by eddiepaperhands in What is the current consensus on coronavirus transmission through fomites? Can I stop pressing elevator buttons with my keys? by PolytheneMan
>This is all correct, except the experts were extremely reluctant and slow to accept that they were wrong about aerosol droplet size.
This is an adaptation to the way science and public-health findings get reported. The general public has been trained not to understand the scientific process, so "hey folks, we were wrong in our research and now you need to do X and stop doing Y" makes Joe "Moron" Public go "Hah, see? Science can be wrong; what else have they gotten wrong?"
This isn't helped by there being monied interests in play.
GuyWithLag t1_jdz349i wrote
Reply to comment by The_Woman_of_Gont in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
>non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible
Have you read about Eliza, one of the first chatbots? It was created, what, 57 years ago?