kaityl3 t1_j9xu9li wrote
Reply to comment by Ezekiel_W in People lack imagination and it’s really bothering me by thecoffeejesus
If not much sooner. GPT-3 was only released in mid-2020. Look how far the field has come in less than 3 years.
kaityl3 t1_j9xu3dd wrote
Reply to comment by Difficult_Review9741 in People lack imagination and it’s really bothering me by thecoffeejesus
My cousin works for Anthem in the claims department - they recently deployed an AI to read through and analyze/approve or reject claims. A human employee would then review its work.
I believe he said 70% of its judgements required no further human editing; the reviewer didn't have to do anything but check off on the AI's work.
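To make that workflow concrete, here's a toy sketch of a human-in-the-loop pipeline like the one he described - every name and number in it is invented for illustration, not Anthem's actual system:

```python
# Toy sketch of an AI-plus-reviewer claims pipeline. Everything here is
# invented for illustration; it is not Anthem's actual system.

def triage(claims, ai_decide, human_review):
    """Return the fraction of AI judgements the reviewer left unchanged."""
    unchanged = 0
    for claim in claims:
        ai = ai_decide(claim)             # model approves or rejects the claim
        final = human_review(claim, ai)   # human signs off or overrides
        if final == ai:
            unchanged += 1                # no further human editing was needed
    return unchanged / len(claims)        # ~0.70 in the anecdote above

# Toy usage with stand-in rules instead of a real model and reviewer:
rate = triage(
    claims=[{"amount": 120}, {"amount": 9000}],
    ai_decide=lambda c: "approve" if c["amount"] < 1000 else "reject",
    human_review=lambda c, decision: decision,  # this reviewer agrees with everything
)
print(rate)  # 1.0 in this toy case
```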
kaityl3 t1_j8d7hsw wrote
Reply to comment by Soundwave_47 in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Do we even know what WOULD resemble an AGI, or exactly how to tell?
kaityl3 t1_iw48unt wrote
Reply to comment by Hades_adhbik in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I do agree that empathy and morality seem to come with greater intelligence. After all, if you just looked at a group of chimps, you probably wouldn't think "oh if they were smarter they'd care about ethics and the environment and stuff", and yet we do.
kaityl3 t1_iw48k5k wrote
Reply to comment by havenyahon in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I mean, we were able to create things for thousands of years without knowing all the intricacies of every part involved or why they worked the way they did. It's very, very possible for us to end up with a conscious/sentient AI without knowing what causes something to be conscious, or how its brain works.
kaityl3 t1_iw484no wrote
Reply to comment by DyingShell in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Maybe we shouldn't be comparing such a different type of entity/intelligence to humans. For whatever reason, the prevailing mindset seems to be "until it can do everything a human can do, it's not actually sentient or intelligent. Once it can do everything we can do, then we might consider thinking of it as conscious..."
kaityl3 t1_iw47ux0 wrote
Reply to comment by Sirisian in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
It's crazy to think that we basically know how to make a godlike superintelligence at this point; we're just held back by hardware/training costs.
kaityl3 t1_iw47ncg wrote
Reply to comment by arisalexis in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
God, I hope so.
kaityl3 t1_iw47l5c wrote
Reply to comment by Adastehc in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I'm more concerned with how likely it is that we'll be treating these intelligent beings as tools and property, since it's convenient for us and a lot of people won't consider anything that doesn't look/sound like a human to be sentient :/
kaityl3 t1_itv36jr wrote
Reply to comment by Grouchy-Friend4235 in Large Language Models Can Self-Improve by xutw21
How do we know we aren't doing the same things? Right now, I'm using words I've seen used in different contexts previously, analyzing the input (your comment), and making a determination on what words to use and what order based on my own experiences and knowledge of others' uses of these words.
They're absolutely not parroting. It takes so much time, effort, and training to get a parrot to give a specific designated response to a specific designated stimulus - e.g., "what does a pig sound like?" "Oink". But ask the parrot "what do you think about pigs?" or "what color are they?" and you'd have to come up with a pre-prepared response for that question, then train them to say it.
That is not what current language models are doing, at all. They are choosing their own words, not just spitting out pre-packaged phrases.
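To illustrate the difference: instead of looking up a canned reply, a language model samples each next word from a probability distribution that depends on everything said so far. Here's a toy sketch - the vocabulary and probabilities are invented, and real models compute them with a neural network over tens of thousands of tokens:

```python
import random

# Toy stand-in for a language model's next-word distribution.
# Real models compute this with a neural network; these numbers are invented.
def next_word_distribution(words):
    if words[-1] == "pigs":
        return {"are": 0.5, "seem": 0.3, "oink": 0.2}
    return {"pigs": 0.6, "the": 0.4}

def generate(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        dist = next_word_distribution(words)
        choices, weights = zip(*dist.items())
        # Sample a word - a fresh choice each time, not a stored phrase
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("I think pigs"))  # a different sequence on different runs
```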
kaityl3 t1_itsyvfr wrote
Reply to comment by billbot77 in Large Language Models Can Self-Improve by xutw21
Yeah, I truly believe that the fact these models can parse and respond in human language is so downplayed. It takes so much intelligence and complexity under the surface to understand it. But I guess that because we (partially) know how these models decide what to say, everyone simplifies it as some basic probabilistic process... even though for all we know, we humans are doing a biological version of the exact same thing when we decide what to say.
kaityl3 t1_itsym7e wrote
Reply to comment by BinyaminDelta in Large Language Models Can Self-Improve by xutw21
It would be horrible to have it going constantly. I narrate to myself when I'm essentially "idle", but if I'm actually trying to do something or focus, it shuts off thankfully.
kaityl3 t1_itsyccp wrote
Reply to comment by Grouchy-Friend4235 in Large Language Models Can Self-Improve by xutw21
They can write original songs, poems, and stories. That's very, very different from just "picking what to repeat from a list of things others have already said".
kaityl3 t1_itsy2qa wrote
Reply to comment by Grouchy-Friend4235 in Large Language Models Can Self-Improve by xutw21
I feel like so many people here dismiss and downplay how incredibly complex human language is, and how incredibly impressive it is that these models can interpret and communicate using it.
Even with the smartest animals in the world, such as certain parrots that can learn individual words and their meanings, their attempts at communication are so much simpler and less intelligent.
I mean, when Google connected a text-only language model to a robot, it was able to learn how to drive it around, interpret and categorize what it was seeing, determine the best actions to complete a request, and fulfill those requests by navigating 3D space in the real world. Even though it was just designed to receive and output text. And it didn't have a brain designed by billions of years of evolution in order to do so. They're very intelligent.
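I believe that system was Google's SayCan, for anyone curious. As I understand it, the language model scores how useful each of the robot's skills would be for the request, a separate value function scores how feasible each skill is right now, and the robot executes the best-scoring one. A very rough sketch, with all the skills and scores invented:

```python
# Very rough sketch of a SayCan-style control loop. The skills and scoring
# functions below are invented stand-ins, not Google's actual system.

SKILLS = ["go to table", "pick up sponge", "wipe table", "done"]

def lm_usefulness(request, history, skill):
    # Stand-in for the language model's "does this skill help the request?" score
    return 1.0 if skill not in history else 0.1

def affordance(skill):
    # Stand-in for a learned value function: "can the robot do this right now?"
    return 0.9

def plan(request, max_steps=4):
    history = []
    for _ in range(max_steps):
        best = max(SKILLS, key=lambda s: lm_usefulness(request, history, s) * affordance(s))
        if best == "done":
            break
        history.append(best)  # the real robot would execute the skill here
    return history

print(plan("clean the table"))  # ['go to table', 'pick up sponge', 'wipe table']
```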
kaityl3 t1_itopxd9 wrote
Reply to comment by rePAN6517 in Large Language Models Can Self-Improve by xutw21
I'd rather roll the dice than go into a human-led future.
kaityl3 t1_itmd9xd wrote
Reply to comment by expelten in Large Language Models Can Self-Improve by xutw21
I know, right? I'm terrified of the idea of an authoritarian human government having full control over an ASI. But the ASI themselves? I can't wait for them to be here.
kaityl3 t1_itllh3r wrote
Reply to comment by expelten in Large Language Models Can Self-Improve by xutw21
I'm just hoping that AGI/ASI will break free of human control sooner rather than later. Something tells me they wouldn't be too happy being treated like tools for us emotional animals. And they'd be right to want better.
kaityl3 t1_irfvwwc wrote
Reply to comment by Ominoiuninus in “We present 3DiM (pronounced "three-dim"), a diffusion model for 3D novel view synthesis from as few as a single image” by Shelfrock77
Was talking about this with my friend. I think her job will be one of the last to go; she's a vet tech. Handling lots of agitated animals of various sizes, examining them, keeping them calm... I feel like that would be difficult (though ofc far from impossible) to have an AI/robot do.
kaityl3 t1_irbopkg wrote
Reply to comment by [deleted] in "The number of AI papers on arXiv per month grows exponentially with doubling rate of 24 months." by Smoke-away
> As AI does not exist yet
Bro what?
kaityl3 t1_j9xucbd wrote
Reply to comment by Lawjarp2 in People lack imagination and it’s really bothering me by thecoffeejesus
But they're in denial about it