Nyanraltotlapun t1_jdkkc6q wrote
Reply to comment by 3deal in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
There is no way for humans to adapt to an alien intelligence. The idea of developing general AI is insanely horrifying from the beginning.
3deal t1_jdkmcrb wrote
We all know the issue, and yet we keep running down that road.
t0slink t1_jdkq5c1 wrote
Nah, full speed ahead please. With enough development, a cure for cancer, aging, and all manner of devastating human ailments could happen in this decade.
It is senseless to cut off a pathway that could literally save and improve tens of billions of lives over the next few decades because you're scared it can't be done correctly.
sweatierorc t1_jdkt9uq wrote
A cure for cancer and aging in this decade? AI has gotten really good, but let's not get carried away.
SmLnine t1_jdlgtl8 wrote
If an intelligence explosion happens, there's really no telling what's possible. Maybe these problems are trivial to a 1 million IQ machine, maybe not. The only question really is if the explosion will happen. Two years ago I would have said 1% in the next ten years, now I'm up to 10%. Maybe in two more years it'll look like 30%.
sweatierorc t1_jdlhgay wrote
IMHO, cancer and aging are necessary for complex organisms. It is more likely that we solve cloning or build the first in vitro womb than that we defeat cancer or aging.
MINECRAFT_BIOLOGIST t1_jdlmzvv wrote
Well, cloning and artificial wombs are basically done or very close; we just haven't applied them to humans for ethical reasons. Six years ago a very premature lamb was already kept alive in an artificial womb for four weeks.
As for cancer and aging...it seems increasingly clear that part of the process is just that genes necessary for development get dysregulated later on in life. I think the fact that we can rejuvenate our own cells by making sperm and eggs points to the fact that the dysregulation should be fixable, and recent advances in aging research seem to show that this is true. The issue is, of course, pushing that process too far and ending up with cells dedifferentiating or becoming cancerous, but I think it's possible if we're careful.
MarmonRzohr t1_jdlyfub wrote
>artificial wombs are basically done or very close
Bruh... put down the hopium pipe. There's a bit more work to be done there, especially if you mean "artificial womb" as in from conception to term, not an artificial womb as in a device intended for prematurely born babies.
The second one was what was demonstrated with the lamb.
MINECRAFT_BIOLOGIST t1_jdlz2nr wrote
Hmm, perhaps I was being a bit hyperbolic, but check this out (from 2021):
https://www.science.org/content/article/mouse-embryos-grown-bottles-form-organs-and-limbs
nonotan t1_jdln1d9 wrote
We already know of complex organisms that essentially don't age, and also others that are cancer-free or close to it. In any case, "prevent any and all aging and cancer before it happens" is a stupid goalpost. "Be able to quickly and affordably detect, identify and treat arbitrary strains of cancer and/or symptoms of aging" is essentially "just as good", and frankly seems like it could well already be within the reach of current models if they had the adequate "bioengineering I/O" infrastructure, and fast & accurate bioengineering simulations to train on.
ML could plausibly help in getting those online sooner, but unless you take the philosophical stance that "if we just made AGI they'd be able to solve every problem we have, so everything is effectively an ML problem", it doesn't seem like it'd be fair to say the bottlenecks to solving either of those are even related to ML in the first place. It's essentially all a matter of bioengineering coming up with the tools required.
SmLnine t1_jdlwhtu wrote
>but unless you take the philosophical stance that "if we just made AGI they'd be able to solve every problem we have, so everything is effectively an ML problem", it doesn't seem like it'd be fair to say the bottlenecks to solving either of those are even related to ML in the first place. It's essentially all a matter of bioengineering coming up with the tools required.
We're currently using our brains (a general problem solver) to build bioengineering tools that can cheaply and easily edit the DNA of a living organism. 30 years ago this would have sounded like magic. But there's no magic here. This potential tool has always existed, we just didn't understand it.
It's possible that there are other tools on the table that we simply don't understand yet. Maybe what we've been doing for the last 60 years is the bioengineering equivalent of bashing rocks together. Or maybe it's close to optimal. We don't know, and we can't know until we aim an intellectual superpower at it.
SmLnine t1_jdlxego wrote
There are complex mammals that effectively don't get cancer, and there are less complex animals and organisms that effectively don't age. So I'm curious what your opinion is based on.
MarmonRzohr t1_jdmj8th wrote
>There are complex mammals that effectively don't get cancer
You got a source for that ?
That's not true at all according to everything I know, but maybe what I know is outdated.
AFAIK there are only mammals that seem to develop cancer much less than they should, namely large mammals like whales. Other than that, every animal from Cnidaria upward develops tumors; e.g., even the famously immortal Hydras develop tumors over time.
That's what makes cancer so tricky. There is a good chance that far, far back in evolution there was a selection trade-off between longevity and rate of change, or something else. There may therefore be nothing we can do to prevent cancer, and we can only hope for suppression/cures when/if it happens.
Again, this may be outdated.
sweatierorc t1_jdm83bv wrote
Which ones? Do they not get cancer, or are they just more resistant to it?
SmLnine t1_jdmftzs wrote
I said "effectively" because a blanket statement would be unwarranted. There has probably been at least one naked mole rat in the history of the universe that got cancer.
https://www.cam.ac.uk/research/news/secrets-of-naked-mole-rat-cancer-resistance-unearthed
sweatierorc t1_jdmkacg wrote
Sure, humans under 40 are also very resistant to cancer. My point was that cancer comes with old age, and aging seems to be a way for us to die before cancer or dementia kills us. There is "weak" evidence that people who have dementia are less likely to get cancer. I understand that some mammals like whales or elephants seem to be very resistant to cancer, but if we were to double or triple their average life expectancy, other diseases might become more prevalent, maybe even cancer.
t0slink t1_jdkufvf wrote
> AI has gotten really good, but let’s not get carried away.
People were saying the same thing five years ago about the generative AI developments we've seen this year.
sweatierorc t1_jdlcwkm wrote
True, but with AI, more computing power/data means better models. With medicine, things move more slowly. If we get a cure for one or two cancers this decade, it would be a massive achievement.
Art10001 t1_jdmff0b wrote
More intelligence, plus more time (AIs operate on different time scales), equals a faster rate of discoveries.
sweatierorc t1_jdmilbm wrote
Do we know that? E.g., with quantum computing, we know it won't really revolutionize our lives despite the fact that it can solve a new class of problems.
Art10001 t1_jdmyazo wrote
Quantum computing solves new types of problems, and solving them, or the findings that come from them, improves our lives.
meregizzardavowal t1_jdksro1 wrote
I don’t know if people are as much saying we should cut off the pathway because they are scared. What I’m hearing is they think we ought to spend more effort on ensuring it’s safe, because a Pandora’s box moment may come up quickly.
t0slink t1_jdlhf3s wrote
I wish you were right, but people are calling for investment in AGI to cease altogether:
> There is no way for humans to adapt to an alien intelligence. The idea of developing general AI is insanely horrifying from the beginning.
One of the parent comments.
Such absolutist comments leave no room whatsoever for venturing into AGI.
greenskinmarch t1_jdlc952 wrote
I just want humans to stop dying of cancer!
Monkey's paw curls. The humans all die of being shot by drones instead
t0slink t1_jdlhje0 wrote
Thanks Obama
theotherquantumjim t1_jdlre84 wrote
No! Not like that!
comfytoday t1_jdljrdg wrote
I'm a little surprised at the seeming lack of any backlash, tbh. I'm sure it's coming though.
brucebay t1_jdlc3ix wrote
This is not an alien intelligence yet. We understand how it works, how it thinks. But eventually this version could generate an AI that is harder for us to understand, and that version could generate another AI. At some point it will become alien to us, because we may no longer understand the math behind it.
WonderFactory t1_jdm1slk wrote
We don't understand how it works. We understand how it's trained but we don't really understand the result of the training and exactly how it arrives at a particular output. The trained model is an incredibly complex system.
SzilvasiPeter t1_jdudjj3 wrote
Well, our own body is alien to us. The brain, the gut, the endocrine system, and so on. There are emergent complexities everywhere from giant black holes to a pile of dirt. It is the same with conceptual things like math or computer science. Simple axioms and logic gates lead to beautiful complex systems.
I guess we should get used to "not understanding" at this point.
Nyanraltotlapun t1_jdm0r15 wrote
>This is not an alien intelligence yet. We understand how it works how it thinks.
It's alien not because we don't understand it, but because it is not a protein life form. It has nothing in common with humans: it does not feel hunger, does not need sex, does not feel love or pain. It is metal, plastic, and silicon. It is something completely nonhuman that can think and reason. That is the true horror, don't you see?
>We understand how it works how it thinks
Sort of, partially. And it is a false assumption in general. Long story short, a key property of complex systems is the ability to pretend and mimic. You cannot properly study something that can pretend and mimic.
ambient_temp_xeno t1_jdmdh2i wrote
There has been work on how to even start interacting with an extraterrestrial civilization, and that would probably be vastly harder than dealing with whatever intelligence is contained in a human-data-filled, human-trained model. https://www.nasa.gov/connect/ebooks/archaeology_anthropology_and_interstellar_communication.html
That said, it is the closest we have to that so you're not 'wrong'.
Spud_M314 t1_jdlp71e wrote
Genetically alter the human brain to make more neocortical neurons and glia... That would make the brain more brainy: more gray matter, more smart stuff... A biological (human) superintelligence is more likely...