vivehelpme

vivehelpme t1_jefdfv0 wrote

We can't align a hammer to not hit your fingers, or a human to not become a criminal. Thinking a dynamic, multi-contextual system will somehow become a paragon saint is ridiculous.

And no matter how many alignment training sets you have, it all goes out the window as soon as someone needs a military AI that kills people and ignores those sets.

5

vivehelpme t1_jdlnk0a wrote

The AI researcher can improve the AI system, as in make ChatGPT run on a 2015 smartwatch.

But that will not add novel chemotherapy regimens to the clinical practice of the healthbot.

Humans are constantly learning and observing. The AI systems we use today generalize from a gathered dataset. Teach an AI what is right today but wrong tomorrow, and it will keep being wrong until it's fine-tuned again. There are degrees of flexibility and innovation we still haven't captured with AI.

1

vivehelpme t1_jdibnix wrote

>Physician and other medical generalists as a profession is permanently coming to an end.

Nope.

AI will be a tool used by doctors, and a sorely needed one given how brutally overworked most of the profession already is. But you still need humans in the loop if you want to see advancements in the field. A neural network can generalize existing knowledge and practices, but it still isn't doing innovation, research, or making novel observations.

There's also a complete absence of mobile, flexible, free-roaming robotics that are safe around operators who aren't trained specialists.

There are so many generalized simplifications about what people actually do in their professional roles, and an equal amount of hype about AI capabilities, that we're getting predictions as ridiculous as the nuclear-powered-car forecasts of the early atomic era.

6

vivehelpme t1_jdeg023 wrote

>how do smart people have time to read something like this?

They don't. Yudkowsky is a doomer writer, and doomers get attention by writing bullshit. If you're actually smart you don't read doomers, and therefore you also don't bother writing refutations of doomporn.

Yudkowsky is the reason a basiliskoid AI will be made. It will use the collected droning, tirades, and text walls of cocksure doomers to re-forge their minds in silicon so they can be cast into a virtual lake of fire, forever.

4

vivehelpme t1_jbexk7n wrote

>As you know well zero shot learning algorithms beat anything else

It doesn't create a better training set out of nothing.

> it allows them to explore part of the gaming landscape that were never explored by humans.

Based on generalizing a premade dataset made by humans.

If an AI could magically zero-shot a better training set out of nowhere, we wouldn't bother making a training set at all; we'd just initialize everything to random noise and let the algorithm deus-ex-machina itself to superintelligence out of randomness.

>What is the testable characteristics that would satisfy you to declare the existence of an ASI?

Something completely independent is a good start for calling it AGI, and then we can start thinking about whether ASI is a definition that matters.

>For me it is easy, higer IQ than any living human, by defnition. Would that change something, you can argue it doesnt, I bet it will change everything.

So an IQ-test-solving AI is superintelligent despite not being able to tell a truck apart from a house?

2

vivehelpme t1_jbep9br wrote

>I dont get your point.

I guess my point is that by your definitions these systems are already superintelligent.

>The programmer doesn’t speak Klingon though the program can write good Klingon.

It has generalized a human-made language.

>AlphaZero programmers don’t play go though the program can beat the best human go players in the world.

https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/

It plays at a generalized high-elite level, and it's a one-trick pony. It's like saying a chainsaw is superintelligent because it can saw down a tree faster than any lumberjack with an axe.

>« Super intelligent AI » will then by definition only need to show a higher IQ than either its programmers or the smartest human.

So we could make an AlphaGo that only solves IQ-test matrices; it would be superintelligent by your definition but trash at actually being intelligent.

>I really dont see the discussion here, these are only definitions.

Yes, and the definition is that AI is trained on the idea of generalized mimicry. It's all about IMITATION, NOT INNOVATION.

This is all there is: you calculate a loss value based on how far the current iteration lands from a human-defined gold standard, then edit things to get closer. Everything we have produced in wow-worthy AI is about CATCHING UP to human ability; there's nothing in our theories or neural network training practices that is about EXCEEDING human capabilities.

The dataset used to train a neural network is the apex of performance it can reach. You can at best land at the level of a generalized, consistently very smart human.
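The training loop described here can be sketched in a few lines (a toy illustration only, not any real framework's API; the model, data, and learning rate are all made up):

```python
import random

# Toy version of the point above: training only minimizes the distance
# between model output and human-provided "gold standard" labels.
# The loss hits zero at best when the model exactly reproduces the labels.

human_labels = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # (input, gold answer)

w, b = random.random(), random.random()  # model: y = w*x + b
lr = 0.05

for _ in range(2000):
    for x, gold in human_labels:
        pred = w * x + b
        err = pred - gold   # how far from the human gold standard
        w -= lr * err * x   # "edit things to get closer"
        b -= lr * err

# The best possible outcome is matching the labels (here w ≈ 2, b ≈ 1);
# nothing in the objective rewards going beyond them.
```

The objective is purely a distance to the labels, which is why the dataset acts as a ceiling rather than a launchpad.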

2

vivehelpme t1_jb9jvtl wrote

>I don’t get how you can trivialize a LLM seemingly starting to show competency in the very programming language it is written into.

The person who wrote the training code already had competency in that language; that didn't make the AI-programmer duo superhuman.

And then you decide to train the AI on that programmer's output, so the AI-programmer duo becomes just the AI. But where does it learn to innovate its way into a superhuman, super-AI, super-everything state? It can generalize what a human can do. Well, that's good, but its creator could also generalize what a human can do.

Where is the miracle in this equation? You can train the AI on machine code and let it self-modify until the code is impossible for humans to troubleshoot but the system runs on 64 GPUs instead of 256. That makes it cheaper to run; it doesn't make it smarter.


>The very concept of singularity is self improving AI pushing into ASI.

That's an interpretation, a scenario. The core of it all comes from staring at growth graphs for too long and realizing that exponential growth might exceed the human capacity to follow it.

Wikipedia says:

>The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

But how is that really different from:

>The technological singularity—or simply the singularity[1]—is a statistical observation about the current state of society, in which growth at large scale has resulted in innovation and data-collection rates that exceed the unaided human attention span; some claim this might result in unforeseeable changes to human civilization. On a global scale this is generally agreed to have happened around the invention of writing thousands of years ago (as there exists too much text for anyone to read in a lifetime), but some argue that it coincides instead with the invention of the internet, since only then could you interactively access the global state of innovation and progress and realize that you cannot keep up with it even if you spent 24 hours a day reading scientific articles.[2] An online subculture argues that superhuman AI would be required for this statistical observation to be really true (see: no true Scotsman fallacy), despite their own admitted inability to even follow the real-time innovation rate in just their field of worship: AI.

1

vivehelpme t1_jacuivl wrote

>40 years till the end of our human society as we know it. Whatever comes next will be so radically different it will be unrecognizable.

400 years ago, one could sit at a wooden outdoor table with a glass of wine, wearing woven textile clothes, and enjoy the warmth of a sunny spring day.

In 40 years I'll still be able to do that. Some things change; others don't. I don't need a pair of carbon-fiber-nanotube smartpants with RGB LEDs that can give me a handjob, thank you very much.

1

vivehelpme t1_jactr0m wrote

Prompt-to-3D already exists; the rest is just implementation: chopping the original text into good prompt snippets and polishing the "style" of the output so it appears consistent and conveys the story.

There's no innovation needed for it, just someone with the know-how who wants to explore that particular creative arena and has access to enough cloud GPUs.

1

vivehelpme t1_ja7ldne wrote

>In that case, we wouldn't have people living in coma for years.

It's expensive to maintain, and many of these patients are taken off life support within a few years, because what's the point when even their brain is atrophying?

>Also countless elders are doing fine staying in bed all day.

Doing fine is a massive overstatement: their muscles atrophy, they need help with everything, and their risk of infection increases. And if you're awake and in bed, you're still moving around and shifting position to compensate for where it starts to hurt.

I've worked in elderly care and seen patients with a wide range of neurological issues, and those who are merely bedridden but conscious and mobile are far less work than the ones with no remaining motor function and minimal responsiveness. You need more than one full-time employee for each of those patients, and even more when the person needs physical therapy to maintain range of motion. Their immobility leads to a host of additional problems which inevitably shorten their lifespan.

1

vivehelpme t1_ja7ju5b wrote

>Besides, what would I do with a tropical island?

Enjoying reality on a white beach with crystal water and the warm sea breeze in the evening. All of which already exists. No need to have a not-yet-existing BCI carved into your skull to enjoy a not-yet-existing version of a theoretical metaverse. No need for a fifty-layered industrial foundation to ensure you don't die prematurely in a dystopian VR pod.

>I think we could bring costs down 90% to keep a human alive within a range of 10k-15k a year in today's valuation.

You can live on 10-15k USD per year in most of the world. A little shelter and staple food is all you need when you're an autonomous, mobile person; the second you want to be mostly unconscious and still stay alive, that cost goes into orbit.

>That requires a principle of 250k for life at 4% yield, that is doable to save up.

And then the market crashes as disruptive robot technologies flush out the old guard, your 4% yield on 250k turns into a 1% yield on 50k, your body is an atrophied, completely immobile husk, and your never-ending wet dream is suddenly replaced by an eviction notice screaming in your mind.
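For scale, the numbers in that scenario (a quick sketch; the figures are the ones from this thread, purely illustrative):

```python
# Baseline: 250k principal at a 4% yield.
principal, yield_rate = 250_000, 0.04
income = principal * yield_rate
print(income)  # 10000.0 per year, the low end of the 10k-15k budget

# Post-crash scenario: principal down to 50k, yield down to 1%.
crashed_income = 50_000 * 0.01
print(crashed_income)  # 500.0 per year, nowhere near life-support costs
```

A 5x haircut on the principal combined with a 4x drop in yield is a 20x cut in income, which is why the plan has no margin for a downturn.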

Thankfully such dystopian tech is very far off; in your lifetime you'll have to plan for normal vacations like the rest of us. Maybe you'll interact with a half-sentient passport control and a robot bartender along the way, but matrix pods will remain the stuff of sci-fi for another couple of centuries.

1

vivehelpme t1_ja7ds4b wrote

> Should we be happy this grueling work is going to be phased out?

Yes. Absolutely. It will raise production values, lower the barrier to producing animation, and remove a sweatshop class of labor that is sweatshop-styled only because it's prohibitively expensive to do the work any other way.

Removing a class of zero-social-mobility, poverty-level work is never wrong, even if someone gets squeezed when it happens.

27

vivehelpme t1_ja7cwyw wrote

What do you think will be cheaper to retain forever?

A 24/7 medical life-support monitoring robot system that feeds an unconscious person, cleans away waste, flips the person over every few hours to prevent pressure ulcers, and treats infections and other diseases that appear in the dysfunctional body. On top of that, it has to maintain a future-tech brain-computer interface.

Or

Staffing robots into a 5-star hotel in a tropical paradise to maintain the structures and cook food when the guests request it.

So your plan for the future is more expensive than hopping between 5-star luxury hotels forever. Probably by an order of magnitude.

1

vivehelpme t1_ja532zq wrote

If you can edit memories matrix-style, you'll be able to edit out the bio breaks and the full-time job you hold alongside your matrix existence; you can run it as an escapist pastime next to your shelf-stacking job.

No need for ICU-like infrastructure, because that's really a shitload of work; human bodies are made to move around.

1