
ElvinRath t1_je5u2rk wrote

In my country, life expectancy if you get to 60 is already around that...

It was already 86 in 2019 (pre-COVID; we'll probably go back to those numbers when we get the data from 2022)... So 88 in 22 more years is pretty conservative.

Anyway, we've reached a point where the main factor in life expectancy is living a healthy life. Be even a tiny bit active, avoid being overweight, and you'll get past 90 even without advances in medicine.

1

ElvinRath t1_j88rexo wrote

It might make sense to talk about how to prepare for the short-term automation that's coming, but I don't see much sense in preparing for the singularity.

The singularity might come in 20 years, 50, or 200. Maybe we'll be alive then, maybe not.
Are there any safe assumptions we can make that would improve things for us after the singularity without making things worse before it?

What if it takes 200 years and we die before it?
I think the best approach with tech is to take into account the short-term tech that's coming (more automation, new kinds of content creation, etc.) but not to look too far ahead, because going too far is never safe.

​

I mean, you talk about life extension. That's great. Maybe in 50 years everyone will be virtually immortal. But it's entirely possible that we see something far more conservative, like a 10-year increase in life expectancy.

Hope for the best, prepare for the worst (?) haha.

And above all, don't risk your long-term safety for short-term gains hoping for the singularity to make things good for you in the future (like... spending all your money now because you think money will be meaningless in the future).

Also, talking about short/medium-term things has the advantage of making you sound a tiny bit less crazy (there are crazy things coming! But you can already show some kind of preview of those crazy things).

Leave things like LEV and ASI and the singularity where they belong, in this sub :P

5

ElvinRath t1_j694ill wrote

Haha sorry, it was a set phrase; I didn't mean literally praying, I'm an atheist.

I think it's also used in English, but maybe it's not that common? At least in my language there are a lot of common expressions that include religious terms (like praying or God), and I don't feel particularly bad using them when they fit the conversation.

It means "let's HOPE+WISH that they keep publishing the papers".

I don't mention the code because... well, I WISH they would publish it, but I have no HOPE for that hahaha

4

ElvinRath t1_j67wl0d wrote

Because it's programmed to do so?

Everything an AI does, no matter what, it does because of us: because we either intentionally or unintentionally coded/trained it to do it.

If you're thinking of some kind of awakening where an AI suddenly gets its own different goals, you don't know what you're talking about. (Which is understandable, but I'd suggest reading about it instead of writing about it.)

9

ElvinRath t1_j56gy3g wrote

Either we need new architectures or more data.

Right now, even if we somehow transcribed all of YouTube into text, which could be done, there just isn't enough data to efficiently train a 1T-parameter model.

And in text form alone, there's probably not enough even for 300B...

So, yeah, there isn't enough data to keep scaling up.

It might be different with multimodal; I don't know about that.
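To put rough numbers on the "not enough data" point, here is a small sketch using the Chinchilla rule of thumb from the scaling-laws literature (roughly 20 training tokens per model parameter — an assumption of this sketch, not a figure from the comment):

```python
# Rough compute-optimal data requirement, per the Chinchilla
# rule of thumb (~20 training tokens per model parameter).
def tokens_needed(params: float, tokens_per_param: float = 20.0) -> float:
    return params * tokens_per_param

# A 300B-parameter model already wants ~6T tokens, and a
# 1T-parameter model ~20T — more high-quality text than is
# generally thought to be available publicly.
for params in (300e9, 1e12):
    print(f"{params:.0e} params -> ~{tokens_needed(params):.0e} tokens")
```

The 20-tokens-per-parameter ratio is only a heuristic; the exact multiplier matters less than the conclusion that token requirements grow linearly with parameter count.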

7

ElvinRath t1_j52bg3j wrote

Well, there are riots from time to time already.

Anyway, we are not yet there. Not even close.

And governments always move too slowly, that's not even a question... But most people who want UBI are highly unrealistic about it.

I totally see a UBI in the future, but unemployment has to go up A LOT before that.

6

ElvinRath t1_j51e00q wrote

Yeah, but that's probably how it's gonna be, and those things... it would be a bad idea to do them in real life.
If what you meant with your post was that we should not abandon the real world, well... honestly, I'm not sure that will be that much of a risk.
Will we want that?
I'm more worried about indirect things, like whether we'll end up ignoring human contact.

>It's like when people try to fill a void in their heart with things, rather than meaningful experiences with people.

What is meaningful and what is not? I'm not asking as a joke; in fact I'm not even asking you, I'm just asking to emphasize that we don't know.
Most people think about achievements, but let's be real, most people die without any great achievement.
A lot of people think about their family and offspring. Well, biologically this makes sense, but with immortal lives it would probably get weird. (Maybe we keep expanding our numbers and fill the universe with humans... but it's hard to picture families staying together with hundreds of generations alive.)

Friends? Human contact? Well, maybe. I don't like the idea of a future with much less human contact, but I certainly see it as a possibility.
A lot of people think about their job. I can understand someone in certain fields saying that, but come on. Most people work jobs they hate...
I'm not sure how we will feel about whether we're living meaningful lives. Maybe we won't, or maybe we'll notice, not because we've lost our meaning, but because we have more time to pay attention.
But I don't see what VR has to do with most of that.

I think the real risk is whether we continue to have human contact or not.

Honestly, I'm a bit worried about future generations born after this. Will they attend school with other humans?

I mean, I want to experience a VR where I can do whatever I desire, but I think a real world with boundaries and limits is probably needed for a healthy mind, especially in the first years of our lives.

Contact with other beings that we have to respect is probably something very healthy.

Future young people are gonna kill me, but we should probably heavily restrict the use of VR under 18, or 16, or something...

0

ElvinRath t1_j50z9ep wrote

I know how to answer this one!

For instance, in current VR we can do things like kill people and risk our (virtual) lives.

It's technically possible to do those things in real life, but as ChatGPT would say, it would raise some moral and ethical concerns that are important to consider.

Also, in VR it's probably technically possible to simulate reality-bending powers, time travel, planet-scale destruction, etc...

Things that are probably not possible in real life. Or at least not practical... we can't all destroy the Earth every Monday.

VR should have its place, and real life should have its place too. I mean, I can understand that you don't wanna leave the real world, and I can probably agree with that, but VR certainly offers some things that are just not practical in the real world, or that would raise a lot of ethical concerns (and not like the ones from ChatGPT, which is concerned about everything... real ethical issues, like hurting other people or doing things that affect other people. In your own virtual world, none of that is a problem.)

30

ElvinRath t1_j4dw91r wrote

Oh, yeah, sorry.

I was answering the OP, and I used "original research" because he mentioned that, but I was thinking "independent" (a term I use later in my post), meaning "without human intervention" (or at least no more intervention than "Hey, go and research this").

No human intervention is the requirement for the concept of the singularity (well... or augmented humans who can comprehend in seconds what actually takes years, but that's probably not a human anymore... :D)

1

ElvinRath t1_j4c3qvz wrote

That's hardly surprising.
When things actually happen, they usually become far less impressive.
Anyway, I think your approach here isn't right: we are not one or two generations away from AI doing original research. Or at least, we don't know that.

It might happen, it might not.

About 8 years ago we thought we would have fully autonomous vehicles in 1-3 years max, and AI art in a hundred years, maybe.

And here we are.

We are quite bad at predicting the future. And by "we", I don't just mean us on Reddit, I also mean the real experts.

Look, don't get me wrong, I'm optimistic. But there is also a real possibility that 10 years into the future, the only thing we have is ChatGPT 5.0 running on your phone, faster and a tiny bit better but basically the same.

That would of course be very useful, but still, having that (a very powerful AI assistant) and having AGI (capable of conducting independent research!!!! which, if fully working, is the path to the singularity) are fundamentally different. And we can't predict those things, because they need breakthroughs.

3

ElvinRath t1_j4bc2v2 wrote

Well, I would still be the best at being me, and no AI will ever do that better.

What it can do is fulfill my roles better than me. I totally expect AI to eventually do that. (And maybe future humans will have little if any human interaction... That might be a serious problem, but it's one for future humans more than for the humans living through the transition.)

But yeah, I'm not the fulfillment of my roles. I am my own consciousness, and no other existence can take that away from me.

If you're talking about people feeling empty because they might feel they have no purpose... yeah, that might happen. But it already happens. A lot of people already feel that way about their jobs, because they find no meaning in their work, and they also have trouble with their social lives and find no meaning there either.

Can AI doing their jobs extend that to other people? Sure. But that's not a problem that comes with the singularity; it comes with automation and doesn't require the singularity.

Also, the problem probably surfaces at that moment, but it was already there. I don't think most people should find meaning in their jobs. I mean, in some cases it makes sense: if you're doing what you want and changing the world to your liking, nice.

But if you work to pay the bills, your work should not be what gives meaning to your life...

1

ElvinRath t1_j4ap0wv wrote

Well, I might be biased by my own thinking about it, but I really can't see the drama in that.

What's the matter with being a biological computer? You are still you.

I could understand people going crazy if we were a simulation or something, since we basically wouldn't really be us, just a little part of a bigger computer program.

But as long as I'm individually me, I just have to deal with whatever comes as best I can.

Being a biological computer or being a flesh miracle with a soul, well... that's just definitions. Reality doesn't change just because we get a very, very, very fancy AI god-tool to make our lives better.

1

ElvinRath t1_j46vp0x wrote

>Realistic humanoid robotic arm

Lol, they say on the website that they accept preorders (but they don't), and they also have some new (quite different, more robotic) videos...

https://www.patreon.com/clonerobotics

https://twitter.com/clonerobotics

Anyway it looks cool, but speaking honestly, deep down I can't help but think it has to be some kind of scam... I mean, it's too cool to have a Patreon with only a few hundred dollars...

10

ElvinRath t1_j3hx2ok wrote

My estimate of 2 or 3 years wasn't intended as a "realistic estimate", more like an "if everything goes very fast and we're very lucky" scenario. I think it will probably take much longer to get there.

If you think that's too slow, well, we'll see, haha. I hope to be wrong and that you are right, it would be very cool :P

3

ElvinRath t1_j3hsk47 wrote

Yep!

And about "we": they are supposed to release the model like they did with Stable Diffusion.

It probably won't have the same impact, because I guess it might be a bit too much for most consumer GPUs, but it's very cool to have this kind of tech available.

3