blxoom t1_iyvimzk wrote

Reply to comment by Shelfrock77 in bit of a call back ;) by GeneralZain

The 2040s. The 2030s will be the decade of AR wearables. People won't jump straight from smartphones to FDVR; hell, smartphones are closer to '80s tech than to FDVR. They'll first need AR headsets/wearables that connect to their phones over Bluetooth, then standalone glasses, then contacts. Then, once they realize reality can be 100% manipulated through technology, they'll be receptive to FDVR come the 2040s.

15

Head_Ebb_5993 t1_iyyknlh wrote

Nah, that's way too optimistic. If we mean the same thing by FDVR, then I don't expect anything like it in my lifetime. Maybe 2300-2400, and even that's a rather optimistic guess. Neuroscience is not that easy; we know practically nothing about the brain. Even Neuralink has pretty much just repeated old experiments without pushing any new boundaries. We are nowhere near even talking about FDVR, and it's very hard to make progress in neuroscience.

−2

SoylentRox t1_iyyper0 wrote

Head_Ebb, do you understand the Singularity hypothesis?

While it's been rehashed many times, in its most general form it says: if humans build AI, that AI can use its above-human intelligence to build better AI, and to control vast numbers of robots that build more robots, which go out and collect materials and energy, and then build more computers to run AI on, and so on.

It is exponential. So if the hypothesis is correct, you will see rapidly accelerating progress to levels unknown in history. It will be impossible to miss or fake.

It doesn't continue 'forever'; it halts when technology has been improved close to the true limits allowed by physics, and/or when all the available matter in our star system has been turned into waste piles and more robots.

So anyways, because it's exponential, your estimate of '2300-2400' for full-dive VR isn't plausible. For your theory to be correct, human researchers would have to just keep steadily studying biology and neuroscience (arguably they only became somewhat competent at it less than a century ago, with DNA's structure determined in 1953 and the human genome fully sequenced by 2003) until they eventually develop safe neural implants.

You think it will take 328 years (!) for that to happen. Hell, there is no technology today that people started working on 328 years ago and still haven't finished, and work on neural implants has already started (by 'start' I mean having a theory of how to do it and building working prototypes). About the only long-running technology I can readily think of that still doesn't pay off is fusion, and even that works, just not well enough.

This doesn't mean humans will get FDVR, but it means either they will have it in... well, if the singularity is actually starting right now, then in 10-20 years (though maybe it isn't actually hitting criticality* yet)... or they will be extinct.

*criticality: nuclear materials do jack shit until you reach a critical mass. For years, fission scientists theorized that a chain reaction was possible, but no lab had enough enriched uranium and neutron reflectors in one place for it to work. So all they could do was measure activity counts and do the math.
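The critical-mass idea is easy to see in a toy model. Everything below is a cartoon, not reactor physics: each generation, every neutron produces k new neutrons on average, and all that matters is whether k is below or above 1.

```python
# Toy chain-reaction model: n(t+1) = k * n(t), where k is the
# effective multiplication factor. Below criticality (k < 1) the
# population fizzles out; above it (k > 1) it grows exponentially.
# All numbers are invented for illustration.
def neutron_population(k: float, generations: int, start: float = 1000.0) -> float:
    n = start
    for _ in range(generations):
        n *= k
    return n

fizzle = neutron_population(0.9, 50)  # subcritical: decays toward zero
boom = neutron_population(1.1, 50)    # supercritical: runaway growth
```

The same shape is what the singularity argument claims for AI capability: nothing dramatic below the threshold, exponential growth above it.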

With AI, the theory is that we can get an AI smart enough to reprogram new versions of itself (or asymmetric peers) that perform well on tests of cognitive ability, including simulated tasks from the real world. Criticality happens when this works.

6

Head_Ebb_5993 t1_iyysfdb wrote

Is this some kind of cult? Or a religion? I know what the singularity is. Just because it sounds simple doesn't matter; you underestimate how hard it is to get to that level, when we can't even practically define what intelligence is, and don't even dare to define things like consciousness. You are treating this stuff more like a religion than science.

Cool, we had the first drawing of a neuron around 1870, and we still have no idea how the brain properly works. We have trouble measuring brain activity precisely enough to even begin to ponder how it works. The most interesting thing I can think of is that we can make an AI that 'reads your mind', but it requires a lot of training, only works on pre-chosen words, and usually doesn't have a great error rate. FDVR compared to that is like Star Wars spaceships compared to planes.

2300-2400 was my optimistic guess, but in reality I'm rather skeptical there will ever be anything like that. If you can't do it safely, without risking brain damage or altering the brain too much, then it will just be more practical to use the other, easier means we already have today. It might be the year 2800.

−2

SoylentRox t1_iyytf3y wrote

>Is this some kind of cult ? Or religion ? I know what is singularity , just because it sounds simple doesn't matter , because you underestimate how hard it is to get to that level , when we can't even practically define what is intelligence...

It's neither. It's a large group of people, many of whom live in the Bay Area and work at AI companies to make it happen. It's an informed opinion about what we think is about to happen, similar to the nuclear fission researchers in the 1940s who thought they'd be able to blow up a city but weren't entirely sure they weren't about to blow up the planet.

Your other objections date from before 2012. Please update your knowledge.

3

Head_Ebb_5993 t1_iyyu8wr wrote

How exactly are they outdated? Enlighten me, ideally with sources, because I don't think they are.

Edit: also, I am rather skeptical that there are any people who work with both neuroscience and AI, and from every discussion with actual people in the field I've gotten the impression that AGI isn't even taken seriously at the moment; it's just sci-fi.

In all seriousness, people write essays on why AGI is actually impossible. That's a bit too extreme a position for me, but it's not contrarian to the scientific consensus.

−1

SoylentRox t1_iyyvlhg wrote

Read all of these: https://www.deepmind.com/blog

The most notable ones : https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html

https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html

For an example of a third-party scientist venturing an opinion on their work, see here: https://moalquraishi.wordpress.com/2020/12/08/alphafold2-casp14-it-feels-like-ones-child-has-left-home/

To succinctly describe what is happening:

(1) Intelligence is succeeding at a task by choosing actions that give a high probability of the agent seeing future states it values highly. We have tons and tons of simulated environments, some accurate enough to use immediately in the real world (see https://openai.com/blog/solving-rubiks-cube/ for an example), that force an agent to develop intelligence.
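That definition can be sketched directly as expected-value maximization over actions. The environment, probabilities, and values below are invented for illustration; real agents have to learn these quantities rather than being handed them.

```python
# Sketch of (1): choose the action whose distribution over future
# states has the highest expected value to the agent. All numbers
# here are made up for illustration.
def expected_value(action, transition_probs, state_values):
    # transition_probs[action][state] = P(reaching state | action)
    return sum(p * state_values[s] for s, p in transition_probs[action].items())

def choose_action(actions, transition_probs, state_values):
    return max(actions, key=lambda a: expected_value(a, transition_probs, state_values))

# A two-action toy world: "explore" gambles on a trap, "wait" is safe.
probs = {
    "explore": {"treasure": 0.5, "trap": 0.5},
    "wait":    {"nothing": 1.0},
}
values = {"treasure": 10.0, "trap": -20.0, "nothing": 0.0}
best = choose_action(["explore", "wait"], probs, values)  # picks "wait"
```

The hard part, of course, is learning those probabilities and values from experience; that is what the simulated environments are for.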

(2) Neuroscientists have known for years that the brain seems to reuse a similar pattern over and over; there are repeating cortical columns. So the theory is: if you find a neural network pattern you can use again and again (one such pattern is currently doing well and powers all the major results) and you scale it to the size of a brain, you might get intelligence-like results, robust enough to use in the real world. And you do.
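The "one pattern repeated at scale" idea reduces to applying a single block function over and over. The block below is a toy transformation, standing in for a cortical column or a repeated network layer:

```python
# Sketch of (2): one repeating unit, stacked to some depth. The block
# here is a toy affine map, not a real layer; the point is only that
# the whole network is one pattern applied repeatedly.
def block(x):
    return [v * 0.5 + 1.0 for v in x]

def stacked_network(x, depth: int):
    for _ in range(depth):
        x = block(x)
    return x

out = stacked_network([0.0, 4.0], depth=8)  # both values converge toward 2.0
```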

(3) Where the explosive results are expected (what we have now is neat, but no nuclear fireball) is in putting together (1) and (2), plus a few other pieces, to get recursive self-improvement. We're very close to that point. Once it's reached, agents that (a) work in the real world better than humans do and (b) are capable of a very large array of tasks, all at higher intelligence levels than humans, will happen.

Note that one of the other pieces of the nuke, the recursion part, has actually worked for years. See: https://en.wikipedia.org/wiki/Automated_machine_learning
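The recursion in AutoML is just an outer loop searching over model configurations, with an inner evaluation scoring each one. Everything below is a toy: the "model" is a made-up score function, whereas real systems search over neural architectures with enormous compute budgets.

```python
import random

# Cartoon of AutoML: propose a configuration, evaluate it, keep the
# best. Here a config is (width, dropout) and score() is a stand-in
# for "train this model and measure validation accuracy"; configs
# near (16, 0.5) score best by construction.
def score(config):
    width, dropout = config
    return -abs(width - 16) - abs(dropout - 0.5)

def automl_search(trials: int, seed: int = 0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = (rng.randrange(1, 65), rng.random())  # propose
        s = score(config)                              # evaluate
        if s > best_score:                             # select
            best_config, best_score = config, s
    return best_config, best_score

best_config, best_score = automl_search(200)
```

The recursive self-improvement claim is that the proposer itself eventually becomes a learned model, so each generation of the search is run by the previous generation's output.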

To summarize: AI systems that work broadly, across many problems, and well, without needing large amounts of human software-engineering time to deploy them to a problem, are possible very soon by leveraging already-demonstrated techniques and, of course, stupendous amounts of compute, easily hundreds of millions of dollars' worth, to find the architecture for such an AI system.

Umm, to answer your other part, "how can this work if we don't know what intelligence is": well, we do know what it is, in a general sense. What we mean is "we simulate the tasks we want the agent to do, including tasks we don't give the agent any practice on, where it has to use skills learned in other tasks and follow written instructions describing the goals". Any machine that does well on that benchmark of intelligence is intelligent, and we don't actually care how it accomplishes it.

Does it have internal thoughts or emotions like we do? We don't give a shit; it just needs to do its tasks well.

7

SoylentRox t1_iyywjbs wrote

>Edit : also I am rather skepticall that there are any people who work in any way with neuroscience and AI , and from all discussion with actuall people in the subject I've realized that AGI isn't even taken seriously at the moment , it's just sci-fi
>
>In all seriousness , people write essays on why AGIs are actually impossible , even though that's little bit extreme position for me , but not a contrarian in scientific consensus

? So... DeepMind and the AI companies aren't real? What scientific consensus? The people with the highest credentials in the field are generally already working in machine learning; those AI companies pay $1M+ a year in total compensation for the higher-end scientists.

Arguably, the ones who aren't worth $1M+ aren't really qualified to be skeptics, and the one I can readily name, Gary Marcus, keeps getting proven wrong within weeks.

2

Head_Ebb_5993 t1_iyyy2f1 wrote

But that's an obvious straw man. We weren't talking about AI, but AGI. Just because there's money somewhere in the AI industry doesn't imply that the concept of AGI is valid and will be realized in a few years.

PhDs with $1M+ salaries, or what? That seems like the biggest BS I've ever heard.

And you can be a skeptic regardless of your salary if you have expertise in the field. I don't understand how your salary is in any way relevant to your critique.

You really seem to treat this as a religion and not science

I'll look at your sources, maybe tomorrow, because I'm going to sleep, but just from skimming I'm already skeptical.

0

SoylentRox t1_iyyz3vi wrote

>But that's obvious straw man i wasn't and we weren't talking about AI , but AGI ,

The first proto-AGI was demonstrated a few months ago.

https://www.deepmind.com/publications/a-generalist-agent

Scale it up to 300k tasks and that's an AGI.

I'm saying that if industry doesn't think someone is credible enough to be offered the standard $1M total-compensation package for an AI PhD, I don't think they're credible at all. That's not unreasonable.

2

LowAwareness7603 t1_iyymp7n wrote

Jesus, I would probably just shoot myself if I were that pessimistic about something like FDVR. I get what you mean, man. I fuckin' totally think we'll have it in our lifetimes. In mine at least. I don't think I'll ever die.

2

TheHamsterSandwich t1_iz469m7 wrote

You'd better take care of your health if you want that belief to come true. You can't rely on advancements in life extension to make you live forever if you don't know when they'll arrive.

1