Comments


Quealdlor t1_iyutscy wrote

Don't be overly optimistic though. I am optimistic, but I don't expect crazy advanced stuff very soon. And we haven't been progressing linearly up to 2022; we've been progressing exponentially, especially for the past 270 years.

85

Shelfrock77 t1_iyuwh3o wrote

When do you expect fdvr tech to drop on the shelves ?

17

jlpt1591 t1_iyuxm22 wrote

2040-2065

18

Shelfrock77 t1_iyuy9cx wrote

Neuralink just put a white dot over a monkey's visual field. You don't think a low-key corporation has already done more than adding a white dot? Imagine putting brain chips in Clonaid's clones? All it needs is scaling, which AI will figure out quicker than we can. The genie 🧞‍♂️ will provide you anything you wish.

https://en.m.wikipedia.org/wiki/Clonaid

https://youtu.be/RJc5pZCD9u0

The pineal gland contains cellular memory !!!

15

Cr4zko t1_iywq3fr wrote

Without evidence, this is just a conspiracy theory. I don't really believe we've reached the stage of human cloning yet.

1

blxoom t1_iyvimzk wrote

2040s. the 2030s is the decade of AR wearables. people will not jump straight from smartphone to FDVR; hell, smartphones are closer to 80s tech than to FDVR. they'll first need AR headsets/wearables that connect to their phones through Bluetooth, then standalone glasses, then contacts. then, once they realize reality can be 100% manipulated through technology, they'll be receptive to FDVR come the 2040s.

15

Head_Ebb_5993 t1_iyyknlh wrote

nah, that's far too optimistic. if we mean the same thing by FDVR, then I don't expect anything like it in my lifetime; maybe in 2300-2400, and that's still a rather optimistic guess. neuroscience is not that easy; we know practically nothing about the brain. even Neuralink pretty much just repeated old experiments without pushing any new boundaries. we are nowhere near ready to even talk about FDVR, and it's very hard to make progress in neuroscience.

−2

SoylentRox t1_iyyper0 wrote

Head_Ebb, do you understand the Singularity hypothesis?

While it's been rehashed many times, in its most general form: if humans build AI, that AI can use its above-human intelligence to build better AI, and to control vast numbers of robots that build more robots, which go out and collect materials and energy, and then more computers to run AI on, and so on.

It is exponential. So if the hypothesis is correct, you will see rapidly accelerating progress to levels unknown in history. It will be impossible to miss or fake.

It doesn't continue 'forever'; it halts when technology has been improved close to the true limits allowed by physics, and/or when all the available matter in our star system has been turned into waste piles and more robots.
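The shape being described here, exponential growth that eventually flattens against hard physical limits, is a logistic curve. A minimal sketch (all numbers are illustrative, not predictions):

```python
# Logistic growth: exponential at first, flattening as it approaches
# a hard ceiling (the "limits allowed by physics").
# All numbers here are illustrative, not predictions.

def logistic_step(x, rate=0.5, ceiling=1.0):
    """One step of logistic growth toward a fixed ceiling."""
    return x + rate * x * (1 - x / ceiling)

x = 0.001  # starting capability, as a fraction of the ceiling
trajectory = [x]
for _ in range(40):
    x = logistic_step(x)
    trajectory.append(x)

# Early steps grow ~50% per step (looks exponential)...
early_growth = trajectory[1] / trajectory[0]
# ...but growth stalls as the curve nears the ceiling.
late_growth = trajectory[-1] / trajectory[-2]
print(early_growth, late_growth, trajectory[-1])
```

Early on the per-step growth factor is nearly the full rate; near the ceiling it drops to essentially 1.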

So anyway, because it's exponential, your hypothesis of '2300-2400' for the technology of full-dive VR isn't a plausible one. For your theory to be correct, human researchers would have to continue steadily studying biology and neuroscience (arguably they only became somewhat competent at it less than a century ago, with DNA's structure discovered in 1953 and the human genome first fully sequenced in the early 2000s) until they eventually develop safe neural implants.

You think it will take 328 years for that to happen! Hell, we don't have any technology today that people started working on 328 years ago and still haven't finished, and they have already started on neural implants (by 'started' I mean have a theory as to how to do it and have begun building working prototypes). About the only long-pursued technology I can readily think of that doesn't work yet is fusion, and even it does work, just not well enough.

This doesn't mean humans will get FDVR, but it means they will either have it in... well, if the singularity is actually starting right now, then 10-20 years, though maybe it isn't actually hitting criticality* yet... or they will be extinct.

*criticality: nuclear materials do jack shit, really, until you reach a critical mass. So for years fission scientists theorized that a chain reaction was possible, but they didn't have enough enriched uranium in one lab, with enough neutron reflectors, for it to work. All they could do was measure activity counts and do math.

With AI we theorize that we can get an AI smart enough to reprogram new versions of itself (or asymmetric peers) to perform well on tests of cognitive ability that include simulated tasks from the real world. Criticality happens when this works.
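The criticality analogy can be made concrete with the multiplication factor k from reactor physics; a toy sketch (the numbers are purely illustrative):

```python
# Toy chain-reaction model: each "generation" multiplies the activity level
# by a factor k. k < 1 is subcritical (activity fizzles out; all you can do
# is measure counts). k > 1 is supercritical (runaway growth). The analogy:
# k is roughly "how much one AI generation improves on the one before it".

def run_generations(k, start=1.0, generations=50):
    level = start
    for _ in range(generations):
        level *= k
    return level

subcritical = run_generations(k=0.95)    # decays toward zero
supercritical = run_generations(k=1.05)  # grows without bound
print(subcritical, supercritical)
```

The point of the analogy: a small change in k around 1.0 is the difference between decades of "measuring counts" and a runaway reaction.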

6

Head_Ebb_5993 t1_iyysfdb wrote

Is this some kind of cult? Or a religion? I know what the singularity is. Just because it sounds simple doesn't mean much, because you underestimate how hard it is to get to that level when we can't even practically define what intelligence is... and don't even dare to define stuff like consciousness. You are treating this more like a religion than science.

Cool, we had the first drawing of a neuron in 1870 and we still have no idea how the brain properly works; we have trouble measuring brain activity precisely enough to even begin to ponder how it works. The most interesting thing I can think of is AI that can 'read your mind', but it requires a lot of training, has to be done on pre-chosen words, and usually doesn't have a great error rate. FDVR compared to that is like Star Wars spaceships compared to planes.

2300-2400 was my optimistic guess, but in reality I am rather skeptical that there will ever be something like that. If you can't do it safely, without risking brain damage or altering the brain too much, then it will just be more practical to use the other, easier means we have today. It might be the year 2800.

−2

SoylentRox t1_iyytf3y wrote

>Is this some kind of cult ? Or religion ? I know what is singularity , just because it sounds simple doesn't matter , because you underestimate how hard it is to get to that level , when we can't even practically define what is intelligence...

It's neither. It's a large group of people; many of us live in the Bay Area and work for AI companies to make it happen. It's an informed opinion about what we think is about to happen, similar to those nuclear fission researchers in the 1940s who thought they would be able to blow up a city but weren't entirely sure they weren't about to blow up the planet.

Your other objections are dated prior to 2012. Please update your knowledge.

3

Head_Ebb_5993 t1_iyyu8wr wrote

How exactly are they outdated? Enlighten me, ideally with sources, because I don't think so.

Edit: also, I am rather skeptical that there are any people who work in any way with both neuroscience and AI, and from all discussions with actual people in the subject I've realized that AGI isn't even taken seriously at the moment; it's just sci-fi.

In all seriousness, people write essays on why AGI is actually impossible. That's a little extreme for my taste, but it's not contrarian to the scientific consensus.

−1

SoylentRox t1_iyyvlhg wrote

Read all of these: https://www.deepmind.com/blog

The most notable ones : https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html

https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html

For an example of a third party scientist venturing an opinion on their work:

See here: https://moalquraishi.wordpress.com/2020/12/08/alphafold2-casp14-it-feels-like-ones-child-has-left-home/

To succinctly describe what is happening:

(1) Intelligence is succeeding at a task by choosing actions that give a high probability of the agent seeing future states that have high value to the agent. We have tons and tons of simulated environments, some accurate enough to use immediately in the real world (see https://openai.com/blog/solving-rubiks-cube/ for an example), with which to force an agent to develop intelligence.

(2) Neuroscientists have known for years that the brain seems to use a similar pattern over and over; there are repeating cortical columns. So the theory is: if you find a neural network pattern you can use again and again (one such pattern is currently doing well and powers all the major results) and you run it at the scale of a brain, you might get intelligence-like results, robust enough to use in the real world. And you do.

(3) Where the explosive results are expected (basically, what we have now is neat, but no nuclear fireball) is in putting together (1) and (2) and a few other pieces to get recursive self-improvement. We're very close to that point. Once it's reached, we will get agents that (a) work in the real world better than humans do and (b) are capable of a very large array of tasks, all at higher intelligence levels than humans.

Note that one of the other pieces of the nuke, the recursion part, has actually worked for years. See: https://en.wikipedia.org/wiki/Automated_machine_learning
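Point (1) above, choosing actions that maximize the expected value of future states, can be sketched in a few lines. The environment model and numbers below are made up purely for illustration:

```python
# Point (1) as code: pick the action with the highest expected value of
# the resulting future state. The model and values below are invented
# purely for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, state_value) pairs."""
    return sum(p * v for p, v in outcomes)

# A made-up world model: action -> distribution over future-state values.
model = {
    "wait":   [(1.0, 0.0)],                 # nothing happens
    "gamble": [(0.5, 10.0), (0.5, -12.0)],  # high variance, negative EV
    "work":   [(0.9, 2.0), (0.1, 0.0)],     # reliable positive EV
}

best_action = max(model, key=lambda a: expected_value(model[a]))
print(best_action)  # the agent picks the action maximizing expected value
```

Any machine that reliably does this well across many environments counts as intelligent under the definition in (1), regardless of how it works inside.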

To summarize: AI systems that work broadly, over many problems, and well, without needing large amounts of human software-engineer time to deploy them to a problem, are possible very soon by leveraging already-demonstrated techniques and, of course, stupendous amounts of compute, easily hundreds of millions of dollars' worth, to find the architecture for such an AI system.

To answer your other point, 'how can this work if we don't know what intelligence is': well, we do know what it is, in a general sense. What we mean is 'we simulate the tasks we want the agent to do, including tasks we don't give the agent any practice on, where it has to use skills learned in other tasks and receives written instructions about the goals of the task'. Any machine that does well on such a benchmark of intelligence is intelligent, and we don't actually care how it accomplishes that.

Does it have internal thoughts or emotions like we do? We don't give a shit; it just needs to do its tasks well.

7

SoylentRox t1_iyywjbs wrote

>Edit: also, I am rather skeptical that there are any people who work in any way with both neuroscience and AI, and from all discussions with actual people in the subject I've realized that AGI isn't even taken seriously at the moment; it's just sci-fi.
>
>In all seriousness, people write essays on why AGI is actually impossible. That's a little extreme for my taste, but it's not contrarian to the scientific consensus.

So... DeepMind and the AI companies aren't real? What scientific consensus? The people with the highest credentials in the field are generally already working in machine learning; those AI companies pay $1 million+ a year in total compensation for the higher-end scientists.

Arguably, the ones who aren't worth $1m+ are not really qualified to be skeptics, and the one I know of, Gary Marcus, keeps getting proven wrong within weeks.

2

Head_Ebb_5993 t1_iyyy2f1 wrote

But that's an obvious straw man. We weren't talking about AI but AGI. Just because there's money somewhere in the AI industry doesn't imply that the concept of AGI is valid and will arrive in a few years.

PhDs with $1 million+ salaries, or what? That seems like the biggest BS I've ever heard.

And you can be a skeptic no matter your salary, if you have expertise in the field. I don't understand how your salary is in any way relevant to your critique.

You really seem to treat this as a religion and not science.

I will look at your sources, maybe tomorrow, because I am going to sleep, but just from skimming I am already skeptical.

0

SoylentRox t1_iyyz3vi wrote

>But that's an obvious straw man. We weren't talking about AI but AGI.

The first proto AGI was demonstrated a few months ago.

https://www.deepmind.com/publications/a-generalist-agent

Scale it up to 300k tasks and that's an AGI.

I am saying that if industry doesn't think someone is credible enough to offer the standard $1 million TC pay package for a PhD in AI, I don't think they are credible at all. That's not unreasonable.

2

LowAwareness7603 t1_iyymp7n wrote

Jesus, I would probably just shoot myself if I was that pessimistic about something like FDVR. I get what you mean man. I fuckin' totally think we'll have it in our lifetimes. In mine at least. I don't think I'll ever die.

2

TheHamsterSandwich t1_iz469m7 wrote

You'd better take care of your health if you want that belief to come true. You can't rely on advancements in life extension to make you live forever if you don't know when they'll arrive.

1

Down_The_Rabbithole t1_iyuyvec wrote

This isn't dependent on Moore's law or AI; it's actually limited by certain technologies outside the range of that exponential growth, like battery capacity, understanding of the human brain, and anti-inflammatory drug development.

It's possible we'll have reached an ASI-guided singularity, fusion power generation, and space habitats while still not having access to FDVR, because of a physical limit in something like anti-inflammatory drugs or the material connection to the human brain.

6

SoylentRox t1_iyyqigr wrote

It kinda seems like we could direct the AI to build a huge number of research nodes and exhaustively study these effects, seeking a safe electrode or nanotechnology wiring that tricks the brain into thinking it's friendly. Or a mixture of drugs that does this. Or enough rules for life support that someone's immune system can be safely shut down. Or...

Basically there seem like there are a lot of ways to accomplish this once you can start manipulating biology with more consistent results, and you can practice on millions of samples of human brains (nothing unethical, just small batches of living cells from living or deceased donors) and learn from them all in parallel.

I mean, no human scientist alive can learn from a million experiments in parallel, so of course we couldn't figure it out. It's too complicated; there are obviously thousands of variables.

And my point is that there are an awful lot of ways to succeed. For example: genetically modified neurons that get introduced, synapse to our cortical columns, and respond to new signaling molecules never before used in humans, borrowed from another animal and emitted by electrode grids embedded in surgically installed sheets. That might solve all the problems you mentioned above, because these modified 'bridge cells' can have the behaviors programmed in to not become inflamed and to synapse well to artificial electrodes.

3

2Punx2Furious t1_iyvjpzl wrote

Maybe a very rough prototype after Neuralink manages to do I/O. But actual FDVR will happen post-singularity.

4

Desperate_Donut8582 t1_iyxdg21 wrote

Someone didn't watch Sword Art Online... On a real note, the FDA would never approve.

1

SoylentRox t1_iyyqypq wrote

At a certain point the FDA is going to be under a lot of pressure to reform its policies. Other countries will allow more advanced medical procedures, and people will start getting majorly improved care driven by AI: patients with multiple organ failure surviving because an AI doctor can handle complex situations humans can't, patients with stage IV cancer regularly returning from the clinic after only one treatment and no horrible side effects, that kind of thing.

It is possible to solve these problems if you have the right tools and infrastructure. The how is fairly obvious. For the multiple-organ-failure case, robots transplant in lab-grown organs for all the failing ones, deliver hundreds of drugs in parallel in real time with doses changing by the second, and splice in substitute organs externally as needed to keep the patient alive through the trauma of the surgeries. AI can do it because there are thousands of rules to take into account that a human doctor can't; the what-to-do is very complicated, and screw up just once and the brain tissue dies. The cancer case is simpler: a gene hack that introduces cancer-suppression genes in the area of the tumor, causing the tumor cells to self-destruct while leaving the healthy ones alone.

3

Desperate_Donut8582 t1_iyyza6j wrote

1. I don't see the correlation between FDVR and AI... plus, what other countries exactly? We all know China is way, way more strict on tech than America, and Russia is also strict as hell... either way, I don't see what AI has to do with FDVR, tbh.

2. Again, what does any of this have to do with my comment above, responding to the comment saying FDVR will be a thing?

1

SoylentRox t1_iyz0miz wrote

The FDVR problem is "find a way to make human beings sense things, with as much fidelity as their own body has, from arbitrary virtual environments. Interface with their brain in such a way that they cognitively do not have any deterioration, and keep their body maintained such that they live indefinitely".

That's a big, huge problem, but it decomposes into obvious subproblems: 'make these samples of human motor homunculus stay alive in the lab; inject signals into them and ensure the signal quality is the same as their own internal connections...'

For keeping a human body alive, well obviously you need to be able to keep individual organs alive. And know which proteins in blood chemistry are bad news and what to do in each situation.

It's a tree of subproblems. The two top-level statements probably end up being millions of separate research tasks.

And the 'doctor' who has to keep you alive needs to know the results of all those millions of separate tasks, make multiple decisions about your care every second, and make no errors, so that you can enjoy FDVR for thousands of years...

See the problem? it's impossible without AI, and AI makes the problem easy.

I don't give a shit which countries you name, there are many. All it takes is one country that lets you do advanced medical procedures.

3

Redvolition t1_iz30sko wrote

Much simpler to just isolate the brain and discard the body; then you only have one point of failure. A pig brain was kept alive for hours after death in 2019.

I believe the first FDVR implementations might be some brain implant that doesn't attempt to preserve your body in any particular way, maybe from Neuralink or one of its competitors.

The second implementation might be a ship-of-Theseus kind of thing, in which nanotechnology gradually replaces biological tissue throughout your whole body, including your brain. These new components might allow for controlling emotional states and sensory input.

If this second implementation fails to materialize in the next 10 to 20 years, then the brain isolation pathway might gain early adopters and start being explored in the meantime.

If the gradual replacement via nanotech proves particularly difficult, it might even be the case that entire generations of humans will exist as isolated brains, with artificial forms of reproduction to maintain the population.

1

SoylentRox t1_iz38wf7 wrote

Sure, I agree, more or less. I mean, the body wouldn't actually be discarded per se. Keeping a brain alive by itself is hard; you would realistically provide the functions of a body with living human cells in artificial scaffolding, in separate containers from the brain, so everything can be carefully monitored for problems through the clear container walls. The whole thing would sit in a nitrogen-filled sterile chamber only robots can access.

1

Quealdlor t1_iyzgk1z wrote

Certainly post 2040. Currently I see VR and AR developing slowly.

0

Down_The_Rabbithole t1_iyuypkw wrote

I'm expecting revolutionary change in AI before 2030, potentially AGI but certainly a massive reduction in the amount of human knowledge workers needed.

15

Ambiwlans t1_iyvi75a wrote

As someone in AI: we have a revolution every few weeks. Shit's crazy.

20

FirstOrderCat t1_iyw487q wrote

> have a revolution every few weeks

like +5% on benchmarks detached from real world?

7

Ambiwlans t1_iyw6jcp wrote

A 5% improvement on SOTA doesn't even get an arXiv paper for most problems.

Look at text to image generation 1 year ago and today.

15

FirstOrderCat t1_iyw6ute wrote

I follow NLP/LLM papers; people will certainly release an arXiv paper, and likely submit to a conference, with a few-percent improvement.

2

Ambiwlans t1_iyw78uu wrote

What metric? A 5% reduction in errors, or a 5% improvement in score? One might be a lot bigger than the other.

LLMs are basically DOA waiting on GPT-4 in a few months now anyway, unless they offer something really novel.

4

FirstOrderCat t1_iyw8tuu wrote

Here is a recent paper; they improved the previous SOTA on GSM8K by 2 points, from 78 to 80: https://arxiv.org/pdf/2211.12588v3.pdf
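Whether that counts as "a few percent" depends on the framing raised in the comment above, score improvement versus error reduction; quick arithmetic on these same numbers:

```python
# Same result (78% -> 80% accuracy on GSM8K), two framings.
old_acc, new_acc = 0.78, 0.80

# Framing 1: relative improvement in score.
score_gain = (new_acc - old_acc) / old_acc           # ~2.6%

# Framing 2: relative reduction in errors.
old_err, new_err = 1 - old_acc, 1 - new_acc          # 22% -> 20% error rate
error_reduction = (old_err - new_err) / old_err      # ~9.1%

print(round(score_gain, 3), round(error_reduction, 3))
```

The same 2-point gain reads as a modest score bump or a sizable cut in errors, which is exactly why the metric question matters.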


>Llms are basically doa waiting on gpt4 in a few months now anyways unless they offer something really novel.

Why are you so confident? Current GPT is very far from doing any useful work; it can't replace a programmer, lawyer, or accountant. There is a huge space for improvement before they reach some AGI and replace knowledge workers.

2

Ambiwlans t1_iywjrxk wrote

>why are you so confident?

I never made any claim of strong AGI any time soon, dude. And GPT-4 certainly will not be strong AGI.

Although automation is taking jobs today.

6

FirstOrderCat t1_iywkjp6 wrote

Yes, hand-coded automation empowered by LLMs can take many jobs.

0

Madrawn t1_iyxwdi2 wrote

The current codex-davinci model from openAI still blows me away.

I basically asked it nicely to write me a VS Code plugin that takes the selected text, prompts the user for instructions, sends it off to the edit-API endpoint, and replaces the text with the response, including the changes to package.json needed to expose the setting where you put the API key, and a prompt to fill in the key setting if it's empty.

All that in around seven prompts, and in only two of them did I have to make changes: in one it fucked up a bracket, and in the other it forgot to read the API key setting before checking it.

It's not perfect, you still need to be able to code to check for errors, but it's already more helpful than some of my colleagues.

6

AI_Enjoyer87 t1_iyva55r wrote

AI to the moon. It's coming. Get ready. Buckle up. Everything is doubling every couple of months.

13

Heizard t1_iyvqr6h wrote

We have a linear-progress bias and a bias from previous AI winters that heavily affect how we currently perceive scientific progress.

67

SoylentRox t1_iywax9s wrote

Yep. Note that part of the cause of the AI winters was massive hype over future AI capabilities. While steady improvements were made in AI from 1965 to 2012, from neural networks to numerous attempts at symbolic logic machines to tons of other machine learning techniques, it was never amazing and instantly useful in the real world.

It would be like physicists hyping nuclear fission bombs for decades while unable to get their hands on more than a gram of plutonium or uranium. Hype all they want; it isn't gonna work.

Obviously once you reach criticality with fission crazy shit happens and reactor cores glow red hot with absurd amounts of activity. And prompt critical, well...

AI needs many, many TOPS worth of compute and vast amounts of memory; a couple of terabytes of GPU memory is in use on the bigger models.
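For a rough sense of where the terabytes come from (back-of-the-envelope arithmetic, not a claim about any specific model): parameter count times bytes per parameter gives the floor for just holding the weights:

```python
# Back-of-the-envelope GPU memory for model weights alone.
# Training needs several times more (gradients, optimizer state,
# activations); the figures here are only the storage floor.

def weight_memory_gb(n_params, bytes_per_param=2):  # 2 bytes = fp16
    return n_params * bytes_per_param / 1e9

# A 175-billion-parameter model in fp16:
print(weight_memory_gb(175e9))  # 350.0 GB for the weights alone
```

Stack a few models like that, or train one, and multi-terabyte GPU memory footprints follow quickly.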

27

Arthropodesque t1_iyyqvcg wrote

I recently read that transistors can theoretically be made 1,000x more efficient. Maybe an AI can design that.

6

Superschlenz t1_iyxv98c wrote

u/GeneralZain is executing Gigi D'Agostino's "Bla Bla Bla" program from 1999: https://youtube.com/watch?v=Hrph2EW9VjY

Coming next: u/GeneralZain losing his head.

Finally, u/GeneralZain will face Dumbo, the flying Disney Elefant from 1941.

3

GeneralZain OP t1_iyy0j5k wrote

it's a banger.

also, it's elephant*** ;)

4

Superschlenz t1_iyy15uk wrote

Elefant is the German word for elephant.

You have successfully banned the ger ;-)

2

[deleted] t1_iyyf5u5 wrote

Seems pretty smooth? The next 200 years slow for tech?? Even without AI this is way off; we've been living in two different timelines.

2

SoylentRox t1_iyys3mc wrote

Because without AI, each succeeding technological problem is harder to solve than the one before. Building the first transistor took two guys in a lab; going to 4 nanometers takes thousands and thousands of people and 10-50 billion dollars. It's so hard to do that only one company, TSMC, is expected to do it soon.

Developing penicillin took basically one guy noticing that some mold scrapings were killing bacteria under a microscope. Developing a better antibiotic, especially one that deals with all the resistant bacteria that have popped up, takes thousands of people and billions of dollars.

And so on.

AI basically gives you the intellectual equivalent of thousands of geniuses, then millions, then a situation where the cognitive equivalent of all of humanity, all geniuses, is dedicated solely to research, to apply to your problems.

Problems do still keep getting harder, but for a brief window of time during the Singularity you will see some crazy improvement. Obviously, post-singularity, tech is pretty close to as good as it can get, and things would be smooth from there on out (barring black swans, like material from other universes, cheat codes, etc.).

5

[deleted] t1_iyysnqi wrote

Well sure, but drastic changes are already happening, and there's still a ton of progress we can make without AI. AI will just make it faster and take it further. No one in their right mind would think the next 200 years will be slow for tech.

I just think the comic poorly portrays the insane period we're in.

2

GeneralZain OP t1_iyz6qh9 wrote

lmao it went right over your head...

2

[deleted] t1_iz08aa6 wrote

What?

1

GeneralZain OP t1_iz090ax wrote

I guess I gotta explain the meme then...

the man represents the general consensus of people who are unaware of the concept of the singularity, who (wrongfully) assume that progress will continue at a slow and linear pace.

I personally don't think it will take 200 years, but go and ask literally anybody who doesn't think or care about AI or the singularity, and they will stare blankly, then casually say "that's impossible" or "not for another 100+ years".

that's the joke... they think it won't happen for a long time, then suddenly it happens. it "hits" them... get it?

2

[deleted] t1_iz0b300 wrote

Uhh, you're missing what I'm saying... we have not been on a slow and linear pace at all, and nobody thinks that. That's literally the entirety of my point.

I agree that many people probably aren't aware of what's to come and how soon, but those people are still aware that technology is not improving slowly.

Also, what's with the rudeness?

1

GeneralZain OP t1_iz0d13h wrote

my bad, I didn't mean to come off as hyper rude; it was more of a playful snarkiness :P

but that said, I think almost nobody on this subreddit thinks that... but there are billions of people outside this little internet bubble who don't know or care about AI.

what I'm saying is, if you literally go outside right now and ask somebody about AI... heck, ask 100 people... 90% of them will give you responses similar to the ones I quoted. (I have experienced this many times :P)

2

[deleted] t1_iz0fcc2 wrote

Haha, honestly I'm just being nitpicky about the comic and have completely ruined the joke at this point, lmao.

1

Head_Ebb_5993 t1_iyyld9j wrote

guys, chill. I'm reading this subreddit and it seems like random people are saying there will be AGIs and stuff in 2030, while in reality we don't even remotely know how to approach problems like that; it's practically pure sci-fi at the moment. there probably won't be anything like that in our lifetimes...

1