Redvolition t1_ivrj8x1 wrote
Reply to comment by h20ohno in How might fully digital VR societies work? by h20ohno
I always thought the best argument for why we are not living in a simulation is that it would have been a senselessly gruesome and suboptimal one, with an abundance of negative emotion.
You just made me think that maybe our current world is a first simulation run, started just after we are born, so that we fully develop and mature into functioning adults before it is revealed that we are, in fact, isolated brains kept on artificial support machines.
Just imagine that when you reach 50 years old or whatever, you go to sleep one day and wake up in a white room full of people looking at you, and one of them speaks:
- Welcome, anon. You have concluded your maturation successfully; now you will be introduced to the real world.
Everyone around us is either a simulated philosophical zombie or another human in the maturation run, and everyone above 50 or so is an NPC acting as a placeholder for somebody that already matured and left the first simulation.
Redvolition OP t1_ivrhqn5 wrote
Reply to comment by Sashinii in Perspectives on a Digital Existence by Redvolition
>full dive VR requires molecular nanotechnology
I don't think so. FDVR only requires three things:
- An isolated brain kept alive via an artificial vascular system feeding it nutrients and essential chemicals. Something close to this has already been done: pig brains were kept alive for 36 hours in 2019, if I am not misremembering.
- A connection to sensory nerves that can send and receive signals. Rudimentary technologies already exist around this, mostly targeting prosthesis control and sensory implants.
- AI world generators.
Molecular nanotech will make it easier, but is not strictly necessary.
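Here is a minimal sketch of how the three pieces would fit together. Every class and method name is hypothetical; this only shows the control loop, not how any component would actually be built.

```python
# Hypothetical FDVR control loop tying the three components together.
# Life support (component 1) is assumed to run independently.
class NerveInterface:
    """Component 2: intercepts nerves for send/receive operations."""
    def read_motor_signals(self) -> dict:
        ...  # efferent traffic: intended movement, speech, etc.
    def write_sensory_signals(self, signals: dict) -> None:
        ...  # afferent traffic: vision, touch, balance, etc.

class WorldGenerator:
    """Component 3: an AI model that renders the virtual world."""
    def step(self, motor: dict) -> dict:
        ...  # advance the world state and produce sensory output

def fdvr_loop(nerves: NerveInterface, world: WorldGenerator) -> None:
    while True:
        motor = nerves.read_motor_signals()
        sensory = world.step(motor)
        nerves.write_sensory_signals(sensory)
```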
Redvolition OP t1_ivrgzhs wrote
Reply to comment by Mortal-Region in Perspectives on a Digital Existence by Redvolition
Ever-improving hardware and ever more efficient algorithms make me believe that localized systems will soon be capable of generating an interactive and realistic world for an individual. Don't forget that our entire reality is generated from 5 sensory systems by a 1.4 kg brain consuming about 20 watts of power. Our current computer technology is vastly inefficient in comparison.
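To make that efficiency gap concrete, here is a back-of-envelope comparison. The 350 W figure is my assumption for a single modern GPU under load, and a real world generator would likely need many of them:

```python
# Back-of-envelope daily energy budget: brain vs. one GPU.
BRAIN_WATTS = 20    # rough sustained power draw of a human brain
GPU_WATTS = 350     # assumed draw of a single high-end GPU under load

HOURS_PER_DAY = 24
brain_kwh = BRAIN_WATTS * HOURS_PER_DAY / 1000  # ~0.48 kWh/day
gpu_kwh = GPU_WATTS * HOURS_PER_DAY / 1000      # ~8.4 kWh/day

print(f"Brain: {brain_kwh:.2f} kWh/day")
print(f"GPU:   {gpu_kwh:.2f} kWh/day ({gpu_kwh / brain_kwh:.0f}x the brain)")
```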
A shared world will exist, but a sizeable portion of people will inhabit their own realms.
Redvolition t1_ivrcvmq wrote
Reply to How might fully digital VR societies work? by h20ohno
For a digital existence to be possible, you would need either an isolated brain, for which the body has been discarded, or a fully uploaded mind, in which we leave the organic substrate altogether in favor of a synthetic one. A virtual world in which your body is kept around and taken care of by a support machine does not seem feasible to me, as there are too many points of failure, from disease, to aging, to muscular atrophy. A single isolated brain connected to artificial support, on the other hand, seems far more feasible.
For there to be a UBI, there needs to be scarcity of basic needs. In all likelihood, there won't be anything truly essential and scarce that one human or group of humans can offer others in exchange for money, considering all brains will already be able to generate whatever they want and imagine on their own systems. However, assuming there is still a differential in intelligence, the most capable minds will congregate to advance the technological dependencies that everyone relies on, such as the world generators, brain support machines, artificial reproduction pipelines, exowombs, energy supplies, longevity treatments, molecule builders, etc. They will be compensated for their efforts by getting access to the latest technologies first, whereas everyone else will simply wait until those are made available to them. Only a minority of highly gifted brains will participate in the economy and be producers of technology; everyone else will simply be consumers.
This assumes we achieve brain isolation after AGI but before ASI, which is not necessarily going to be the case. If we reach ASI first, then there will be no human producers in the first place and, if mind uploading is possible, it will be readily achievable by the ASI. An independent and well-aligned ASI will likely make the whole notion of a market economy obsolete. Everyone will simply live in their own worlds or cross over to other people's worlds and public realms. Some will fully retreat and never interact with other humans again, whereas others will constantly congregate with their previous family and friends.
I don't know much about neurobiology, but I believe there are limits to how much pleasure an individual can self-induce before running into various forms of neurological damage and other intrinsic constraints. So it might be the case that simply bombarding yourself with pleasure chemicals is not going to work, and a more natural distribution of positive and negative emotion, resembling our present reality, will still be necessary for self-preservation. But even though isolated brains won't be able to have endless chemically induced orgasms and serotonin overloads, the lows of poverty, disease, anxiety, and depression will simply cease to exist.
Redvolition t1_iubdypp wrote
I first came upon the idea in 2016 and was enthusiastic about it for a year or two. Things went somewhat stale after that, and I set these perspectives aside. In the last two months, AI image generators and BCI advancements brought me fully back to checking tech-related subs almost daily for the newest breakthroughs, including this one.
Redvolition t1_iub8hb5 wrote
Reply to comment by End3rWi99in in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
Can confirm. Normies will be normies.
Redvolition t1_iub8a8t wrote
Reply to comment by Sashinii in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
The biggest question for me is whether or not we are going to run into the same type of diminishing returns as Full Self-Driving did, where we get from 90 to 99% of functionality really quickly, but the remaining 10 to 1% takes forever.
Some commented, and I agree, that if the last few percent are indeed the hardest, then only the lower end of production value within the entertainment industry will see substantial disruption: indie, YouTube, manga, anime, and porn. The big-budget, big-name producers, not so much, as their audiences tend to include a higher percentage of people who would be upset about seeing a few stray edges or colors, whereas viewers of lower-production content would not care as much.
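As a toy illustration of why those last percentage points could take so long, assume each halving of the remaining error rate requires a doubling of total effort. That rule is a pure assumption, not measured data:

```python
import math

# Toy diminishing-returns model: each halving of the remaining error
# rate is assumed to cost a doubling of total effort (compute, data,
# engineering). Under this model the effort multiplier reduces to the
# plain ratio of the two error rates.
def effort_multiplier(err_from: float, err_to: float) -> float:
    halvings = math.log2(err_from / err_to)
    return 2 ** halvings

print(effort_multiplier(0.10, 0.01))   # 90% -> 99%: ~10x the effort
print(effort_multiplier(0.01, 0.001))  # 99% -> 99.9%: another ~10x
```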
Redvolition OP t1_iub1ure wrote
Reply to Engineers at UNSW have found a way to convert nerve impulses into light, which could lead to nerve-operated prosthetics and brain-machine interfaces. by Redvolition
Call me a dreamer, but I envision a future where we are all isolated brains with our nerves connected to a computer and supported by artificial vascular systems.
I've read a paper recently summarizing all of the BCI methods, and nerve interception seemed the most promising to me, instead of attempting to interact directly with the brain, as Neuralink and its competitors seem to be doing.
The technology will surely enter the corporate landscape by restoring function to people with disabilities, but we could eventually connect AI generators to nerve endings and emulate all 5 senses in an immersive virtual reality that we fully control.
Redvolition t1_iu7ktk6 wrote
The Boston Dynamics robot costs 74k USD. A low-end manual laborer in the US costs 30k per year. I believe we are 5 to 10 years away from having a sufficiently dexterous robot to replace most manual laborers. It will be a bloodbath.
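A minimal breakeven sketch using those two figures; the 5k per year maintenance cost is my own placeholder, and this ignores that a robot could work multiple shifts:

```python
# Breakeven point for replacing one laborer with one robot.
ROBOT_PRICE = 74_000   # USD, upfront
LABORER_COST = 30_000  # USD per year
MAINTENANCE = 5_000    # USD per year (assumed placeholder)

years_to_breakeven = ROBOT_PRICE / (LABORER_COST - MAINTENANCE)
print(f"Breakeven after ~{years_to_breakeven:.1f} years")  # ~3.0 years
```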
It won't be much better for most desk jobs either. The safest jobs are in STEM, in my opinion, and only in its most innovative sectors. Lab technicians, assistants, and entry-level programmers are on the line too.
Redvolition t1_itlmbug wrote
Reply to comment by red75prime in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
Have you seen the Phenaki demo?
I am not an expert, but from what I am digesting from the papers coming out, you could reach this Q4 2028 scenario through algorithm improvements alone, without any actual hardware upgrades.
Redvolition t1_itieewe wrote
Reply to What will you do to survive in the time between not needing to work anymore to survive and today? by wilsonartOffic
Some commenters seem to be misreading OP's post as asking about preparations for an upcoming UBI-type implementation. I believe OP is actually asking about preparations for a scenario in which jobs are scarce and UBI is not yet implemented.
I am making as much cash as I can in my online business and stashing it all in broad-market ETFs. Next, I intend to branch off into research in engineering fields within computer science and biology, as science roles requiring actual innovation will likely be the last to be replaced.
The rule of the game is that you need to make as much money as you can within the next decade or so, especially if you are sub-120 IQ and not in a role that requires constant innovation, as jobs below that threshold will get more and more scarce. I've posted about this recently here on this sub. The other alternative is settling for manual labor jobs, as opposed to desk jobs, as those are still likely to be competitive against the 70k USD you would spend on a robot while robot AI controllers are not yet dexterous enough.
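For a sense of what that decade-long window could yield, here is a compound-savings sketch; the 30k yearly contribution and 5% real return are assumptions for illustration, not advice:

```python
# Future value of saving a fixed amount each year into a broad ETF.
CONTRIBUTION = 30_000  # USD saved per year (assumed)
REAL_RETURN = 0.05     # assumed annual real return

balance = 0.0
for year in range(10):
    balance = (balance + CONTRIBUTION) * (1 + REAL_RETURN)

print(f"After 10 years: ${balance:,.0f}")  # ~$396k on $300k contributed
```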
Redvolition t1_itgk5qq wrote
Reply to Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
I voted for 3 to 4 years. Here is the breakdown:
The dates in parentheses refer to when I currently believe the technologies in question will be available as published, finished, and usable products, rather than as code, papers, beta software, or demos floating around. Also, NeRF just seems like glorified photogrammetry to me, which at best would produce good conventional 3D models; that seems to be a subpar workflow compared to post-processing on top of a crude 3D base or just generating the videos from scratch.
Tell me your own predictions for each category.
Capacity Available
(Q2 2024) Produces realistic and stylized videos in 720p resolution and 24 fps by applying post-processing to crude 3D input. The videos are almost temporally consistent frame to frame, yet require occasional correction. Watch the GTA demo, if you haven't already; it could look like a more polished version of that.
(Q1 2025) Produces realistic and stylized videos in 720p resolution and 24 fps from text or low entry-barrier software, and the result is nearly indistinguishable from organic production, although with occasional glitches.
(Q3 2026) AI produces realistic and stylized videos in high resolution and frame rate from text or low entry-barrier software, and the result is truly indistinguishable from organic production. Emerging software allows for fine-tuning of camera position, angle, speed, focal length, depth of field, etc.
(Q4 2027) Dedicated software packages for AI video generation are in full swing, making almost all traditional 3D software as we know it obsolete. Realistic high-resolution videos can already be crafted with the click of a button or a text prompt, but professionals use these packages for finer control.
Temporal and Narrative Consistency
(Q1 2025) Temporal consistency is good frame to frame, yet not perfect, and visual glitches still occur from time to time, requiring some form of manual cleanup. In addition, character and environment stability or coherence across several minutes of video is not yet possible.
(Q1 2026) The videos are temporally consistent frame to frame, without visual flickering or errors, but tools for long-term narrative consistency across several minutes of video, such as character expressions, mannerisms, and fine object details, are still lacking.
(Q3 2027) Perfect visuals with text input and dedicated software capable of maintaining character and environment stability to the finest details and coherence across several minutes or hours of video.
Generalization Effectiveness
(Current) Only capable of producing what it has been trained for, and does not generalize into niche or highly specific demands, including advanced or fantastical elements for which an abundance of data does not exist.
(Q1 2025) Does generalize to niche or highly specific demands, such as advanced or fantastical elements for which an abundance of data does not exist, yet the results are subpar compared to organic production.
(Q2 2027) Results are limitless and generalize perfectly to all reasonable demands, from realistic to stylized, fantastical, or surreal.
Computational Resources
(Current) Only supercomputers can generate videos with sufficiently high resolution and frame rate for more than a couple of seconds.
(Q2 2025) High-end personal computers or expensive subscription services are needed to achieve sufficiently high resolution and frame rate for more than a couple of seconds.
(Q4 2028) An average to low-end computer or cheap subscription service can generate high-resolution, high-frame-rate videos spanning several minutes.
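For scale, here is the raw output throughput that even the earliest 720p / 24 fps milestone implies. Note that a diffusion-style generator would run many network passes per frame, so actual compute would be far higher than the pixel count alone suggests:

```python
# Raw pixel throughput implied by the 720p / 24 fps milestone.
WIDTH, HEIGHT, FPS = 1280, 720, 24

pixels_per_second = WIDTH * HEIGHT * FPS      # ~22.1 million
pixels_per_minute = pixels_per_second * 60    # ~1.33 billion

print(f"{pixels_per_second / 1e6:.1f}M pixels/s, "
      f"{pixels_per_minute / 1e9:.2f}B pixels/min")
```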
Redvolition t1_itbnd6i wrote
Reply to comment by katiecharm in U-PaLM 540B by xutw21
Generating titties should be humanity's final and most noble endeavor.
Redvolition OP t1_it9ecir wrote
Reply to comment by SnowyNW in Thoughts on Job Loss Due to Automation by Redvolition
Robots don't need to be perfect; they just need to be better than the average human. As with most lines of work, there will be a transition period between full human labor and full robot labor in which both coexist, before full automation. Besides, if they are not there yet, it is only a matter of time until they are, and the time frame is years or decades at most, not centuries.
I don't understand your point.
Redvolition OP t1_it9bsl3 wrote
Reply to comment by SnowyNW in Thoughts on Job Loss Due to Automation by Redvolition
The only professionals in medicine still employable into the next decade are those in category 3, which includes anyone performing some type of research in the medical field. Patient-facing folks will be largely phased out. Surgeons will last a bit longer.
Redvolition OP t1_it95xgy wrote
Reply to comment by BinyaminDelta in Thoughts on Job Loss Due to Automation by Redvolition
The Boston Dynamics robot costs 74k USD, so these machines seem to be priced in a range that would put manual laborers under serious threat of losing their jobs, if the robots become dexterous enough.
Redvolition OP t1_it93np3 wrote
Reply to comment by Torrall in Thoughts on Job Loss Due to Automation by Redvolition
IQ is established science with useful and important correlations in the real world, such as professional and academic performance, income, lifespan, etc.
It is only criticized by either the ignorant, or those with a political or ideological agenda, particularly on the left, as they detest any form of inquiry that could reveal innate differences between individuals.
Redvolition OP t1_it92z8v wrote
Reply to comment by ihateshadylandlords in Thoughts on Job Loss Due to Automation by Redvolition
Despite being a free market advocate myself and generally against state interventionism, for the first time, I happen to agree with the opinion of a socialist:
Before this post-scarcity utopia arrives, though, we might have a large UBI underclass that spends the money it receives from the government at companies, which in turn have a large amount of their profits taxed away to feed the underclass back, creating a cycle of production and consumption based not on work but on taxation and gratuitous distribution. For this to work, we are going to see absurd levels of taxation, far above what high-tax nations such as Belgium and France practice.
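A toy model of that cycle, with every figure an illustrative assumption rather than a forecast:

```python
# Toy circular-flow model: UBI is paid out, spent at companies, and
# clawed back through corporate taxation. All numbers are assumptions.
POPULATION = 100_000_000     # UBI recipients
UBI_PER_PERSON = 20_000      # USD per year
CORPORATE_PROFITS = 4e12     # USD per year available to tax

ubi_outlay = POPULATION * UBI_PER_PERSON          # $2T per year
required_tax_rate = ubi_outlay / CORPORATE_PROFITS

print(f"UBI outlay: ${ubi_outlay / 1e12:.1f}T per year")
print(f"Effective tax rate needed: {required_tax_rate:.0%}")  # 50%
```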
Redvolition t1_it8ucsu wrote
>How do we know we are not in one of these full dive simulations right now?
Because it would either have been horrendously badly engineered, or built by a sadist who wanted to see us suffer on purpose.
I believe that isolated brains kept on artificial support and connected via BCI are highly likely, and the worlds most people would create, left to their own choices, would definitely be much different from what we experience now.
Redvolition OP t1_ivrsvne wrote
Reply to comment by Sashinii in Perspectives on a Digital Existence by Redvolition
You could say there are 5 major categories of external senses, technically exteroceptors, with many subcategories. Touch, for example, technically the somatosensory system, can be subdivided into pressure, vibration, light touch, tickle, itch, temperature, pain, kinesthesia, etc. Then there are numerous other internal senses, technically interoceptors, such as hunger and the vestibular and proprioceptive systems.
In any case, once nerves are successfully intercepted for send and receive operations, all this information becomes nothing more than electrical signals, so even if we had thousands of senses, that would not be an obstacle to generating a convincing reality. You could just plug in an AI world generator to send signals through your nerves and fully emulate an entire reality, from vision and touch to balance and speed.
Correct me if I am wrong, but everything we feel is either an electrical signal coming from a nerve and interpreted by the brain, or a chemical interacting directly with receptors in the brain.
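As a toy illustration of the "everything is electrical signals" point, here is a minimal rate-coding sketch mapping a stimulus intensity to a spike train, the kind of translation a world generator would have to perform. Real peripheral nerve coding is vastly richer than this:

```python
import random

# Toy rate coding: stimulus intensity (0..1) -> Poisson spike train.
def poisson_spike_train(intensity: float, max_rate_hz: float = 100.0,
                        duration_s: float = 1.0, seed: int = 0) -> list:
    rng = random.Random(seed)
    rate = intensity * max_rate_hz  # mean firing rate in spikes/second
    spikes, t = [], 0.0
    while rate > 0:
        t += rng.expovariate(rate)  # exponential inter-spike interval
        if t >= duration_s:
            break
        spikes.append(round(t, 4))
    return spikes

light_touch = poisson_spike_train(0.2)  # sparse firing, ~20 spikes/s
firm_press = poisson_spike_train(0.9)   # dense firing, ~90 spikes/s
print(len(light_touch), len(firm_press))
```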