Redvolition
Redvolition t1_iubdypp wrote
I first came across the idea in 2016 and was enthusiastic about it for a year or two. Things went somewhat stale after that and I set these perspectives aside. In the last two months, image AI generators and BCI advancements brought me fully back to checking tech-related subs almost daily for the newest breakthroughs, including this one.
Redvolition t1_iub8hb5 wrote
Reply to comment by End3rWi99in in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
Can confirm. Normies will be normies.
Redvolition t1_iub8a8t wrote
Reply to comment by Sashinii in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
The biggest question for me is whether or not we are going to run into the same kind of diminishing returns as Full Self-Driving did: we get 90 to 99% of the functionality really quickly, but the remaining 10 to 1% takes forever.
Some commented, and I agree, that if the last few percent do turn out to be the hardest, then only the lower end of production value within the entertainment industry will see substantial disruption: indie, YouTube, manga, anime, and porn. The big-budget, big-name producers, not so much, as their audiences tend to include a higher percentage of people who would be upset by a few stray edges or colors, whereas viewers of lower-production content would not care as much.
Redvolition t1_iub7gan wrote
Here is my timeline.
Capacity Available
(Q2 2024) Produces realistic and stylized videos in 720p resolution at 24 fps by applying post-processing to crude 3D input. The videos are almost temporally consistent frame to frame, yet require occasional correction. Watch the GTA demo, if you haven't already; it could look like a more polished version of that.
(Q1 2025) Produces realistic and stylized videos in 720p resolution at 24 fps from text or low entry-barrier software, and the result is nearly indistinguishable from organic production, although with occasional glitches.
(Q3 2026) AI produces realistic and stylized videos in high resolution and frame rate from text or low entry-barrier software, and the result is truly indistinguishable from organic production. Emerging software allows fine-tuning of camera position, angle, speed, focal length, depth of field, etc.
(Q4 2027) Dedicated software packages for AI video generation are in full swing, making almost all traditional 3D software as we know it obsolete. Realistic, high-resolution videos can already be crafted with the click of a button or a text prompt, but professionals use these packages for finer control.
Temporal and Narrative Consistency
(Q1 2025) Temporal consistency is good frame to frame, yet not perfect, and visual glitches still occur from time to time, requiring one form or another of manual labor to clean up. In addition, character and environment stability or coherence across several minutes of video is not yet possible.
(Q1 2026) The videos are temporally consistent frame to frame, without visual flickering or errors, but tools for long-term narrative consistency across several minutes of video, such as character expressions, mannerisms, and fine object details, are still lacking.
(Q3 2027) Perfect visuals with text input and dedicated software capable of maintaining character and environment stability to the finest details and coherence across several minutes or hours of video.
Generalization Effectiveness
(Current) Only capable of producing what it has been trained on, and does not generalize to niche or highly specific demands, including advanced or fantastical elements for which abundant data does not exist.
(Q1 2025) Generalizes to niche or highly specific demands, such as advanced or fantastical elements for which abundant data does not exist, yet the results are subpar compared to organic production.
(Q2 2027) Results are limitless and generalize perfectly to all reasonable demands, from realistic to stylized, fantastical, or surreal.
Computational Resources
(Current) Only supercomputers can generate videos at sufficiently high resolution and frame rate for more than a couple of seconds.
(Q2 2025) High-end personal computers or expensive subscription services are needed to achieve sufficiently high resolution and frame rate for more than a couple of seconds.
(Q4 2028) An average to low-end computer or a cheap subscription service can generate high-resolution, high-frame-rate videos spanning several minutes.
Redvolition OP t1_iub1ure wrote
Reply to Engineers at UNSW have found a way to convert nerve impulses into light, which could lead to nerve-operated prosthetics and brain-machine interfaces. by Redvolition
Call me a dreamer, but I envision a future where we are all isolated brains with our nerves connected to a computer and supported by artificial vascular systems.
I recently read a paper summarizing all of the BCI methods, and nerve interception seemed the most promising to me, rather than attempting to interact directly with the brain, as Neuralink and its competitors seem to be doing.
The technology will surely enter the corporate landscape by restoring function to people with disabilities, but we could eventually connect AI generators to nerve endings and emulate all five senses in an immersive virtual reality, fully controlled by ourselves.
Redvolition t1_iu7m7hd wrote
Reply to Engineers at UNSW have found a way to convert nerve impulses into light, which could lead to nerve-operated prosthetics and brain-machine interfaces. by unswsydney
Call me a dreamer, but I envision a future where we are all isolated brains with our nerves connected to a computer and supported by artificial vascular systems.
I recently read a paper summarizing all of the BCI methods, and nerve interception seemed the most promising to me, rather than attempting to interact directly with the brain, as Neuralink and its competitors seem to be doing.
The technology will surely enter the corporate landscape by restoring function to people with disabilities, but we could eventually connect AI generators to nerve endings and emulate all five senses in an immersive virtual reality, fully controlled by ourselves.
Redvolition t1_iu7ktk6 wrote
The Boston Dynamics robot costs 74k. A low-end manual laborer in the US costs 30k per year. I believe we are 5 to 10 years from having a sufficiently dexterous robot to replace most manual laborers. It will be a bloodbath.
It won't be much better for most desk jobs either. The safest jobs are in STEM, in my opinion, and only in its most innovative sectors. Lab technicians, assistants, and entry-level programmers are on the line too.
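The cost comparison above amounts to a simple break-even calculation. A minimal sketch, using the 74k robot price and 30k annual wage cited above; the maintenance parameter is a hypothetical addition for illustration:

```python
# Break-even estimate: robot purchase price vs. annual cost of a manual laborer.
# Figures from the comment above: robot ~74k USD, low-end laborer ~30k USD/year.
# annual_maintenance is a hypothetical assumption, not a figure from the comment.

def breakeven_years(robot_price: float, annual_wage: float,
                    annual_maintenance: float = 0.0) -> float:
    """Years until cumulative wage savings cover the robot's purchase price."""
    net_saving_per_year = annual_wage - annual_maintenance
    return robot_price / net_saving_per_year

years = breakeven_years(74_000, 30_000)
print(f"{years:.1f}")  # ~2.5 years with zero maintenance
```

Even with generous maintenance assumptions, the payback period stays well under a typical employment span, which is the core of the argument.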
Redvolition t1_itlmbug wrote
Reply to comment by red75prime in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
Have you seen the Phenaki demo?
I am not an expert, but from what I am digesting from the papers coming out, you could reach this Q4 2028 scenario with algorithmic improvements alone, without any actual hardware upgrades.
Redvolition t1_itieewe wrote
Reply to What will you do to survive in the time between not needing to work anymore to survive and today? by wilsonartOffic
Some commenters seem to be misreading OP's post as asking about your preparations for an upcoming UBI-type implementation. I believe OP is actually asking about your preparations for a scenario in which jobs are scarce and UBI is not yet implemented.
I am making as much cash as I can on my online business, and stashing it all up in broad market ETFs. Next, I intend to branch off into research on engineering fields in computer science and biology, as science roles requiring actual innovation will likely be the last to be replaced.
The rule of the game is that you need to make as much money as you can within the next decade or so, especially if you are below 120 IQ and not in a role that requires constant innovation, as jobs below that threshold will get more and more scarce. I've posted about it recently here on this sub. The other alternative is settling for manual labor jobs, as opposed to desk jobs, as those are still likely to be competitive against the 70k USD you would spend on a robot while its AI controller is not yet dexterous enough.
Redvolition t1_itgk5qq wrote
Reply to Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
I voted for 3 to 4 years. Here is the breakdown:
The dates in parentheses refer to when I currently believe the referred technologies will be available as published, finished, usable products, rather than code, papers, beta software, or demos floating around. Also, NeRF seems like glorified photogrammetry to me, which at best would produce good conventional 3D models; that looks like a subpar workflow compared to post-processing on top of a crude 3D base or generating the videos from scratch.
Tell me your own predictions for each category.
Capacity Available
(Q2 2024) Produces realistic and stylized videos in 720p resolution at 24 fps by applying post-processing to crude 3D input. The videos are almost temporally consistent frame to frame, yet require occasional correction. Watch the GTA demo, if you haven't already; it could look like a more polished version of that.
(Q1 2025) Produces realistic and stylized videos in 720p resolution at 24 fps from text or low entry-barrier software, and the result is nearly indistinguishable from organic production, although with occasional glitches.
(Q3 2026) AI produces realistic and stylized videos in high resolution and frame rate from text or low entry-barrier software, and the result is truly indistinguishable from organic production. Emerging software allows fine-tuning of camera position, angle, speed, focal length, depth of field, etc.
(Q4 2027) Dedicated software packages for AI video generation are in full swing, making almost all traditional 3D software as we know it obsolete. Realistic, high-resolution videos can already be crafted with the click of a button or a text prompt, but professionals use these packages for finer control.
Temporal and Narrative Consistency
(Q1 2025) Temporal consistency is good frame to frame, yet not perfect, and visual glitches still occur from time to time, requiring one form or another of manual labor to clean up. In addition, character and environment stability or coherence across several minutes of video is not yet possible.
(Q1 2026) The videos are temporally consistent frame to frame, without visual flickering or errors, but tools for long-term narrative consistency across several minutes of video, such as character expressions, mannerisms, and fine object details, are still lacking.
(Q3 2027) Perfect visuals with text input and dedicated software capable of maintaining character and environment stability to the finest details and coherence across several minutes or hours of video.
Generalization Effectiveness
(Current) Only capable of producing what it has been trained on, and does not generalize to niche or highly specific demands, including advanced or fantastical elements for which abundant data does not exist.
(Q1 2025) Generalizes to niche or highly specific demands, such as advanced or fantastical elements for which abundant data does not exist, yet the results are subpar compared to organic production.
(Q2 2027) Results are limitless and generalize perfectly to all reasonable demands, from realistic to stylized, fantastical, or surreal.
Computational Resources
(Current) Only supercomputers can generate videos at sufficiently high resolution and frame rate for more than a couple of seconds.
(Q2 2025) High-end personal computers or expensive subscription services are needed to achieve sufficiently high resolution and frame rate for more than a couple of seconds.
(Q4 2028) An average to low-end computer or a cheap subscription service can generate high-resolution, high-frame-rate videos spanning several minutes.
Redvolition t1_itbnd6i wrote
Reply to comment by katiecharm in U-PaLM 540B by xutw21
Generating titties should be humanity's final and most noble endeavor.
Redvolition OP t1_it9ecir wrote
Reply to comment by SnowyNW in Thoughts on Job Loss Due to Automation by Redvolition
Robots don't need to be perfect, they just need to be better than the average human. As with most lines of work, there will be a transition period between full human labor and full robot labor in which both coexist, before full automation. Besides, even if they are not there yet, it is only a matter of time until they are, and the time frame is years, or at most decades, not centuries.
I don't understand your point.
Redvolition OP t1_it9bsl3 wrote
Reply to comment by SnowyNW in Thoughts on Job Loss Due to Automation by Redvolition
The only professionals in medicine still employable into the next decade are in category 3, which includes those who perform some type of research in the medical field. Patient-facing folks will be largely phased out. Surgeons will last a bit longer.
Redvolition OP t1_it95xgy wrote
Reply to comment by BinyaminDelta in Thoughts on Job Loss Due to Automation by Redvolition
The Boston Dynamics robot costs 74k USD, so it seems to be priced in a range that would put manual laborers at serious threat of losing their jobs, if it becomes dexterous enough.
Redvolition OP t1_it93np3 wrote
Reply to comment by Torrall in Thoughts on Job Loss Due to Automation by Redvolition
IQ is established science with useful and important correlations in the real world, such as professional and academic performance, income, lifespan, etc.
It is only criticized by the ignorant, or by those with a political or ideological agenda, particularly on the left, as they detest any form of inquiry that could reveal innate differences between individuals.
Redvolition OP t1_it92z8v wrote
Reply to comment by ihateshadylandlords in Thoughts on Job Loss Due to Automation by Redvolition
Despite being a free market advocate myself and generally against state interventionism, for the first time, I happen to agree with the opinion of a socialist:
Before this post-scarcity utopia arrives, though, we might have a large UBI underclass that spends the money it receives from the government at companies, which in turn have a large amount of their profits taxed away to feed the underclass, creating a cycle of production and consumption based not on work, but on taxation and gratuitous distribution. For this to work, we are going to see absurd levels of taxation, far above what heavily taxed nations such as Belgium and France practice.
Redvolition t1_it8ucsu wrote
>How do we know we are not in one of these full dive simulations right now?
Because it would either have been horrendously badly engineered, or built by a sadist who wanted to watch us suffer on purpose.
I believe isolated brains kept on artificial support and connected via BCI are highly likely, and the world most people would create, left to their own choices, would definitely be much different from what we experience now.
Redvolition t1_it7zfhk wrote
Reply to comment by phriot in If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
I believe paper-publishing scientists will be among the last to be replaced, although lab technicians and assistants doing less innovative work will be replaced far sooner. By the time AI can publish scientific papers well enough to replace scientists themselves, that's it, we will have already reached the singularity.
The problem is, this type of innovative work likely requires a minimum IQ above 120, which is roughly 1 in 11 people. If you don't reach that cutoff, the remaining options will mostly be traditional manual jobs requiring under 100 IQ, or jobs that benefit from physical human interaction, such as therapists and prostitutes. Basically, the middle-class, middle-cognitive-demand jobs for people between 100 and 120 IQ will be eradicated.
If it is difficult to monetize a career in entertainment now, it will be an order or two of magnitude harder in the future, due to competition with AI generators and performers.
Even assuming you have the AI to control robots, the raw materials and fuel to power them cost a lot of resources, and manual laborers are among the cheapest, so as long as robots cost more than 4 or 5 years' worth of wages, which adds up to 150k to 300k USD in America, plumbers, electricians, and housekeepers will keep their jobs.
We are heading towards a society in the 2030s stratified as follows, in order of wealth:
- Capitalists (~1%)
- Entertainers and Performers (~0.05%)
- Innovation STEM jobs (~5%)
- Management and administration (~5%)
- Physical interaction jobs (~5%)
- Manual labor jobs (~30%)
- UBI majority (53.95%)
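The "1 in 11" figure above follows from the standard IQ model (a normal distribution with mean 100 and standard deviation 15), and the UBI share is simply the remainder of the listed percentages. A quick check of both:

```python
# Check of the ">120 IQ is about 1 in 11" figure, assuming the standard
# IQ model: normal distribution with mean 100 and standard deviation 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
p_above_120 = 1 - iq.cdf(120)         # share of population above IQ 120
print(f"{p_above_120:.3f}")           # ~0.091
print(f"1 in {round(1 / p_above_120)}")  # 1 in 11

# The UBI majority is what remains after the other strata listed above.
strata = [1, 0.05, 5, 5, 5, 30]       # percentages from the list above
ubi_share = 100 - sum(strata)
print(ubi_share)                      # 53.95
```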
Redvolition OP t1_iudz1eg wrote
Reply to comment by Southern-Trip-1102 in Engineers at UNSW have found a way to convert nerve impulses into light, which could lead to nerve-operated prosthetics and brain-machine interfaces. by Redvolition
https://www.researchgate.net/figure/Overview-of-various-ways-to-intercept-motor-control-signals-Motor-control-signal-is_fig3_335586918