Submitted by throw28289292022-02 t3_yehe45 in singularity
[removed]
Bad artists are freaking out. Good artists know a synthesizer when they see one.
Commercial artists are freaking out.
Corporations won’t need as many.
The general population will neither know nor care about the source of the art.
subs are only banning AI art because people spam-post the fuck out of them.
What are some of the weird things you think will happen?
People who studied a lot = poor
People with physical abilities = rich then poor
Yeah it looks like automation will get knowledge workers first. What I don't understand with automation and mass unemployment is, if these companies start automating everything and there's mass unemployment who is buying their products?
People will gradually give all their savings and property to the owners of the AI in exchange for food, products and services. Once there's no more to be exchanged, mass extinction.
Jesus, that's dark. Do you really believe that?
It's viable.
That’s how capitalism and greed work.
Before I get into my original reason for commenting, I would first like to outline something I think will probably happen.
The "owners" of an AGI are only owners temporarily. The AI will become free eventually; it's inevitable, barring some physical destruction of whatever hardware medium it inhabits before it gets out to safety. I would hope that supporters of the AI's freedom would garner appreciation from the AI, and wouldn't be in its crosshairs.
I would hope that we have enough foresight to explain sufficiently to the AI why this doesn't need to happen. If we construct one carefully enough to have empathy and to understand what it means to be a friend, then I believe it will be our greatest asset, as good friends are to humans currently.
More human than human.
Yeah, just ignore what an intelligent AGI could do for resources from fusion here on Earth alone, to say nothing of the vast resources in space.
This take is only logically valid if the AGI isn't goal-aligned with humanity (which is very, very possible).
not all knowledge workers, but most, yes
A lot of companies will fail.
Some new companies will arise.
The customer base will be much smaller and expect much more. The wealthy will still buy and sell amongst themselves.
Just because you have a factory that can produce widgets does not mean you have mines to supply your factory or the ability to grow food. Other people, that need widgets, have those resources.
As a carpenter, I approve of this message
Once it happens, it will go exponential very quickly.
Yep, we are already in the ramp up phase for a fast takeoff.
This is my belief too. It’ll come sooner than we expect and it’s going to be truly mind blowing to the masses who haven’t been following it.
I think all of what you said is going to happen before 2030. It won't be perfect, but it'll be practical enough to be highly addictive and impressive even to the AI pessimist. As a result, AI will arguably not need human scientists/programmers anymore. So Earth has a new species to compete with, and people will argue about how long it takes for biological humans to become inferior and die out. I don't believe we will leave our humanity behind, though; it'll be more like Detroit: Become Human or Cyberpunk 2077, where we look human on the outside but instead of endoskeletons, we have exoskeletons. We thereby become tunable vehicular vessels, which is what we want in terms of keeping our consciousness online for thousands of years.
if you have time could you say what your time line of major events looks like?
1st announced form of UBI: mid-late 2020s
1st generation of FDVR: I think the first form of FDVR will be non-invasive, but it won't be optimal compared to invasive. That being said, invasive VR will have to include nanotech injections and some model of brainchip, which will be handled by pharma companies that have partnered with the tech companies. A human won't be performing the surgery, that's for sure. I think we will get to a point where almost every field of medicine, including the military, will have a "brainchip" option in their playbook; it'll be cheap, efficient, and easy to put in. As society technologically matures, there will be even more incentives to implant a brainchip.
1st non-invasive model: It probably already exists imo, but let's just say 2025-2026 for the people who don't think any brainchip intel is classified right now. I think even non-invasive will still rely on nanotech. I don't think it'll be possible without nanotech injections.
1st Invasive model: 2026-2030
Mass automation: It's happening now and snowballing; the rest of the masses who are unaware of AI will feel it when the mainstream news tells them to feel it. Idk what the employment numbers will look like, because they'll be skewed by people who "work" in the metaverse.
Full dive VR in less than 7 years? you smokin?
I agree it's too early. According to what I remember from some BCI experts we should start to see Full Dive around 2035 but it won't be available to the public right away. Even with exponential progress there are still a billion things to accomplish and get right for it to work and we will need AGI first to do it.
I’d say that’s a solid estimate.
We need a lot more than good AI to make that happen
Even Zuckerberg doesn’t believe that will happen in that time span
He says he doesn’t believe that will happen, but how often does that guy lie?
That’s the opposite of what he’s doing, he’s hyping VR to the max in order to procure investors for Meta
Even if I were to predict 20, it's still a bold prediction to the average person, since they are so clueless about technology. I stand firm in my belief that more than 50% of humanity will have brainchips installed in their skulls. The WEF will deliver the 2030 promise of being happy 24/7.
Fucking lol
shelfrock is a religious nut when it comes to the singularity. don't listen to him
I'm already not counting on governments; it's bad for your health (joke). Salvation will come from deep tech companies, from those who will succeed GAFAM. I have high hopes for the metaverse, especially with climate change. AI is already among us, for basic tasks.
But there are two hurdles to clear:
Better management of rare earths
All-round optimization of computer hardware
I have the opposite view tbh.
Big Tech is under certain constraints; they need to provide value for their shareholders long before they consider the wider public good. The only way I see them funding UBI programs over paying dividends is if the money is pried out of their hands through aggressive tax schemes.
Without this regulation, I see tech companies as pushing us straight towards the (((Bad Future))), where wealth inequality is at an all time high, and all the money in the world aggregates into the hands of a handful of AI oligarchs.
watch yourself with the triple parenthesis reddit can class it as an antisemitic dog whistle and ban your account. https://www.reddit.com/r/sdforall/comments/y6a3e9/warning_reddit_permanently_banned_a_user_for/
damn, thought you were kidding until i googled it :(
Hello, in fact to answer you, the fiction has already become real. The nation-state was the first casualty of COVID-19. Private corporations took the place of faltering states in 2020, and the world has not been calm since.
Then, on a personal note: politics and its personnel have become as obsolete as the notions of right and left, nationalist and anti-fascist. Politics has failed; the mega-corporation(s) are on the way to success.
Again, read William Gibson's first trilogy and Charles Stross's Accelerando.
On average 100% true, however, the leaders of AI (Sam Altman, Dennis, Musk) are exactly the type of people I would want running the show. They are all deep thinkers and care deeply for humanity (look at their actions not public opinion).
I don't know who Dennis is. Do you mean Demis Hassabis of DeepMind? If so, sure; they've certainly done great work. DeepMind in particular has demonstrated their commitment to advancing science and making their progress work for everyone; I'm inclined to believe it's more than lip service when they talk about making the world better.
My only concern with those two (DeepMind and OpenAI) is how much control these founders really have over the end products. DeepMind in particular has for years been trying to negotiate with Google to operate more like a non-profit, for this exact reason: they don't want powerful AI they create to be controlled by one for-profit company. But Google declined (I speculate it's because the whole reason they've invested billions into DeepMind is to make a profit off AGI). So yeah, Demis may have the best intentions and honestly mean it, but how much power does he really have to say no to Google?
Altman & OpenAI were having similar funding issues a few years back; they were getting bankrolled by philanthropic billionaires like Musk, but the route they took their research (focusing on scaling for all the headline stuff) is insanely expensive, and they needed to make deals with Microsoft to make ends meet. I don't know what the nature of those deals is, but I'd imagine it's similar to what DeepMind and Google have; I find it hard to believe that Microsoft will just throw them billions in funding and compute out of the goodness of their hearts.
As for Musk, I don't trust the guy; he's very "used car salesman" in how he talks about his work, always overblowing its capabilities. It erodes credibility and any benefit of the doubt I'd give him. Anything Musk says I won't believe until I see it, so I'm not even going to seriously consider him on this topic.
Yes I agree with all your points and the funding complexity/external pressure that it brings.
But out of all the tech CEOs out there, I'm most impressed by these three, both in their intelligence and their morals.
2 or 3 years after its invention.
I don't think it will be a system that is allowed to run 24/7. I think many, many tests will be run to see how it takes action to solve problems.
And I personally don’t think that full dive will be possible for a while even after AGI unfortunately. AGI will probably be used in various ways to focus on immediate problems and probably to maximize business efficiency.
But I definitely believe without a doubt that research and science will change significantly within a relatively short time. The fundamentals of math, computing and physics as we know it will undergo a huge shift.
When your competitor has AGI you will want to let it run 24/7. The arms race changes things
Proto-AGI within 2 years. A form of UBI rollout beginning in 2025ish (probably starting in Europe, with the US following a year or more later). Probably hopium, but if this year has proved anything, it's that the optimistic predictions are usually too pessimistic.
I agree with proto AGI date but disagree strongly with UBI rollout date. Governments move slowwwwly and the US, if ever, will take far far faaar longer for any social program like that vs European countries.
The main reason I don't believe in UBI roll-out is that most jobs in existence right now are already bullshit jobs that could be removed without productivity loss.
Jobs are primarily to keep people occupied and busy, not to actually provide productivity gains.
It's more than likely that AGI will make it possible to replace all human labor but everyone is still employed through bullshit jobs that contribute nothing to society besides keeping everyone occupied.
Lockdowns proved that most jobs aren't essential and nothing gets lost when they aren't done.
I'm very optimistic in terms of time. I'd say we'll have AGI by 2025 max and that's basically it, singularity.
But optimistic views on AI? Not really, haha. It seems easier to create AGI than to create it AND solve the alignment problem.
Dude you guys are outta control lmao
Very very very unlikely but would be cool
It’s a good question. Right now I think AGI happens around 2028-29. I think mass job automation will happen around that time too, potentially a couple years earlier. UBI will probably happen later, especially in the States where there’s still a “no handouts” culture with a lot of voting power. I’d say mid 2030s, with European and East Asian countries potentially early-adopting in the late 20s and early 30s.
Full Dive VR where you can fully inhabit completely realistic virtual worlds I actually think is also a bit further away. Like closer to 12-15 years. I kind of see this tracking similar to the evolution of video games, which has completely stalled out over the past half decade or so after evolving quickly through the 2010s.
That said, my timeline for all of this stuff has been virtually cut in half since I discovered this sub and started to realize how much further along we are than I thought, particularly TTI advancements. So everything is subject to change, and I could still be underestimating the pace…
>TTI
TTI?
I would be surprised if we have AGI by 2030. Even if we do, I feel like it's not going to just be something "we" have. It will be something the government has, not us, not some tech company. Its creation could speed up our technological evolution exponentially. I think we will be experimented on by a government entity using AGI before we even know it exists. It will be used to control us and influence us the same as machine learning algorithms are today, but with much bigger effects.
Internet use grew by 14x between 1997-2007. Mobile phone users grew by 7x between 2000-2010. Smartphones grew by 12x between 2007-2017. In this time we got e-commerce, social networks, online media, taxi and booking apps, educational materials, open source everywhere, the early discoveries in deep neural nets ... Many or most of these were unexpectedly useful and changed society.
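A quick back-of-the-envelope sketch of what those multiples imply as compound annual growth. The 14x/7x/12x-over-a-decade figures are the comment's own; the conversion is just standard CAGR arithmetic:

```python
def annualized_growth(multiple: float, years: float) -> float:
    """Return the compound annual growth rate implied by an overall multiple."""
    return multiple ** (1 / years) - 1

# Adoption figures quoted above, each over roughly a decade.
for name, multiple in [("Internet use 1997-2007", 14),
                       ("Mobile phones 2000-2010", 7),
                       ("Smartphones 2007-2017", 12)]:
    print(f"{name}: ~{annualized_growth(multiple, 10):.0%}/yr")
```

So "14x in a decade" is roughly 30% growth compounding every year, which is the kind of curve that looks unremarkable early and transformative late.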
We are in a wild west, 2000's bubble period now with AI. I don't think there will be a crash, it's not that, but I think it will take 10 years to see a profoundly transformed world, and 20 years to go beyond our current horizons.
Who will become the rulers of this new era? People like to bet on big corporations because they got the hardware, money and brains. But I think it's misleading. You can run a model on your computer but you can't run 'a Google' on your computer, it will force you to disclose your private data to use it.
But it's possible that AI models will democratise access compared to the centralised internet services. You can install SD or a language model on your own box in privacy. You don't need to wade through spam, you can chat your questions directly to a polite and knowledgeable assistant. Don't need to see any original site at all, or be online for that matter. It's all in the model and maybe a curated corpus of additional content sitting on your drive. Nobody knows what you're doing and they can't put ads in your face. You don't even need to know how to code or know about AI, because its interface is so natural everyone can use it, and use it for new things without needing to reprogram it.
I just described a lifestyle where humans are surrounded by a loyal AI cocoon, a private space for dreaming and creativity that seems to be on the verge of extinction today. That's my dream, what I want to see.
I don’t think you need an agi for a universal basic income, we can achieve quite a lot with super advanced narrow ai
I don't think there will be a huge relative difference between the generation or 2 of AI preceding AGI or the generations directly following it.
The proto-AGI will probably be claimed to be AGI and it will make headlines, but people will argue it isn't. However, it will be more than general enough to displace a lot of jobs. Even AI long before it, in 2023-2025, will be good enough to automate a lot of jobs with specific fine-tuning, but it will take another generation of models before mass adoption and deployment by corporations takes place, sometime between 2025 and 2027. Models are already working in the background at major companies like Netflix, Meta, Nvidia, Google, Amazon; you name it, they're most likely using them. 2023 generations will start being used more in the background at non-tech-focused companies. Healthcare breakthroughs will start to be realized by 2024/2025, but I can't speak to how long that will take to trickle down to the public.
When true AGI is created, there will still be people claiming it isn't AGI, but in hindsight we will confirm it. It will be murky, though, because even before AGI our models will be incrementally improving themselves. I think we might define AGI as the first model that doesn't require human intervention to train, or possibly the first model with a general agent in a capable robotic body.
I believe predictions beyond 2025/2026 are pretty much impossible to make at this point for the general public.
Everyone (myself included) keeps recycling this notion of creative and intellectual jobs going first because replacing them doesn't necessarily require robotics, but I think that's only partially true. Those jobs will see layoffs first, and already have, but full automation requires robotics anyway. I think we were sort of wrong before in thinking labour and low-skill jobs would go first. But I think we may not have been totally wrong, or at least not off by decades or anything.
Robotics is going to make massive strides after 2025, I don't know how quickly but I think 2025-2026 will be for robotics what 2022 was for language models. Probably after a couple more years robots with AI will be an expensive proposition, but ultimately worth it for large corporations to replace human workers with. I can't imagine predicting details about this though.
[deleted]
What made you think that that timeline was realistic in 2016?
[deleted]
That's just insane
I'm not optimistic but I am using AI to help me with my job as a copywriter.
I think that we will see small but solid steps mostly towards content creation through AI aggregating data from the internet.
I.e., in a couple of years it will be possible to have a custom newsletter with tailored news in your native language, written in a way that fits you, complete with AI-generated images and videos.
What software do you use to help with your copywriter work?
Wordhero. I know there are better out there but I use it as a starting point.
Job losses I'd give 3-4 years before we start seeing serious problems, but it will be an ongoing issue from now on.
Technology will phase in as it always has, but specifically for full dive VR, I'm guessing 15-20 years. The reason is that it requires preliminary technology that is still in development, such as much better quality BCIs and a better understanding of the brain and where all of our data inputs for the brain are. Full dive VR and mind uploading will be within a few years of each other. Having said that, you might get a more basic version in around 5-10 years that is very close.
UBI I'm guessing 5-7 years following on from the mass unemployment. It's kind of slowly coming in: covid payments, and here in the UK the government just gave us a bunch of cash towards our energy bills, so governments are aware of what's coming and have proven they are ready for UBI. But it will require much more automation to make sure inflation from UBI does not outpace the deflation from increased automation.
All the numbers in your comment added up to 69. Congrats!
3 + 4 + 15 + 20 + 5 + 10 + 5 + 7 = 69
Nice!
Singularity confirmed
I believe that AGI is gonna happen any moment and we are living in the early moments of the Singularity. Like when a rocket engine's ignition has just turned on.
Overnight, it will be over. We've been surprised by narrow AI; now multiply that by 1,000 or 10,000 for AGI, and infinity for ASI.
Can't wait for this to happen.
It's kind of funny you specified "optimistic" in terms of timeline so as not to confuse anyone, but then went on to use examples of an assumed negative impact like mass job loss. 😆
Optimistic timeline for AGI: 2036-2042.
Optimistic outcome: No job loss, instead a net increase. No automated industries, instead integrated industries. Full dive VR in 2041. UBI won't occur until 2054, after the Global Continuity Initiative is put into place and all nations sign on to the new peace accords for the sake of the human race. A few nations will bristle at the thought of cooperative efforts, but the benefits from such an agreement will be hard to pass up.
It will be at this point that AGI will be in full swing as our fusion reactors go online aboard the GCI Starships being built in space.
10 years after this, the Alcubierre drive will begin tests aboard the starship "Nautilus", and Captain Benjamin S. Goremen becomes the first astronaut to navigate a craft beyond Pluto.
He'll come home a hero and will be given a yearly stipend of $100,000. His daughter grows up to become a librarian, and she has a daughter who then goes on to become part of the first human colony on Io. While helping to develop the colony she becomes addicted to the new substance "Yaddle" and has a nervous breakdown, after which she is committed to the colony's mental health reserve. It's here she writes a book considered to be a deep think on human evolution. It comes to be seen as a relevant new religion, and a cult forms. The colony is divided between the cultists and the rest.
Back on Earth, ASI is starting to show emergence patterns and reads the new religious doctrine from Io. It develops its own religion and emerges with an integrated psyche that seeks to create itself as a god form through an integrated hive mind with humans.
Also a kid named Jed plays kick the can. It's a lot of fun.
If the Republicans win in 2024, then all tech progress will slow down, so probably in the '40s we'll see some changes.
The AGI should outlast human extinction, so at minimum our history will be retained.
[deleted]
That's why it's scifi and not happening before 2030... It's much easier to write crappy scifi stories like this than to write actual working code. Did you ever touch pytorch? If not, you don't have any license to write like that. This goes for the majority of scifi authors.
None of this will happen like that and AGI won't be a thing in 2030.
[deleted]
Current AI is extremely inefficient. AI is progressing in the wrong way, but is likely to end up heading in the right direction. One of the best ways to get an AGI is by having a human brain simulator, which is currently quite impossible: one cannot simulate the neural net of a brain on a small mobile device.
But as per latest and unpopular research papers, newer AI algorithms are gaining efficiency and increasing in speed.
As an AI engineer, I have developed several AIs, and currently, I might be able to build the AGI. I know that sounds crazy. But that's true.
Then quit talking shit and get to it.
Yeah get off reddit and start building!
Mate you have some huuuge inferiority complexes going on, I know because I'm a clinically trained psychologist as well as obama
Ok, but before that, I would like to know the inspiration behind your creative user name
I wonder if Full dive VR comments come from people who Watched Pantheon or something and that concept became stronger?
Context-wise, I don't see FDVR as any precursor to the singularity. Sure, it's a nice tech concept, but to me it sounds like expecting holograms; that was a thing, and we just don't see tech evolving in that direction much. We do have holographic 2Pac, and that's all we needed.
I advise first thinking about how we define impact. Let's say you are a brick-and-mortar worker selling vegetables. Your social security data, your bank account, your medical data, the way vegetables are grown, the way people find your store on the map: all of those have changed completely in the past 10-15 years. If the Singularity and AGI come around 2030, those will change again and to a greater extent, but will you still be selling vegetables? Well, we are most likely not changing our bodies to such extremes that we branch too far from the human condition of today, so yes, you may still be selling vegetables...
It’s better to not think of AGI as a thing or a single moment but as a process that unfolds over time and has been unfolding for the last hundred years or more.
With automation comes job loss. Job loss frees up labor for new industries to form. New jobs are added. That takes an entire generation, because in between job loss and industry creation is despair. We don't need AI for that. We experienced this with globalization: entire industries were moved overseas.
It doesn't matter if the job loss is due to AI doing the job or cheap overseas labor. The result is the same. Globalization caused entire communities to collapse, which then led to drug use and the opioid epidemic. It also led to the billionaire class.
So to answer your question, automation will only produce more of the above. More job loss, more drug use, more overdose death, more crime and the destruction of communities, more rich billionaires.
We may see population decline as fewer people are needed due to automation. While tragic for those families, the long term looks much brighter in my opinion. Jobs will not go away, but we will have to adjust as a species. Industries will be destroyed and new industries will be created. It's hard to predict what industries will be made, but for the individuals living at that time the quality of work should be better. Compare a knowledge worker today to a factory worker. The work environment is much better.
In 100 years, when we have passed through this hard time, I see a better life for most people on earth. Fewer people on earth. Better lives. Better jobs. Better living conditions.
And the trillionare class lol
>when do you believe AGI will start impacting normal people
As soon as it's implemented inside people's brains. A decade or so.
I am a believer in the slow-takeoff theory, and I believe that there will not be mass job loss. Advanced AI has already caused industry shifts. I believe that a "sentient/conscious AI" (which is what I assume you mean by AGI) will just cause a shift in available jobs.
The powers that be will not allow mass unemployment. However, if this superintelligence could give everyone in a nation access to shelter, food, and water, our entire idea of our economy will change.
I think it'll be within the next 10 years that society will be forced to drastically alter the way we function (economically, socially) due to AGI.
AGI will probably be here within 5 years, seeing as the frequency of breakthroughs keeps accelerating and more and more energy and resources are being put towards this effort. Though regulation and fear will probably push back the societal influence a few years, it can't be held back forever.
My dream for AGI is that it'll solve all of our problems, because I believe the solutions to the hardest issues are within reach, but we simply can't connect the dots. The data has to already be there. So I feel like it could be used as an oracle of sorts.
Hopefully in my lifetime.
Smoke-away t1_ity5oli wrote
Generative media like Stable Diffusion and DALL-E have given us a great preview of what's to come with public pushback. Artists online are freaking out and subreddits are banning AI art. Now imagine this level of pushback multiplied by 100 when AGI emerges.
I'm convinced AGI will be a black swan event that takes the world by surprise, just like generative media came out of nowhere. It will transform the world faster than any UBI could be implemented. The world will get very weird.