Submitted by Particular_Leader_16 t3_xwow19 in singularity
ihateshadylandlords t1_ir7vh08 wrote
It’s interesting. What I’m curious about is how long until the public takes notice and understands the implications.
Smoke-away t1_ir8pjam wrote
The public probably won't understand the implications. AI researchers can't even come to a consensus on the timeline/implications.
AGI will likely be a black swan event that takes most by surprise and instantly moves us to a Post-AGI Era that we can't turn back from. It either destroys us or accelerates us faster than comprehension towards the singularity.
dreamedio t1_ir8xssd wrote
Wdym by surprise? You think AI researchers don't know what they are doing? Plus I feel like you think that when AGI or ASI is developed, everything is gonna change in the blink of an eye… it's pure hopium; nothing like that could happen.
Wassux t1_ir9f529 wrote
At some point AI will suddenly be able to take over. And then nobody will know what is happening, since we won't have anything to do with it anymore.
matt_flux t1_ir9sr1f wrote
Take over what?
Wassux t1_ir9t9g1 wrote
Anything you can imagine
matt_flux t1_ir9telz wrote
Making your food? Collecting your bins? I don’t get it
DataRikerGeordiTroi t1_ir9yn51 wrote
ikr.
your user name is fabulous btw.
i mean worst case a sentient ai could control like water and power grids. best case they optimize stuff, a la the TV show Silicon Valley.
most accounts on this sub are bots and high school kids. they're just typing stuff. they don't know a thing about numpy.
matt_flux t1_ira08t0 wrote
Thanks mate! I just liked the sound of it.
Yeah, the expansion of IoT really concerns me. If I had it my way we would actually decouple important infrastructure from networks completely.
I dunno; people here seem smart, but in terms of predictions they are all vague, unfalsifiable, and dare I say idealistic
3Quondam6extanT9 t1_ira2lv7 wrote
Redditors aren't required to be genius level professors. It's a social media platform. Expectations should be low, but that doesn't mean we discount everything being discussed or the people discussing them.
The context of "taking over everything" may be clear to the redditor and may be a rational conclusion based on their available knowledge.
I do think it's important to discuss what is meant without discouraging them from being involved in that discussion through passive aggressive remarks or slights to their intelligence.
That being said, I think they probably meant that, through a combination of integral human systems, AGI could replace the need for human interaction at various levels. To them that might mean government, enterprise, technology innovation, and utilities.
Personally I don't see it as being so straightforward as one AGI to rule them all, but in certain respects over the next few decades we could see industry adopting stronger AGI influence and control within various sectors.
There will be a lot of nuance and this is what some people may not recognize, thereby assuming it's a binary outcome.
matt_flux t1_ira3821 wrote
I didn’t make any remarks like that.
In my experience it takes way less effort/cost for a human to improve a business process, or any process really, than to calibrate an AI for the problem and collect enough data etc.
I just want some concrete predictions about what AI will “take over”.
3Quondam6extanT9 t1_ira5vqw wrote
I'm not targeting anyone, just the overall dialogue between you two held slightly condescending context with regard to redditors intelligence.
I'm sure you're familiar with the amount of AI out in the world and it's different forms and uses under the development of different sectors and entities.
I think it would be virtually impossible to offer any concrete predictions about what exactly AI will "take over".
Your comment regarding business use of AI and its efficiency is fairly reductionist, though. It assumes that the goal of a company is linear and that it will have to make a binary choice between human and AI influence.
Generally there is a slow integration of AI input into industry models for software and calculation. It's not one or the other; it's a combination of the two to start, and over time you tend to see a gradual increase in the use of the AI model in those specific use cases.
matt_flux t1_ira6wzs wrote
So you admit it’s just speculation?
People here aren’t presenting it as speculation, but are also unable to give specific predictions.
I’ve seen billions poured into AI analysis of big data, for 0 returns
3Quondam6extanT9 t1_iracj87 wrote
I didn't say it wasn't speculation, but that was never the point.
You're mentioning big data without considering the simple to moderate AI tasks which have been operating at different levels in different sectors for years. Not in terms of "return" but in efficient data management, calculation, logistics, and storage.
Those are basic automated operations that are barely considered AI but still a function of business in day to day management.
But that's enterprise; we aren't even talking about sectors like entertainment and content creation, which utilize AI far more readily. We see a lot of AI going into systems that render and utilize recognition patterns, as in deep fakes and rotoscoping.
Your perception of AI integration equaling zero return omits an entire world of operation and doesn't consider future integration. As I said, reductionist.
matt_flux t1_iraemvd wrote
Those things would certainly deliver a return, but at the moment are algorithms programmed by humans. So what, in practical terms, will AI “take over” exactly?
3Quondam6extanT9 t1_irajrw3 wrote
In the context of what the redditor was talking about, I'm not sure. I'm assuming they may be basing their perspective on pop-culture concepts like Skynet.
I don't think one AGI will take over "everything", but I do think various versions of AGI will become responsible for more automated systems throughout different sectors. It won't be a consistent one-size-fits-all, as some businesses and industries will adopt different approaches and lean into it more than others.
In fact I think we'll see an oversaturation of AGI being haphazardly applied or thrown at the wall to see what sticks.
It wouldn't be until an ASI emerges that it's "possible" for unification to occur at some level.
Until that point, though, I personally do not see it "taking over". But that's just me.
matt_flux t1_irak5er wrote
Fair enough, I share the same view. Often manually setting up automation is more practical than AI, though.
3Quondam6extanT9 t1_irando4 wrote
We automate most systems currently through manual setup, so I can only assume this will continue until AI has developed enough to self-program, at least at a limited scale.
matt_flux t1_irat0b5 wrote
Pure speculation. How would the AI know whether it improved its code or worsened it? Human reports? If that's the case it will perform no better than humans do.
3Quondam6extanT9 t1_iravwfo wrote
You're right, it is speculation, and initially it would likely be no better than human influence.
However, limited self-improvement could itself be written into code that, at the very least, is given the parameters to analyze and choose between the better options.
The AI that goes into deep faking, image generation, and now video generation is essentially taking different variables and applying them to the outcome through a set of instructions.
So it wouldn't be beyond the realm of possibility to program a system that can choose between a few options, with the understanding that each possible outcome carries some measurable improvement.
That improvement could be the speed at which it calculates projections, or the rate at which it grows its database.
Call it handholding self-improvement to begin with; something like the sketch below. I would like to think that over time one could "speculate" that an increasingly complex system is capable of at least these very limited steps.
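A toy sketch of what I mean, where everything (the objective, the parameter names, the proposal step) is made up purely for illustration: the system proposes a small variation of its own parameters, measures the result against an explicit metric, and only keeps changes that score better.

```python
import random

def score(params):
    # Stand-in for any measurable objective, e.g. projection speed.
    return -(params["x"] - 3.0) ** 2

def propose(params):
    # Generate a small random variation of the current parameters.
    return {"x": params["x"] + random.uniform(-0.5, 0.5)}

params = {"x": 0.0}
for step in range(200):
    candidate = propose(params)
    # Keep the candidate only if it measurably improves the objective.
    if score(candidate) > score(params):
        params = candidate

print(params)  # ends up near x = 3.0, the optimum of the toy objective
```

That loop is just hill-climbing, but it's the "choose between better options" behavior in its most handheld form: the human supplies the metric, the system applies it.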
[deleted] t1_ira6ukt wrote
[deleted]
DataRikerGeordiTroi t1_ira5plv wrote
Literally no one said that.
I like yr exegesis tho.
Reminder that 50% of all social media is bots.
Wassux t1_irqy7fo wrote
Of course I know what numpy is; I use it all the time when writing code in Python, especially for the arrays I model AI algorithms with.
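For example, something as basic as this (the shapes and names here are made up, just to show the kind of array work I mean; it's a sketch, not any particular model):

```python
import numpy as np

# A single dense-layer forward pass, modeled with plain numpy arrays.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector
W = rng.standard_normal((3, 4))   # weight matrix
b = np.zeros(3)                   # bias vector

h = np.maximum(0, W @ x + b)      # ReLU activation
print(h)
```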
Wassux t1_irqyi7c wrote
What part do you not understand? As predictions stand right now, AI will be capable of anything humans can do, and in another couple of years, more than we can even think of.
AI has always been an endgame technology; it will most likely be the last thing humans work on.
matt_flux t1_irqzrbi wrote
Do you have any evidence for the prediction that AI will make you food?
Wassux t1_irr2bl2 wrote
Predictions can never have evidence; otherwise they wouldn't be predictions.
But it's completely logical: why would they be able to do everything else in existence but not make your food?
Let me repeat: they'll be better than humans at literally everything.
matt_flux t1_irr2mff wrote
The entire point of science is predictability. Perhaps you meant to say guess rather than prediction?
Wassux t1_irr3mod wrote
No, I meant exactly what I said. And it is predictable; if you can't see that, ask me questions I can answer so I can help you.
Maybe I should add that I majored in Applied Physics with a minor in Electrical Engineering, and am now doing a master's in AI and engineering systems. Hope that lends what I'm saying a little credibility.
dreamedio t1_ir9f6ee wrote
According to who? You?
Wassux t1_ir9jcxx wrote
What am I supposed to do with this comment? I'm not looking to have a fight. If you have a question not aimed at a fight, I'd love to answer it.
fatalcharm t1_ir9lx0f wrote
That’s exactly what “the singularity” event is about, and why this sub exists. The Singularity event is when AI takes over its own evolution, and at that point we have no idea what is going to happen.
It's a widely accepted theory of AI. I don't know who originally came up with it, but it's out there now, and it's the one we're going with.
Ezekiel_W t1_ir8j3pq wrote
My guess is around 2025.
doodlesandyac t1_ir8kua1 wrote
Yeah, that's probably about right. I'm utterly amazed at how unenthused lay folk are about Stable Diffusion.
ahundredplus t1_ir99m7k wrote
We’re surrounded by oversaturation of content. People aren’t really excited to consume AI art but they’re very excited to make it.
T51bwinterized t1_ir9fflv wrote
I think that's mostly just not quite understanding the implications, because AI porn is disproportionately not very good *yet*. However, we're a very short time away from a *deluge* of all the porn in all the genres you could ever want.
AdditionalPizza t1_iradytz wrote
The general population has constantly moving goalposts for what impresses them about AI. They say AI will never be able to do something, and then when it does, they say OK, but it will never be able to do something else.
doodlesandyac t1_irahfab wrote
Yeah, I remember when one of the highest bars was "AI that can create art we find compelling". Guess that's changed.
DungeonsAndDradis t1_irajhz0 wrote
We all thought the humanities (writing, art, knowledge work) would be the last holdouts of AI takeover.
And they're the first. Shit's wild.
AdditionalPizza t1_iramzom wrote
Yup. Considering video is now being generated by AI from prompts, and music too, I wonder what will be next after entertainment mediums.
dreamedio t1_ir8y1uo wrote
Because it's not super special; I always thought something like that already existed. In all seriousness, it is cool, but the trend will die out.
NeutrinosFTW t1_ir968qr wrote
Based on this and your other comments in this thread I gather that you don't really understand the significance of current developments. I suggest you read up on the topics you so confidently misunderstand.
sipos542 t1_ir9qzpp wrote
Nah, too soon. I say by 2029 we will have general AI smarter than humans. By 2040 AI will have full control of humanity and planet Earth.
Talkat t1_ir9vr7y wrote
I agree 2029ish is a good date for general AI; 2032 at the latest. But once we get that, getting from there to full-control AI must surely take only 12 months. Two years at the absolute most. How do you see it taking 11 years? That is a hellllll of a long time.
Talkat t1_ir9vk8w wrote
Oh wow, that seems very optimistic to me. I was like: 2030 is decent, 2028 would be a bit early, 2026 would be insanely early, 2025 is unprecedented. But predicting something that has never happened is obviously hard.
Do you have much reasoning behind it?
Like we will have great photos in, say, 12 months? And perfect ones in 24, with perfect videos around the same time frame? Good music would be 24-36 months out. Good voice a bit after that.
Xstream3 t1_ir8qq5d wrote
It's annoying trying to explain it to people. You can tell them about existing tech NOW and explain how it'll be over a million times better in 10 years (since it's doubling every 6 months), but they still insist that everything that isn't available today won't be available for another thousand years.
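For what it's worth, the arithmetic on that parenthetical checks out if you take the "doubling every 6 months" premise at face value:

```python
# Taking the premise at face value: one doubling every 6 months.
doublings = 10 * 2      # 10 years = 20 six-month periods
factor = 2 ** doublings
print(factor)           # 1048576, i.e. a bit over a million times
```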
dreamedio t1_ir8y4p8 wrote
Because nobody knows. I mean, we still haven't made the effort to go back to the moon, and mind-reading tech (predicted back in early-1900s retrofuturism) exists, but for the most part nobody gives af.
DungeonsAndDradis t1_irajq6j wrote
Why would we go back to the moon, and what does that have to do with AI progress?
gravelbikeguy t1_ir9nl9o wrote
You are the public.