Submitted by Particular_Leader_16 t3_xwow19 in singularity
matt_flux t1_ira3821 wrote
Reply to comment by 3Quondam6extanT9 in The last few weeks have been truly jaw dropping. by Particular_Leader_16
I didn’t make any remarks like that.
In my experience it takes far less effort and cost for a human to improve a business process (or any process, really) than it does to calibrate an AI for the problem, collect enough data, and so on.
I just want some concrete predictions about what AI will “take over”.
3Quondam6extanT9 t1_ira5vqw wrote
I'm not targeting anyone; it's just that the overall dialogue between you two carried a slightly condescending tone with regard to redditors' intelligence.
I'm sure you're familiar with the amount of AI out in the world, and its different forms and uses under development by different sectors and entities.
I think it would be virtually impossible to offer any concrete predictions about what exactly AI will "take over".
Your comment regarding business use of AI and its efficiency is fairly reductionist, though. It assumes that a company's goal is linear and that it has to make a binary choice between human and AI influence.
Generally there is a slow integration of AI input into industry models for software and calculation. It's not one or the other; it's a combination of the two to start, and over time you tend to see a gradual increase in the use of the AI model in those specific use cases.
matt_flux t1_ira6wzs wrote
So you admit it’s just speculation?
People here aren’t presenting it as speculation, but are also unable to give specific predictions.
I’ve seen billions poured into AI analysis of big data for zero return.
3Quondam6extanT9 t1_iracj87 wrote
I didn't say it wasn't speculation, but that was never the point.
You're mentioning big data without considering the simple-to-moderate AI tasks that have been operating at different levels in different sectors for years. Their value shows up not in "returns" but in efficient data management, calculation, logistics, and storage.
Those are basic automated operations that are barely considered AI, but they are still a function of day-to-day business management.
But that's enterprise; we aren't even talking about sectors like entertainment and content creation, which utilize AI far more readily. We see a lot of AI going into systems that render and use recognition patterns, such as deepfakes and rotoscoping.
Your perception of AI integration yielding zero return omits an entire world of operation and doesn't consider future integration. As I said, reductionist.
matt_flux t1_iraemvd wrote
Those things would certainly deliver a return, but at the moment they are algorithms programmed by humans. So what, in practical terms, will AI “take over” exactly?
3Quondam6extanT9 t1_irajrw3 wrote
In the context of what the redditor was talking about, I'm not sure. I'm assuming they may be basing their perspective on pop-culture concepts like Skynet.
I don't think one AGI will take over "everything", but I do think various versions of AGI will become responsible for more automated systems throughout different sectors. It won't be a consistent one-size-fits-all, as some businesses and industries will adopt different approaches and lean into it more than others.
In fact I think we'll see an oversaturation of AGI being haphazardly applied or thrown at the wall to see what sticks.
It wouldn't be until an ASI emerges that unification at some level would even be "possible".
Until that point, though, I personally do not see it "taking over". But that's just me.
matt_flux t1_irak5er wrote
Fair enough, I share the same view. Often manually setting up automation is more practical than AI, though.
3Quondam6extanT9 t1_irando4 wrote
We currently automate most systems through manual setup, so I can only assume this will continue until AI has developed enough to self-program, at least at a limited scale.
matt_flux t1_irat0b5 wrote
Pure speculation. How would the AI know whether it had improved or worsened its code? Human reports? If that’s the case, it will perform no better than humans do.
3Quondam6extanT9 t1_iravwfo wrote
You're right, it is speculation, and initially it would likely be no better than human influence.
However, limited improvement itself should be something you can write into code: a system that, at the very least, is given the parameters to analyze a set of options and choose the better one.
The AI that goes into deepfakes, image generation, and now video generation is essentially taking different variables and applying them to the outcome through a set of instructions.
So it wouldn't be beyond the realm of possibility to program a system that chooses among a small number of options, with the understanding that each variable outcome carries an improvement of some sort.
That improvement could be the speed at which it calculates projections, or the rate at which it grows its database.
Call it handholding self-improvement to begin with. I'd like to think that, over time, one could "speculate" that an increasingly complex system is capable of these very limited conditions.
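To make that concrete, here's a toy sketch of the kind of "handholding" loop I mean. Everything in it is hypothetical (the score function, the parameters, and the mutation step are stand-ins, not any real system); the point is just that "knowing it improved" can be an explicit, measurable objective rather than a human report:

```python
import random

def score(params):
    # Hypothetical objective. In a real system this would be a benchmark
    # you actually care about (projection speed, database growth, etc.),
    # not a toy formula with a known optimum at lr=0.01, batch=64.
    return -(params["lr"] - 0.01) ** 2 - ((params["batch"] - 64) ** 2) / 1e4

def mutate(params):
    # Propose one small variation of the current configuration.
    candidate = dict(params)
    candidate["lr"] *= random.uniform(0.5, 1.5)
    candidate["batch"] = max(1, candidate["batch"] + random.choice([-8, 8]))
    return candidate

# "Handholding" self-improvement: the system only ever picks between the
# current configuration and one candidate, and it only keeps a change
# when the measured score actually improves.
current = {"lr": 0.1, "batch": 32}
best = score(current)
for _ in range(200):
    candidate = mutate(current)
    candidate_score = score(candidate)
    if candidate_score > best:  # keep only measurable improvements
        current, best = candidate, candidate_score

print(current, best)
```

The obvious catch, and I think this is your point, is that someone still has to define score(); the loop is only ever as good as the objective a human hands it.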