Submitted by gaudiocomplex t3_zxnskd in Futurology
CoolioMcCool t1_j23aev6 wrote
Reply to comment by PapaverOneirium in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Not a true AGI, but tools powerful enough to make a significant portion of jobs obsolete feel very close. Change has been accelerating (basically forever), and we now live in a world that is very different from a decade ago, whereas centuries used to pass in which not much changed and most people died in a world similar to the one they were born into.
It's definitely something we should think about before it is right around the corner, and it is plausible in our lifetimes (I guess depending on your age).
daveescaped t1_j23rrsm wrote
Most people view my job in purchasing as a series of binary choices between A and B where information is gathered on both alternatives and then the information is evaluated and a clear winner is selected. That could not be further from the truth.
Business is typically the activity of selecting among many mediocre options. What humans are good at is presenting the option THEY selected as the superior option when in truth, all options are mediocre. A good employee then ensures that the option they championed succeeds so as to bolster their claims about having selected the best option (and not because it actually was best). This isn’t to say that all options are equal. Some are better. But the determination of which is best is often very subtle. And the skill isn’t simply selecting the best option. It is expediting that option. It is ensuring the purchase is implemented properly.
I guess my point isn’t that my job is difficult. It’s that it is a combination of subtle decisions that the employees themselves are unaware they are making. How would you ever program activities that exceed the conscious mind itself?
How would AI sell a new car using persuasion? How would AI convince a patient they are going to be OK? How would AI mediate a messy divorce? How would AI help a student struggling to grasp a difficult concept?
Honestly, I think some folks imagine some jobs are just these constant analytical, objective choices.
CoolioMcCool t1_j23tmzs wrote
I think many folks underestimate AI. We can essentially program for outcomes and let the AI figure out how to get there.
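The "program for outcomes" idea can be sketched in a few lines: you score the result you want and let an optimizer discover the behavior, rather than hand-coding the steps. This is a toy hill-climbing example, not any specific AI system; the target value and step size are made up for illustration.

```python
import random

random.seed(0)  # deterministic for the example

def reward(x):
    # We only score the OUTCOME (closeness to a goal value);
    # we never tell the program how to reach it.
    target = 42.0
    return -abs(x - target)

def optimize(steps=10_000, step_size=1.0):
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        # Keep any random change that improves the outcome.
        if reward(candidate) > reward(x):
            x = candidate
    return x

best = optimize()
```

Real systems replace the random search with gradient descent or reinforcement learning, but the division of labor is the same: humans specify the goal, the machine finds the procedure.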
Sure, people will still be needed for a lot of stuff and for the foreseeable future they will be making the high level decisions and giving the AI goals, but it will still have the power to automate a lot of jobs.
We are incremental improvements away from convincing dialogue with humans; there go many phone-based roles (tech/customer support and sales). Driving (freight, delivery), factories, fast food, cashiers. All could easily be on their way out soon if we don't actively try to stop it. New roles will come up, but likely in much lower numbers.
daveescaped t1_j240pcx wrote
Those are pretty minor roles. Show me the AI that can provide useful marital advice.
CoolioMcCool t1_j266ly5 wrote
Pretty minor roles probably make up 50+% of the workforce.
What are all the people with no jobs going to do?
jackl24000 t1_j24sglg wrote
Yeah, but try to imagine any foreseeable future in which you'd turn it loose on, e.g., customer-facing tasks involving potentially disputed or ambiguous issues like warranty eligibility, spouting nonsensical corporate gobbledygook at your good customers, who are infuriated by the time it gets kicked to a human.
Or any other high value or mission critical interaction with other humans?
How do such systems that replace most human interactions with AGI deal with black swan events not in their training sets, like natural disasters, pandemics, etc.?
CoolioMcCool t1_j267f0m wrote
Ok, so the narrow AIs that are coming in the next several years will only be able to do the job 95% of the time. It'll still take a lot of jobs. What do we do with all of the people it replaces?
Honestly a lot of these replies read like people are threatened and being defensive "there's no way it could do MY job".
Cool. My point is that it will be able to do a lot of stuff and massively reduce the number of jobs that require people. What do we do about all of the unemployment?
jackl24000 t1_j26as9o wrote
Try reading it less as a worried worker bee's perspective and more as that of a manager or line executive worried about having to clean up messes caused by a possibly wrong cost-saving calculus. Just like today, having to backstop your more incompetent employees' mistakes or omissions.
And maybe we’ll also figure out the other AGI piece: Universal Basic Income, so everyone shares in this productivity boon if it happens, instead of it just creating a few more billionaires.
CoolioMcCool t1_j26ij1g wrote
As you hinted at, incompetent employees already make expensive mistakes. Once AI gets to the point where it makes less expensive mistakes, employers will be incentivised to replace the people with machines.
Driving is an easy example, humans crash, AI will still get involved in crashes, but if it is involved in significantly fewer crashes then it would seem almost irresponsible to have humans driving.
I think ultimately it just comes down to me having higher expectations of AI ability than others.
Have you played around with ChatGPT? I'd highly recommend it. It's pretty incredible, and a lot of its limitations are ones that have been intentionally placed on it, e.g. it doesn't have access to information from the last year or two, and there are certain topics it has been restricted from talking about (e.g. race issues and religion).
gaudiocomplex OP t1_j23r6xc wrote
Nah, people need to feel superior.