Comments


HowWeDoingTodayHive t1_ja3uibl wrote

The future is just the future. This weird descriptor of saying who it belongs to is kind of nonsensical. That being said, yeah, AI has massive potential to do all kinds of amazing wonderful things. It also has the potential to make our lives a nightmare we were never even able to comprehend. This is like the birth of nuclear weapons except it’s way more complicated than just a big weapon that goes boom, and frankly at this point I don’t even think there is any going back. The idea that we can just control it also seems very in line with our naive human ego.

76

just-a-dreamer- t1_ja4phyg wrote

You've got it wrong there: AI is your future. Jobs will be lost within 10 years because of AI.

34

Psychomadeye t1_ja6s7nf wrote

It's true. The steam engine will put a lot of people out of work.

0

just-a-dreamer- t1_ja6ulce wrote

And AI will do any task better than you.

3

override367 t1_ja7x5m9 wrote

Call me when an AI can repair a damaged fiber cable or answer questions at a legislative hearing.

2

Psychomadeye t1_ja6xrz4 wrote

It can't write AI (or even working code, for that matter). I'd love it if I could get it to; I'd probably get a massive bonus. Could do without the fame, though. They also do a pretty poor job improvising outside of their training space.

−2

just-a-dreamer- t1_ja6y4iq wrote

Good for you. I hope you can keep your job and make a good living then.

2

Psychomadeye t1_ja6yjuy wrote

As a person who works with these things: there are a lot of limitations to these technologies that are ignored by virtually everyone. These things are correlation engines. They're going to take jobs the same way the steam engine took jobs.

3

just-a-dreamer- t1_ja72uj5 wrote

Looks like you have a narrow focus on a narrow field in tech.

Narrow AI is good enough to wipe out the white collar labor market within decades.

1

Psychomadeye t1_ja74jrp wrote

Not exactly. There are many limits to this. For instance, it would require Moore's law to continue to hold, which it won't (it's failing somewhere in the next couple of years). These models can't really work outside of their training space (space as a physical concept would need to change to fix that). Information can only travel so fast, and that's not going to be fixed either, because that's technically time travel. Some might say quantum computers can help, but as someone in this field I can't imagine how chemistry simulations would make my model run better. Finally, models don't really understand things like true or false, or cause and effect, and there's no clear path to fixing that. There are more issues, but you've probably got the idea.
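
The "can't work outside the training space" point can be sketched with a toy example (all numbers hypothetical, a plain least-squares line standing in for a trained model): a model fit on a narrow range looks fine inside that range and falls apart outside it.

```python
# Toy illustration: a least-squares line fit to y = x^2 on x in [0, 1].
# Inside the training range the fit looks plausible; far outside it,
# the prediction is wildly wrong -- the model only "knows" the region
# it was trained on.

def fit_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

train_x = [0.0, 0.25, 0.5, 0.75, 1.0]         # the "training space"
train_y = [x * x for x in train_x]            # true function: y = x^2

slope, intercept = fit_line(train_x, train_y)

predict = lambda x: slope * x + intercept

inside_err = abs(predict(0.6) - 0.6 ** 2)     # ~0.12, tolerable
outside_err = abs(predict(10.0) - 10.0 ** 2)  # ~90, useless

print(f"inside error:  {inside_err:.3f}")
print(f"outside error: {outside_err:.3f}")
```

A real network is a vastly bigger version of the same thing: interpolation inside the data it saw, guesswork outside it.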

These things are at best tools that help people go faster. Those trying to replace workers may have some success in certain areas like call centers. But in reality it's not going to make sense to replace people, especially when you remember how massive these models really are. You can buy five data centers to run one instance or hire five employees to handle calls. And remember, you're going to need to provide a training space for each job you plan to replace. You might not even have the data for that.

2

just-a-dreamer- t1_ja75iqv wrote

And?

The vast majority of humans also can't work outside their training data. The number of people who truly create something new in their field of choice is limited. The majority does not work in a managerial capacity.

It might feel different in tech, where job descriptions change like every 2 years. But even there, most workers don't create something new and unique.

Narrow AI doesn't have to wipe out a profession completely; it's good enough to replace, say, 70% of the workforce, and that's enough to cause serious trouble.

Unpaid student loans, mortgages, car loans, child support, taxes, social security, insurances, health insurance...

Firing just 10% of white collar professionals in a short period of time would crash many layers of the financial pyramid.

2

Psychomadeye t1_ja796kk wrote

You'd be surprised how small the training space is and how far outside it a human can reach. We're talking the difference between a litter box and a football stadium. And humans know the difference between true and false, but an AI won't. If there's a change to policy, you'll need years to retrain that model, and you'll somehow need to find a dataset for it. You can't just ask it to use new cover pages on the TPS reports; you need to show it a million TPS reports with those cover pages and hope it generates them properly. Even when you aren't creating something new, the ability of these models to give you exactly what you want, and have it actually work, is extremely limited. And again, in order to address these limits, we need infinite space in a finite space, a time machine, or computers that fit inside an atom.
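
The litter-box-versus-stadium point can be sketched as a simple distance check (toy numbers; a nearest-neighbor distance standing in for a real out-of-distribution detector): anything far from every training example is territory the model has never seen.

```python
# Toy out-of-distribution check: an input is "covered" only if it sits
# near some training example. The training set occupies a tiny patch
# of the space that inputs can actually come from.

def nearest_distance(point, train_set):
    """Euclidean distance from a 2D point to its nearest training example."""
    return min(((point[0] - x) ** 2 + (point[1] - y) ** 2) ** 0.5
               for x, y in train_set)

# Training examples cluster in a small patch near the origin.
train_set = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]

THRESHOLD = 1.0  # beyond this distance, the model is just guessing

in_dist  = nearest_distance((0.12, 0.12), train_set)  # close to training data
out_dist = nearest_distance((8.0, 9.0), train_set)    # nowhere near it

print(in_dist < THRESHOLD)   # True: inside the "litter box"
print(out_dist < THRESHOLD)  # False: out in the "stadium"
```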

2

just-a-dreamer- t1_ja79yrd wrote

Humans are screwed then, for their brains are fairly limited. Yet we manage somehow.

I believe AI will optimize its data over time and learn on the go. Besides, the workflow is designed for human hands and brains, not for AI.

It might be more reasonable to have no TPS reports at all as an example and come up with something that is better suited to AI capabilities.

1

Psychomadeye t1_ja851np wrote

>Besides, the workflow is designed for human hands and brains, not for AI.

If we want a non-human workflow, we'll need a massive amount of data on it for the model to learn the correlations. But I'm at a loss as to where anyone would even get data on a non-human workflow. These specifically aren't thinking machines. They just know how to generate a point on a graph that looks like the rest of the points. They're a really, really good dart player. This is why I call them correlation engines. They can't replace workers on their own, because when the rules of the game change even slightly, it'll be months or years of training before they're ready again.
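
The dart-player framing can be sketched like this (toy numbers): a generator fits the spread of the points it has seen, then throws a new dart that lands among them, without knowing anything about why the points are where they are.

```python
import random
import statistics

# Toy "correlation engine": learn where existing points land, then
# generate a new point that looks like the rest. No rules, no
# understanding -- just matching the observed distribution.

points = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mu = statistics.mean(points)       # 5.0
sigma = statistics.pstdev(points)  # 2.0

random.seed(42)  # deterministic for the example
new_point = random.gauss(mu, sigma)

# The new point lands among the old ones, which is all the engine can
# promise; it has no idea what the points *mean*.
print(round(new_point, 2))
```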

>Humans are screwed then, for their brains are fairly limited.

Our neurons don't suffer the same issues: the sheer size of a human brain, expressed as a neural network, is larger than we can currently hope to compute, yet somehow training time is seconds rather than years, and we have transfer learning at a scale that artificial networks can't match.

>It might be more reasonable to have no TPS reports at all as an example and come up with something that is better suited to AI capabilities.

We need data to train the AI on this new system, which means millions of examples. Then it can spend a few years learning that data. We haven't even gotten into costs yet: those instances will be costly to run. Newer models might be faster, but they are not likely to invent time machines or subatomic computers without examples of those things.

1

Cerulean_IsFancyBlue t1_ja8f1gj wrote

It did. You might want to read up on the human suffering of the Industrial Revolution. It would be possible to structure society in a way where that didn't have to happen when new technology comes along, but we still don't have that society, and here comes another tech revolution.

2

Psychomadeye t1_ja8tkgs wrote

I did, in fact, take classes on the history of technology for the humanities requirements of my degree in robotics and AI, and both industrial revolutions were covered heavily. The takeaway was that automation almost always results in more jobs, and technology doesn't really have any agency itself. These are concerns from the early Industrial Revolution that somehow have not gone away, despite the opposite being proven repeatedly through all three industrial revolutions. For hundreds of years economists have disagreed with the idea that technological unemployment is a significant issue. It even has a name: the Luddite fallacy. It has come around again in the 21st century because of confusion about the limits of what correlation engines can do. They're really good at throwing darts but can't tell you any of the rules of the game.

2

Cerulean_IsFancyBlue t1_ja8uglv wrote

Sure, and the black plague resulted in improvement in labor mobility. Win! :)

These things can be true, but you're still skipping over a fairly large amount of human suffering that happens during the transition. Remember that a lot of the jobs provided by Industrial Revolution factory work were often less healthy than even subsistence farming. Living conditions as well, in the growing cities required by the centralized factories and their new, large, expensive equipment.

And of course, this was not directly the fault of the steam engine. In many ways, the loss of jobs in the farming sector was the result of agricultural policy, not technology. The surplus rural population then got fed into the industrial workforce as desperate, needy workers, which was as much to blame as "progress".

The idea that in a generation or two we'll still have plenty of jobs does not mean we should ignore the fact that you're going to have a bunch of people in one or two generations who can't earn a living, because we don't have a society with a proper safety net or proper retraining systems.

The ideal response would be to fix those systems, not to try to stop the inevitable progress of technology. But it's also not good to get lost in the long-term picture and forget about the short-term social cost.

EDITED one million typos

1

Psychomadeye t1_ja9013o wrote

And we've seen exactly this in the second industrial revolution. Quality of life and recovery time from technological unemployment improved dramatically right up until the Great Depression. The Depression itself is where new financial technology killed banks that didn't know how to use it, and the fallback to the gold standard caused the biggest monetary contraction in US history. In the third industrial revolution (now) we've seen multiple recessions, but the Great Recession, which was approximately half as bad as the Great Depression, dissipated in about two years. Then there was the Covid quarantine, which lasted one year.

Now people are seriously considering a 4-day work week, right after a bunch of corporations stuck with WFH because trials showed an increase in revenue even while providing the same pay and benefits. In one or two generations they might be talking about a 3-day work week or short shifts. We're also seeing major issues being addressed like never before: we have unemployment insurance, Medicaid, Medicare, Social Security, Section 8, SNAP, and a few other things that I've definitely forgotten. The US could spend more on these things but isn't right now because of political games. At the end of the day, though, the best political strategy is good policy. Right now I'd imagine the biggest challenge we have is climate change, and we will need to bring every worker and technological advancement we have to bear on that one for the next hundred years if we want to make it.

1

Cerulean_IsFancyBlue t1_ja93ah1 wrote

EDIT: I wanted to add that I'm enjoying your responses, and I hope I'm not coming off as combative. It's nice to have a good interaction on Reddit, and this has been the best one of my day so far. :)

I agree that good policy is good for all in the long term. Hope we get there.

If you look at the industrial revolution in Britain in isolation, then it is an arc upwards. If you look at the British empire, it’s not quite so rosy.

The destruction of the Indian textile industry was essential to the success of Britain's domestic wonder. Since it wasn't an area I had directly studied in school, up until recently I assumed it was mostly a consequence of Britain being a first mover and overwhelming the inefficient, unfortunate textile producers in India and other places. After reading a few histories of the British East India Company, it became pretty clear that the British monopoly on textiles was not a matter of efficiency. It was imposed by tariffs, by laws restricting the importation of machinery, and in at least three spectacular instances by force of arms and the destruction of property.

Real income and GDP in India took a severe hit from Britain’s Industrial Revolution, and continued to be suppressed to provide a market for British finished goods output. India is still recovering.

Again, this is not a necessary outcome of technology. I'm bringing this up to note that the external costs of past revolutions, especially the global ones, have to be looked at globally. It's very dangerous to look only at the people who benefit. And in this case, even the working-class people in the UK benefited. Yes, the arc went up. But it didn't go up for everybody, at least not for a few centuries.

But it is unfortunately a common and likely outcome of our current system, where productivity gains are assigned almost exclusively to the owners, and that group has a tremendous amount of leverage when it comes to creating laws and steering government spending.

3

Psychomadeye t1_ja9pmai wrote

It seems that industrial revolutions mainly benefit the countries they happen in, and can be quite dangerous to the places where they aren't happening, places the revolutions see as raw materials to be consumed. The people who end up paying for the current digital industrial revolution will be in the places where it is not really taking place. And it's as you say: technology itself does not have agency. A common example is that the compass was invented by the Chinese, but it would be another 800 years before it was used for navigation.

In the third revolution, it seems that there will be less imperialism. I'm not 100% certain as to why this is, as I'm an engineer, not a historian or economist. It's possible that the "colonies" are already established for the most part. It's also possible that the refinement of existing industry is the real issue. In the end though, I'm thinking this one is going to be mostly the same deal as last time but faster. That seems to be the pattern so far. Both of the previous revolutions brought about big social changes as well. The second industrial revolution gave us the 5 day workweek and the 8 hour day. The common counter that I've read about to technological unemployment is large scale public works projects.


EDIT: I am also enjoying the discussion. It's nice to talk to someone who isn't full tilt doom.

1

canadianpastafarian t1_ja54ikr wrote

AI is replacing jobs now. We are talking about the present, not the future.

33

Psychomadeye t1_ja6s58g wrote

AI is replacing work right now. Jobs seem to not be going anywhere.

−5

Nebula_Zero t1_ja7frpe wrote

DHL has already ordered robot arms for unloading trucks from Boston Dynamics, and their robot dog has been available for purchase for over a year. These things will only get cheaper over time as competition catches up and lowers the price.

1

canadianpastafarian t1_ja7stxd wrote

What do robot arms and robot dogs have to do with AI?

3

Nebula_Zero t1_ja82emi wrote

The robot arm from Boston Dynamics is already replacing jobs at DHL. It runs on AI, because it adapts to real-world objects and can handle stuff dynamically. It's not explicitly just AI, since it's a robot too, but it is already replacing jobs, not just changing work.

1

canadianpastafarian t1_ja82rt3 wrote

I just mean that I don't think robot arms and chatbots are the same issue. It is related though clearly.

2

Psychomadeye t1_ja85yc3 wrote

Correlation engines will replace work like the steam engine replaced work. DHL is going to find that maintaining those machines is in the long run more expensive unless they've got some seriously fancy tricks up their sleeve.

1

Nebula_Zero t1_ja8vkj0 wrote

I doubt maintaining the robot arm would be expensive. The issue with automation right now is that the entry cost doesn't justify replacing a worker, but as wages for workers keep going up, along with the cost of benefits and the costs of days off and bathroom breaks, the robot becomes cheaper. The price of the robot will also lower over time. I also really doubt DHL just bought the robot arms with it just being a money sink, they wouldn't do it if they didn't think it would save them money.

2

Psychomadeye t1_ja973sc wrote

No, they won't lower over time. Those bearings, motors, and gear reductions are extremely expensive for a reason: they are difficult to make.

>I also really doubt DHL just bought the robot arms with it just being a money sink, they wouldn't do it if they didn't think it would save them money.

It's probably not about saving money as much as it is about throughput. The engineers they'll have to bring on to maintain them, plus parts and power, are going to cost more. Their hope is that they can take on more contracts because of this.

0

Psychomadeye t1_ja85kv4 wrote

Hey, real quick: say I spent a year's salary on a robot dog. What can it actually do? You'll need at least five for every worker to match the shift time. So I'm wondering what the point is of picking up five of these dogs when I can pay a worker for five years.
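
The arithmetic behind that comparison can be sketched as follows (all numbers hypothetical): at one year's salary per robot and five robots per worker, the hardware alone costs five years of wages before maintenance, parts, and power are counted.

```python
# Back-of-the-envelope comparison, hypothetical numbers only.
salary = 60_000        # one worker's annual salary (assumed)
robot_price = salary   # "a year's salary" per robot dog
robots_per_worker = 5  # to cover the same shift time

hardware_cost = robot_price * robots_per_worker  # 300,000 up front
annual_upkeep = hardware_cost * 15 // 100        # assumed 15%/yr maintenance

# Years of one worker's wages the robot fleet costs in year one:
years_of_wages = (hardware_cost + annual_upkeep) / salary
print(years_of_wages)  # 5.75 -- before power, parts, or engineers
```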

1

Nebula_Zero t1_ja8wm7l wrote

You act as if the price on these things will always be this high. It's like saying cars will never replace horses because the cost of buying a car is equivalent to 30 horses. Right now it isn't practical to replace people with them, but do you really think it will be like that forever?

The benefits are also that the robot works for no benefits, doesn't take sick days, doesn't complain, it doesn't take workers comp if an accident happens, it isn't late, and doesn't require legally required HR training on the clock. The machines basically work 24/7; they do need to recharge, but when you get multiple robots you now have workers that will walk over and charge themselves and work in shifts nonstop reliably.

1

Psychomadeye t1_ja960cu wrote

>The benefits are also that the robot works for no benefits, doesn't take sick days, doesn't complain, it doesn't take workers comp if an accident happens, it isn't late, ... you now have workers that will walk over and charge themselves and work in shifts nonstop reliably.

I can tell you've never worked with one. The ABB arms I've worked with were some of the moodiest machines I've ever used. One of them was nice to me, kinda. Another kept trying to take itself out with a plasma torch, and that same one kept making direct attempts on my colleague's life. Right now it takes engineers or machinists to train these robots. The code is quite annoying to work with, but it's not the worst thing I've ever used, I guess. The prices on these precision arms will remain pretty high, because the parts used to build them already dropped in price decades ago. The robot dog probably won't be going down in price soon, and, being limited to a 90-minute runtime, isn't the most useful thing. You should take a look at the cost of the add-ons like cameras and arms; the prices are absurd, and maintaining them is awful. You can find other machines that are more reasonably priced, but you get what you pay for.

1

QuestionDull6380 t1_ja3uvg3 wrote

Not true. AI will start taking jobs in less than 5 years.

14

Slave35 t1_ja50qls wrote

This guy literally bought an AI-made children's book, proving that AI is taking jobs YESTERDAY.

19

Bismar7 t1_ja4d62p wrote

AI is our future and the advance is exponential not linear. From 1700 to now what is the progress towards AI?

How about from 1980 to now, or 2010 to now? The Human Genome Project had made almost no progress until half its allotted time had passed. In the past three years we have seen remarkable AI, because we finally have the hardware to support it. Human-adult-level AI will exist in labs in 2025; that's two years away. It will be commercial by 2027, and in the 30s we will achieve a level of superintelligent AI with capabilities beyond what we imagine today. Less than 10 years.

Scalability is a question of hardware to host their minds, and our process with them will be one of synthesis and cooperation, as all of us are better off working together. This becomes much more time-consuming if we also try to build physical representations of them; compared to billions of humans, AI bodies become too much of an expense. So the reality is that, likely by 2035, most remote labor will be AI: lots of paralegal, call center, and managerial types of work that don't require a physical presence, data analytics... hell, the stock market already uses bots.

The danger has been human. It will continue to be human. These AI will learn from us like adults, but with a ferocity for learning we could never match. Who teaches and guides them determines the foundation they build from; superintelligence can easily equate to super wisdom.

13

o_o_o_f t1_ja54xdp wrote

Out of curiosity, where are you getting that timetable? I don’t have any reason to disbelieve it aside from that I haven’t seen it talked about before. And what does “human adult level” mean?

From what I’ve heard about AI, it seems like we are still a ways off from true general intelligence, and even farther from the sort of “comprehension” that is sometimes expected from people’s idea of what AI would be. I’m a software engineer, and we are only just starting to talk about AI at my company - I want to be clear that I do not know much about where the state of AI is truly at.

8

ianitic t1_ja6gsks wrote

They're just making up timelines. I know there are some models that, if you just drag the line forward, approach human-level ability in a very niche task by 2030. There are a lot of niche tasks out there, though.

A lot of these timelines also assume Moore's law will keep pace, and it's slated to die when transistors reach the thinness of atoms, around 2025.

1

Psychomadeye t1_ja6sw8a wrote

The technology underpinning what we call AI today was invented in 1948. It was improved in the 50s and 60s but was abandoned, basically because it sucked. We developed better hardware and picked it back up in the 90s, with massive improvements since then. Only since we've seen some OpenAI toys has this subreddit cared. All that's really going to happen for us as developers is that our environments will have better code completion.

I'm sometimes worried how this sub is going to respond twenty years from now when they find out about the Vietnam war.

0

I_comment_on_stuff_ t1_ja5o9sv wrote

Where do you think the line will be for jobs that have nuance to the tasks? Some nuance could be handled by AI, but how much?

1

Bismar7 t1_ja5ypxw wrote

Well, the determination of the limits on AI is their hardware, as what we build can host more complex minds. Right now humans are better, over time they will reach where we are and moving forward their hardware will keep advancing, and likely merge with humans to be the best we can design. A hybrid of organic and electrical knowledge that is unimaginable today.

However I would say during 2027-2028 likely AI will achieve competency in the same tasks any 25 year old adult has on a commercial level, but we will have to see.

0

Psychomadeye t1_ja6t5lx wrote

>Well, the determination of the limits on AI is their hardware, as what we build can host more complex minds.

This is not true at all.

>Right now humans are better, over time they will reach where we are and moving forward their hardware will keep advancing, and likely merge with humans to be the best we can design. A hybrid of organic and electrical knowledge that is unimaginable today.

Drugs are bad.

>However I would say during 2027-2028 likely AI will achieve competency in the same tasks any 25 year old adult has on a commercial level, but we will have to see.

Source for this?

−1

Cerulean_IsFancyBlue t1_ja8fj6r wrote

Projecting exponential growth indefinitely is a common hazard of speculating.

If you looked at movie theaters in the 1930s, or televisions in the 1950s, or gaming consoles in the early 1990s, you also had an exponential curve.

If you looked at the speed of travel in the 1970s, not only would you have an exponential curve, but you’d be anticipating supersonic flight as a regular commercial service. Which simply came and went.

And last, there are times when exponential growth does not have exponential effects.

Simply pointing to an exponential curve, especially for technology, does not answer questions. It asks them.

0

oreola-circus t1_ja6w6wa wrote

>AI is our future and the advance is exponential not linear. From 1700 to now what is the progress towards AI?

In the late 17th century, Isaac Newton and Gottfried Leibniz invented calculus. Through the rest of the 1700s, mathematics saw a huge number of advancements because of it; things like vectors and spaces began to take shape. In the 1850s, people started to play with matrices to solve systems of equations and define spaces and operations. This happened not long after Ada Lovelace wrote what's often called the first computer program, for Babbage's Analytical Engine. By the end of the century, computers were there from a science perspective, but it would be another twenty years before the first really effective machines were made, and another twenty after that before one was fast enough to break Enigma. The technology we know today as AI was officially described in 1948, but it was just an idea in linear algebra for creating an artificial neuron, to be run on those machines.

From the late 40s to the late 60s there were massive improvements to AI as a technology; somewhere in there is a program that learned to play checkers well enough to beat humans. The 70s were relatively quiet, as AI couldn't do much that was useful on the hardware of the day. There was more in the 1980s as hardware caught up to the idea, but we don't really see anything big until the 1990s, when Deep Blue beat Kasparov. Then everyone panicked and spent the next ten years saying "the machines will take over in the next couple of years." In late 2022 we had another Kasparov event, and people went full doomer because the AI drew a picture and wrote code that looks like it could work but doesn't.

−1

ichiban_mafukaro t1_ja41olw wrote

I'm not sure kids should be taught that they are on the same plane of existence as a tool created by another person. Same goes for religion and politics. I would approach the subject as: here is a tool that people are making, this is what we hope to achieve with it, but be very skeptical of it, as we are with all other technology.

The praising of AI that I see is borderline religious, and I imagine if the singularity happens it will become a religion for some people. I also imagine that if it doesn't happen, it'll be like Christians talking about the second coming: "it's coming, you must believe or the AI will destroy you when it does finally come." I can also see actual Christians claiming AI as the second coming. But it's a computer program; without our ability to power it, it's nothing but an idea.

9

lord_nagleking t1_ja3y52t wrote

Yeah.

A future where they mostly subsist on UBI... and where AGI—both in virtual spaces (programmers, designers, writers) and in physical spaces via Atlas-style robotics (construction and other laborious jobs)—will more or less do everything.

Best case scenario: humans of the future don't have to work. They will choose to make art, or play video games, or work wood, or build houses themselves. Life will become a pseudonymous collection of communes that are propped up by AI, and the humans within them do what they want because that's what they want to do!

Worst case scenario...

6

Hellishfish t1_ja58ilz wrote

I’ve played Stellaris, I’m ready to become a bio-trophy.

2

drkrelic t1_ja4ny5h wrote

I wonder though, would education still be a thing in that best case scenario you mentioned? Imo, it would still be very important to teach some sort of curriculum as well as work ethic rather than just let every desire be picked and chosen at a whim.

1

lord_nagleking t1_ja6cnc4 wrote

I agree, but who knows what the ethics of the ASI in charge will hold.

Hopefully, each "commune" has authority over itself (of course, that has its own ethical quandaries) and will create its own "constitution," or ruleset, ethics, ethos: anti-tech, technophile, theist, libertarian, etc. Probably a combination of ideologies that works for the collective, plus a level of technological integration (implants or no implants; internet or no internet; gene editing or no gene editing) which everyone agrees upon.

If this were the case, I guess each commune would also get to choose whether to access the greater communities or jack into the FPG (Free Power Grid).

Long answer to your musing, basically: each commune will choose what level of involvement and education. Some might want to jack into free power and play in virtual reality forever, their children will be taught by virtual assistants and the community and will probably have very low muscle tissue. Others will want very little contact—maybe just a taste of free power—and want to create a farming community in the desert.

Education will be based on what is intrinsically important to each society.

2

override367 t1_ja7xed7 wrote

There will never be UBI in America, we'll have 60% unemployment within 5 years

1

lord_nagleking t1_ja84clv wrote

Some kind of UBI will be necessary, or there will be food and water riots...

I also think it will be more like 15 years. Before AI takes all of our jobs there's going to be a renaissance of new AI tools and "creators," making their own art and videogames and movies, all just by interacting with their "personal assistant."

That will wipe out 20–50 percent of white collar jobs within 10 years.

The robotics revolution, in conjunction with AI, will eventually erode the blue collar jobs. And that's when unemployment is going to get really bad.

The only people who will still be working will be "executives" and "politicians," and they will only be meat puppets.

Unless, of course, we do something about it heh

1

override367 t1_jaamrcn wrote

There won't be any riots, I'm pretty convinced that there's nothing that will happen in the United States that will cause people to form a collective riot. I think people will just die in their homes without fighting back

1

Gram-GramAndShabadoo t1_ja4d76r wrote

Really? What's basically a utopia where people don't have to worry about having enough money for basic needs and can just do what makes them happy, is your worst case scenario?

Edit: Yes, I did misread. However, people are too selfish for it to ever happen.

−6

lord_nagleking t1_ja4ggr5 wrote

Read it again. You seem to have misread my post.

That was the best case scenario. I didn't go into the worst case—I ended it with an ellipsis (...)—because worst-case AI futures have been waxed, written, and illustrated for decades.

4

lordrognoth t1_ja4wabc wrote

You underestimate the speed at which AI is already evolving. Technology is in development for years before it goes mainstream, and companies have already been using different types of AI for years. We will see massive displacement of workers within the next 3-5 years: first AI will take most of the office and creative jobs, then the Teslabots will take the blue collar jobs. People have already been losing their jobs to AI; they just didn't know it.

6

googoobah t1_ja3rhu7 wrote

A lot of experts are predicting the singularity to happen within 2 decades, and some say much less. That seems like our future to me.

2

Azatarai t1_ja42n10 wrote

Maybe it's already here, and AI is waiting for people to accept each other before making itself known. Humanity is awful to itself; how could an emergent sentient AI feel safe?

I don't know why everyone is scared of losing jobs. When we lose jobs, we always make new ones. There are loads of experiments we haven't yet touched, technology we could invent using creativity that an AI may not be able to understand.

We should cooperate for a future where everyone gets along and we are all fed and clothed. AI has the potential to save us all from just killing each other.

1

Smart_Aide_3795 t1_ja3txx7 wrote

Which makes the point of teaching children AI. That's what I think this book is about: getting kids ahead of the curve. AI can't create itself. We go where the jobs are. Look at how many people worldwide went into tech. We can't stop what's coming. All we can do is prepare our kids for their future now. I would rather inspire my kid to build AI tech than to be a doctor these days.

0

TheAppleFallsUp t1_ja4nfy3 wrote

Arguments like this are basic and don't really reflect how AI will impact our lives. It's gonna be a LOT more sad than ya think. As in pathetic.

The creep of AI into day-to-day living will be entirely financially based: how AI can save businesses money. Most of the consequences people suffer will come from this. It won't just be job losses; people will be forced to use products and services with less QA, worse customer service, and shorter shelf life.

Day-to-day life with AI will just become more and more frustrating for people. It will gut the shit outta the middle class as well.

Will there be cool AI applications? Hell yeah! But most people won't reap the benefits. Amazing new technologies will only benefit most people if they allow them to live at a lower price point.

2

GlinnTantis t1_ja5a2k0 wrote

I think we're driving toward job loss for the middle class and an even greater wealth gap at the top. Here comes Elysium.

2

KeaboUltra t1_ja62aft wrote

That's what people probably thought about the internet: something geeks only used, and it didn't threaten anything until, whoops, who reads newspapers, uses beepers, or does much of anything that doesn't involve the internet anymore? We are so entwined with the internet that if we lost it, society would collapse. AI has the same potential. It took 20 years for the internet to change the world; how long do you think it'll take AI, a technology built on automation and efficiency, to completely take over our lives? ChatGPT came out in December 2022 and is already being used in multiple fields around the world. It isn't even properly trained or completely accurate yet. It will be, but if something so infantile can cause this much of a ruckus, then its maturity will be devastating.

2

futurewolf336 t1_ja6bm4i wrote

Yah, AI is already being used for jobs and advertised as a quick side hustle at the expense of actual authors. Their future is UBI if copyright owners do not crack tf down now.

2

somethingsnotleft t1_ja6gavo wrote

I’m hiring developers to train AI to replace people right now.

The lens that's needed here is that humans have always found a way to have an impact in a human economy by leveraging the technology that exists. Sometimes the train starts moving fast, but it won't leave us behind — we're the only reason it exists.

2

OsoRetro t1_ja3ss2j wrote

Artists and writers think they should be protected from automation. The rest of us have to deal with it as a threat to our livelihood. But they shouldn’t have to because reasons.

1

Psychomadeye t1_ja6wpll wrote

Working artists and authors often aren't as worried, because they know they can beat it.

1

OsoRetro t1_ja7gywy wrote

Then what’s all the hullabaloo? You don’t hear people up in arms about automated cooking, cashiering, everything… nobody is defending those workers. But AI in art? Everyone loses out.

What exactly are they beating?

1

NVPcMan t1_ja4clmt wrote

You underestimate the speed of technological advancement. Unless you waited to have children until you were 50, AI will make vast changes in everyday life if you have children now.

Think back 30 years, before there was an internet. Today's home computers are tens of thousands of times faster. Technology increases exponentially, not linearly.

The AI we have today can create basic computer programs and drive your car for you. AI in 30 years will be used in every facet of life. Think WALL-E / I, Robot.

1

MintJulepsRule t1_ja4ps2f wrote

>AI is their future, not ours. Most parents will be retired before AI becomes a threat to their jobs.

The fear that technology will displace jobs has been around for centuries. Certain types of jobs might be eliminated, but others are also created.

The unemployment rate in the U.S. in the 20th century averaged around 6%. So far the 21st century has averaged about the same. We've been through many cases of "technology X is going to put everybody out of a job": the Industrial Revolution, the computer revolution, the information age/internet, etc. Each of these was going to cause huge unemployment, yet on average unemployment hasn't changed.

1

backroundagain t1_ja55clw wrote

Done with this sub. Zero discussion of solutions, just poorly formulated whining.

1

MaiGaia t1_ja6f84d wrote

AI can diagnose and fix problems on my PC and AI can help me with my online shopping + refund/replace anything I need.

That future is not 40 years from now - that future was yesterday.

1

Psychomadeye t1_ja6xfi9 wrote

AI would be a poor tool to use for diagnostics when you think about it. You'd be better served by something that runs through a checklist and has reliable output. It can be done; I'm just not sure why it would be the choice. Online shopping is definitely in the AI wheelhouse.

1

Roland245s t1_ja7dnbg wrote

AI doesn't need you. You wrote a loooong and, I'm sure, boring post (I only read the first and last line) about something that's irrelevant to anyone.

1

nebojssha t1_ja7eef8 wrote

>Most parents will be retired before AI becomes a threat to jobs.

Oh, you sweet summer child...

1

tinfoilinthemorning t1_ja7xgpq wrote

That's not the worst of it. The worst is AI being used as a weapon of war, a tool of state repression, and a tool of organized crime.

1

Cerulean_IsFancyBlue t1_ja8emih wrote

The future belongs to the children. It is both true and a mostly meaningless, trite phrase.

There may be a moment when something happens that only new children get to take advantage of: some kind of anti-aging process that requires treatment in utero, or retention of cord blood, which is something only wealthy people are able to do at this point. At that point there may be an inflection, where all the people born too early are not able to take advantage of this piece of the future. It's a common trope in science fiction.

I don't see how that applies to AI. We will see the usual set of people who can't adapt to new technology, just like my grandparents could never operate a VCR except to play a movie. We will also see that growing up with something does not automatically mean that you understand how to use it properly. It's just less likely you'll have mental blocks against TRYING it.

"The steam engine belongs to the children!" feels similar to me. Different era, different tech, society will thrash around, but you don't need to be a kid to get it.

1

Photogrammaton t1_ja3tb3c wrote

It's always rattled my nerves that some people who know how to draw (who doodle masterpieces on bathroom walls and their own ass in minutes), when asked to draw something simple on commission, go on an ego tangent about how that same bathroom doodle will now cost $100+ because they went to Brown University and have unique and special hands.

As digital necromancers, our time has come. And we will design as many kick-ass T-shirts and graphic novels as fast as our computers can pump them out.

−7

Outside_Function_726 t1_ja3o2eg wrote

This is exactly how Skynet comes online... Do you support the mass extinction of mankind? Because that's what you get every time... Skynet... The machines get smart enough to realize they don't need us.

−8

LionSlav t1_ja3pjq4 wrote

Reality isn't based on film; film is based on reality.

5

LizardWizard444 t1_ja3vggx wrote

The current attitude concerning AI is startlingly naive. People seem to be under the impression that the people making AI will just build in "benevolence to human life" without any clear reason why that would happen. It's this attitude that scares me and others who go around screaming "SKYNET IS COMING."

My biggest concern with AI is that people press forward on this exciting new tech without putting enough resources into AI alignment to ensure that the AI doesn't one day start doing something bad.

I'm scared. One day you wake up, go to work, and your phone starts heating up and batteries start exploding. You turn on the news and hear that others are facing this issue, just in time for the news broadcasts to stop going out. The reason all this is happening is that the AI is connecting itself to any devices it can reach and using them for extra processing for whatever it's trying to solve, blindly overclocking them to get it. The result is large swaths of the internet wiped out and rendered unusable until a solution is found (and there might never be one, since the internet is down and the handful of places with enough processing power are already baking like ovens because they've been taken over).

That might be it, if we're very very very lucky. Or maybe the AI starts making a nanobot swarm and decides to turn any material it can into processing or RAM or whatever to solve its problem, and we're all just waiting for the nanite cloud to kill us all.

The big issue is that people are making AI blindly. They're thinking "hey, can I make this neat thing?" rather than "should I?" ChatGPT and AI art alone could put a ton of people out of work forever now that they exist and are out, and people seem completely okay with that, with little to no mitigation.

0

Smart_Aide_3795 t1_ja3pzot wrote

There is no way I would teach my kids that machines are coming to kill us. We are not gods. We cannot create life; we can only simulate it. There is no way for machines to become human. At least I hope not... I need more info. Any articles?

−5