DadSnare

DadSnare t1_ja8ibe5 wrote

That’s fine, but even in your post I’m seeing some easy-to-claim stuff that has no solid basis. Are you sure the programmers cannot explain why a chatbot errors out? Really? Also, who said anything about the emotional state of an AI? That’s hardly even possible because it doesn’t have an endocrine system. We may have strong emotions as much to help with memory formation and retrieval as anything else, and that’s not a problem for a machine. What’s a plausible way we get destroyed? Does AI own the corporations too? How do I lose power, internet, food, etc.? The nuclear terminator version seems impossible unless we’re going to talk about hacking brains and adjusting behavior like crazy people think is possible.

1

DadSnare t1_ja872si wrote

OP, I bet you’ve made some very life-altering assumptions. Go back over the things you are worried about and, instead of just buying into the fear, examine your beliefs and make an effort to build knowledge in the areas where those assumptions are made. For example, there’s no logical reason to believe that an AGI will go rogue and want to destroy humans, a commonly held belief on here. Just because a bunch of people are worried about it doesn’t mean they know jack shit.

0

DadSnare t1_j9z79il wrote

Check out how machine learning and complex neural networks work if you haven’t already. They work similarly to the way you describe moral limits and a liquid “hidden layer”: the network keeps recalculating weighted, biased sums across its hidden layers as it learns. It’s fascinating.
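To make the hidden-layer idea concrete, here’s a minimal sketch (my illustration, not anything from the comment above): one toy hidden layer computing biased, weighted sums with a nonlinearity. All names and sizes are made up for the example.

```python
# Minimal sketch of a single hidden layer: weighted sums plus biases,
# passed through a nonlinearity. Sizes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy input: 3 features for one example.
x = rng.normal(size=3)

# Hidden layer: 4 units, each with its own weights and bias.
W_hidden = rng.normal(size=(4, 3))
b_hidden = rng.normal(size=4)

# The "biased recalculation": weighted sum + bias, then tanh.
hidden = np.tanh(W_hidden @ x + b_hidden)

# Output layer maps the hidden representation to a single score.
W_out = rng.normal(size=(1, 4))
b_out = rng.normal(size=1)
output = W_out @ hidden + b_out

print(hidden, output)
```

Training would then nudge those weights and biases so the recalculated outputs drift toward whatever behavior is being rewarded, which is roughly the “recalculation” being described.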

2

DadSnare t1_j9wsnoh wrote

Let’s get more concrete. Regarding the first point of your argument, what would be an example of something AGI would want to do (and a good argument for why) that isn’t the second point, “to maintain a state of existence to accomplish things,” which is a human existential idea? We aren’t immortal, but it easily could be, and perhaps that distinction between the two intelligences, as a tangible possibility, is what makes a lot of people uncomfortable. Now, why would it want to destroy us on its own? Why would we want to turn it off?

1

DadSnare t1_j5s0nib wrote

20-30 years? Anything that requires a license in the blue-collar trades is a good place to start. I'd say "handyman," but people will have AR to help them do stuff to their homes. They won't have the specialized equipment to do many repairs, and johnnybot might not be able to recommend they mess with electricity, for example. edit: the trades also have unions that might fight for human workers' rights, which could take a long time to change, even with UBI, because surely working on top of that financial assistance is the way to move up, and I don't see that notion going away. Why the hell would any government want to work towards having an overpopulated mass of people that do nothing?

1