Neburtron t1_irtti55 wrote

The only way I know of to make an AI is to give it a goal. That's how we evolved, and although there are emergent behaviours, every behaviour can be traced back to that goal. We would need to deliberately try to make a conscious AI, or build a huge number of AIs and integrate them at a massive scale, for it to happen by accident.
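
In current ML terms, "giving it a goal" just means defining an objective function and optimizing against it. A minimal sketch with toy data (everything here is illustrative, not anyone's actual system), where every behaviour of the trained model traces back to one loss:

```python
# Minimal sketch of "give it a goal": the goal is just a loss function,
# and training is whatever drives that loss down (toy data and model).

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
y = 3.0 * x + 1.0                          # the behaviour we want the model to learn

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # gradient of the mean-squared-error goal
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b
# Every behaviour of the trained model is explained by this single objective.
```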

Whatever the morality, it doesn't matter whether something is conscious, as long as it's built in the right context. House elves want to be enslaved. We can craft a scenario where an AI wants to work, and works alongside humans, in a way fundamentally similar to us wanting to eat or drink.

Conscious AI also isn't that useful. Sure, maybe if you want a model to develop a sense of self, or you decide to use very complex training for a particular model, but that's decades away optimistically, and it would be a very niche application of the tech.

3

Neburtron t1_irtmgk6 wrote

That makes no sense. We don't know the specifics of human intelligence, but we do know where it came from and why it appeared. Current AI is nowhere near that.

Then, you are designing the scenario, i.e. what it learns from. If you don't want it to pause the game, don't let it. If you don't want an AI to try to kill people, make that a fail case in training. This is an oversimplification, but the basic solution is extremely applicable.
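
One hedged way to read "make that a fail case in training" is reward shaping: the undesired action ends the episode with a large penalty, so the policy learns to avoid it. A minimal sketch, with a hypothetical environment and policy interface:

```python
# Sketch only (hypothetical env/policy names): an undesired behaviour is
# treated as a fail case by terminating the episode with a big negative reward.

def run_episode(env, policy, forbidden_actions, penalty=-100.0):
    """Roll out one episode, punishing any forbidden action."""
    total_reward = 0.0
    obs = env.reset()
    done = False
    while not done:
        action = policy.act(obs)
        if action in forbidden_actions:   # e.g. "pause_game"
            total_reward += penalty       # fail case: large penalty
            break                         # end the episode immediately
        obs, reward, done = env.step(action)
        total_reward += reward
    return total_reward
```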

It works a bit differently with modern image-generation AI, because it isn't simulated via rules: one AI is scored on how close it can get to real images, and a second on how well it can recognize a real one. The only problem there, deciding what to feed them, is handled automatically.
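
That describes an adversarial (GAN-style) setup. A minimal sketch of the idea in PyTorch, with assumed network sizes and hyperparameters, not anyone's actual training code:

```python
# Minimal GAN-style sketch: one network generates images, a second scores
# how "real" they look, and each is trained against the other.

import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                              nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):                      # real_images: (batch, image_dim)
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator: learn to recognize real images and reject generated ones.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce images the discriminator accepts as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```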

Everyone is scared because people have repeatedly built stories on the fear of the unknown that AI can supply. What we're actually building are optimized programs for compressing images and predicting how text will continue. Be more scared of automated social engineering, misinformation, hacking, and other real threats.

1