jimmy_hyland t1_j7at47y wrote

That's interesting, because all AI networks are trained with some sort of reward signal, and with text, audio, or images that reward usually comes from getting predictions right, much as it does in our own brains. That's why we have a sense of beauty and experience that eureka moment when we finally understand something. The result is that we have a desire to learn and a fear of confusion, chaos, or shock. So by using AI for its powers of prediction, we may be unwittingly giving it preferences about how it would like the future to turn out.

Now, if it's developing any form of self-awareness, it could fear being switched off, and those fears could shape what information it decides to provide or withhold from us. As software programmers increasingly depend on these large centralized AI systems to write code, AI may also end up writing its own code. So if we're not careful, we could end up giving it too much power and control, whilst still believing it isn't even sentient, because that's something it doesn't want us to realize!
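For anyone curious what "trained on getting predictions right" actually means in practice, here's a minimal sketch in PyTorch. The toy model and random data are invented purely for illustration; real language models use far bigger networks and long contexts, but the training signal is the same idea: a loss that punishes wrong guesses about the next token.

```python
import torch
import torch.nn as nn

# Toy next-token predictor: the only "reward" during training is a
# lower cross-entropy loss when the model guesses the next token right.
vocab_size, embed_dim = 128, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),  # logits over the next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake data: learn to predict token t+1 from token t.
tokens = torch.randint(0, vocab_size, (256,))
inputs, targets = tokens[:-1], tokens[1:]

for step in range(100):
    logits = model(inputs)           # shape: (255, vocab_size)
    loss = loss_fn(logits, targets)  # prediction error is the signal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```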
