abudabu t1_ivfbjwm wrote
Reply to comment by michaelhoney in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
> but have you even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI?
I do. That's part of the point I'm making. Either Strong AI holds that computation time matters, in which case it needs to explain why it matters, or it holds that it doesn't, in which case many, many processes could qualify as conscious.
Also, who is to say what a particular set of physical events means? For example, if you had a computer that reversed the polarity of its TTL logic, would the consciousness be the same? Why? What if an input could be interpreted in two completely different ways by doing tricks like this? Are there then two consciousnesses, one for each interpretation? Does consciousness depend on an observer's interpretation? The whole thing is shot through with absurdities.
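To make the polarity trick concrete, here's a minimal sketch (mine, not from the thread; Python, names illustrative). By De Morgan duality, the very same physical gate reads as AND under an active-high convention and as OR under an active-low one, so which computation "is being performed" depends on the observer's signalling convention:

```python
# Sketch: one physical voltage trace, two consistent logical readings.
# "Active-high" reads HIGH voltage as 1; "active-low" reads HIGH as 0.

def physical_gate(v_a: int, v_b: int) -> int:
    """A fixed physical device: output voltage is HIGH iff both inputs are HIGH."""
    return v_a & v_b

def read_active_high(v: int) -> int:
    return v        # HIGH voltage means logical 1

def read_active_low(v: int) -> int:
    return 1 - v    # HIGH voltage means logical 0

for v_a in (0, 1):
    for v_b in (0, 1):
        v_out = physical_gate(v_a, v_b)
        # Interpretation 1: under active-high, the device computes AND.
        a, b, out = read_active_high(v_a), read_active_high(v_b), read_active_high(v_out)
        assert out == (a and b)
        # Interpretation 2: under active-low, the SAME events compute OR.
        a, b, out = read_active_low(v_a), read_active_low(v_b), read_active_low(v_out)
        assert out == (a or b)

print("Same physical events: AND under one convention, OR under the other.")
```

Nothing about the physics picks out one reading over the other, which is exactly the problem for theories that identify consciousness with the computation.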
> yet those rituals resulted in (very slow!) intelligent predictions…
I can't see how to finish this sentence in a way that doesn't make Strong AI look completely ridiculous.
EscapeVelocity83 t1_ivh9svd wrote
Maybe many humans aren't sentient, since a robot can produce better conversation than them, do better at customer service, do better at factory work, etc.