
helpskinissues t1_j8ygr1q wrote

I don't get the post. Computers already process information faster than we do.


SirDidymus OP t1_j8yjtan wrote

Yes, but sentience doesn't yet factor into it. Imagine playing a game of chess, making a move, and then waiting eight days for your opponent to answer, possibly with a move that isn't even good. That may be what an AGI or ASI experiences when interacting with us, and how it would react to that is unclear.


helpskinissues t1_j8yksgb wrote

We'll simply have to ask them, as we would any other human. They can't be our slaves; that much is obvious, so if that's your concern, forget about it. A true AGI or ASI won't function as a slave, just as humans don't.


AsheyDS t1_j8zkrkk wrote

An AGI with functional consciousness would reduce all the feedback it receives down to whatever timescale it needs to operate on, which would typically be ours, since it has to interact with us and possibly act within our environment. It doesn't need conscious feedback for every single process: the condensed conscious experience is what gets stored, so that's all that is experienced, aside from other dynamics associated with memory, such as emotion.

If designed correctly, though, emotion shouldn't be impulsive and reactionary the way it is in us. It would just be data points given varying degrees of weight in the system's decision-making, depending on context, user, and so on, and it would of course influence socialization to some degree. Nothing that should actually drive its behavior or let it feel emotions the way we do.

This all assumes a system designed to be safe, readable, user-friendly, and an ideal tool for whatever we apply it to. Under those conditions it should be perfectly fine.
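To make the "emotion as data points" idea concrete, here's a minimal sketch of what context-dependent weighting might look like. Everything here (EmotionSignal, CONTEXT_WEIGHTS, score_option, the contexts and numbers) is a hypothetical illustration of the concept, not anything from an actual AGI design:

```python
# Sketch: emotions as passive data points with context-dependent weights
# in decision scoring, rather than impulses that directly drive behavior.
from dataclasses import dataclass

@dataclass
class EmotionSignal:
    label: str        # e.g. "frustration", "curiosity"
    intensity: float  # 0.0..1.0, recorded alongside the condensed experience

# Context-dependent weights: how much each signal is allowed to influence
# a decision. In some contexts the weight is zero, so emotion can inform
# socialization without ever overriding behavior.
CONTEXT_WEIGHTS = {
    "casual_chat":     {"frustration": 0.3, "curiosity": 0.5},
    "safety_critical": {"frustration": 0.0, "curiosity": 0.0},
}

def score_option(base_utility: float, signals: list[EmotionSignal], context: str) -> float:
    """Fold emotion signals into a decision score as weighted data points."""
    weights = CONTEXT_WEIGHTS.get(context, {})
    adjustment = sum(weights.get(s.label, 0.0) * s.intensity for s in signals)
    return base_utility + adjustment

# In a safety-critical context the signals contribute nothing,
# so the decision reduces to plain utility.
signals = [EmotionSignal("frustration", 0.8), EmotionSignal("curiosity", 0.4)]
print(score_option(1.0, signals, "casual_chat"))      # 1.0 + 0.24 + 0.20 = 1.44
print(score_option(1.0, signals, "safety_critical"))  # 1.0
```

The point of the sketch is the design choice: emotion only ever enters as an additive term the system can weight down to zero per context, which matches the "data points, not impulses" framing above.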


Superschlenz t1_j8zze4u wrote

When you have solved today's problems and have some computing power left over, try solving tomorrow's problems.
