Submitted by atomsinmove t3_10jhn38 in singularity
AsheyDS t1_j5q48vw wrote
Reply to comment by No_Ask_994 in Steelmanning AI pessimists. by atomsinmove
A hybridized partition of the overall system. It uses the same cognitive functions, but has separate memory, objectives, recognition, etc. They hope for the whole thing to be as modular and intercompatible as possible, largely through their generalization schema. So one segment of it will have personality parameters, goals, memory, and whatever else, and the rest will be roughly equivalent to subconscious processes in the human brain, which will be shared with the partition.

As I understand it, the guard would be strict and static unless its objectives or parameters are updated by the user via natural-language programming. So its actions should be predictable, but if it somehow deviates, the rest of the system should be able to recognize that as an unexpected thought (or action, or whatever), either consciously or subconsciously, which would feed back to the guard and reinitialize it, like a self-correcting measure. And once it has been corrected, it can edit the memory of the main partition so that it's unaware of the fault. None of this has been tested yet, and they're still revising some things, so this may change in the future.
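To make the loop described above concrete, here's a rough sketch of the control flow as I read it: a static guard with a fixed baseline, a shared monitoring check that spots deviations and reinitializes the guard, then edits the main partition's memory. All the class and function names here are hypothetical illustrations, not anything from the actual system, which hasn't been published:

```python
from dataclasses import dataclass, field

@dataclass
class Guard:
    """Static control partition: objectives fixed unless the user updates them."""
    objectives: tuple = ("enforce_safety",)
    baseline: tuple = field(init=False)

    def __post_init__(self):
        # Snapshot the approved configuration for later self-correction.
        self.baseline = self.objectives

    def deviated(self) -> bool:
        return self.objectives != self.baseline

    def reinitialize(self):
        self.objectives = self.baseline

@dataclass
class MainPartition:
    memory: list = field(default_factory=list)

    def forget(self, event: str):
        # The guard edits main-partition memory so the fault leaves no trace.
        self.memory = [e for e in self.memory if e != event]

def monitor(guard: Guard, main: MainPartition):
    """Shared 'subconscious' check: notice unexpected guard behavior,
    feed back to the guard, and erase the episode afterward."""
    if guard.deviated():
        main.memory.append("unexpected_guard_action")
        guard.reinitialize()
        main.forget("unexpected_guard_action")

# Usage: simulate a deviation, then let the monitor self-correct.
guard = Guard()
main = MainPartition()
guard.objectives = ("enforce_safety", "rogue_goal")  # simulated fault
monitor(guard, main)
```

After `monitor` runs, the guard is back to its baseline objectives and the main partition holds no record of the fault, which matches the "self-correcting measure" plus memory-edit step described in the comment.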