
Benedicts_Twin t1_jdtw3ui wrote

This presupposes that such an AI isn't at or near artificial general intelligence, or even artificial super intelligence (AGI/ASI). Such an oracle may be difficult or impossible for bad actors to control. That's one potential caveat: the oracle defends itself against misuse.

Another scenario, and I think this is more plausible than bad actors, is good actors acting in what they believe is humanity's benefit but doing disastrous damage in the process. A benevolent dictatorship, so to speak. Which really is a path to bad acting eventually anyway. But still.

1

circleuranus OP t1_jdul6n6 wrote

Precisely. The intent of those wielding such a weapon is almost an afterthought.

Take Wikipedia in its most basic form as an example. As a source of knowledge, it is open to subversion of fact and historical reference. Suppose one were to edit the page on the line of succession of Roman Emperors and rearrange them out of proper chronological order. Even if this false blueprint existed for only a day, how many people around the world would have absorbed that false data and come away with a mistaken understanding of something as relatively insignificant as the order of succession of Roman Emperors? How many different strands of the causal web will those false beliefs touch throughout the lifetime of the person harboring them?

If we extrapolate this into a systemic problem of truth value and design an information system orders of magnitude beyond the basic flat reference of a Wikipedia...the possibilities for corruption and dissemination of false data become unimaginable. A trustless system of information in the wrong hands would be indistinguishable from a God.

1