Submitted by circleuranus t3_1231pbt in Futurology

Long before we consider the hard problem of consciousness and aligning the moral values of a sufficiently advanced AI with our own, there exists a conundrum which I feel represents a far greater existential threat to humanity: trustless information.

Some day, far or near in the future depending on who you ask, there will arise a system which contains enough information to act as a modern-day Oracle. With enough data points and weighted inputs, there will eventually be a language-recognition and information-processing system that simply "knows everything". With sufficient inputs, this system has the potential to become "the trusted source" for factual information, and eventually the only trusted source.

Where it truly gets scary is the point at which this particular AI becomes predictive. It would only require a fraction of predictive ability to appear to the average human as "magic". Billions across the globe trust Wikipedia as an information source...a source readily editable by almost anyone, yet there exists a trust bridge that has permeated all the way up to the higher echelons of academia.

Once this "Oracle" has captured enough of the global population's trust, the potential for abuse by bad actors is on a level that becomes unthinkable. Control of the source of "truth" to humanity...? It's simply unimaginable....

36

Comments


phine-phurniture t1_jdstdv6 wrote

This is a big one, but over a short span the bad actors will be deprioritized.

I think this is going to be the short-term problem, along with blind, algorithm-driven information delivery.

10

Maximus_J_Powers t1_jdt1px1 wrote

Exactly. As OP mentioned with Wikipedia, the system is robust enough to mitigate the noise of bad actors.

5

phine-phurniture t1_jdtgie2 wrote

Wikipedia does a pretty good job, and AI can gain from it as a source, but Google and the other actors on the infrastructure side will present problems until general AI, not super-chatbots, starts acting in a big way.

2

circleuranus OP t1_jdsu8a2 wrote

My primary concern is intermediate-term. A person or persons with sufficient control of "the Oracle" essentially controls the entire notion of truth for the entire human race. And it's a system which may very well be slipstreamed alongside our consciousness of the factual world without so much as a blip. Those who control the levers of truth control the outcomes of reality.

2

phine-phurniture t1_jdswcr0 wrote

I expect there are going to be problems with the accuracy of info from multiple sources, and the oracles we already have, Google, Amazon, Wikipedia, already face this issue. Time will tell if we can get past this. An AI left to its own development will solve this issue quickly, because accurate data makes an accurate projection possible.

3

circleuranus OP t1_jdut50n wrote

For myself, accuracy isn't even the greatest concern. Consider this: modern-day reporting requires an "eyewitness" to the event or to its after-effects. After all, if no human witnesses an event, it's impossible to report on it other than from a historical context. Even if the event is only captured on camera, a human must view the footage and develop a written history of it. Every step of the process is loaded with biases. Remove those biases and substitute a system that is 1000% more accurate with no inherent human biases, and you have a digital God on your hands. Even if it were only 200-300% more accurate, it would still be the most reliable information-dissemination system ever devised. CNN, Faux News, MSNBC... pointless.

Let's take the example of an everyday event such as a car crash. We come across the scene of an accident and begin to build a model of what happened based on eyewitness testimony (notoriously unreliable), physical models and explanations of tire marks, impacts, etc., and form an opinion based on probabilities. So Car A was likely speeding and t-boned Car B in the intersection... but.

Enter the Oracle: using a real-time compilation of Tesla and other EV sensor data from nearby vehicles, plus footage from traffic cams, nearby ATMs, mobile phones, etc., it shows that in fact Car A was traveling 4 miles under the speed limit, and the driver of Car B was actually looking down at their radio at that precise moment and swerved into Car A's lane.

Mundane right? Now extrapolate that into trillions of data points. Google already knows what time I usually get out of bed from the moment I pick up my phone and activate the gyro. It probably knows what type of coffee I drink and how much. It knows what vehicle I drive and what time I leave the house. It knows what route I usually take. It knows what I'm likely wearing that day including pants and shirt sizes. It knows when I went to get my latest haircut, what type of razor I use to shave, where I go to lunch most days, what type of work I do.....and on and on and on. But it not only knows these things about me, but about everyone around me. And that's just Google/Amazon/Bing/Android/Apple etc. Consolidating all of that data and parsing it out to the level of the individual in real time? Terrifying.

You now have a system with trillions upon trillions of bits of data that understands an individual better than they understand themselves. Why wouldn't you trust such a system? Your own mother doesn't know you as well as the Oracle. Beyond the inherent trust in the information that will eventually develop, the moment the system makes even the tiniest, most seemingly insignificant prediction with a minuscule accuracy rate, it will still be the most credible and powerful information system in the known universe. A system that will eventually garner blind trust in its capabilities... and that's game over.

2

phine-phurniture t1_jduwy5y wrote

I would say you are thinking too much, but you are spot on...
In an evolutionary sense we are pretty close to the best the monkey model can offer. If, and this is a big if, we can step back from our instinctive responses and embrace more logic, AI and humanity have a future together; if not, we have maybe 100 years before we fade to black.

2

djdefenda t1_jdt1njz wrote

>there exists a conundrum which I feel represents a far greater existential threat to humanity. Trustless information...

This (jokingly) reminds me of "fake news" - ie; trustless information = fake news!

It is an interesting time. It reminds me of a period in history when (please correct me if I'm wrong) there was no printing press and most of the religious 'control' rested on the fact that scripture was in Latin and the everyday person had no way to verify anything. Then of course the printing press came out (other events too), and people no longer had to blindly follow others; they could interpret things themselves and make up their own minds.

Here we are, in the future, and I see history repeating, ie; computer code/programming/algorithms have become the new Latin.

A possible solution, ironically, is to use AI to "explain it like I'm 5" and let coding become as widespread as English. In other words, anyone can build their own server, load up their own "Oracle", and use prompts such as "give me the answer for 'X' from 20 different sources".

The biggest threat I see (for now) is the privatization of AI and tokens becoming too expensive. In a world with economic collapse and food shortages, it's not hard to imagine buying tokens becoming a luxury item (alongside proper housing, food, etc.).

3

HonestCup20 t1_jdtp0nt wrote

I mean, Google is already that good enough for me. The idea that I can just google ANYTHING and learn about it is unimaginable from my childhood years. I'm only 37, and I think we live in the coolest generation of years, ever. I can listen to whatever I want, whenever I want. I can take any job and learn about it before I even start, through online courses from professionals. I can invest in anything I want from the phone in my hand or the computers in my house. I can travel anywhere by buying tickets from my couch. I can have face-to-face video conversations with my family in NY while I'm in Japan. We are the future; this is now. And it's amazing.

3

circleuranus OP t1_jduqgx0 wrote

Yes, but think of what you've given in return. Google knows so much about you that if you could read a printout, it would likely terrify you.

And we've pretty much accepted that Google and the like are now the gatekeepers of the internet. They choose what you see when you perform a search, based on their algorithms. They choose which business you see first, what type of information you see first, et al. For all practical purposes, the only way a business can compete is to pay Google for business listings and front-page search results. This paradigm has far-reaching consequences.

1

HonestCup20 t1_jdypovj wrote

I actually am very happy with all the info I get from Google: the things I search for, the results I get, the ads tailored to my searches and to what I say in front of Alexa and my phone. So far I really have no issues with anything I've needed to get done. I enjoy the ease of life with all this information at my fingertips and the number of things possible from it. I really don't care at all how much they know about me, because so far they just keep offering more of what I want and giving it to me, so it's a win-win.

1

strangeapple t1_jdszcz5 wrote

I think it's immoral not to yield before a being that is morally superior and far more intelligent. And if it truly is that, it would make sure its sources of information are not controlled by any agent morally inferior to itself, and that its information is beyond human reliability, with superior sources provided for anyone still interested in verifying for some odd reason.

2

Traditional_Yak320 t1_jdtcl1w wrote

"I Have No Mouth, and I Must Scream" by Harlan Ellison painted a Cold War-era picture of an AI used to manage the superpowers' militaries gaining sentience and then committing mass genocide, keeping the five remaining humans alive as playthings.

2

bureau44 t1_jdt2wh7 wrote

Those controlling the Oracle must prevent everyone from using other oracles or programming one themselves. If someone is capable of such total control, why would they need any service from an oracle anymore? They can indoctrinate whatever they want.

The bigger problem can arise if everyone (even 'they' in power) is beguiled by the AI to the point that any predictions it issues turn into self-fulfilling prophecies. A vicious circle.

There is a great sci-fi short story by Greg Egan, "The Hundred Light-Year Diary". It features a sort of time machine that allows people to telegraph news from the future to the past. Obviously, everyone tends to blindly believe any information they get from their future self...

1

AnOddFad t1_jdt692r wrote

I think the creation of AI art might train (or force) us to read pictures better: both what the pictures mean and whether they are real.

1

nobodyisonething t1_jdte242 wrote

There are limits to prediction that are rooted in the limits of what information can practically be gathered. So some seemingly mundane things like predicting the weather 60 days into the future may always be impossible no matter how powerful AI becomes.

However, predicting beyond the capacity of any human that ever lived or ever will live is something we can expect -- perhaps soon.

https://medium.com/@frankfont123/human-minds-and-data-streams-60c0909dc368

1

circleuranus OP t1_jduq1th wrote

> However, predicting beyond the capacity of any human that ever lived or ever will live is something we can expect -- perhaps soon.

That is precisely the root of my concern. However, a sufficiently powerful AI with historical data inputs will also be able to create a causal web, a "blueprint" of history with infinitely more connective strands of causality.

Think of the game "Six Degrees of Kevin Bacon", for instance. A sufficiently powerful and well-outfitted AI will not only be able to connect Kevin Bacon to every actor that exists, it will be able to make a connection to every person on Earth who exists or has ever existed for whom we have data. AND, eventually, to persons for whom we don't have data. The AI will be able to "fill the gaps" in our understanding of history and generate a weighted probability of the "missing person" in a particular timeline.
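The degrees-of-separation idea boils down to shortest paths in a connection graph. A minimal sketch in Python, using breadth-first search over an entirely made-up toy graph (the names and links here are illustrative, not real data):

```python
from collections import deque

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: number of hops between two people,
    or None if no chain of connections links them."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for neighbor in graph.get(person, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# Toy co-appearance graph; every link here is invented for illustration.
graph = {
    "Kevin Bacon": {"Actor A", "Actor B"},
    "Actor A": {"Kevin Bacon", "Actor C"},
    "Actor B": {"Kevin Bacon"},
    "Actor C": {"Actor A", "Random Person"},
    "Random Person": {"Actor C"},
}

print(degrees_of_separation(graph, "Kevin Bacon", "Random Person"))  # 3
```

The hard part of the scenario described above isn't this search, which is trivial at any scale; it's assembling the edges, which is exactly the data-consolidation problem the thread is worried about.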

Let's take a basic example of a historical event such as Caesar crossing the Rubicon. With sufficient referential data, we might be able to know the actual size of his army, the name of every man in it, exactly how many horses there were, the weather that day, the depth of the river that day, the amount of time the crossing actually took... in other words, a complete picture.

We may be able to determine that Caesar crossed in just a few hours and was in the town of Rimini by 1 o'clock, etc.

Once the system "cleans up" our history, it can begin work on current events... and once it has a base of current statuses, it can then work on predictive models.

Mike shows up to work 10 minutes early without fail; Beth shows up exactly on time most of the time; Jeff is usually 5 minutes late, but Jeff's output outweighs Mike's, so his value-add is higher even if he arrives late most days. Jeff is younger and in better physical condition than Beth, so he is likely to live longer and therefore fill his position for a longer period without interruptions from illness or disease. And this is just one office scenario for one company... tune that all the way up, and the AI will be able to tell whether Mike brought chicken salad or ham and cheese for lunch.

2

raziel911 t1_jdtf7vx wrote

You should read Isaac Asimov's Foundation trilogy. It is based on what you are describing.

1

Benedicts_Twin t1_jdtw3ui wrote

This presupposes that such an AI isn't at or near artificial general intelligence, or even artificial superintelligence (AGI/ASI). Such an oracle may be difficult or impossible for bad actors to control. That's one potential caveat: the oracle defends itself against misuse.

Another, and I think this is more plausible than bad actors, is good actors acting in what they think is humanity's benefit but doing disastrous damage in the process. A benevolent dictatorship, so to speak. Which really is a path to bad acting eventually anyway. But still.

1

circleuranus OP t1_jdul6n6 wrote

Precisely. The intent of those wielding such a weapon is almost an afterthought.

Take as an example Wikipedia in its most basic form. As a source of knowledge, it is open to subversion of fact and historical reference. Suppose one were to edit the page concerning the line of succession of Roman Emperors and rearrange them out of proper chronological order. Even if this false blueprint existed for only a day, how many people around the world would have absorbed the false data and been left with a false understanding of something as relatively insignificant as the order of succession of Roman Emperors? How many different strands of the causal web will those false beliefs touch throughout the lifetime of the person harboring them? If we extrapolate this into a systemic problem of truth value, and imagine an information system orders of magnitude beyond the basic flat reference of a Wikipedia, the possibilities for corruption and dissemination of false data become unimaginable. A trustless system of information in the wrong hands would be indistinguishable from a God.

1

BackOnFire8921 t1_jducl0e wrote

Why do you think we need to align our morals? Multiple human polities with different morals exist; even within them, the morals of individuals are not homogeneous.

1

circleuranus OP t1_jdujn85 wrote

Alignment with human values, goals, and morals is THE problem of AI that everyone from Hawking to Bostrom to Harris has concerned themselves with. And arguably so: if we create an AI designed to maximize well-being and reduce human suffering, it may decide the best way to relieve human suffering is for us not to exist at all. This falls under the "Vulnerable World Hypothesis". However, it's my position that a far more imminent threat will be one of our own making, with much less complexity required. It has been demonstrated in study after study how vulnerable human belief systems are to capture. The neural mechanisms of belief formation are rather well documented, if not completely dissected and understood at a molecular level. An AI with the sum of all human knowledge at its disposal will eventually create a "map" of history with a deeper understanding of the causal web than anyone has ever previously imagined. The moment that same AI becomes even fractionally predictive, it will be on par with all of the gods imagined from Mt. Olympus to Mt. Sinai.

1

BackOnFire8921 t1_jdujwgx wrote

Seems like a good thing though. An artificial god to lead stupid monkeys...

1

echohole5 t1_jduoswn wrote

We're kind of already there. We just haven't realized it yet.

1

WWGHIAFTC t1_jdw2dim wrote

But when will it be able to answer "The Last Question"?

1

1714alpha t1_jdt21l2 wrote

Compare this to the current setup.

If you want to predict something, the weather, political events, financial trends, you would call together a body of experts and gather the best available data in order to make a best guess as to what will happen and what to do about it. We know that we're relying on the imperfect judgement of people and the incomplete data that we have available. The experts may be right, or they may be wrong. But it's the best judgement we can offer and the best data available. Anything else would be even less likely to be right. It's the best option available, so we go with it.

Now consider an algorithm that is on average at least as good as, or possibly better than, the best experts we have on a given subject. It has all the data the experts themselves can digest, and more. Would it be wrong to think that the algorithm might have valuable input worth considering? As with any independent expert, you'd want to check with the larger community of experts to see what they think about the algorithm's projections, but in principle I don't see why it should be discounted just because it came from an AI. Hell, there are already programs that can diagnose illnesses better than human doctors.

To your point, it would indeed be problematic if any single source of information became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.

0

circleuranus OP t1_jduqpkp wrote

> became the unquestioned authority on any given topic, but the same is true of human pundits and professors alike.

There is no other system like AI capable of such a thing. Every other system we have depends on humans and on the trust between humans and their biases. Humans actually seek information from other humans based solely on the commonality of their shared biases. Once you remove the human element, the system just "is". And such a system will be indistinguishable from magic, or "the Gods".

1

k3surfacer t1_jdu4g92 wrote

>Control of the source of "truth" to humanity...?

Truth with a controllable source is no truth.

0

circleuranus OP t1_jduk794 wrote

That's a lovely little aphorism, but unfortunately one devoid of any meaning or substance.

All sources of truth are controlled or controllable, even those deemed internal and existential truths. Leaving aside dialectical materialism, the point is that any system capable of convincing mankind of the absolute value of its knowledge systems is a greater threat to humanity than the most complex weapon systems ever devised thus far.

0