
Adghar t1_j67nhc0 wrote

Currently, artificial intelligence (AI) and machine learning (ML), which ChatGPT makes use of, are essentially the science of statistics applied at a very large scale.

If you take a sample of 10,000 English sentences, you expect to encounter certain patterns: maybe 3 of the sentences have "rock" after the word "the," and maybe 15 of the sentences with six or fewer words contain "I." Depending on how frequently these patterns appear, you can make predictions: if 9,996 of those 10,000 sentences have "rock" after the word "the" and you're given the word "the," you can predict that "rock" should follow.
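To make that concrete, here's a minimal sketch of the counting idea in Python. The sentences and the resulting counts are made up for illustration, not taken from any real model:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the 10,000-sentence sample.
sentences = [
    "the rock fell down",
    "the rock is heavy",
    "i like the rock",
    "the river runs fast",
]

# Count how often each word follows each other word.
following = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "rock" -- it followed "the" in 3 of the 4 sentences
```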

Now take this principle and scale it up enormously, using the most sophisticated pattern-finding machinery the company could build into the program. Feed it oceans of language from different contexts, paired with different prompts. It's then a matter of calculating, from the model, the most probable word to follow the previous one given the entire context (your question, the sentence, the paragraph, the conversation). At that point you can reasonably expect the program to "act like" whatever the training data was. And because the training data was well labeled and captured across many contexts, the program feels intelligent.
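A rough sketch of that generation loop, with a toy, hard-coded stand-in for the real model (the actual model assigns a probability to every word in its vocabulary, learned from the training data, conditioned on the whole context):

```python
# Toy stand-in for the real model: returns made-up probabilities for the next word.
def toy_model(context_words):
    if context_words[-1] == "the":
        return {"rock": 0.9, "river": 0.1}
    return {"the": 0.6, "is": 0.3, "heavy": 0.1}

def generate(prompt, model, max_words=5):
    context = prompt.split()
    for _ in range(max_words):
        probabilities = model(context)           # word -> probability, given everything so far
        best_word = max(probabilities, key=probabilities.get)
        context.append(best_word)                # the chosen word becomes part of the context
    return " ".join(context)

print(generate("i like the", toy_model))  # "i like the rock the rock the rock"
```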

50

thetomahawk42 t1_j68a1cj wrote

It's important to note that ChatGPT doesn't "understand" things the way we do, and doesn't "think." So it does tend to get a lot of stuff wrong.

That being said, it's quite a bit better than previous attempts at similar things.

10

paquer t1_j68een5 wrote

You forgot to add in the censorship and ideological parameters.

Does it just have a database of things it has to conform to / things it’s supposed to ignore?

i.e., most historical texts and all biology textbooks up to year x would tell you that men cannot get pregnant or menstruate. But in 2023, wouldn't ChatGPT tell you a man can menstruate and get pregnant?

And how does it deal with input data known to be false / lies?

Would it tell you George Santos is a Jew"ish" black Hispanic male whose family avoided the Holocaust by being born as white Christians in South America?

4

ixtechau t1_j68fka5 wrote

You're gonna be downvoted to oblivion for raising this issue, but you are 100% correct about the biases of machine learning. We will only ever hear about the biases the mainstream is interested in, though (e.g. face scanners not triggering on people with dark skin), and they will intentionally ignore anything that furthers their perceived reality.

3