
Outrageous_Apricot42 t1_j74jacv wrote

This is not how it works. Check out the papers on how ChatGPT was trained: if you use biased training data, you get a biased model. This has been known since the inception of machine learning.
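As a toy illustration of that point (made-up data, and nothing to do with how ChatGPT itself is actually trained): a simple classifier fit on labels that were systematically skewed against one group learns that skew as if it were signal.

```python
# Minimal sketch (illustrative only): a model trained on skewed labels
# reproduces the skew. All data and features here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# One binary "group" feature plus a genuinely predictive "skill" feature.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# The true outcome depends only on skill...
true_label = (skill > 0).astype(int)

# ...but the historical labels we train on were biased against group 1:
# half of group 1's positive outcomes were recorded as negative.
biased_label = true_label.copy()
flip = (group == 1) & (true_label == 1) & (rng.random(n) < 0.5)
biased_label[flip] = 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, biased_label)

# Same skill, different group -> different predicted probability.
print(model.predict_proba([[0, 1.0]])[0, 1])  # high, e.g. ~0.9
print(model.predict_proba([[1, 1.0]])[0, 1])  # noticeably lower
```

The model never "decides" to be unfair; it just faithfully reproduces whatever pattern is in the data it was given.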

9

FacelessFellow t1_j74nqc6 wrote

Is AI not gonna change or improve in the near future?

Is all AI going to be the same?

−4

Sad-Combination78 t1_j74y7wa wrote

Think about it like this: anything that learns from its environment is susceptible to bias.

Humans have biases themselves. Each person has different life experiences and weighs their own lived experience above hypothetical situations they can't verify. We create models of perception to interpret the world based on our past experiences, and then use those models to interpret new experiences going forward.

Racism, for example, can be a model taught by others, or a conclusion drawn from bad data (poor experiences due to individual circumstance). I'm still talking about humans here, but all of this is true for AI too.

AI is no different. It still needs to learn, and it still needs training data, and that data can always be biased. That is just part of reality. We have no objective book to pull from; we make it up as we go. Evaluate, analyze, and expand. That is all we can do. We will never be perfect, and neither will AI.

Of course, one advantage of AI is that it won't have to reset every 100 years and hope to pass on as much knowledge to its children as it can. Still, that advantage only shows up over long stretches of time.

6

FacelessFellow t1_j75215s wrote

So if a human makes an AI, the AI will have the human's biases. But what about when AI starts making AI? Once that snowball starts rolling, won't future generations of AI be far enough removed from human biases?

Will no AI ever be able to perceive all of reality instantaneously and objectively? When computational power grows so immense that it can track every atom in the universe, won't that help AI see objective truth?

Perfection is a human construct, but flawlessness may be attainable by a future AI. With enough computational power it can check, double-check, triple-check, and so on, to infinity. Won't that be enough to arrive at true reality?

1

Sad-Combination78 t1_j75312i wrote

you missed the point

the problem isn't humans, it's the concept of "learning"

you don't know something, and from your environment, you use logic to figure it out

the problem is you cannot be everywhere all at once and have every experience ever, so you will always be drawing conclusions from limited knowledge.

AI does not and cannot solve this; it is fundamental to learning
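here's a tiny illustration of that point (purely made-up numbers): a learner that can only sample one corner of the world converges confidently to the wrong answer, and gathering more data from the same corner doesn't fix it.

```python
# Sketch (illustrative only): limited observation, not limited compute,
# is what biases the conclusion. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

# The "whole world": two regions with different typical values.
region_a = rng.normal(10, 2, 1_000_000)
region_b = rng.normal(20, 2, 1_000_000)
world = np.concatenate([region_a, region_b])

print("true mean:", world.mean())  # ~15

# A learner that can only ever observe region A, however much it samples:
for n in (10, 1_000, 100_000):
    sample = rng.choice(region_a, n)
    print(f"estimate from {n} local observations:", sample.mean())  # stays ~10
```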

6

FacelessFellow t1_j757skq wrote

But I thought AI was computers, and I thought computers could communicate at the speed of light. Wouldn't that mean the AI could take input from billions of devices? Scientific instruments nowadays can connect to the web. Is it far-fetched to imagine a future where all the collectible data from every device could be perceived simultaneously by an AI?

1