[deleted] t1_j73y8gl wrote

You have to be joking! The bias that has been baked into this AI is overwhelming.

12

__OneLove__ t1_j741xyx wrote

Some just don't get it. I don't think most are even vaguely aware of just how many AI projects have been cut/canceled because, ultimately, 'we humans are training them,' and therefore AI (at least currently) inherits the same human flaws at this juncture. AI is moving fast & I fear too many are jumping on the AI bandwagon in full force prematurely, IMHO. ✌🏽

8

Fake_William_Shatner t1_j749s9d wrote

>The bias that has been baked into this AI is overwhelming.

You can fix these sorts of data models. It's likely the AI is SEEING the bias already in the system, rather than thinking like a human and obscuring the unpleasantness.

1

__OneLove__ t1_j74cv72 wrote

Hmmm...who exactly is 'fix[ing] these sorts of data models'? 🤔

2

Fake_William_Shatner t1_j74gyii wrote

Um, the people developing the AI.

To create art with Stable Diffusion, people find different large collections of images to get it to "learn from" and they tweak the prompts and the weightings to get an interesting result.

"AI" isn't just one thing, and the data models are incredibly important to what you get as a result. A lot of times, the data is randomized as it is learned -- because the order of learning is important. And you'd likely train more than one AI to get something useful.
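
That shuffling step can be sketched in a few lines (the function name here is illustrative, not from any particular framework):

```python
import random

def shuffled_epochs(examples, epochs, seed=0):
    """Yield the training set in a fresh random order each epoch, so the
    model doesn't pick up artifacts of one fixed ordering."""
    rng = random.Random(seed)
    for _ in range(epochs):
        order = list(examples)  # copy; the original list is untouched
        rng.shuffle(order)
        yield order

data = ["img_a", "img_b", "img_c", "img_d"]
epochs = list(shuffled_epochs(data, epochs=3))
# every epoch contains the same examples, just in a new order
```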

In prompts, one technique is to choose words at random and have an AI "guess" what the other words are. This is yet another "type of AI" that tries to understand human language. Lots of moving parts to this puzzle.
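
A toy sketch of that guess-the-missing-word objective (names are made up for illustration; real systems like BERT-style pretraining do this at scale, with a neural network doing the guessing):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Replace a random subset of tokens with [MASK]; return the corrupted
    sequence plus the positions/words the model must learn to guess."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            targets[i] = tok  # training target: recover the hidden word
        else:
            masked.append(tok)
    return masked, targets

sentence = "the data model learns from large collections of images".split()
corrupted, answers = mask_tokens(sentence)
```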

People are confusing very structured systems with neural nets, expert systems, deep data, and creative AI that uses random data and "removes noise" to approach a target image. The vocabulary in the mainstream is too limited to actually appreciate what is going on.

−1

__OneLove__ t1_j74pyql wrote

Respectfully, smoke & mirrors imo...

TLDR;

Um, the people developing the AI. 🤦🏻‍♂️

2

Fake_William_Shatner t1_j77i4ch wrote

>TLDR;

It's a really shitty thing about reddit that the guy who makes that comment gets more upvotes than the person attempting to explain. "Smoke and mirrors" -- which aspect of this are you saying that applies to? Be specific about the situation where they used AI to determine choices in business, society, or planning. These are all different problems with different challenges, and there are so many ways you can approach them with technology.

And this concept that "AI do this" really has to go. They differ more in their approaches than people do. They are programmed AND trained. There's a huge difference between attempts to simulate creativity, attempts to provide the most accurate response, and attempts to make predictions about cause and effect. The conversation depth on this topic is remedial at best.

AI can absolutely be a tool here. It just takes work to get right. However, the main problem is the goals and the understanding of people. What are they trying to accomplish? Do they have the will to follow through with a good plan? Do the people in charge have a clue?

0

__OneLove__ t1_j77m3wj wrote

Look, don’t take it personally; ultimately, you’re stating ‘people’ (known to be naturally prone to bias) are going to ‘program the bias’ out of AI (speaks for itself imo). That was exactly the point I was making & apparently other sub members agree. Simply put, it’s such a poor argument imo, to the point that I am not willing to sit here & read paragraphs of text to the contrary. I don’t state that to offend you (whom I don’t know), I’m just keeping it 💯 from my perspective. You are obviously entitled to your opinion as well, hence my keeping my response short/succinct vs. trying to convince you otherwise.

At a minimum, I might suggest not taking these casual internet discussions with strangers so personally. Nothing more than a suggestion…

Peace ✌🏽

1

Fake_William_Shatner t1_j77rmew wrote

>vs. trying to convince you otherwise.

Yes, that would require you to know more about what you are saying. "Succinct" would require you to actually connect your short observation to SOMETHING -- what you did was little more than say "Not true!", and people didn't like my geek answer and how it made them feel, so you got the karma. I really don't care about the karma; I care about having a decent conversation. I can't do that with "smoke & mirrors" when I could apply it to at least a dozen different aspects of this situation, and I have no idea what the common person thinks. And the idea that people have one point of view at a time -- that's foreign to me as well.

>At a minimum, I might suggest not taking these casual internet discussions with strangers so personally.

Oh, you think my observation about "this is a shitty thing" is me being hurt? No. It's ANNOYING. It's annoying that ignorant comments that are popular get upvotes. Usually I'm cracking jokes and sneaking in the higher concepts for those who might catch them -- because sometimes that's all you can do when you see more than they seem to.

I could make a dick joke and get 1,000 karma and explain how to manipulate gravity and get a -2 because someone didn't read it in a textbook.

However, the ability for people to think outside the box has gotten better over time, and it's not EVERYONE annoying me with ignorance, just half of them. That's a super cool improvement right there!

0

__OneLove__ t1_j77tajo wrote

Please, by all means, keep both proving my point & justifying my unwillingness to engage with this passive-aggressive drivel 🙂

...and yet this 🤡 continues to wonder/question why he warrants downvotes 🤔🤣✌🏽

1

Fake_William_Shatner t1_j78j1zv wrote

>why he warrants downvotes

Some people seem to think up and down votes prove the quality of the point being made. No, it's just the popularity in that venue at a given moment.

You could always explain what your comment meant. You don't have to, though. It's important not to take these comments too seriously. But, if you keep commenting on everything else BESIDES what you meant by "smoke and mirrors" then I will just not worry.

I have to commend you however on some top notch emoji usage.

1

__OneLove__ t1_j78jt4s wrote

Take care of yourself & have a nice life internet stranger. In the interim/simply put, I am blocking you. ✌🏽

1

JenMacAllister t1_j73zk7c wrote

It's easy to program out the bias. We have seen just how hard that is to do with humans. (over and over and over ....)

−3

__OneLove__ t1_j74d80j wrote

So who exactly is 'program[ming] out the bias'? 🤔

7

[deleted] t1_j740ayi wrote

Yes, you are technically correct. But around half of society lives in a place where feelings are more important than facts. Remember the AI that was profiling potential criminals? Well, that feely segment of society didn't like the factual outcome and the AI was pulled. You will never get an objective outcome while feelings beat hard facts.

2

Fake_William_Shatner t1_j74ao1i wrote

>Remember the AI that was profiling potential criminals?

Oh, it doesn't sound like you are the "rational half" of society either.

I can definitely predict the risks of who will become a criminal by zip code. Predicting crime isn't as important as mitigating the problems that lead to crime.

Feelings are important. If people feel bad, you need to convince them, or, maybe have some empathy.

It's not everyone being entitled. Some people don't feel in control or listened to. And the point of not having "bias" is that cold hard logic can create bias. If, for instance, you ONLY hire people who might 'fit the culture' in tech support -- then the bias would inherently reflect who already has tech support jobs and who already goes to college for it. So you have more of those demographics and reinforce the problem.

It's not necessarily LOGIC -- it's about what you are measuring and your goals. What is the "outcome" you want? If you ONLY go on merit, sometimes you don't allow people to build skills they don't yet have the merit to show. Kids with parents who went to college do better in college -- so, are you going to just keep sending the same families to college to maximize who logically will do better? No. The people enjoying the status quo already have the experience -- but what does it take to get other people up to speed? Ideally, we can sacrifice some efficiency now for some harmony. And over time, hopefully it doesn't matter who gets what job.
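
The feedback loop described above can be sketched as a toy simulation (names and numbers are purely illustrative):

```python
def simulate_culture_fit_hiring(counts, rounds):
    """Each round, hire one person from the group already most common in
    the workforce ("fits the culture"), so the starting skew compounds."""
    counts = dict(counts)  # copy; leave the caller's dict untouched
    for _ in range(rounds):
        majority = max(counts, key=counts.get)
        counts[majority] += 1
    return counts

# workforce starts skewed 8:2; ten "culture fit" hires later it is 18:2
start = {"group_a": 8, "group_b": 2}
result = simulate_culture_fit_hiring(start, rounds=10)
```

The "logic" here never once mentions group membership; the skew comes entirely from measuring candidates against the existing workforce.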

Society and the common good are not something we are factoring in -- and THAT looks like putting your finger on the scale.

1

[deleted] t1_j74hh9v wrote

Cancel the AI project, some dude on reddit can predict by zip codes. Well, I guess that one is done! (joking!)

Feelings are important? Yes they are, and that is why we should have real humans, with real families and real life experience, acting as judges and juries; my reasoning follows.

But the Tech sector DOES employ people who fit the culture, just not in the way you suggest. Take a wild guess at how many people employed in Silicon Valley vote the same way, feel the same about trans issues, feel the same about gun control, feel the same about Christianity, feel the same about abortion.

THIS is the key problem: the AI is being developed and maintained exclusively by this group. Let's say they make up half of the population -- where does that lead?

I feel AI is incredible, but I really think it needs to be given bounds: building better mousetraps (or cars, planes, energy generation, crop rotation, etc.), NOT making decisions directly for human beings.

−1

Fake_William_Shatner t1_j77j8u5 wrote

>Take a wild guess on how many people employed in Silicon Valley who vote the same way, who feel the same about Trans issues, who feel the same about gun control, who feel the same about Christianity, who feel the same about abortion.

They vote the way educated people tend to vote. Yes -- it's a huge monoculture of educated people eschewing people who ascribe light switches to fairy magic.

>THIS is the key problem,

No, it's thinking like yours that is the key problem when using a TOOL for answers. Let's say the answer to the Universe and everything is 42. NOW, what do you do with that?

>NOT making decisions directly for human beings.

That I agree with. But not taking advantage of AI to plan better is a huge waste. There is no putting this genie back in the bottle. So the question isn't "AI or not AI"; the question is: what rules are we going to live by, and how do we integrate with it? Who gets the inventions of AI?

It's the same problem with allowing a patent on DNA. The concept of the COMMON GOOD, and where this goes in the future, has to take priority over "rewarding" someone who owns the AI device some geek made for them.

1

JenMacAllister t1_j741tz5 wrote

Yes it did. Anything created by humans will contain the biases of those humans. However, others will recognize this and point it out so it can be removed in future versions.

I don't expect this to be 100% unbiased on the first or even the 100th version. I don't think all the humans on this planet could even agree on what that would mean.

But over time I'm sure we could program an AI to be far less biased than any human, and most humans would agree that it was.
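
One mechanical step in that direction is reweighting the training data so no group dominates what the model learns from; a minimal sketch, with illustrative names:

```python
from collections import Counter

def balancing_weights(groups):
    """Weight each example inversely to its group's frequency, so every
    group contributes equally to the training loss."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# three examples from group "a", one from group "b"
groups = ["a", "a", "a", "b"]
w = balancing_weights(groups)
# each "a" gets 4/(2*3) ≈ 0.67; the lone "b" gets 4/(2*1) = 2.0
```

This doesn't eliminate bias (the humans still chose what to collect and how to label it), but it is the kind of fix that can be applied once someone points the imbalance out.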

−1