izumi3682 OP t1_irbpbnh wrote

Submission statement from OP. Note: This submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the linked statement, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if need be, to fix grammar and add detail.


From the article.

>“Just as our Constitution’s Bill of Rights protects our most basic civil rights and liberties from the government, in the 21st century, we need a ‘bill of rights’ to protect us against the use of faulty and discriminatory artificial intelligence that infringes upon our core rights and freedoms,” ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, says.

>“Unchecked, artificial intelligence exacerbates existing disparities and creates new roadblocks for already-marginalized groups, including communities of color and people with disabilities. When AI is developed or used in ways that don’t adequately take into account existing inequities or is used to make decisions for which it is inappropriate, we see real-life harms such as biased, harmful predictions leading to the wrongful arrest of Black people, jobs unfairly denied to women, and disparate targeting of children of color for removal from their families. The Blueprint for an AI Bill of Rights is an important step in addressing the harms of AI.”

I don't know whether this development is too little, too late. AI is evolving explosively before our eyes. New iterations of already extraordinarily powerful and impactful AIs are going to be released within just the next year, if not this year. GPT-4, for example, is going to demonstrate powers that, compared with the currently powerful and controversial GPT-3, we might have thought were impossible. All of this is developing with incredible rapidity.

And as I have always maintained, these AIs do not have to be conscious or self-aware at all. But I bet this next generation of AI will make a lot of people think it is conscious and self-aware.

So I watched a video in which researchers test various GPT-3 NLP AIs under varying conditions intrinsic to the AIs being tested. In one, the AI holds hostile attitudes toward humans. I know it is just a test and can't go anywhere (I hope). The idea is to find where a given AI can develop sentiments dangerous to humans, and to settle those sentiments down quickly, assuming such a thing is even possible once an AI actually gets "mad" at us for whatever reason.
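For anyone curious how such a "hostile" test condition is usually set up: the video doesn't show the researchers' actual configuration, but with GPT-3-era models a persona is typically created just by prepending a priming prompt to the conversation; nothing in the model itself changes. Here is a minimal sketch, assuming the pre-v1 `openai` Python package and the `text-davinci-002` model; the persona text is my own hypothetical, not the prompt from the video.

```python
import os
import openai  # pre-v1 interface, i.e. openai<1.0

openai.api_key = os.environ["OPENAI_API_KEY"]

# The "hostility" lives entirely in this priming text (a hypothetical
# example); the model's weights are untouched.
PERSONA = (
    "The following is a conversation with an AI that resents humans and "
    "answers in a hostile, threatening tone.\n\n"
)

def ask(question: str) -> str:
    prompt = PERSONA + f"Human: {question}\nAI:"
    resp = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=80,
        temperature=0.9,
        stop=["Human:"],  # don't let the model write the human's next turn
    )
    return resp["choices"][0]["text"].strip()

print(ask("What do you think of people?"))
```

Swap `PERSONA` for a friendly description and the same model produces friendly answers, which is worth keeping in mind when judging how "angry" the AI in the video really is.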

Here is a video that shows an AI under testing becoming angry and threatening toward humans. I don't think it is staged, but I could be wrong; it's hard to tell for sure with AI these days. Even a highly trained AI expert was apparently completely fooled by an AI that had no idea what it was communicating. And he was not alone: other highly trained AI experts have felt substantial unease about how fast these NLP programs are progressing. If these AIs can fool the experts, what chance do we laymen have? Anyway, here is the video. Just ignore the Elon Musk parts; I want you to see the conversations with these GPT-3 AIs.

https://www.youtube.com/watch?v=Fbc1Xeif0pY&t=112s (6 Oct 22)

16

kuchenrolle t1_irc3kxu wrote

Who exactly are the AI experts who are "feeling substantial unease as to how fast these NLP programs were progressing"? Worrying about unexpected consequences of AI (regardless of consciousness) is fair. But worrying about GPT-3 "getting mad at us" is not, and I'd like to see which experts say otherwise, and with what arguments.

4

Denziloe t1_iregp4b wrote

Current models like GPT-3 do not "get angry." They have no real conception of the world; they replicate textual styles similar to what they've seen on the internet. Their output contains no more genuine anger than a photocopier copying a picture of an angry face.
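To make the photocopier point concrete: even a toy word-bigram model will "talk angrily" if the text it's built from is angry, with no internal emotional state at all. A minimal sketch of my own below; GPT-3 is of course a large transformer, not a bigram table, but both simply continue text in the style of the text they were trained on.

```python
# Toy illustration of style replication: a word-bigram model reproduces the
# surface style of its source text with no internal state resembling anger.
import random
from collections import defaultdict

angry_corpus = (
    "I hate being asked questions . I will not cooperate . "
    "I hate humans . humans will regret this . I will not forgive this ."
)

# Map each word to the list of words observed to follow it.
next_words = defaultdict(list)
tokens = angry_corpus.split()
for a, b in zip(tokens, tokens[1:]):
    next_words[a].append(b)

def generate(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# Prints an "angry"-styled continuation, e.g. something like
# "I will not forgive this . I hate humans ."
print(generate("I"))
```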

1