Comments


Circlemadeeverything OP t1_j933wvt wrote

AI can also point out human rights abuses, exploitation, corruption in politics and other nefarious deeds. Corrupt people are TERRIFIED of A.I.

20

autotldr t1_j9345fk wrote

This is the best tl;dr I could make, original reduced by 71%. (I'm a bot)


> The United Nations rights chief on Saturday warned that recent advances in artificial intelligence posed a grave threat to human rights and called for safeguards to prevent violations.

> "I am deeply disturbed by the potential for harm of recent advances in artificial intelligence," UN High Commissioner for Human Rights Volker Turk said.

> "Human agency, human dignity and all human rights are at serious risk. This is an urgent call for both business and governments to develop quickly effective guardrails that are so urgently needed," he said.


Top keywords: human, rights, artificial, intelligence, call

5

Cool_calm_connected t1_j934hbo wrote

I disagree. AI will almost certainly be controlled to serve the powerful and the tyrants, not free citizens.

They are ahead of the curve. They know what's coming, and they are already planning with those in power to figure out how best to take advantage of new technologies to serve their own interests.

59

Shelbi_x t1_j935827 wrote

AI be like “too bad, so sad”

−1

nucflashevent t1_j937op2 wrote

I don't mean to be rude, but the UN couldn't find a hooker in a whorehouse... you'll pardon me if I'm a bit skeptical of their views on AI. /sarcasm

4

Circlemadeeverything OP t1_j939ye5 wrote

It can be used for either, good or evil. And the powerful and rich can't contain what they let out of Pandora's box, at least not forever. Basic technology could already expose a lot of exploitation and abuse. Even the IRS alone could already be using it to find red flags in the finances of the rich. It's a choice not to use tech for good.

But it's definitely a double-edged sword. Usually it's used by the greedy to exploit and to withhold the truth.

11

Circlemadeeverything OP t1_j93hx06 wrote

Let's ask ChatGPT: "Can AI help stop exploitation?"

“AI has the potential to help stop exploitation by improving detection and prevention measures, and by providing tools to support vulnerable populations. Some ways that AI could be used to help combat exploitation include:

  1. Identifying and monitoring high-risk situations: AI can be used to analyze large amounts of data to detect patterns and identify areas of high risk for exploitation. For example, AI could be used to analyze social media activity to detect potential cases of human trafficking.

  2. Improving detection of exploitation: AI can be used to analyze patterns of behavior and identify signs of exploitation that may be missed by humans. For example, AI could be used to analyze financial transactions to detect potential cases of money laundering or fraud.

  3. Providing support to victims: AI could be used to provide support to victims of exploitation by connecting them with resources and providing information on their rights. For example, chatbots and virtual assistants could be used to provide information and support to victims.

  4. Preventing exploitation: AI can be used to prevent exploitation by identifying and addressing the root causes of vulnerability. For example, AI could be used to identify areas of high unemployment and poverty and provide resources to help address those issues.

While AI has the potential to help stop exploitation, it is important to consider the ethical implications of its use and ensure that it is used in a responsible and transparent manner. Additionally, it is important to recognize that AI is not a silver bullet solution and must be used in conjunction with other strategies to effectively combat exploitation.”
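
For what it's worth, a minimal sketch of the second point in that list (flagging unusual financial transactions) could look something like this. The feature names, numbers, and choice of library are purely illustrative, not anything ChatGPT suggested, and a real anti-fraud pipeline would be far more involved:

```python
# Illustrative only: flag unusual transactions with an off-the-shelf
# anomaly detector. Features and thresholds are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend transaction features: [amount, hour_of_day, payments_per_week]
normal = rng.normal(loc=[50, 14, 3], scale=[20, 4, 1], size=(1000, 3))
suspicious = rng.normal(loc=[9000, 3, 40], scale=[500, 1, 5], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Train an Isolation Forest and flag roughly the most isolated 1% of points
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

flags = model.predict(transactions)  # -1 = anomalous, 1 = normal
print("Flagged transaction indices:", np.where(flags == -1)[0])
```

Even a toy detector like this only surfaces candidates for human review; it doesn't establish wrongdoing on its own.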

1

ActuatorMaterial2846 t1_j93i5ce wrote

I actually kind of agree. The transformer architecture isn't the complicated part; it's the neural networks held by large companies and governments, which are very expensive. It's easy to see such tech remaining in the hands of the powerful, but I'm not convinced that's going to be the case in the near future.

There are already proven examples of this technology being completely open source. Stability AI is already leaps and bounds ahead of DALL-E 2, for example.

When GPT and the chatbots get nerfed, it will drive more people to seek out open-source options. DALL-E 2 is a locked-down system and will likely be a paid platform, yet Stable Diffusion is open source and runs on a user's own backend. I'm not sure the big corps will be able to keep up.
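
To make the "runs on a user's own backend" point concrete, here's a minimal sketch using the Hugging Face transformers library with a small, freely downloadable checkpoint (gpt2 here just because it's tiny; bigger open models work the same way, they just need more RAM/VRAM):

```python
# Minimal sketch: run an open-source text model entirely on local hardware.
# The first call downloads the weights; after that it works offline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Open-source AI means", max_new_tokens=30)
print(result[0]["generated_text"])
```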

However, my concern is the sophistication of the neural networks that are no doubt classified and most definitely in the hands of government and military.

11

Circlemadeeverything OP t1_j93i6x6 wrote

How about sexual exploitation?

“AI has the potential to play a role in combating sexual exploitation by improving detection, prevention, and support measures. Here are a few examples of how AI can be used to address sexual exploitation:

Identifying and monitoring high-risk situations: AI can be used to analyze patterns of behavior and identify signs of potential sexual exploitation, such as patterns of communication or transactions. For example, AI could be used to analyze online activity to detect potential cases of sex trafficking.

Improving detection and reporting: AI can be used to detect and report sexual exploitation by identifying explicit content or suspicious behavior. For example, AI could be used to detect and flag explicit images or videos on social media platforms.

Providing support to victims: AI could be used to provide support and resources to victims of sexual exploitation, such as connecting them with hotlines or support services. For example, chatbots or virtual assistants could be used to provide information and support to victims.

Preventing exploitation: AI can be used to help prevent sexual exploitation by identifying and addressing the root causes of vulnerability. For example, AI could be used to identify patterns of poverty or abuse that may make individuals more vulnerable to sexual exploitation.

It is important to note that AI should be used in conjunction with other strategies to effectively combat sexual exploitation. Additionally, it is crucial that AI is used in a responsible and ethical manner, with appropriate privacy protections and consideration of potential biases in its algorithms.”

2

MasterFubar t1_j93jjgl wrote

And what does the United Nations rights chief have to say about the many grave threats presented by governments around the world? What safeguards does he propose to prevent violations by dictators and corrupt politicians?

25

Circlemadeeverything OP t1_j93m3iq wrote

Imagine putting a congressional spending bill into ChatGPT and asking it to flag the inequalities, exploitation, and pork.

Imagine if knowledge and A.I. were rooted in truth (gnosis is: we learn math, science, sports, etc. on a basis of truth).

The liars, cheats, and corrupt would have everything to be afraid of if that were the case.
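
A rough sketch of what that spending-bill idea could look like with the OpenAI API (the model name and bill text below are placeholders, and a real bill would have to be split into chunks that fit the model's context window):

```python
# Hypothetical sketch: ask a chat model to flag questionable items in a
# spending bill. Model name and bill text are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

bill_excerpt = """
SEC. 1234. $45,000,000 is appropriated for construction of a visitor
center adjacent to the Senate parking facility...
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are an analyst reviewing appropriations bills."},
        {"role": "user",
         "content": "List any earmarks, pork-barrel spending, or provisions "
                    "that appear to favor narrow interests in this excerpt:\n"
                    + bill_excerpt},
    ],
)
print(response.choices[0].message.content)
```

Of course, the output would still need to be checked against the actual bill text, since the model can make things up.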

2

MeatisOmalley t1_j93pd6m wrote

The technology can be written and run by anybody; AI is not exclusive to any one class. This assessment doesn't really make sense, imho. Yes, it will be used to 'serve the powerful', but to assume that it will be used exclusively for that purpose is false.

−1

LupusAtrox t1_j93ux2r wrote

Anything wielded by Capitalism is a serious risk to human rights. AI is just a risk multiplier that will allow for risk at an unforeseen scope and severity. But make no mistake, the ultimate core evil is Capitalism.

1

dont_ban_me_bruh t1_j93yd5n wrote

That is like saying everyone has access to computers; sure, but only the powerful have supercomputers, satellites, datacenters, and police who can kick open your door and take your laptop.

You just have your laptop.

8

MeatisOmalley t1_j9470oo wrote

Yeah, and millions of people have laptops that they can pool together to create a network that rivals the power of a supercomputer. That also ignores the fact that what runs on a supercomputer today might run on a single device in 20 years.

−2

Ok-Heat1513 t1_j94cp6l wrote

Lol and social media companies don’t?😂

1

MeatisOmalley t1_j94hpmq wrote

This isn't some radical idea. Decentralized networks have been around for decades, but I think you'll be shocked by how much we will be able to do on just local hardware. One of the best AI image generation programs can be run locally on a mid-range computer today, and it only takes a few gigs to install. That's because neural nets are space- and power-efficient relative to how much they are able to accomplish.
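
As a concrete example of the "few gigs, mid-range computer" claim, a minimal Stable Diffusion run with the diffusers library looks roughly like this (the checkpoint ID below is one commonly used open model, and the sketch assumes a consumer GPU with enough VRAM):

```python
# Minimal sketch: generate an image locally with Stable Diffusion.
# Weights are a one-time download of a few gigabytes.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # commonly used open checkpoint
    torch_dtype=torch.float16,         # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```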

Absolute worst-case scenario, a private company has its own servers and sells a product to users. This is already happening with AI, and it will continue to happen. It won't all be solely in the hands of the "powerful"; there are guaranteed to be open-source alternatives.

1

Disastrous_Court4545 t1_j94iq2s wrote

You're right that the hardware can handle these models. I'm not arguing that.

What I'm arguing against is the claim that millions of regular computers connected in a single network could rival the processing power of a supercomputer. Leaving aside the limitations of the network cables and networking hardware, the CPUs wouldn't beat a supercomputer unless you somehow connected enough cores into one unit and ran everything across all cores at once. Regular computers can't beat a supercomputer at what a supercomputer is designed to do.

4

worriedshuffle t1_j94v84h wrote

TL;DR: the UN rights chief said AI-powered weapons are bad.

I don’t think what he’s saying is wrong, but I don’t think there’s a bright line between weapons powered by computers vs the ones we have now.

Humans are going to create new weapons. We are dumb apes, it’s what we do. The best thing we can do to prevent problems in the future is to make sure aggression is punished. War must be completely untenable. For example, make Russia pay reparations to Ukraine to fix the damage they’ve caused.

1

neuronexmachina t1_j95cntq wrote

> However, my concern is the sophistication of the neural networks that are no doubt classified and most definitely in the hands of government and military.

Makes me wonder how adept the massively-parallel machines the NSA uses to crack encryption are when repurposed for training LLMs and other neural nets.

Or heck, if they secretly have a functioning quantum computer, there are probably some pretty crazy capabilities when combined with transformers, etc.

(I had a link to an article about quantum transformers, but the auto-mod ate it)

2

Bure_ya_akili t1_j95fein wrote

People act like current AI will rule the world, but they don't understand its lack of actual creativity. It connects dots; it doesn't create ideas.

1

almightySapling t1_j96k9kl wrote

I swear to god OpenAI released ChatGPT as some sort of weird psyop. People are somehow convinced that AI exists for the public and don't at all understand that it's a tool with a cost barrier, and like all other costly tools, only the rich will have access to the best ones.

2

Circlemadeeverything OP t1_j97beg6 wrote

Yes, but it's not A.I., it's basic coding and bots. They've been doing this since before home computers. Credit card companies were bragging they could tell with 99% accuracy who was going to get a divorce based on spending. They've been collecting data for a long time; it's just getting more widespread.

And it's up to consumers to push for laws to protect themselves, but our government only cares about protecting corporations and helping them exploit people in other countries.

It's sad to see the EU protecting its citizens' data a bit more than the US does.

1

Circlemadeeverything OP t1_j97daro wrote

So you say it spews BS. I say people believe BS. Now you say I'm claiming it can't be used for fighting exploitation? When did I say that? I also said it learns and grows from what we ask of it.

Where did I say A.I. doesn't have positive and negative consequences? You're the one who mentioned ChatGPT is BS. I just said many people can fall for BS.

On a side note, Google has been working with the FBI for years on data mining related to sex trafficking.

1

dont_ban_me_bruh t1_j97dwch wrote

> Now you say I’m claiming it can’t be used for fighting exploitation? When did I say that.

No, I'm saying: why did you bother posting a ChatGPT response about that, when it is just generating text that approximates real text but isn't actually true? Either you believe your argument that "AI can be good" and you are legitimately trying to use the BS text it generated to convince people, or you don't believe it and your comment is pointless and misleading for no reason (since it's BS text).

And what does Google working with the FBI to combat sex trafficking have to do with AI? That's not an AI-based initiative.

1

dont_ban_me_bruh t1_j97eocf wrote

> Someone asked how ai can be used to help with exploitation. So I asked ai.

Which is pointless, as it's not answering your question, it's generating text which approximates the way an answer would look.

edit: LMAO op blocked me for calling him out! wow!

1

jeremiah256 t1_j97r47n wrote

But even today, we have no problem finding exploitation or other wrongdoing.

Everyone on this thread could probably list out a dozen or more current and ongoing evils being perpetrated on the public.

The question will be: will humanity be willing to turn over human agency to an inhuman entity? Because unless the AI's judgement has teeth and can not only initiate actions against wrongdoers but also ensure justice is done, it's just one more channel or subreddit of information that can be ignored or discredited.

1