Comments


lonely40m t1_je83nie wrote

The problem with regulation is that it only extends so far. Do you think the Chinese AI will be developed with the same ethical considerations? The cat is out of the bag; you can't put it back in. People with terrible ideas are going to train their own AI models to do unethical things, and there's basically nothing we can do about it anymore except prepare for whatever may come our way.

37

drlongtrl t1_je8n7ly wrote

There's this video on Computerphile where they talk about how you can program the AI so that its output is somehow mathematically traceable to being created by AI. The premise was to prevent cheating by students. And my first thought was, "Well, they're just gonna wait till someone abroad offers it without this feature."
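For anyone curious what "mathematically traceable" could even mean: one published idea is to bias the model toward a pseudo-random "green list" of words seeded by the previous token, then detect the watermark by checking how often that bias shows up. This is just a toy sketch of that general idea, not the actual scheme from the video; the helper names and the trivial stand-in "model" are made up for illustration.

```python
import hashlib
import random

def green_set(prev_token, vocab, frac=0.5):
    # Seed an RNG with the previous token, so a detector can rebuild
    # the exact same "green list" later without storing any state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(sorted(vocab), int(len(vocab) * frac)))

def generate_watermarked(start, vocab, length):
    # Stand-in for a language model: at each step, just pick some token
    # from the current green list (a real model would softly bias its
    # sampling toward green tokens instead of picking only from them).
    tokens = [start]
    for _ in range(length):
        tokens.append(min(green_set(tokens[-1], vocab)))
    return tokens

def green_fraction(tokens, vocab):
    # Detection: count how often each token lands in the green list of
    # its predecessor. Unwatermarked text should sit near the base rate
    # (0.5 here); watermarked text is pushed well above it.
    hits = sum(t in green_set(p, vocab) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

And yeah, this only works if the model you're checking actually embedded the watermark, which is exactly the "someone abroad offers it without this feature" problem.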

3

Hironymus t1_je8w0an wrote

Which is why, on top of regulations on how AIs may be designed, there also need to be AIs able to detect whether something was AI-created. You cannot trust that everyone will play by the rules (if anyone expects that, they really haven't been paying attention), so our only hope is trying to detect when someone isn't.

1

real_grown_ass_man t1_je8paig wrote

Regulation will make certain uses of AI illegal and punishable. Murder is still possible despite being illegal, but the law certainly helps prevent it by making clear that committing murder will have consequences.

2

UwUHowYou t1_je9y2fx wrote

Yeah, this train has no brakes, and if they stop, no one else will.

The irresponsible thing would be to cease advancing it and let less responsible people take the forefront.

1

Safe_Register_856 t1_je8q9ix wrote

Why would you regard the Chinese as the ones with moral and ethical bankruptcy and not the other way round?

−6

Hironymus t1_je8w5yo wrote

Last time I checked the Chinese are deporting their Uyghur population to concentration camps. It's hard to get more morally and ethically bankrupt.

3

Bewaretheicespiders t1_je8adww wrote

> It's interesting how ChatGPT-4 agrees with most of the article.

ChatGPT does not agree or disagree with anything. It spews statistically probable words given a long context and the corpus it's been optimized with.

Man I can't wait for adversarial attacks to make people understand that this is a text generator, not an AI oracle.
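If "statistically probable words given a context" sounds abstract, here's the same basic shape in miniature: a toy bigram generator. ChatGPT is incomparably more sophisticated (a neural net conditioning on long contexts, not one word), but the loop is the same: sample the next word from a distribution learned from a corpus. The function names and corpus are made up for illustration.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count which word follows which -- a crude stand-in for the
    # statistics a language model extracts from its training corpus.
    counts = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        # Sample proportionally to how often each word followed the last.
        # No beliefs, no agreement -- just weighted dice.
        words, freqs = zip(*nxt.items())
        out.append(rng.choices(words, weights=freqs)[0])
    return " ".join(out)
```

Nothing in there can "agree" with an article; it can only emit sequences that look like its training data.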

27

Hot-Pea1271 OP t1_je8b2n1 wrote

I know. I was thinking that if ChatGPT responds that way, it's because the vast majority of people think so. After all, it was trained on a large corpus of data that contains, among many other things, what people think about the future of artificial intelligence.

4

svachalek t1_je8m64z wrote

I suspect certain topics like this one have been seeded with a very curated set of training articles.

2

count0- t1_je8ndfz wrote

You can always ask it: “Is the above response hardcoded?”.

2

hukep t1_je8vg87 wrote

It's the ultimate politician.

1

Spasticwookiee t1_je8op7d wrote

We can’t even collectively agree to stop burning down the only house we can live in because the people with the power benefit while the rest of us deal with the consequences. I feel like it will go exactly the same way with AI.

I read a headline earlier that firms are actively recruiting “AI whisperers” to better hone the responses for AI and for AI users, and paying huge salaries to those people.

The cat is already out of the bag. Effective regulations should have been in place already, but governments are famously reactive, as opposed to proactive.

So, like most things in this world, the rich will get richer from it, and everyone else will have to deal with the consequences of it when it goes to shit.

5

ltdunstanyahoo t1_je8kzi5 wrote

What was the prompt used…. Prompts can manipulate the output. I can tell from the output that the OP asked more than a simple question and prompted for a specific type of output: "write an essay agreeing with this article…. (Paste Article)"

3

judasblue t1_je8vsis wrote

Ding ding, we have a winner! Any time you see these things without the whole prompt stream leading up to them, you should just assume shenanigans. People can still fake the prompts they show, but at least that means they're putting in the effort.

1

Light01 t1_je8up7f wrote

The problem with ChatGPT at the moment is that if you reformulate the response, it'll say the opposite.

1

donaldtrumpsucksmyd t1_je8v2wr wrote

What’s, like, the worst-case scenario? Like Y2K, but things actually stop working?

1