
UltraMegaMegaMan t1_j9md5w8 wrote

I agree there's a parallel with other technologies: guns, the internet, publishing, flight, nuclear technology, fire. The difference is scope and scale. ChatGPT is not actual A.I.; it does not "think" or attempt to in any way. It's not sentient, sapient, or intelligent. It just predicts which words should be used in what order, based on what humans have already written.
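To illustrate what "predicting the next word" actually means, here's a deliberately toy sketch in Python (a bigram frequency model over a made-up corpus, nothing like ChatGPT's real neural architecture) that picks the next word purely from counts of what was already written:

```python
# Toy illustration of next-word prediction: count which word tends to
# follow which in a (made-up) corpus, then always pick the most common
# follower. No understanding, no "thinking", just statistics.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word, if any.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', the most frequent continuation
```

ChatGPT does this with a neural network over billions of parameters instead of raw counts, but the principle is the same: likely continuations, not thought.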

But once you get to something that even resembles human intelligence or A.I., something that can put out content that passes for human, that's an order-of-magnitude leap for technology.

Guns can't pass the Turing test. ChatGPT can. Video evidence, as a trusted form of proof in society, has less than 5 years to live. That will have ramifications in media, culture, law, and politics that are inconceivable to us today. Think about the difference between a Star Trek communicator in the 1960s TV show and a smartphone today.

To be clear, I'm not advocating that we go ahead and deploy this technology; that's not my point. I'm saying you can't use it without accepting the downsides, and we don't yet know what those downsides are. We're still not past racism. Or killing people over it. It's the 21st century and we still don't give everyone food or shelter. Both of those are policy decisions that are 100% a choice, not an economic or physical constraint.

We are not mature enough to handle this technology responsibly. But we've got it. And it doesn't go back in the bottle. It will be deployed, regardless of whether it should be or not. I'm just pointing out that the angst, the wringing of hands, is performative and futile.

Instead of trying to make the most powerful technology we've ever known into the first perfect one that does no harm, we should spend our effort researching what those harms will be and educating people about them. Because it will be upon us all in 5 years or less, and that's not a lot of time.


UltraMegaMegaMan t1_j9kfk0i wrote

I think the first real lesson we're going to be forced to learn about things that approach A.I. is that you can't have utility without risk. There is no "safe" way to have something that is, or resembles, an artificial intelligence without letting some shitty people do some shitty things. You can't completely sanitize it without rendering it useless. It's never going to be G-rated, inoffensive, and completely advertiser- and family-friendly, or if it is, it will be so crippled no one will want to use it.

So these companies have a decision to make, and we as a society have to have a discussion. Do we accept a little bad with the good, or do we throw it away? You can't have it both ways, and that's exactly what corporate America wants: all the rewards with none of the risk.
