ebolathrowawayy t1_irj2u9t wrote

> You're naive and looking past the obvious here. We already have a tiny form of this. Go on any social media - YouTube, Twitter, Facebook, what have you - and you are fed stimuli that have been personally tailored to the choices and clicks you make. And - you claim we are mentally doing well because of these things? Aren't a lot of societies facing declining birth rates, increased rates of mental health issues and suicide, and political polarization?

I am doing mentally well. Not everyone has turned into a drone, but yes that future could be very dark.

The tech is unavoidable though, so we should be shaping it early, before it gets out of hand. That's part of why I think we need to embrace this tech and not avoid it. It doesn't matter to the developers or the users of SD how much harm it does to a subset of artists, because it doesn't affect them. So now is the time to debate the implications, as we're doing, to help shape the development and use of AI tools. Avoiding it altogether just doesn't help anyone. Like, have you used it extensively and built anything using SD as a foundation? The process of doing that might change your mind.

> These are the fantastical scenarios I was speaking about - riddled with AI-favored rhetoric. It's very predictable and not far from a sales pitch.

IDK, I think that future is inevitable. There's no sign of an AI winter coming, so it's only going to get better and better.

> Just because people don't blindly believe the same things you do, and question how to use certain system, tools, or whatever, responsibly - doesn't make them people who are yelling at clouds. Besides, why not? Show some kindness.

Yeah, sorry, it's hard sometimes to empathize.

> No, everyone isn't preparing for AI because not everyone agrees with you. It's as simple as that.

I think everyone should be preparing. That's why it's so hard for me to empathize with those who don't. It's so obvious to me that AI is going to first assist everyone on a daily basis and then eventually make most humans obsolete. Preparing for this might be building skills in other domains (but fuck if I know which ones, AI is coming for art and coding before more manual labor which is a shock) or by becoming an early adopter of AI tools to remain relevant.

> I mean, hell, if you sit a person down in front of a screen and they play their favorite video game all day long, you aren't going to have a very happy, satisfied person. This is measurable right now, actually.

Maybe? I haven't looked much into that. I used to play SC2 competitively for 10 hours a day with a fulltime job many years ago and it wrecked me, but I also gained some cognitive benefits that still persist to today. I don't regret those days. I don't know if it's completely cut and dry. Also, games in the future may be very different than today's if they're personalized for the individual and if there's nothing else for humans to productively do.

the_coyote_smith t1_irjxlc0 wrote

I agree we should shape it responsibly. Which means sometimes criticizing, let’s say, SD and LAION for scraping medical records and copyrighted images from artists who did the real work. And yes - it was knowingly done - because there is a double standard happening with Harmonai, which explicitly collects data via an opt-in approach.

https://techcrunch.com/2022/10/07/ai-music-generator-dance-diffusion/

If it’s hard to empathize, then maybe that is something you could work on.

Your points boil down to: (1) tech is inevitable so just don’t question it, (2) we don’t know what could happen, (3) this tech is harmful to people’s psyches and social stability but I’m fine so just accept it, (4) leave the ones who question behind.

Like - duh, I want AI to be helpful for everyone. I want it used responsibly. I used to study Cognitive Science and NLP in college; I was all in. I want this tech to truly help everyone responsibly, with just intent. But I just don’t think gutting artists’ work opportunities - and creating a world where all art has this shadow of doubt over it (i.e. “was this made by a person or a robot? I can’t tell …”) - is the way to go. I just can’t imagine what good could come out of a world where someone who is suicidal picks up a phone - calls the suicide hotline - but isn’t sure if a real person is behind the phone. Hell, they may not have even bothered to call, knowing it could be a robot and not a person.

ebolathrowawayy t1_irk3z1n wrote

> I agree we should shape it responsibly. Which means sometimes criticizing, let’s say, SD and LAION for scraping medical records and copyrighted images from artists who did the real work. And yes - it was knowingly done - because there is a double standard happening with Harmonai, which explicitly collects data via an opt-in approach.

I'm pretty sure SD didn't have time to comb through however many billions of images are in the LAION dataset. I doubt SD wanted medical records in their model, and if any are in there I'm sure they'll be happy to remove any that violate HIPAA.

Copyrighted images are fair game unless the law changes. They were used for training only. If artists' work isn't included in the training data, you get a pretty shitty model.

> Your points boil down to - (1) tech is inevitable so just don’t question, (2) we don’t know what could happen, (3) this tech is harmful to peoples psyche and social stability but I’m fine so just accept it. (4) leave the ones who question behind.

None of those are my points.

  1. Tech is inevitable; I didn't say don't question it.

  2. I have very high confidence about what will happen in the next 10-20 years. I have vague ideas about what will happen after that, but that can be dealt with when it's nearer.

  3. It may be harmful, but so are psychopathic CEOs and kitchen knives. It's not unique to AI. I personally don't think AI is likely to be net-harmful, even when ASIs come online.

  4. No, I just don't feel bad for people who lose their jobs because they couldn't see the future staring them in the face. I don't feel bad that tech lifted some 90% of the world's population out of having to do farm work all day, either. They shouldn't be left behind, though; UBI will be essential.

> But, I just don’t think gutting artists’ work opportunities

They will be gutted soon with or without their work included in the training data. Excluding it might delay things by a year or less, because some artists will volunteer their work and there's a lot of good work by long-dead artists that can be used. Maybe the people who are so threatened by SD should move on to making things that aren't furry porn and other basic stuff. Or learn how to use it to assist them in whatever they're doing.

> I want this tech to truly help everyone responsibly, with just intent. But I just don’t think gutting artists’ work opportunities - and creating a world where all art has this shadow of doubt over it (i.e. “was this made by a person or a robot? I can’t tell …”)

As a consumer of the works of artists of all kinds, I don't care whether an AI or a person made something.

> I just can’t imagine what good could come out of a world where someone who is suicidal picks up a phone - calls the suicide hotline - but isn’t sure if a real person is behind the phone. Hell, they may have not even bothered to call knowing it could be a robot and not a person.

Why would that matter if they deploy an AI for this purpose and see a reduction in suicides? If they deploy it and suicides increase, then yeah, sure, it failed; just stop doing that and ban the practice.

I want to live in a world that's similar to Star Trek and I think it's foolish to try to halt progress.

the_coyote_smith t1_irk7332 wrote

Yeah - I’m done arguing, because it’s just clear you don’t care about people at the moment.

I’m glad you think your fantasy of living in Star Trek will happen.

I hope you find compassion and empathy one day.
