EndTimer t1_j9lcc8k wrote

I'm talking about everything from fake news to promoting white supremacy on social networks.

I'm thinking about what it's going to be like when 15 users on a popular Discord server are just OCR + GPT-3.5 (or later) + malicious prompting + typed output.

AI services and their critics have to try to limit this and even worse possibilities, or else everything is going to get overrun.

3

Standard_Ad_2238 t1_j9lkvrf wrote

People always find a way to talk about what they want. Let's say Reddit for some reason adds a ninth rule: "Any content related to AI is prohibited." Would you simply stop talking about it altogether? What most of us would do is find another website where we could talk, and even if that one started prohibiting AI content too, we would keep looking until we found a new one. This behavior applies to everything.

There are already some examples of how trying to limit a specific topic on an AI can cripple several other aspects of it, as you can clearly see in: a) CharacterAI's filter, which prevented NSFW talk at the cost of a HUGE decrease in overall coherence; b) the noticeable quality drop in SD 2.0's ability to generate images of humans, since much of its understanding of anatomy came from the NSFW images removed from the model's training data; and c) Bing, which I don't think I need to explain given how recent it is.

On top of that, I'm utterly against censorship (not that it matters for our discussion), so I'm very excited to see the rise of open-source AI tools for everything, which is going to make it much harder to limit how AI is used.

5