Submitted by BronzeArcher t3_1150kh0 in MachineLearning
currentscurrents t1_j8zz4n3 wrote
Reply to comment by BronzeArcher in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Look at things like replika.ai that give you a "friend" to chat with. Now imagine someone evil using that to run a romance scam.
Sure, the success rate is low, but it can target millions of potential victims at once. The cost of operation is almost zero compared to a human-run scam.
On the other hand, it also gives us better tools to protect against it. We can use LLMs to examine messages and spot scams, and people lonely enough to fall for a romance scam might instead meet that need by chatting with friendly or sexy chatbots.
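The scam-screening idea could be sketched roughly like this: wrap the suspect message in a prompt and ask any chat-completion LLM for a yes/no verdict. The actual model call is elided; `build_screen_prompt` and `looks_like_scam` are hypothetical helper names, not part of any real API.

```python
# Sketch of LLM-based romance-scam screening (hypothetical helpers; any chat LLM works).
SCREEN_PROMPT = (
    "You are a fraud analyst. Does the following message show signs of a "
    "romance scam (love-bombing, urgency, requests for money or gift cards)? "
    "Answer only YES or NO.\n\nMessage: {message}"
)

def build_screen_prompt(message: str) -> str:
    """Fill the screening template with the message under review."""
    return SCREEN_PROMPT.format(message=message)

def looks_like_scam(llm_reply: str) -> bool:
    """Interpret the model's YES/NO reply; anything else counts as not-a-scam."""
    return llm_reply.strip().upper().startswith("YES")

# Usage (model call elided):
#   reply = your_llm(build_screen_prompt("I love you, please wire me $500 for a ticket"))
#   if looks_like_scam(reply): flag_for_review()
```

The same pattern could run on a user's incoming messages as a client-side filter, which is where the near-zero cost cuts both ways.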
ilovethrills t1_j90noyx wrote
But that can be said on paper for thousands of things; I'm not sure it actually translates to real life. Although there might be some push to label such content as AI-generated, similar to how "Ad" and "Promoted" are labelled in search results.