Submitted by Surur t3_10h4h7s in Futurology
currentscurrents t1_j57hol4 wrote
Good. Most "AI safety" I've seen has been political activists whining about things they don't understand.
HeavensCriedBlood t1_j57xibw wrote
Gab doesn't count as an accurate source of information. Git gud, scrub
Feni555 t1_j5bhs7b wrote
Hop on the subreddit or community for any group that uses those LLMs, like AI Dungeon, NovelAI, Character.AI, and more.
They all agree with the guy on top. These filters always lobotomize the AI, and it's always for a stupid religious reason or a stupid political reason. This is well known to anyone who has been in that sphere for a while. There's partially a "cover my ass" mentality these corporations have going on, but I can tell you that's bizarrely not the case with LLM companies; it is weirdly personal for a lot of their decision makers and senior staff.
You just assumed you knew what he was talking about, and you decided to be an obnoxious asshole to this dude for no reason. I hate redditors so much.
yaosio t1_j5a8pp1 wrote
AI safety concerns have always come from corporations that thought they were the sole arbiters of AI models. Now that multiple text and image generators are out there, suddenly corporations have decided there are no safety concerns, and they swear it has nothing to do with reality smacking them and showing them they won't have a monopoly on the technology.