sillprutt t1_je6ie51 wrote
Reply to comment by JenMacAllister in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Yes, if you take it at face value. But they made it so obviously fake that not even the creators of the paper themselves could be stupid enough to believe it would work, so there must have been an ulterior motive for publishing it.
sillprutt t1_je4ewem wrote
Reply to comment by Honest_Science in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
That's an interesting POV, and very likely if they actually did sign it. But I'm assuming the signatures were faked because of the links in the OP.
sillprutt t1_je4dhiz wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
The real authors of the article must have known that as soon as it was made public, the people whose signatures they faked would announce that they didn't sign it...
So what was the purpose? This was an inevitable outcome. What did they gain from this?
sillprutt t1_jefss3x wrote
Reply to comment by brown2green in Sam Altman's tweet about the pause letter and alignment by yottawa
Whose values are more important, yours or SV's? Who decides which humans' values are the best to align towards?
Should it be my values? What if my values are detrimental to everyone else's wellbeing?
There is no way we can make everyone happy. Do we try to make as many people as possible happy? When is it justified to align an AI to the detriment of some? At what %?