czl t1_j9o4tlc wrote
Quandary for dictatorships: How to apply censorship to the vast datasets required to train large language models? No doubt they are already working on AIs to apply censorship but even those require large training sets of forbidden content. Is this quandary like Soviets being unable to develop their own computers? Seems some technologies are like a test for a society. Michael Crichton uses this idea in his Sphere novel.
Thebadmamajama t1_j9s92iz wrote
Pruning, plus fine-tuned models that apply your censorship.
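A crude sketch of what "pruning" the training data might look like in practice. This is not a real training pipeline; the banned-term list, corpus, and helper function are all invented for illustration:

```python
# Hypothetical pre-filtering step a censor might run before fine-tuning:
# drop every training document that mentions a banned term.
BANNED_TERMS = {"freedom", "protest"}  # invented censorship list


def prune_corpus(documents, banned=BANNED_TERMS):
    """Keep only documents containing no banned term (case-insensitive)."""
    kept = []
    for doc in documents:
        words = set(doc.lower().split())
        if not banned & words:
            kept.append(doc)
    return kept


corpus = [
    "the harvest was good this year",
    "citizens demand freedom of speech",
    "a protest gathered in the square",
]
print(prune_corpus(corpus))  # only the first document survives
```

Even this toy version shows the problem the parent comment raises: to filter the forbidden content out, the censor first has to collect and recognize it.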
czl t1_j9t42mu wrote
How well do you expect that to work? With ChatGPT there is an ongoing censorship effort to tame it for business use, yet "jailbreaks" are constantly discovered by those working to evade it.
Imagine a dictatorship that desires to eradicate something using censorship. A silly example: imagine a dictatorship challenges you to design a useful engine without the "evil practice" of rotary motion. Too silly? How about a dictatorship that challenges you to grow a modern economy without the use of loans and debt? Also silly? Yet there are countries that attempt to operate their economies this way. If a dictatorship desires to eradicate something fundamental, like the concept of freedom, I suspect censorship will cripple any AI they try to build without it.
Even in the most censored country (North Korea?), human thinking is not censored so long as it stays private. An AI, however, has no "private thinking," so once censorship is imposed I suspect the AI will no longer be competitive with an uncensored one, much like economies that forbid debt are uncompetitive.