NeonCityNights t1_j4b8t1w wrote

While I agree with the main sentiment of your post, some guardrails are needed for a technology this powerful and influential.

I have no doubt that the stewards of these AI systems will not tolerate their own system 'preferring' a political ideology that is not their own, especially one that is socially unpopular within their own circles. If it were to support the opposite stance on a hot-button political topic that matters to them, they would ensure it ceases to do so. I am convinced that they will bias, skew, calibrate, or hardcode the model until it conforms to their political ideology and gives responses that please their sensibilities.

However, when it comes to other areas, such as convincing people to harm themselves or others, showing them how to commit crimes, access leaked data, or scam people, guardrails may well be needed.