
fjaoaoaoao t1_j4llk6m wrote

You are right that the author is talking about an ideal that doesn't exist. Right now that AI would be heavily influenced by a "cabal of intellectuals" (but probably not even intellectuals).

But it's still an interesting thought to deshrine democracy, or at least point out its flaws. Not anything completely new, but I do think the piece adds to the conversation. As the author points out, majorities don't often reflect proper application of ethical principles. Democracy places some degree of faith in the ability of the people and their human nature to govern. Not that people aren't fallible, but democracy intends to be self-correcting.

An AI-ocracy would place faith in the ability of a rational, impartial AI to reflect proper application of ethical principles, which in theory would be nice, but it would obviously need a code of values and morals to build from in order to decide what's more rational and ethical in the gray areas, and these values, morals, and working definitions change over time. If it skips the grayer areas, then its usefulness as a governing body is diminished.

Perhaps AI-ocracy is not feasible or better overall, but blending AI with other forms of governance, using AI as a tool, might be.

For right now, maybe a practical solution is for AI to review cases or applications of law and offer an opinion on whether they reflect proper application of ethical principles. Its code base should be open and public so anyone can have a look-see. Having a consistent review process might be a good testing ground to see how AI could be used in other governance contexts.

1

shockingdevelopment t1_j4llzp3 wrote

In practice I doubt the AI could propose something we think is fucked up without us saying uhhhh no.

1

fjaoaoaoao t1_j4lp97z wrote

I don't think that's the point though. There are heaps of cases each year and, as the article points out, incredibly complex documents that most people can't be bothered to review. It's easy for AI to make more subtle choices or decisions in morally gray areas, depending on the values and morals it's trained on. Of course, it's not like we have significantly better systems now, but the level of faith in a particular system should always be scrutinized. This is why I suggest that a practical solution, for now, is to just develop an AI that reviews cases or offers policy examples.

1

shockingdevelopment t1_j4lslgt wrote

It'd be a landmark moment if AI settled deontology vs consequentialism and solved ethics.

1