
TheLGMac t1_j1tu4b7 wrote

AI is still an interpreter of data; there is no perfectly "true" reading of raw data. Giving data meaning always involves a process of interpretation, and interpretation is prone to bias.

If a machine learning model makes interpretations based on prior ones (e.g., "historically, only white or male candidates have been successfully hired in this role"), it can perpetuate existing bias. Until recently, the engineers building these models were not thinking to build in safeguards against bias. Laws like these ensure that such biases are guarded against.
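A minimal, hypothetical sketch of that feedback loop (all names and data here are invented for illustration): a "model" that scores new candidates purely from per-group hire rates learned from past decisions will reproduce whatever bias those decisions contained, even for equally qualified applicants.

```python
from collections import defaultdict

# Toy historical data (hypothetical): (group, hired) pairs where past
# decisions favored one group regardless of qualifications.
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", False), ("female", True), ("female", False),
]

def fit_hire_rates(records):
    """'Train' by learning per-group hire rates from past decisions."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

rates = fit_hire_rates(history)

def score(candidate_group):
    """Score a new candidate using only the learned historical rate."""
    return rates[candidate_group]

# Two equally qualified candidates receive different scores because the
# training data encodes past bias, not merit.
print(score("male"), score("female"))  # 0.75 0.25
```

Real models are far more complex, but the mechanism is the same: if the labels reflect biased past decisions, the model optimizes toward reproducing them unless safeguards are built in.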

Think of this like building codes in architecture/structural engineering.
