
California might make AI a legal liability

11 April 2022


Proposal would bar AI from screening out protected applicants

A newly proposed amendment to California's hiring discrimination laws would make AI-powered employment decision-making software a source of legal liability.

The proposal would make it illegal for businesses and employment agencies to use automated-decision systems to screen out applicants who belong to a class protected by the California Department of Fair Employment and Housing.
Lawyers Brent Hamilton and Jeffrey Bosley of Davis Wright Tremaine wrote that the law could easily be applied to "applications or systems that may only be tangentially related to employment decisions."

"Automated-decision systems" and "algorithms", both terms fundamental to the law, are broadly defined in the draft, Hamilton and Bosley said.

Because lawmakers were not specific, technologies designed to aid human decision-making in small, subtle ways could end up being lumped together with hiring software, as could the third-party vendors who provide the code.

The proposed law also includes strict record-keeping requirements: it doubles the record-retention period from two to four years and requires anyone using an automated-decision system to retain all machine-learning data generated through its operation and training. Because training datasets are covered, vendors could be held responsible as well.

 
