Evariste • 5 years ago

This is an odd story. The year is 1979, so we're not talking about a neural network or anything of great length or complexity. Is the assertion that the program had a list of acceptable surnames in a database? The whole thing seems dubious.

Jim Brady • 5 years ago

I'm not certain you can call this an AI failure. Sounds like a set of rules was coded up to match what the human assessors told him. It looks like it did not meet the criteria of even a rudimentary Expert System. A system this crude was described in 1963 for doing residential real estate appraisals, and there was no AI in it. Good story on how our preconceived notions often lead us down the wrong path.

Larkin • 5 years ago

"by codifying the human selectors’ discriminatory practices into a technical system, he was ensuring that these biases would be replayed in perpetuity."

Except...the biases were noticed and addressed fairly quickly. In fact, since the biases existed in human form much longer than in technology form, it sounds like codifying the application process made bias easier to notice, and therefore easier to address. I don't understand why a process being algorithmic makes it more resistant to change. It's much easier to spot bias in a quantitative algorithm than it is to spot it in a corpus of a million free-text emails about which applicants are qualified. This article discusses algorithmic bias as if the alternative is a bias-free process, when the actual alternative is a process where the bias is encoded in humans.

But then, faculty noticed that the admitted students were less diverse than usual, so maybe the humans weren't biased and it was just a crappy algorithm that didn't actually match the human assessors' process? I can't tell what the actual story is.

"In fact, simply having a non-European name could automatically take 15 points off an applicant’s score. The commission also found that female applicants were docked three points, on average."

Without knowing more about the algorithm, the phrasing here is strange. Did the algorithm dock 3 points from any applicant marked as female? Or was it just that female applicants averaged 3 points less than male applicants, for whatever reason? The phrasing makes it sound like the latter, and the latter doesn't sound very discriminatory. Though it'd be helpful to know what the average point total was, so that we can judge whether 3 points is a lot or a little.
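To make the two readings concrete, here's a minimal sketch with purely invented names and numbers; nothing below reflects the actual St George's algorithm, which the article doesn't describe in this detail:

```python
from statistics import mean

# Purely invented applicant scores, for illustration only.
applicants = [
    {"female": True,  "score": 67},
    {"female": True,  "score": 71},
    {"female": False, "score": 72},
    {"female": False, "score": 72},
]

# Reading 1: an explicit rule docks every female applicant a fixed 3 points.
def apply_rule(score, female):
    return score - 3 if female else score

# Reading 2: no explicit rule, but the group means happen to differ by
# ~3 points because of other correlated inputs. Individual women may
# score anywhere; only the average differs.
gap = (mean(a["score"] for a in applicants if not a["female"])
       - mean(a["score"] for a in applicants if a["female"]))
print(f"average gap: {gap:.1f} points")  # 3.0 with the sample data above
```

Both readings would produce the headline "docked three points, on average," but only the first is a discriminatory rule in the code itself.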

Don't get me wrong: taking things like name and birthplace into account for these kinds of things is a *really dumb idea*, and obviously it's literally racist to dock points specifically for being non-Caucasian. But the details are so vague and there's so much misdirection in other parts of the article that it's hard to know what the actual story is.

Charles Edmondson • 5 years ago

Interesting topic. Isn't it also true that Climate Change (Global Warming) is all a result of similar bias encoded into the simulations since the beginning?

George Reeves • 3 years ago

Political bias is built into present algorithms. I tried to find a video of Fauci's two-minute opening comments at a Coronavirus Task Force news conference in which he demolished the New York Times story claiming that President Trump did not immediately implement actions recommended by his science experts. This pro-Trump information could not be found with a Google search. DuckDuckGo found it immediately. A censoring search engine is an oxymoron. I switched my default search engine to DuckDuckGo.

John • 5 years ago

There is nothing morally wrong with a British university favoring those who are ethnically British. While it may be biased, it is a good thing. I would much rather be in a learning environment consisting of people from my race, with a similar culture, than one filled with foreigners.

George Reeves • 3 years ago

Dark skinned people and women who were born and raised in Britain are not "foreigners".

Chris • 3 years ago

Born in the 80s? Seriously?! When you can find examples of classic redlining going back at least another 50 years, and plenty of fairness debates over algorithms in the 1700s and 1800s, particularly in data-driven industries like life insurance?

This whole field needs a history lesson. So many of the "modern issues" of automated decisioning systems have a rich history to learn from, if we care to look.