Facebook has apologised for an incident in which its artificial intelligence incorrectly categorised a video of Black men as “primates,” calling it an “unacceptable error” and saying it was investigating how to prevent it from happening again. According to the New York Times, users who viewed a video posted by the UK tabloid Daily Mail on June 27th were prompted to opt in to “continue seeing videos about Primates.”
Facebook deactivated the entire topic recommendation feature as soon as it realised what was happening, a spokeswoman told The Verge in an email on Saturday.
“Clearly, this was an unacceptable error,” the representative said, adding that the company is examining the cause of the behaviour to prevent it from occurring again. “As we have said, while we have made improvements to our AI, we know it’s not perfect and we have more progress to make. We apologise to anyone who may have seen these offensive recommendations.”
The incident is the latest example of artificial intelligence systems exhibiting gender or racial bias, with facial recognition tools particularly prone to misidentifying people of colour. Google apologised in 2015 after its Photos app labelled images of Black people as “gorillas.” Facebook announced last year that it was investigating whether its artificial intelligence-trained algorithms — including those of Instagram, which Facebook owns — were racially biased.
In April, the US Federal Trade Commission warned that AI systems exhibiting “troubling” racial and gender biases may violate consumer protection laws when used in decision-making for credit, housing, or employment. “Hold yourself accountable, or be ready for the FTC to do it for you,” FTC privacy attorney Elisa Jillson wrote in a post on the agency’s website.