Race Matters

Artificial intelligence and racial bias: Can robots be racist?

Runnymede Communications Manager Lester Holloway looks at how advances in the technology sector could be leaving black and minority ethnic (BME) people behind, or worse, targeting them for discrimination.

Artificial intelligence, or AI, is best known for driverless cars and as the inspiration for endless Hollywood movies. We invariably see a future where robots turn from servants to oppressors of humans because their burgeoning intelligence has made them realise the planet needs saving from people.

In the real world we are fairly sure that the ability of robots to self-learn (AI) is limited to the parameters set for them. For example, robots programmed to make music are highly unlikely to turn on us just because mankind has produced Justin Bieber.

AI also has more prosaic uses, such as deciphering handwriting and gaming. Essentially, AI covers anything where computers can learn from the point where programming stops. Potentially it could touch almost every aspect of our lives, from devising new drugs to lengthening our lives, or selling us products in ways most likely to find favour with individual customers.

The technologies, and their future possibilities, have sparked a digital gold-rush that is already dominated by a handful of tech heavyweights, namely Google, Amazon, Apple, Facebook, IBM and Microsoft. PitchBook, a data provider, has calculated that this year alone those companies have spent $21.3 billion on mergers and acquisitions related to AI. Much of that spending is aimed at acquiring the talent working for smaller, innovative companies as much as the technology they are developing.

The movies warn us about technology being concentrated in the hands of one megalomaniac businessman. But it isn't our survival as a species that is at stake so much as fair access to services: ensuring that the technology does not discriminate against people for reasons such as the colour of their skin.

There has already been the case of a 'smart' soap dispenser in a public lavatory that refused to work on darker hands, yet had no problem with white hands. A similar problem has been reported by dark-skinned users of wearable fitness trackers and heart-rate monitors, with more serious consequences for people most at risk of a condition like heart disease. This unwitting ethnic bias in programming opens up a debate about ethnic representation in the tech industry.

A survey of Silicon Valley published in Business Insider UK found just 2% of Google, Facebook and Yahoo! employees were black, compared to 12% of the US population. Microsoft and Twitter scored 3%. On the other side of the coin, people of east Asian background were over-represented: this community makes up 4% of the US population but 44% of Yahoo!'s staff and 32% of Google's.

Much more worryingly, and mirroring the sci-fi film Minority Report, machines are already being used to predict future criminals. Last year ProPublica exposed how risk-assessment scores used in criminal cases in several US states were biased against black defendants, with the underlying algorithms labelling them as more likely to commit future crime. Among defendants who did not reoffend, 45% of black people had been categorised as higher risk, compared with 23% of white people. Meanwhile, among offenders labelled low risk who did go on to commit another crime, the figures were 48% for white offenders but only 28% for African Americans.
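For readers who want to see how such percentages are derived, the sketch below works through the arithmetic with made-up counts. The numbers here are purely illustrative (they are not ProPublica's raw data); they simply show that the quoted figures are error rates computed within each group of defendants.

```python
# Illustrative (made-up) counts for one group of 200 defendants scored
# by a hypothetical risk tool. NOT ProPublica's raw data; the counts are
# chosen only to reproduce the shape of the quoted percentages.
high_risk_no_reoffend = 45   # labelled higher risk, did not reoffend (false positives)
low_risk_no_reoffend = 55    # labelled low risk, did not reoffend (true negatives)
low_risk_reoffend = 28       # labelled low risk, did reoffend (false negatives)
high_risk_reoffend = 72      # labelled higher risk, did reoffend (true positives)

# False positive rate: share of NON-reoffenders wrongly labelled higher risk.
fpr = high_risk_no_reoffend / (high_risk_no_reoffend + low_risk_no_reoffend)

# False negative rate: share of reoffenders wrongly labelled low risk.
fnr = low_risk_reoffend / (low_risk_reoffend + high_risk_reoffend)

print(f"false positive rate: {fpr:.0%}")  # 45%
print(f"false negative rate: {fnr:.0%}")  # 28%
```

The key point is that each rate is calculated only among people with the same actual outcome, which is why two groups can face very different error rates even if the tool never looks at race directly.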

Google has been the focus of several controversies around how its search engine works. It was forced to apologise to former first lady Michelle Obama after an image search of her name returned racist imagery portraying her as a monkey. Meanwhile, searching for 'N****r house' in Google Maps brought up the White House. While these are individual cases, much has been written on how Google's algorithms generally prioritise negative content about Africa, and its predictive search suggestions offer up racist, sexist, anti-Semitic and pro-fascist queries. Google says its algorithms are shaped partly by user demand, but if the tech giant can't deal with racism in its core product, how can we have confidence that it will do any better with AI?

Another concern around digital racism is access to services. For example, if a technology has difficulty recognising black faces or black people's irises, it might prevent people from entering somewhere or buying something. That could contravene Britain's equalities laws covering access to goods and services, although the fact that a machine is doing the discriminating makes it a legal grey area.

Next year, 2018, marks half a century since the second race relations law was introduced, which gave ethnic minorities recourse for challenging anyone denying goods or services on grounds of race. The law was introduced to combat the kind of prejudice that saw landlords put up signs saying ‘No Irish, No Blacks, No Dogs’ and refusing to serve people in shops or restaurants. With the advent of new technologies, lawmakers might be forced to update the statutes to cover denial of goods and services not by other people, but by machines making independent decisions based on flawed algorithms.

AI promises a technological jump for mankind, with machines processing masses of complicated data and working out solutions to countless problems. Control of such technologies could hand the Western world an advantage at the very moment when large parts of the developing world are finally catching up. Today many countries in the global south have fast-growing economies after decades of struggling to recover from the long-lasting impacts of historical enslavement, colonialism and neo-colonialism, tariff barriers, petrodollar-induced indebtedness, foreign-owned agribusiness and mines, and dependence on seeds and parts for locally-owned agriculture. While China looks likely to keep pace with, or even outpace, the West in the digital race, developing countries risk being left behind.

When you factor in the huge potential of AI, for example to plan your social life for you, or to reply to friends saying what you would say if you had the time, the power such systems could have over every corner of our lives is immense. There are wider philosophical questions over whether we control the technology or it controls us, questions people are already asking about addiction to social media.

On the matter of racial prejudice, probably the first place to set off alarm bells would be the military. The Chatham House paper 'Artificial Intelligence and the Future of Warfare' envisages self-learning robots on active missions. If they were programmed to kill, the selection of the 'correct' targets would depend on the parameters within which the AI works. Put simply, you don't want the people who brought you smart soap dispensers or future-criminal predictors working on those machines.

Fear of the future, and of technology in particular, is natural, especially when the technology is in the hands of powerful and unelected big business. Many fears never come to fruition, and some inventions predicted to cause chaos turn out to be benign or even useful. As a society we have limited control, even as consumers, over the direction of travel of AI. Politicians, as always, will attempt to play regulatory catch-up long after the technology has been introduced.

Lester tweets: @brolezholloway