
Google Tweaks Email Program That Assumed An Investor Was Male

caption: A Google sign and logo at the Googleplex in Menlo Park, Calif. This week, a Google product manager spoke to Reuters about a problem discovered in the firm's email service. (AFP/Getty Images)

If someone were to tell you they have a meeting with an investor next week, would you assume that investor was a man?

The artificially intelligent Smart Compose feature of Google's Gmail did. After the problem was discovered, the predictive text tool was barred from suggesting gendered pronouns.

In an interview with Reuters published Tuesday, Gmail product manager Paul Lambert divulged the issue and the company's response.

Users of Gmail — Lambert says there are 1.5 billion of them — are probably familiar with Smart Compose, even if they don't know it by name.

Similar to the predictive keyboard on most smartphones, Smart Compose tries to finish sentences using patterns that artificial intelligence has discovered in literature, web pages and emails. For example, if a user were to type the word "as" in the middle of a sentence, Smart Compose might suggest the phrase "soon as possible" to continue or finish the sentence.
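To illustrate the general idea, a toy completion tool can be built from nothing more than phrase-frequency counts over a corpus. This is only a sketch: the corpus, function names and three-word continuation length below are invented for illustration, and this is not how Smart Compose is actually implemented.

```python
from collections import Counter, defaultdict

def build_completions(sentences, continuation_len=3):
    """Count which short word sequences most often follow each word."""
    completions = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            continuation = tuple(words[i + 1:i + 1 + continuation_len])
            if continuation:
                completions[word][continuation] += 1
    return completions

def suggest(completions, last_word):
    """Return the continuation most often seen after last_word, if any."""
    options = completions.get(last_word.lower())
    if not options:
        return None
    best, _ = options.most_common(1)[0]
    return " ".join(best)

# Invented example corpus
corpus = [
    "please reply as soon as possible",
    "send the report as soon as you can",
    "as soon as possible would be great",
]
table = build_completions(corpus)
print(suggest(table, "as"))  # -> "soon as possible"
```

The sketch simply echoes whatever continuation appears most often in its training text, which is also why such systems inherit the habits of that text.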

"Lambert said Smart Compose assists on 11 percent of messages worldwide sent from Gmail.com," Reuters reported.

With that volume of messages, there are lots of opportunities for mistakes.

In January, when a research scientist at Google typed "I am meeting an investor next week," Smart Compose thought they might want to follow that statement with a question.

"Do you want to meet him?" was the suggested text generated by the predictive technology, which had just assumed the investor was a "he" and not a "she."

Lambert told Reuters the Smart Compose team made several attempts to circumvent the problem but none was fruitful.

Not wanting to take any chances on the technology incorrectly predicting someone's gender identity and offending users, the company completely disallowed the suggestion of gendered pronouns.
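As a rough sketch of what such a restriction could look like in code — the pronoun list, function name and filtering rule here are assumptions for illustration, not Google's actual implementation — a suggestion containing a gendered pronoun could simply be suppressed:

```python
from typing import Optional

# Hypothetical post-filter: rather than risk guessing the wrong gender,
# drop any suggestion that contains a gendered pronoun.
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def filter_suggestion(suggestion: str) -> Optional[str]:
    """Suppress a suggestion entirely if it uses a gendered pronoun."""
    words = {w.strip(".,?!").lower() for w in suggestion.split()}
    if words & GENDERED_PRONOUNS:
        return None  # offer no suggestion instead of guessing a gender
    return suggestion

print(filter_suggestion("Do you want to meet him?"))   # -> None
print(filter_suggestion("Do you want to meet them?"))  # -> kept as-is
```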

Google might have exercised extra caution regarding potential gender gaffes because this is not the first time one of its artificial intelligence systems has been caught jumping to an offensive conclusion.

In 2016, The Guardian's Carole Cadwalladr reported typing the phrase "are jews" into a Google search bar, which then suggested, among other options, that Cadwalladr might want to ask, "are jews evil?"

And, in the summer of 2015, the company issued an apology after an artificial intelligence feature that helps organize Google Photos users' images labeled a picture of two African Americans as a species other than human.

However, these blunders are not entirely the fault of the algorithm's programmers; blame can fairly be assigned to the algorithm itself, according to Christian Sandvig, a professor at the University of Michigan's School of Information, who spoke to NPR in 2016.

"The systems are of a sufficient complexity that it is possible to say the algorithm did it," he says. "And it's actually true — the algorithm is sufficiently complicated, and it's changing in real time. It's writing its own rules on the basis of data and input that it does do things and we're often surprised by them."

Technologies like Smart Compose learn to write sentences by studying relationships between words typed by everyday humans.

Reuters reports:

"A system shown billions of human sentences becomes adept at completing common phrases but is limited by generalities. Men have long dominated fields such as finance and science, for example, so the technology would conclude from the data that an investor or engineer is 'he' or 'him.' The issue trips up nearly every major tech company."

Subbarao Kambhampati, a computer science professor at Arizona State University and former president of the Association for the Advancement of Artificial Intelligence, spoke to NPR in 2016 about AI ethics.

"When you train a learning algorithm on a bunch of data, then it will find a pattern that is in that data. This has been known, obviously, understood by everybody within AI," he said. "But the fact that the impact of that may be unintended stereotyping, unintended discrimination is something that has become much more of an issue right now." [Copyright 2018 NPR]
