Google clamped down on Timnit Gebru, the former co-lead of its ethical AI team, because she not only revealed bias in its large language models but also called for structural changes in the AI field, Gebru told ‘Going Underground.’
Dr. Gebru was the first black female research scientist at Google, and her controversial parting with the tech giant made headlines last year. It followed her refusal to comply with the company’s demand to retract a paper on ethical problems arising from the large language models (LLMs) used by Google Translate and other apps. Gebru insists she was fired for her stance, while Google claims she filed her resignation.
In Monday’s episode of RT’s ‘Going Underground’ program, she told the show’s host, Afshin Rattansi, why she split with Google.
Ethical AI is “a field that tries to ensure that, when we work on AI technology, we’re working on it with foresight and trying to understand what the negative potential societal effects are and minimizing those,” Gebru said.
And this was exactly what she had been pursuing at Google, before she was – in her view – fired by the company. “I’m never going to say that I resigned. That’s not going to happen,” Gebru said.
The 2020 paper prepared by the expert and her colleagues highlighted the “environmental and financial costs” of the large language models, and warned against making them too big. The LLMs “consume a lot of computer power,” she explained. “So, if you’re working on larger and larger language models, only the people with those kinds of huge compute powers are going to be able to use them … Those benefiting from the LLMs aren’t those who are paying the costs.” And that state of affairs amounted to “environmental racism,” she said.
The large language models use data from the internet to learn, but that doesn’t necessarily mean they incorporate all the opinions available online, Gebru, an Ethiopian American, said. The paper highlighted the “dangers” of such an approach, which could potentially have seen AI being trained to incorporate bias and hateful content.
With LLMs, you can get something that sounds really fluid and coherent, but is completely wrong.
The most vivid example of that, she said, was the experience of a Palestinian man who was allegedly arrested by the Israeli police after Facebook’s algorithms mistakenly translated his post that read, “Good morning,” as “Attack them.”
Gebru said she discovered her Google bosses really didn’t like it “whenever you showed them a problem and it was inconvenient” or too big for them to admit to. Indeed, the company wanted her to retract her academic peer-reviewed paper, which was about to be published at a scientific conference. She insists this demand wasn’t supported by any reasoning or research, with her supervisors just saying it “showed too many problems” with the LLMs.
The strategy of major players such as Google, Facebook, and Amazon is to pretend AI’s bias is “purely a technical issue, purely an algorithmic issue … [that] it has nothing to do with antitrust laws, monopoly, labor issues, or power dynamics. It’s just purely that technical thing that we need to work out,” Gebru said.
“We need regulations. And I think that’s … why all of these organizations and companies wanted to come down hard on me and a few other people: because they think what we’re advocating for isn’t a simple algorithmic tweak – it’s for larger structural changes,” she said.
With other whistleblowers following in her footsteps, society has finally begun to understand the need to regulate developments in AI, Gebru said, adding that the public must also be ready to protect those who reveal corporate wrongdoing from pressure by those corporations.
Since leaving Google, Gebru has launched the Black in AI group to unite scientists of color working in the field of AI. She’s also assembling an interdisciplinary research team to continue her work in the field. She said she won’t be looking to make a lot of money with the project – which is a non-profit – because “if your number-one goal is to maximize profits, then you’re going to cut corners” and end up creating precisely the opposite of ethical AI.