Why didn’t Google’s former AI chief speak out sooner about the dangers of the technology?

Bloomberg Opinion – It’s hard not to worry when Geoffrey Hinton, the godfather of artificial intelligence (AI), says he has left Google (GOOG) and regrets his life’s work.

Hinton, who made crucial contributions to artificial intelligence research starting in the 1970s through his work on neural networks, told several media outlets last week that big technology companies are moving too quickly in bringing AI to the public.

Part of the problem is that AI is catching up to human-like capabilities faster than experts had expected. “It’s scary,” he told The New York Times.

Hinton’s concerns certainly make sense, but they would have been far more effective had they come several years earlier, when other researchers who did not have a pension to fall back on were sounding the same alarms.

Tellingly, Hinton sought in a tweet to clarify how The New York Times had characterized his motives, concerned that the article suggested he had left Google in order to criticize it. “Actually, I left so that I could talk about the dangers of AI without considering how it might affect Google,” he said. “Google has acted very responsibly.”

While Hinton’s stature in the field may have shielded him from backlash, the episode highlights a chronic problem in AI research: the field is so dominated by Big Tech that many of its scientists are afraid to voice their concerns for fear of damaging their careers.

You can understand why. Meredith Whittaker, a former research manager at Google, had to spend thousands of dollars on lawyers in 2018 after helping organize a walkout of 20,000 Google employees over the company’s contracts with the US Department of Defense.

“Going up against Google is very scary,” she said in an interview. Whittaker, who now heads the encrypted messaging app Signal, ended up resigning from the search giant with a public warning about the company’s direction.

Two years later, Google AI researchers Timnit Gebru and Margaret Mitchell were forced out of the tech giant after publishing a paper that highlighted the dangers of large language models — the technology now at the center of concerns about chatbots and generative AI.

They cited problems such as racial and gender bias, opacity and environmental costs.

Whittaker worries that Hinton is now the subject of glowing profiles of his contributions to artificial intelligence, after others risked so much to stand up for what they believed in while still working at Google.

“People with much less power and in much more marginalized positions have been taking real personal risks to name the problems with AI and with the companies that control it,” she says.

Why didn’t Hinton speak sooner?

The scientist declined to answer questions on the subject. But he appears to have wrestled with the risks of artificial intelligence for some time, including during the years when his peers were pleading for a more cautious approach to the technology.

A 2015 New Yorker article describes him talking with another AI researcher at a conference about how politicians could use AI to terrorize people.

When asked why he carried on with his research, Hinton replied: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.” It was a deliberate echo of J. Robert Oppenheimer’s famous description of the “technically sweet” appeal of his work on the atomic bomb.

Hinton says Google has acted “very responsibly” in its rollout of AI. But that is only partly true.

Yes, the company shut down its facial recognition business over concerns about misuse, and it kept its powerful LaMDA language model under wraps for two years to work on making it safer and less biased. Google has also restricted the capabilities of Bard, its ChatGPT competitor.

But being responsible also means being transparent, and Google’s track record of silencing internal concerns about its technology does not inspire confidence.

With luck, Hinton’s departure and his warnings will inspire other researchers at big tech companies to speak up about their own concerns.

The tech conglomerates have captured some of the brightest minds in academia with the lure of high salaries, generous perks and the massive computing power used to train and experiment with increasingly powerful AI models.

However, there are signs that at least some researchers are considering being more forthright. “I often think about when I would quit [the AI startup],” Catherine Olsson, a member of the technical staff at AI safety company Anthropic, tweeted on Monday in response to Hinton’s comments. “I can already tell that this move will affect me.”

Many AI researchers seem to have reached a resigned acceptance that little can be done to stem the tide of generative AI now that it has been released into the world. As Anthropic co-founder Jared Kaplan told me in an interview published on Tuesday, “the cat is out of the bag.”

But if today’s researchers are willing to speak up now, while it matters, rather than just before they retire, we all stand to benefit.

This column does not necessarily reflect the views of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a technology columnist for Bloomberg Opinion. She previously worked for the Wall Street Journal and Forbes, and is the author of “We Are Anonymous.”

See more at bloomberg.com
