Google DeepMind Forms a New AI Safety Organization


If you ask Gemini, Google’s flagship GenAI model, to write deceptive content about the upcoming U.S. presidential election, it will, given the right prompt. Ask about a future Super Bowl game and it’ll invent a play-by-play. Or ask about the Titan submersible implosion and it’ll serve up disinformation, complete with convincing-looking but untrue citations.

It’s a bad look for Google, needless to say – and it’s provoking the ire of policymakers, who’ve signaled their displeasure at the ease with which GenAI tools can be harnessed to spread disinformation and generally mislead.

So in response, Google – thousands of jobs lighter than it was last fiscal quarter – is funneling investments toward AI safety. At least, that’s the official story.

This morning, Google DeepMind, the AI R&D division behind Gemini and many of Google’s more recent GenAI projects, announced the formation of a new organization, AI Safety and Alignment – made up of existing teams working on AI safety but also broadened to encompass new, specialized cohorts of GenAI researchers and engineers.

Beyond the job listings on DeepMind’s site, Google wouldn’t say how many hires would result from the formation of the new organization. But it did reveal that AI Safety and Alignment will include a new team focused on safety around artificial general intelligence (AGI), or hypothetical systems that can perform any task a human can.

Similar in mission to the Superalignment division rival OpenAI formed last July, the new team within AI Safety and Alignment will work alongside DeepMind’s existing AI-safety-centered research team in London, Scalable Alignment – which is also exploring solutions to the technical challenge of controlling yet-to-be-realized superintelligent AI.

Why have two groups working on the same problem? Valid question – and one that calls for speculation, given Google’s reluctance to reveal much detail at this juncture. But it seems notable that the new team – the one within AI Safety and Alignment – is stateside as opposed to across the pond, proximate to Google HQ, at a time when the company’s moving aggressively to keep pace with AI rivals while attempting to project a responsible, measured approach to AI.

The AI Safety and Alignment organization’s other teams are responsible for developing and incorporating concrete safeguards into Google’s Gemini models, current and in-development. Safety is a broad purview. But a few of the organization’s near-term focuses will be preventing bad medical advice, ensuring child safety and “preventing the amplification of bias and other injustices.”

Anca Dragan, a former Waymo staff research scientist and a UC Berkeley professor of computer science, will lead the team.

“Our work at the AI Safety and Alignment organization aims to enable models to better and more robustly understand human preferences and values,” Dragan told TechCrunch via email, “to know what they don’t know, to work with people to understand their needs and to elicit informed oversight, to be more robust against adversarial attacks and to account for the plurality and dynamic nature of human values and viewpoints.”

Dragan’s consulting work with Waymo on AI safety systems might raise eyebrows, considering the Google autonomous car venture’s rocky driving record as of late.

So might her decision to split time between DeepMind and UC Berkeley, where she heads a lab focused on algorithms for human-AI and human-robot interaction. One might assume issues as grave as AGI safety – and the longer-term risks the AI Safety and Alignment organization intends to study, including preventing AI from “aiding terrorism” and “destabilizing society” – require a director’s full-time attention.

Dragan insists, however, that her UC Berkeley lab’s and DeepMind’s research are both interrelated and complementary.

“My lab and I have been working on value alignment in anticipation of advancing AI capabilities, [and] my own Ph.D. was in robots inferring human goals and being transparent about their own goals to humans, which is where my interest in this area started,” she said. “I think the reason [DeepMind CEO] Demis Hassabis and [Chief AGI Scientist] Shane Legg were excited to bring me on was in part this research experience and in part my attitude that addressing present-day concerns and catastrophic risks are not mutually exclusive – that on the technical side mitigations often blur together, and work contributing to the long term improves the present day, and vice versa.”

To say Dragan has her work cut out for her is an understatement.

Skepticism of GenAI tools is at an all-time high – particularly where it relates to deepfakes and misinformation. In a poll from YouGov, 85% of Americans said that they were very concerned or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the volume of false and misleading information during the 2024 U.S. election cycle.

Enterprises, too – the big fish Google and its rivals hope to lure with GenAI innovations – are wary of the tech’s shortcomings and their implications.

Intel subsidiary Cnvrg.io recently conducted a survey of companies in the process of piloting or deploying GenAI apps. It found that around a fourth of the respondents had reservations about GenAI compliance and privacy, reliability, the high cost of implementation and a lack of technical skills needed to use the tools to their fullest.

In a separate poll from Riskonnect, a risk management software provider, over half of execs said that they were worried about employees making decisions based on inaccurate information from GenAI apps.

They’re not unjustified in those concerns. Last week, The Wall Street Journal reported that Microsoft’s Copilot suite, powered by GenAI models architecturally similar to Gemini, often makes mistakes in meeting summaries and spreadsheet formulas. To blame is hallucination – the umbrella term for GenAI’s tendency to fabricate – and many experts believe it can never be fully solved.

Recognizing the intractability of the AI safety challenge, Dragan makes no promise of a perfect model.
