DeepMind

Jordan Hoffmann: Bridging the Gap between Microsoft AI and London – A Hub of Talent and Innovation

Microsoft has announced a new London hub for its recently unveiled consumer AI division. It will be fronted by Jordan Hoffmann, an AI scientist and engineer Microsoft recently picked up from high-profile AI startup Inflection AI, which Microsoft invested in last year. The news comes some three weeks after Microsoft CEO Satya Nadella unveiled a new consumer AI division headed up by Inflection AI’s founders, who include Mustafa Suleyman, co-founder of DeepMind, the AI company Google acquired in 2014. At the time, Nadella said that “several members of the Inflection team” also joined Microsoft’s new AI unit (Bloomberg reported that most of the team actually joined). In a blog post today, Suleyman calls Hoffmann an “exceptional AI scientist and engineer,” and with Suleyman himself reporting directly to Nadella in the U.S., Hoffmann will take charge of the new London unit.

Artificial Intelligence Mastermind Demis Hassabis Receives Knighthood for Contributions at Google DeepMind HQ in UK

Alongside Shane Legg and Mustafa Suleyman (whom Microsoft hired from AI startup Inflection AI last week), Hassabis founded DeepMind out of London in 2010. So it does make sense that the U.K. would seek to honor one of its most high-profile AI figureheads. Other notable figures from the technology world to receive knighthoods include Apple’s Jonathan “Jony” Ive, knighted back in 2011 for “services to design and enterprise.” “Delighted and honoured to receive a Knighthood for services to AI,” Hassabis wrote in a post on X. “It’s been an incredible journey so far building @GoogleDeepMind over the past 15 years, helping accelerate the field and grow the UK & global AI ecosystems.” Knighthoods are formally bestowed by the reigning monarch, with the king or queen at that given time technically making the final decision on who receives them.

Google’s DeepMind AI learns to be your ultimate video game ally

AI models that play games go back decades, but they generally specialize in one game and always play to win. DeepMind’s new SIMA agent instead learns from recorded human gameplay; from this data, and the annotations provided by data labelers, the model learns to associate certain visual representations of actions, objects, and interactions. AI agents trained on multiple games performed better on games they hadn’t been exposed to. But of course many games involve specific and unique mechanics or terms that will stymie the best-prepared AI. Simple improvised actions and interactions are also being simulated and tracked in some really interesting research into AI agents.
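To make that training setup concrete, here is a minimal sketch of a behavioral-cloning step of the kind described above, assuming examples that pair a screen frame and a text instruction with the keyboard-and-mouse action a human took. The module names, sizes, and discrete action space are illustrative assumptions, not DeepMind’s actual SIMA implementation; PyTorch is used only for brevity.

```python
# Minimal behavioral-cloning sketch; illustrative only, not DeepMind's SIMA code.
import torch
import torch.nn as nn


class GameplayImitator(nn.Module):
    """Maps a screen frame plus a tokenized instruction to logits over discrete actions."""

    def __init__(self, num_actions: int = 32, vocab_size: int = 1000):
        super().__init__()
        # Tiny stand-ins for the visual and language encoders a real agent would use.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.text = nn.EmbeddingBag(vocab_size, 16)
        self.policy = nn.Linear(16 + 16, num_actions)

    def forward(self, frames: torch.Tensor, instructions: torch.Tensor) -> torch.Tensor:
        visual = self.vision(frames)          # (batch, 16) features from the screen capture
        textual = self.text(instructions)     # (batch, 16) features from the instruction tokens
        return self.policy(torch.cat([visual, textual], dim=-1))


model = GameplayImitator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One supervised step: a batch of frames, tokenized instructions (e.g. "chop the tree"),
# and the annotated keyboard/mouse action a human took in that situation.
frames = torch.rand(4, 3, 96, 96)
instructions = torch.randint(0, 1000, (4, 6))
actions = torch.randint(0, 32, (4,))

loss = nn.functional.cross_entropy(model(frames, instructions), actions)
loss.backward()
optimizer.step()
```

In a full-scale agent the tiny encoders above would presumably be replaced by pretrained video and language models; the sketch only illustrates the supervised mapping from observations and instructions to actions.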

Competition Ramps Up in AI Video Generation as Former DeepMind Members Reveal Haiper

AI-powered video generation is a hot market on the back of OpenAI’s release of its Sora model last month. Two DeepMind alums, Yishu Miao and Ziyu Wang, have publicly released their video generation tool Haiper, with its own AI model underneath. Users can go to Haiper’s site and start generating videos for free by typing in text prompts. Miao noted that it is “too early” in the startup’s journey to think about building a subscription product around video generation. While investors are looking to invest in AI-powered video generation startups, they also think the technology still has a lot of room for improvement.

Introducing the Formation of Google DeepMind’s AI Safety Organization

In response, Google, thousands of jobs lighter than it was last fiscal quarter, is funneling investments toward AI safety. This morning, Google DeepMind, the AI R&D division behind Gemini and many of Google’s more recent GenAI projects, announced the formation of a new organization, AI Safety and Alignment, made up of existing teams working on AI safety but also broadened to encompass new, specialized cohorts of GenAI researchers and engineers. Google DeepMind didn’t share full details of the new organization, but it did reveal that AI Safety and Alignment will include a new team focused on safety around artificial general intelligence (AGI), or hypothetical systems that can perform any task a human can. The AI Safety and Alignment organization’s other teams are responsible for developing and incorporating concrete safeguards into Google’s Gemini models, both current and in development. One might assume that issues as grave as AGI safety, along with the longer-term risks the AI Safety and Alignment organization intends to study (including preventing AI from “aiding terrorism” and “destabilizing society”), require a director’s full-time attention.

Isomorphic Inks Drug Discovery Deals with Eli Lilly and Novartis

Isomorphic Labs, the London-based, drug discovery-focused spin-out of Google AI R&D division DeepMind, today announced that it’s entered into strategic partnerships with two pharmaceutical giants, Eli Lilly and Novartis, to apply AI to discover new medications to treat diseases. Isomorphic will receive $45 million upfront from Eli Lilly and potentially up to $1.7 billion based on performance milestones, excluding royalties. Researchers recently used AlphaFold to design and synthesize a potential drug to treat hepatocellular carcinoma, the most common type of primary liver cancer. The latest version of AlphaFold can generate predictions for nearly all molecules in the Protein Data Bank, the world’s largest open access database of biological molecules, DeepMind announced in late October. Already, Isomorphic is applying the new AlphaFold model, which it co-designed with DeepMind, to therapeutic drug design, helping to characterize different types of molecular structures important for treating disease.

Revolutionizing Robot Training: Google’s Groundbreaking Methods Utilizing Video and Large Language Models

Google’s DeepMind Robotics researchers are among a number of teams exploring the space’s potential. The newly announced AutoRT is designed to harness large foundation models to a number of different ends. In a standard example given by the DeepMind team, the system begins by leveraging a visual language model (VLM) for better situational awareness. A large language model, meanwhile, suggests tasks that can be accomplished by the hardware, including its end effector. LLMs are understood by many to be the key to unlocking robots that effectively understand more natural language commands, reducing the need to hard-code skills.
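As a rough illustration of that pipeline, the hedged sketch below wires a stand-in VLM call and a stand-in LLM call into a loop that turns a camera frame into candidate tasks for a robot. The function names, the Robot dataclass, and the canned outputs are all hypothetical; this is not Google’s AutoRT code, just the shape of the orchestration the paragraph describes.

```python
# Hedged sketch of a VLM + LLM task-proposal loop; not Google's AutoRT code.
# describe_scene, propose_tasks, and the Robot type are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Robot:
    name: str
    end_effector: str  # e.g. "parallel gripper"


def describe_scene(camera_frame: bytes) -> str:
    """Stand-in for a visual language model (VLM) call that summarizes what the robot sees."""
    return "a kitchen counter with a sponge, a cup, and a closed drawer"


def propose_tasks(scene: str, robot: Robot) -> list:
    """Stand-in for a large language model (LLM) prompt asking for tasks the hardware can do."""
    prompt = (
        f"A robot named {robot.name} with a {robot.end_effector} sees: {scene}. "
        "List manipulation tasks it could attempt."
    )
    # A real system would send `prompt` to an LLM; canned suggestions keep the sketch runnable.
    return ["pick up the sponge", "wipe the counter", "open the drawer"]


robot = Robot(name="mobile-manipulator-01", end_effector="parallel gripper")
scene = describe_scene(b"<camera frame>")     # step 1: VLM provides situational awareness
for task in propose_tasks(scene, robot):      # step 2: LLM suggests tasks for this hardware
    print(f"{robot.name}: queuing task -> {task}")
```

The point of the sketch is the division of labor: the VLM turns pixels into a natural-language scene description, and the LLM reasons over that description plus the robot’s capabilities, so new tasks can be proposed in plain language rather than hard-coded.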