A federal judge sided against Elon Musk today, dismissing a lawsuit brought by Musk and X that targeted the Center for Countering Digital Hate (CCDH), a nonprofit that researches online hate.
In the lawsuit, X claimed that it lost “tens of millions of dollars” as a direct result of the CCDH’s research.
Musk, who personally directed the lawsuit, called the CCDH “an evil propaganda machine” in replies on X.
The nonprofit, formed in 2018, researches trends in hate speech, extremism and misinformation on major social networks.
X is separately suing Media Matters for America in Texas, which lacks California’s anti-SLAPP protections against frivolous lawsuits designed to stifle free speech.
Musk’s crusade against the extremism research organization had its day in court last month.
After Musk’s takeover of Twitter, the CCDH published reports detailing rising hate speech on X and how reinstated accounts, including that of neo-Nazi Andrew Anglin, stood to make the company millions in ad revenue.
A loss in court for the CCDH would likely have had an immediate chilling effect on researchers who track hate speech and misinformation on social media.
“This ridiculous lawsuit is a textbook example of a wealthy, unaccountable company weaponizing the courts to silence researchers, simply for studying the spread of hate speech, misinformation and extremism online,” said Imran Ahmed, the CCDH’s founder and CEO.
In response to mounting concern over GenAI’s risks, Google, now thousands of jobs lighter than it was last fiscal quarter, is funneling investment toward AI safety.
This morning, Google DeepMind, the AI R&D division behind Gemini and many of Google’s more recent GenAI projects, announced the formation of a new organization: AI Safety and Alignment. It is made up of existing teams working on AI safety, broadened to encompass new, specialized cohorts of GenAI researchers and engineers.
Google did reveal that AI Safety and Alignment will include a new team focused on safety around artificial general intelligence (AGI): hypothetical systems that can perform any task a human can.
The AI Safety and Alignment organization’s other teams are responsible for developing and incorporating concrete safeguards into Google’s Gemini models, both current and in development.
One might assume that issues as grave as AGI safety, along with the longer-term risks the AI Safety and Alignment organization intends to study, including preventing AI from “aiding terrorism” and “destabilizing society,” require a director’s full-time attention.