In India, a government-run agency will now monitor and fact-check government-related matters on social media, even though tech giants raised grave concerns about the plan last year.
The Ministry of Electronics and IT wrote in a gazette notification on Wednesday that it is cementing into law its proposal from last year to make the fact-checking unit of the Press Information Bureau the dedicated arbiter of truth for matters concerning New Delhi.
The Ministry of Information and Broadcasting established the Press Information Bureau’s fact-checking unit in 2019 with the aim of dispelling misinformation about government matters.
The unit, however, has been criticized for falsely labeling information critical of the government as misleading.
Relying on a government agency such as the Press Information Bureau as the sole source to fact-check government business, without clearly defining that business or providing clear checks and balances, “may lead to misuse during implementation of the law, which will profoundly infringe on press freedom,” the Asia Internet Coalition, an industry group that represents Meta, Amazon, Google and Apple, cautioned last year.
Elon Musk’s xAI released its Grok large language model as “open source” over the weekend.
But does releasing the code for something like Grok actually contribute to the AI development community?
This isn’t the first time the terms “open” and “open source” have been questioned or abused in the AI world, and such releases fall along a wide spectrum of actual openness.
So where does xAI’s Grok release fall on that spectrum?
Is Musk’s nascent AI company really dedicated to open source development?
It’s very gratifying to help prepare the next generation of AI leaders to address multidisciplinary AI challenges.
I recently called for a global AI learning campaign in a piece I published with the OECD.
To reduce potential liability and other risks, AI users should establish proactive governance and compliance programs to manage their AI deployments.
Furthermore, in an increasingly regulated and litigious environment, responsible AI practices should reduce litigation risk and the reputational harm caused by poorly designed AI.
Additionally, even when responsible AI is not addressed in investment agreements, investors can introduce portfolio companies to potential responsible AI hires or consultants and can encourage and support those companies’ engagement in the ever-expanding responsible AI ecosystem.
Reach Capital closed its last investment vehicle during an unprecedented boom in tech, but with the industry now seeing a slowdown in growth, it seems the firm may have been…
The FTC announced that it would be cracking down on unfair, deceptive, and anticompetitive practices taking place in the gig economy, following an investigation into HomeAdvisor’s deceptive tactics. This fine…