Elon Musk’s decision to green-light a robotaxi over an affordable EV might cost Tesla its lead.
Tesla was reportedly on the cusp of building a $25,000 EV. Last week, Musk canned the effort in favor of a robotaxi, the sort of pie-in-the-sky project that defined his first decade at the helm.
Given flagging sales of the company’s existing product line, it would have been a welcome shot in the arm.
It also would have given the company a product to hold its ground against a predicted onslaught of inexpensive Chinese EVs.
According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams.
Here’s why having AI governance could become the new normal.
“Aside from its strategic role to devise and oversee an AI governance program, from an operational perspective, AI governance committees are a key tool in addressing and minimizing risks,” he said.
The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright noted.
Establishing AI governance committees is likely to become one way companies try to build trust.
Controversial crypto biometrics venture Worldcoin has been almost entirely booted out of Europe after being hit with another temporary ban — this time in Portugal.
The order from the country’s data protection authority comes hard on the heels of the same type of three-month stop-processing order from Spain’s DPA earlier this month.
Portugal’s data protection authority said it issued the three-month ban on Worldcoin’s local ops Tuesday after receiving complaints Worldcoin had scanned children’s eyeballs.
By contrast, EU data protection law gives people in the region a suite of rights over their personal data, including the ability to have data about them corrected, amended or deleted.
As Tools for Humanity’s lead DPA under the one-stop-shop (OSS) mechanism in the bloc’s General Data Protection Regulation (GDPR), it is responsible for investigating privacy and data protection complaints about the company.
The eight platforms are designated as very large online platforms (VLOPs) under the EU’s Digital Services Act, meaning they’re required to assess and mitigate systemic risks, in addition to complying with the bulk of the rules.
These will test platforms’ readiness to deal with generative AI risks such as the possibility of a flood of political deepfakes ahead of the June European Parliament elections.
The Commission has recently been consulting on election security rules for VLOPs as it works on producing formal guidance.
That’s why it’s dialing up attention on major platforms with the scale to disseminate political deepfakes widely.
The Commission’s requests for information (RFIs) today also aim to address a broader spectrum of generative AI risks than voter manipulation, such as harms related to deepfake porn or other types of malicious synthetic content, whether the output is imagery, video or audio.
Reddit’s long-awaited IPO is nearing, promising to be the largest social media IPO since Pinterest’s in 2019.
Meanwhile, Mastodon and the wider network of apps connected to the “Fediverse,” as the decentralized social web is called, have a combined 17.2 million users.
Just as some Twitter users broke away to join decentralized alternatives once those became viable, Reddit users could do the same.
If Meta fears the power of decentralized social networks enough to join the movement, surely Reddit is not immune?
Seeing their demands ignored and overridden could eventually drive Reddit’s users to find new homes on decentralized social media, where they would maintain control over their communities and data.
It’s very gratifying to help prepare the next generation of AI leaders to address multidisciplinary AI challenges.
I recently called for a global AI learning campaign in a piece I published with the OECD.
To reduce potential liability and other risks, AI users should establish proactive AI governance and compliance programs to manage their AI deployments.
Furthermore, in our increasingly regulated and litigious AI world, responsible AI practices should reduce litigation risks and potential reputational harms caused by poorly designed AI.
Additionally, even if not addressed in the investment agreements, investors can introduce portfolio companies to potential responsible AI hires or consultants and encourage and support their engagement in the ever-expanding responsible AI ecosystem.
While AI has improved productivity and efficiency in the workplace, it has also introduced a number of emerging risks for businesses.
At the same time, however, nearly half of the workers surveyed in a Harris Poll (48%) said they enter company data into AI tools not supplied by their business to aid them in their work.
This rapid integration of generative AI tools at work presents ethical, legal, privacy, and practical challenges, creating a need for businesses to implement new and robust policies surrounding generative AI tools.
Developing a set of AI use and governance policies and standards now can save organizations from major headaches down the road.
The same poll found that 64% perceive AI tool usage as safe, indicating that many workers and organizations could be overlooking risks.
Apple does not enjoy this, which should surprise exactly no one.
Somehow, despite that, society remains intact and people are mostly OK with using those platforms with reasonable success.
What isn’t so understandable is just how petulant the company is being about having its tightly closed fist pried open when it comes to compliance here.
At best, it seems short-sighted: yes, holding out means Apple’s revenue picture doesn’t materially change in the near term.
And developers are increasingly irate at Apple’s antics.
It’s becoming increasingly clear that businesses of all sizes and across all sectors can benefit from generative AI.
McKinsey estimates generative AI could add $2.6 trillion to $4.4 trillion annually to the global economy across numerous industries.
That’s just one reason why more than 80% of enterprises are expected to be working with generative AI models, APIs, or applications by 2026.
However, simply adopting generative AI doesn’t guarantee success.
Yet only 17% of businesses are addressing generative AI risks, which leaves the rest vulnerable.
OpenAI is expanding its internal safety processes to fend off the threat of harmful AI.
In-production models are governed by a “safety systems” team; this is for, say, systematic abuses of ChatGPT that can be mitigated with API restrictions or tuning.
Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before the model is released.
So, in practice, only models with a post-mitigation risk of “medium” or below can be deployed, and only those rated “high” or below can be developed further.
For that reason, OpenAI is creating a “cross-functional Safety Advisory Group” that will sit on top of the technical side, reviewing the boffins’ reports and making recommendations from a higher vantage point.
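OpenAI hasn’t spelled out what those API-level mitigations look like internally, but as a rough, hypothetical sketch of the same idea, a developer can screen prompts with the public moderation endpoint before they ever reach a chat model (the function and model name below are illustrative, not OpenAI’s own tooling):

```python
# Illustrative sketch only: OpenAI's internal safety systems are not public.
# This shows one simple form of API-level mitigation: screening a prompt with
# the public moderation endpoint before forwarding it to a chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def safe_chat(prompt: str) -> str:
    # Flag abusive input up front rather than relying on the model to refuse.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request blocked by content policy."

    # Only prompts that pass the screen are forwarded to the model.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(safe_chat("Summarize OpenAI's safety process in one sentence."))
```

The point of the sketch is simply that some classes of abuse can be caught at the API layer, before a model ever sees the request.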