
Europe imposes new ban on Worldcoin over child safety concerns

Worldcoin project co-founders Alex Blania (left) and Sam Altman (right)
Controversial crypto biometrics venture Worldcoin has been almost entirely booted out of Europe after being hit with another temporary ban, this time in Portugal. The order from the country’s data protection authority comes hard on the heels of a similar three-month stop-processing order issued by Spain’s DPA earlier this month. Portugal’s regulator said it imposed the three-month ban on Worldcoin’s local operations Tuesday after receiving complaints that the company had scanned children’s eyeballs. EU data protection law, by contrast, gives people in the region a suite of rights over their personal data, including the ability to have data about them corrected, amended or deleted. Under the one-stop-shop (OSS) mechanism in the bloc’s General Data Protection Regulation (GDPR), Tools for Humanity’s lead DPA is responsible for investigating privacy and data protection complaints about the company.

Major Platforms Under EU Scrutiny: Examining GenAI Risks Before Elections

The eight platforms are designated very large online platforms (VLOPs) under the regulation, meaning they’re required to assess and mitigate systemic risks in addition to complying with the bulk of the rules. The Commission’s requests for information (RFIs) will test platforms’ readiness to deal with generative AI risks, such as the possibility of a flood of political deepfakes ahead of the June European Parliament elections. The EU has recently been consulting on election security rules for VLOPs as it works on producing formal guidance, which is why it’s dialling up attention on major platforms with the scale to disseminate political deepfakes widely. Today’s RFIs also aim to address a broader spectrum of generative AI risks than voter manipulation, such as harms related to deepfake porn or other types of malicious synthetic content generation, whether the content produced is imagery, video or audio.

Reddit’s IPO Filing Minimizes Concerns over Developer Backlash and Decentralized Social Media

Reddit’s long-awaited IPO is nearing, promising to be the largest social media IPO since Pinterest’s. Meanwhile, Mastodon and the wider network of apps connected to the “Fediverse,” as the decentralized social web is called, have a combined 17.2 million users. Just as some Twitter users broke away to join decentralized alternatives once those became viable, Reddit users could do the same. If Meta fears the power of decentralized social networks enough to join the movement, surely Reddit is not immune. Seeing their demands ignored and overridden could eventually drive users to find new homes on decentralized social media, where they would retain control over their communities and their data.

Feminine Forces in Artificial Intelligence: A Spotlight on Lee Tiedrich, Global Partnership on AI’s Leading Specialist

It’s very gratifying to help prepare the next generation of AI leaders to address multidisciplinary AI challenges. I recently called for a global AI learning campaign in a piece I published with the OECD. To reduce potential liability and other risks, AI users should establish proactive AI governance and compliance programs to manage their deployments. In our increasingly regulated and litigious AI world, responsible AI practices should also reduce litigation risks and the potential reputational harms caused by poorly designed AI. Additionally, even if the topic isn’t addressed in investment agreements, investors can introduce portfolio companies to potential responsible AI hires or consultants, and can encourage and support their engagement in the ever-expanding responsible AI ecosystem.

Blueprint for the Future of AI in 2024: Maximizing Potential and Mitigating Workplace Hazards

While it has positively impacted productivity and efficiency in the workplace, AI has also presented a number of emerging risks for businesses. At the same time, however, nearly half of workers (48%) said they enter company data into AI tools not supplied by their business to aid them in their work. This rapid integration of generative AI tools at work presents ethical, legal, privacy and practical challenges, creating a need for businesses to implement new, robust policies around generative AI tools. Developing a set of policies and standards now can save organizations from major headaches down the road. The previously mentioned Harris Poll found that 64% of respondents perceive AI tool usage as safe, indicating that many workers and organizations could be overlooking risks.

Apple’s Reluctant Adherence to Regulations Will Erode Trust with Politicians and Developers

Apple does not enjoy this, which should surprise exactly no one. Somehow, despite that, society remains intact and people are mostly OK using those platforms with reasonable success. What isn’t so understandable is just how petulant the company is being about loosening its tightly closed fist when it comes to compliance here. At best, it seems short-sighted: yes, doing so means Apple’s revenue picture doesn’t materially change in the near term. Meanwhile, developers are increasingly irate at Apple’s antics.

Effective Measures for Ethical Application of Generative AI by Corporate Leaders

It’s becoming increasingly clear that businesses of all sizes and across all sectors can benefit from generative AI. McKinsey estimates generative AI will add $2.6 trillion to $4.4 trillion annually across numerous industries. That’s just one reason why over 80% of enterprises are expected to be working with generative AI models, APIs or applications by 2026. Simply adopting generative AI doesn’t guarantee success, however: only 17% of businesses are addressing generative AI risks, which leaves them vulnerable.

Safety Measures Strengthened: OpenAI Grants Board Final Authority over Risky AI

OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. In-production models are governed by a “safety systems” team, which handles, say, systematic abuses of ChatGPT that can be mitigated with API restrictions or tuning. Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before the model is released. So only medium and high risks are to be tolerated one way or the other. For that reason, OpenAI is creating a cross-functional Safety Advisory Group that will sit on top of the technical side, reviewing the experts’ reports and making recommendations from a higher vantage point.

Elon Musk’s Company Under Investigation by EU for Illegal Content, Moderation, Transparency, and UX Deception

Elon Musk’s X marks the spot of the first confirmed investigation opened by the European Union under its rebooted digital rulebook, the Digital Services Act (DSA). The bloc’s earlier actions focused on concerns about the spread of illegal content and disinformation related to the Israel-Hamas war. The Commission’s official scrutiny of X could therefore have real-world implications for how the platform operates sooner rather than later. The Commission evidently doubts X has gone far enough on the transparency front to meet the DSA’s bar, and the investigation may also test Musk’s mettle for what could be an expensive head-on clash with EU regulators.

Datalogz secures $5M in funding to conquer enterprise business intelligence sprawl

In recent years there has been a proliferation of business intelligence (BI) tools that aim to help companies make critical business decisions based on data analytics. As data adoption increases, most companies are left with growing administration problems, said Logan Havern, co-founder and CEO of Datalogz. Other participating investors in the latest round include Graphene Ventures, Squadra Ventures, Berkeley SkyDeck, Defined VC, Mana Ventures and Trajectory Ventures. This pileup of reports, Havern argued, can lead to thousands of dashboards with duplication, unused assets, security risks, inefficiencies and, consequently, unwanted costs. Part of Datalogz’s business steps onto the turf of traditional consulting firms, which easily charge $1 million to $10 million annually just to perform business intelligence audits and clean-ups, according to Havern.