LinkedIn has confirmed it will no longer allow advertisers to target users based on data gleaned from their participation in LinkedIn Groups.
In response to the complaint it received in February, the EC wrote to LinkedIn to request further information on how it might be enabling targeted ads based on sensitive personal data such as race, political allegiances, or sexual orientation.
While LinkedIn maintained that it complied with the DSA, the company has now removed the ability for advertisers to “create an advertising audience” in Europe using LinkedIn Group membership data.
“We made this change to prevent any misconception that ads to European members could be indirectly targeted based on special categories of data or related profiling categories,” Corrigan wrote on LinkedIn today.
LinkedIn will still allow targeted advertising, just not using data garnered from LinkedIn Groups.
The European Data Protection Board (EDPB) has published new guidance which has major implications for adtech giants like Meta and other large platforms.
The guidance, confirmed as incoming on Wednesday as we reported earlier, will steer how privacy regulators interpret the bloc’s General Data Protection Regulation (GDPR) in a critical area.
The full opinion of the EDPB on so-called “consent or pay” runs to 42 pages.
However, a market leader imposing that kind of binary choice looks unviable, per the EDPB, an expert body made up of representatives of data protection authorities from around the EU.
“Online platforms should give users a real choice when employing ‘consent or pay’ models,” Talu wrote.
The EU’s latest concerns about TikTok’s DSA compliance center on the launch of TikTok Lite.
TikTok has been given 24 hours to provide the risk assessment for TikTok Lite.
It’s not clear whether TikTok conducted a DSA risk assessment for the new reward program ahead of launching TikTok Lite in the two EU markets.
But the regulation’s focus on systemic risk essentially makes such a step obligatory for features that are likely to appeal to minors.
TikTok did tell us it requires TikTok Lite users to verify that they are 18 or older in order to collect points through their use of the app.
Yet a binary choice (aka “consent or pay”) is exactly what Meta is currently forcing on users in the region.
The European Data Protection Board (EDPB) has been meeting this week to discuss adopting an opinion on so-called “consent or pay”, following a request made back in February by a trio of concerned data protection authorities.
A spokeswoman for the EDPB confirmed to TechCrunch that it adopted an opinion on “consent or pay” on Wednesday morning, saying it would be published later in the day.
However, the choice Meta gives EU users is a binary one: Either consent to its use of personal data for targeted advertising or pay a monthly fee to access ad-free versions of its social networks.
But on the core issue of whether Meta’s mechanism complies with the EU’s long-standing data protection framework, the Board’s opinion is key.
Apple is opening up web distribution for iOS apps targeting users in the European Union from today.
Apple’s walled garden stance has enabled it to funnel essentially all iOS developer revenue through its own App Store in the past.
An Apple rep described this as a baseline safety and security standard, which they said iOS users expect in order to keep their devices protected from external risks.
Given Apple has only just started implementing web distribution for iOS apps, it remains to be seen whether the EU will step in for a closer look at this aspect of its DMA compliance too.
It’s also unclear how much demand there will be among iOS developers for direct web distribution.
Additionally, in a notable step last month, the European Union opened a formal investigation into whether Meta’s tactic breaches obligations that apply to Facebook and Instagram under the competition-focused Digital Markets Act (DMA).
The Board’s opinion on “consent or pay” is expected to provide guidance on how the EU’s General Data Protection Regulation (GDPR) should be applied in this area.
It’s worth noting the Board’s opinion will look at “consent or pay” generally, rather than specifically investigating Meta’s deployment.
Nor is Meta the only service provider pushing “consent or pay” on users.
“However, the current ‘Consent or Pay’ model sets in stone a coercive dynamic, leaving users without an actual choice.”
According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams.
Here’s why having AI governance could become the new normal.
“Aside from its strategic role to devise and oversee an AI governance program, from an operational perspective, AI governance committees are a key tool in addressing and minimizing risks,” he said.
The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright noted.
Establishing AI governance committees will likely be at least one way to try to help on the trust front.
The main development out of the sixth TTC meeting appears to be a commitment from EU and US AI oversight bodies, the European AI Office and the US AI Safety Institute, to set up what’s couched as “a Dialogue”.
The aim is to foster deeper collaboration between the two institutions, with a particular focus on encouraging the sharing of scientific information among their respective AI research ecosystems.
“Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction.
We are also making constructive progress in health and agriculture.” In addition, an overview document on the collaboration around AI for the public good was published Friday.
The joint statement refers to 2024 as “a Pivotal Year for Democratic Resilience”, on account of the number of elections being held around the world.
Gómez’s research is grounded in the computational music field, where she contributes to the understanding of the way humans describe music and the methods in which it’s modeled digitally.
What I liked about machine learning at the time was its modelling capabilities and the shift from knowledge-driven to data-driven algorithm design — e.g.
There’s also PHENICX, a large European Union (EU)-funded project I coordinated on the use of music and AI to create enriched symphonic music experiences.
What advice would you give to women seeking to enter the AI field?
They should learn about the working principles and limitations of AI algorithms to be able to challenge them and use them in a responsible way.
“Through the AI Act and through the [AI safety- and security-focused] Executive Order — which is to mitigate the risks of AI technologies while supporting their uptake in our economies.” Earlier this week, the US and the UK signed a partnership agreement on AI safety.
Wider information-sharing is envisaged under the US-UK agreement — about “capabilities and risks” associated with AI models and systems, and on “fundamental technical research on AI safety and security”.
The UK also announced a plan to spend £100M on an AI safety taskforce, which it said would be focused on so-called foundational AI models.
At the UK AI Summit last November, Raimondo announced the creation of a US AI safety institute on the heels of the US Executive Order on AI.
Neither the US nor the UK has proposed comprehensive legislation on AI safety as yet — with the EU remaining ahead of the pack when it comes to legislating in this area.