
Exploring Offending AI-Generated Images on Instagram and Facebook: An Inquiry by Meta’s Oversight Board

The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short in detecting and responding to the explicit content. In the first case, the explicit AI-generated image remained on Instagram even after two reports. The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a Group focused on AI creations. In response to the Oversight Board’s cases, Meta said it took down both pieces of content.

Nijta: The Revolutionary French Startup Protecting Voice Privacy in AI Applications

Privacy concerns around voice data are particularly acute in Europe in the context of GDPR: While many companies are hoping to build AI on top of voice data, in many cases this requires removing biometric information first. This is where Nijta hopes to help, by providing AI-powered speech anonymization technology to clients that need to comply with privacy requirements. The startup also says that Nijta Voice Harbor’s protection is irreversible, unlike some of the voice modifications unwisely used by media outlets hoping to protect victims they interview. A lack of awareness of privacy issues around voice is one of the challenges Nijta will have to face. This is also why starting with B2B and Europe seems to make sense: Even if customers aren’t pushing for voice privacy, the risk of a hefty fine is turning companies into early adopters.

Content Moderation’s Fate Lies in the Hands of the Supreme Court… or Does It?

The Supreme Court is considering the fate of two state laws that limit how social media companies can moderate the content on their platforms; it could decide the future of content moderation, or it could punt. The two laws were both crafted by Republican lawmakers to punish social media companies for their perceived anti-conservative bias. “Supreme Court cases can fizzle in this way, much to the frustration, in most cases, of the other parties,” Barrett said. “It’s clear that the Supreme Court needs to update its First Amendment jurisprudence to take into account this vast technological change,” Barrett added. “… The Supreme Court often lags behind society in dealing with these kinds of things, and now it’s time to deal with it.”

Google’s Image-Generating AI: A Confession of Loss of Control

Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week: an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” over-sensitive. Ask for a single image of a person walking a dog and whoever the model depicts is fair enough. But if you ask for 10, and they’re all white guys walking goldens in suburban parks? Where Google’s model went wrong was that it failed to have implicit instructions for situations where historical context was important. These failures led the model to overcompensate in some cases and be over-conservative in others, leading to images that were embarrassing and wrong.

Creating Optimal VR Applications: Focusing on Specific Use Cases Instead of General Computing

Apple’s Vision Pro launch resembles its Apple Watch debut in more ways than one, but to me the most telling similarity is in the marketing approach. The company took the same approach with the Apple Watch, which, like its face computer cousin, was more or less a solution in search of a problem when it originally debuted. The iPhone, by contrast, debuted with a much more focused, and much more accurate, idea of what it would become for users than the Apple Watch did. The Vision Pro, I’d argue, is even more adrift from how and why people will come to appreciate it. It shines in a handful of specific use cases but is terrible at a lot of other things, chief among them being the next big thing in the vein of personal computing or mobile.

Exploring the Importance of Asset Tokenization as a Catalyst for Crypto Growth

Startups that find product-market fit can do well in any industry, but that’s not enough if you’re a web3 company. No, you have to go beyond that and find real use cases for the emerging technology. There’s a lot more activity during bull markets as crypto tourists enter the industry in hopes of making it big. Many protocols and companies have taken the time to become regulated and to work on bridging the real world and the world of crypto, Warden said. “I don’t love that expression ‘real world,’ because I think crypto is part of the real world, but I think we’re seeing more and more engagement there.” Warden thinks one of the biggest avenues for real-world use cases is the tokenization of assets, including areas that aren’t even tradable yet.

Tesla Seeks Temporary Pause in Federal Racial Discrimination Lawsuit While Resolving Other Legal Matters

Tesla wants to pause a federal agency’s lawsuit accusing the automaker of racial bias against Black workers at its Fremont assembly plant. The electric vehicle maker made the request in a filing in San Francisco federal court on Monday. The lawsuit, brought by the U.S. Equal Employment Opportunity Commission (EEOC), alleges that Tesla violated federal law by tolerating widespread and ongoing racial harassment of its Black employees and subjecting some workers to retaliation for opposing harassment. Tesla also faces a proposed class action alleging racial harassment, filed by workers in 2017. Tesla’s Monday filing argues that the federal court should decline to open a third lawsuit until the existing cases are resolved.