Picsart, a photo-editing startup backed by SoftBank, announced on Thursday that it’s partnering with Getty Images to develop a custom model to bring AI imagery to its 150 million users.
The company says the model will bring responsible AI imagery to creators, marketers and small businesses that use its platform.
Picsart’s AI lab, PAIR, is building the model.
The company is also integrating Getty Images video content into Picsart’s platform and making it available to Plus members.
Picsart isn’t the first startup Getty Images has partnered with on responsible AI imagery: it has also worked with Bria, an AI image generator, and Runway, a startup building generative AI for content creators.
Win-back subscriptions
Apple will let developers reach out to customers who previously subscribed to their app to win them back with new offers.
Developers can configure which customers they want to reach out to, and based on that, Apple surfaces win-back offers in the App Store.
“Eligible customers can discover win-back offers across the App Store, including on your product page and in editorial selections on the Today, Games and Apps tabs, as well as within your app,” according to Apple.
Improved search suggestions
Apple said that it is adding Focus State to the App Store later this year for improved search.
It gave examples of content creator apps offering subscriptions, apps with large catalogs of content (perhaps apps that offer sachet subscriptions to different content), and digital content that spans different apps managed by one developer.
Human Native AI is a London-based startup building a marketplace to broker data-licensing deals between the many companies building LLM projects and the rights holders willing to license data to them.
Human Native AI also helps rights holders prepare and price their content and monitors for any copyright infringements.
Human Native AI takes a cut of each deal and charges AI companies for its transaction and monitoring services.
This week, Human Native AI announced a £2.8 million seed round led by LocalGlobe and Mercuri, two British micro VCs.
It is also a smart time for Human Native AI to launch, with AI companies under growing pressure to license the data they train on rather than scrape it.
Google on Thursday is issuing new guidance for developers building AI apps distributed through Google Play, in hopes of cutting down on inappropriate and otherwise prohibited content.
Schools across the U.S. are reporting problems with students passing around AI deepfake nudes of other students (and sometimes teachers) for bullying and harassment, alongside other sorts of inappropriate AI content.
Google says these policies will help keep apps that feature inappropriate or harmful AI-generated content out of Google Play.
It points to its existing AI-Generated Content Policy as the place to check the requirements for app approval on Google Play.
The company is also publishing other resources and best practices, like its People + AI Guidebook, which aims to support developers building AI apps.
India’s election overshadowed by the rise of online misinformation
Greater internet penetration and the rise of "cheapfakes" since the last general election in 2019 pose new challenges
As India kicks off the world’s biggest election, which starts on April 19 and runs through June 1, the electoral landscape is overshadowed by misinformation.
Misinformation is not just a problem for election fairness — it can have deadly effects, including violence on the ground and increased hatred toward minorities.
“Ever since social media has been thriving, there is a new trend where you use misinformation to target communities,” he said.
The country’s vast diversity in language and culture also makes it particularly hard for fact-checkers to review and filter out misleading content.
Moreover, just before the Indian election, Meta reportedly cut funding to news organizations for fact-checking on WhatsApp.
The tools would be part of a wider set of proposals Ofcom is putting together focused on online child safety.
Consultations for the comprehensive proposals will start in the coming weeks with the AI consultation coming later this year, Ofcom said.
AI researchers are finding ever more sophisticated ways of using AI to detect deepfakes, for example, and to verify users online.
It found that 32% of the kids reported that they’d seen worrying content online, but only 20% of their parents said they reported anything.
Among children aged 16-17, Ofcom found, 25% said they were not confident about distinguishing fake from real online.
AirChat, the buzzy new social app, could be great – or it could succumb to the same fate as Clubhouse
Over the weekend, another social media platform exploded into the fray: AirChat.
Built by AngelList founder Naval Ravikant and former Tinder exec Brian Norgard, AirChat takes a refreshingly intimate approach to social media.
What I do consider a red flag is AirChat’s naive approach to content moderation.
Clubhouse’s approach to content moderation was even more permissive, since there was no way to block people for months after launch – AirChat already has block and mute features, thankfully.
With this minimalist approach to content moderation, it’s not hard to see how AirChat could get into hot water.
Truth Social, the social media platform owned by Donald Trump’s Trump Media & Technology Group (TMTG), has announced plans to launch a live TV streaming platform.
The streaming service will launch in three phases.
The company first plans to introduce Truth Social’s CDN (content delivery network) for streaming to the Truth Social app for Android, iOS and the web.
Next, Truth Social plans to release over-the-top (“OTT”) streaming apps for phones, tablets and other devices.
Truth Social went public last month after shareholders approved a merger of TMTG and Digital World Acquisition, a special purpose acquisition company (SPAC).
The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images.
On Tuesday, it announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short on detecting and responding to the explicit content.
In the first case, involving Instagram in India, the explicit AI-generated image remained up even after two user reports.
The second case relates to Facebook, where a user posted an explicit, AI-generated image that resembled a U.S. public figure in a Group focusing on AI creations.
Meta’s response and the next steps
In response to the Oversight Board’s cases, Meta said it took down both pieces of content.
Meta-owned social network Threads is finally testing a “Recent” filter to sort search results by the latest posts.
“We’re starting to test this with a small number of people, so it’s easier to find relevant search results in real-time,” Instagram head Adam Mosseri said in a reply to a user.
A user who is part of the test posted that they could see “Top” and “Recent” filters on the search results screen.
They noted that the “Recent” filter isn’t strictly chronological, but it surfaces the latest posts better than the “Top” filter.
Earlier this year, the company accidentally rolled out the option to sort search results by the latest.