Picsart, a photo-editing startup backed by SoftBank, announced on Thursday that it’s partnering with Getty Images to develop a custom model to bring AI imagery to its 150 million users.
The company says the model will bring responsible AI imagery to creators, marketers and small businesses that use its platform.
Picsart’s AI lab, PAIR, is building the model.
The company is also integrating Getty Images video content into Picsart’s platform and making it available to Plus members.
Picsart isn’t the first startup Getty Images has partnered with on responsible AI imagery; Getty has also worked with Bria, an AI image generator, and Runway, a startup building generative AI for content creators.
This week in AI, Apple stole the spotlight.
At the company’s Worldwide Developers Conference (WWDC) in Cupertino, Apple unveiled Apple Intelligence, its long-awaited, ecosystem-wide push into generative AI.
The company promised that Apple Intelligence is being built with safety at its core and designed to deliver highly personalized experiences.
Apple revealed in a blog post that it trains the AI models that power Apple Intelligence on a combination of licensed datasets and the public web.
Grab bag
This week marked the sixth anniversary of the release of GPT-1, the progenitor of GPT-4o, OpenAI’s latest flagship generative AI model.
Apple is striving to answer the question of what generative AI is actually good for with its own take on the category, Apple Intelligence, which was officially unveiled this week at WWDC 2024.
Apple Intelligence is a more bespoke approach to generative AI, built specifically around the company’s different operating systems.
It’s a very Apple approach in the sense that it prioritizes a frictionless user experience above all.
The operating systems also feature a feedback mechanism through which users can report issues with the generative AI system.
This should function the same with all external models Apple partners with, including Google Gemini.
Google is finally making its Gemini Nano AI model available to Pixel 8 and 8a users after teasing it in March.
The June Pixel drop will allow users to access the model as a developer option.
Apart from that, the feature drop includes DisplayPort connectivity support for the Pixel 8 and Pixel 8a, reverse phone number lookup for unknown numbers, fall and crash detection for the Pixel Watch 2, and doorbell notifications on the Pixel Tablet.
The update brings manual lens picking in the camera to the Pixel 6 Pro, 7 Pro, and Pixel Fold.
Pixel Watch
The newest Pixel drop brings car crash detection to the Pixel Watch 2.
Traffic is down, newsrooms are undergoing layoffs, and publishers fear that AI technologies will only make matters worse.
Entering the fray, news reader startup Particle is teaming up with publishers to seek out a new business model for the AI era, where AI summaries of news don’t have to mean lost revenues.
Now, the company is bringing its first publishing partners into the mix to help guide its next steps.
As a start, Particle now subscribes to the Reuters newswire to help it deliver information about current events.
What Particle isn’t yet ready to reveal is its business model.
French AI startup Mistral is introducing new AI model customization options, including paid plans, to let developers and enterprises fine-tune its generative models for particular use cases.
Mistral has released a software development kit (SDK), Mistral-Finetune, for fine-tuning its models on workstations, servers and small datacenter nodes.
For developers and companies that prefer a more managed solution, Mistral has also launched fine-tuning services available through its API.
Compatible for now with two of Mistral’s models, Mistral Small and Mistral 7B, the fine-tuning services will gain support for more of the company’s models in the coming weeks, Mistral says.
Lastly, Mistral is debuting custom training services, currently only available to select customers, to fine-tune any Mistral model for an organization’s apps using their data.
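The managed services are exposed through Mistral’s API, which follows a familiar upload-a-file-then-create-a-job pattern. Below is a minimal sketch of that flow over plain HTTP; the endpoint paths, field names and hyperparameter keys shown here are assumptions drawn from Mistral’s public API conventions rather than verified signatures, so treat the snippet as illustrative and check the official API reference before relying on it.

```python
# Minimal sketch of a managed fine-tuning run against Mistral's API.
# Endpoint paths and request fields are assumptions; consult Mistral's
# API reference for the authoritative schema.
import os
import requests

API = "https://api.mistral.ai/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"}

# 1. Upload a JSONL file of chat-formatted training examples.
with open("training_data.jsonl", "rb") as f:
    upload = requests.post(
        f"{API}/files",
        headers=HEADERS,
        files={"file": ("training_data.jsonl", f)},
        data={"purpose": "fine-tune"},
    )
upload.raise_for_status()
file_id = upload.json()["id"]

# 2. Create a fine-tuning job against one of the supported base models.
job = requests.post(
    f"{API}/fine_tuning/jobs",
    headers=HEADERS,
    json={
        "model": "open-mistral-7b",  # or a Mistral Small identifier
        "training_files": [file_id],
        "hyperparameters": {"training_steps": 100, "learning_rate": 1e-4},
    },
)
job.raise_for_status()
print("Created job:", job.json().get("id"), job.json().get("status"))
```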
Stability AI, the startup behind the AI-powered art generator Stable Diffusion, has released an open AI model for generating sounds and songs that it claims was trained exclusively on royalty-free recordings.
Called Stable Audio Open, the generative model takes a text description and produces a short audio clip to match; it’s aimed at samples like drum beats, instrument riffs and ambient sounds rather than full songs.
Stability AI says the model isn’t optimized for complete tracks, melodies or vocals, and suggests that users looking for those capabilities opt for the company’s premium Stable Audio service.
Stable Audio Open also can’t be used commercially; its terms of service prohibit it.
And it doesn’t perform equally well across musical styles and cultures or with descriptions in languages other than English — biases Stability AI blames on the training data.
The deck’s slides include a cover slide, two problem slides, a product image slide, a solution slide and a “What Is Unique?” slide.
The business model comes up short
Closely related to the previous point: pricing is one side of the business model, but there are many more parts to the puzzle.
The business model slide is very light on details, and the details that are there are a little confusing.
The full pitch deck
If you want your own pitch deck teardown featured on TechCrunch, here’s more information.
Also, check out all our Pitch Deck Teardowns collected in one handy place for you!
New AI models from Meta are making waves in technology circles.
Meta’s new Llama models come in two sizes, with the Llama 3 8B model featuring eight billion parameters and the Llama 3 70B model some seventy billion.
The company’s new models, which were trained on two custom-built 24,000-GPU clusters, perform well across the benchmarks Meta put them up against, besting some rival models already on the market.
For those of us not competing to build and release the largest or most capable AI models, what matters is that they are still getting better with time.
While Meta takes an open-source approach to its AI work, its competitors often prefer more closed-source work.
Meta has released the latest entry in its Llama series of open source generative AI models: Llama 3.
Meta describes the new models, Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, which contains 70 billion parameters, as a “major leap” in performance compared to the previous-generation Llama 2 7B and Llama 2 70B.
In fact, Meta says that, for their respective parameter counts, Llama 3 8B and Llama 3 70B, both trained on two custom-built 24,000-GPU clusters, are among the best-performing generative AI models available today.
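Because the Llama 3 weights are openly available, trying the 8B Instruct variant locally takes only a few lines with Hugging Face’s transformers library. The sketch below is illustrative and assumes the meta-llama/Meta-Llama-3-8B-Instruct checkpoint, that Meta’s license has been accepted on Hugging Face, a recent transformers release that accepts chat-style inputs in pipelines, and a GPU with enough memory for the 8B model.

```python
# Minimal sketch: chatting with Llama 3 8B Instruct via Hugging Face transformers.
# Assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct repo and a
# `huggingface-cli login` session; model name and settings are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,  # halves memory use versus float32
    device_map="auto",           # spread layers across available GPU(s)
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "In one sentence, what is new in Llama 3?"},
]

result = generator(messages, max_new_tokens=80, do_sample=False)
# The pipeline returns the full chat transcript; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```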
So what about toxicity and bias, two other common problems with generative AI models (including Llama 2)? Meta’s answer is new safety tooling, including an updated version of its Llama Guard classifier for filtering prompts and model outputs.
The company’s also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities.