At the time, it looked like the round would top $50 million; in the end, the figure came in slightly lower.
This latest round brings the total raised by the company, which was founded about four years ago, to $64 million.
Photoroom plans to use the funding to hire more people and to continue investing in its R&D and infrastructure.
Other features it offers include AI-generated backgrounds, scene expansions, AI-generated images, and a plethora of image editing tools.
“Photoroom’s generative AI capabilities are unparalleled, and we have no doubt that they will continue to lead the way in this rapidly evolving landscape.”
Google is hopeful it will soon be able to ‘unpause’ the ability of its multimodal generative AI tool, Gemini, to depict people, per DeepMind founder Demis Hassabis.
The capability to respond to prompts for images of humans should be back online in the “next few weeks”, he said today.
Asked by moderator Steven Levy of Wired to explain what went wrong with the image generation feature, Hassabis sidestepped a detailed technical explanation.
Instead, he suggested the issue was caused by Google failing to identify instances when users are essentially after what he described as a “universal depiction”.
The issue is “very complex”, he suggested — likely demanding a whole-of-society mobilization and response to determine and enforce limits.
This week in AI, Google paused its AI chatbot Gemini’s ability to generate images of people after a segment of users complained about historical inaccuracies.
Google’s ginger treatment of race-based prompts in Gemini didn’t avoid the issue, per se — but disingenuously attempted to conceal the worst of the model’s biases.
Yes, the data sets used to train image generators generally contain more white people than Black people, and yes, the images of Black people in those data sets reinforce negative stereotypes.
That’s why image generators sexualize certain women of color, depict white men in positions of authority and generally favor wealthy Western perspectives.
Whether vendors tackle their models’ biases or choose not to, they’ll be criticized.
“While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” it added.
The statement was posted from Google’s @Google_Comms account on X on February 22, 2024. Google launched the Gemini image generation tool earlier this month.
Gemini’s AI image generation does generate a wide range of people.
An earlier AI image classification tool made by Google caused outrage back in 2015, when it misclassified Black men as gorillas.
Just a year after raising $11.6 million, Kittl raised another $36 million in a Series B round led by IVP with some existing investors participating once again.
The Berlin-based startup has been working on a graphic design tool that you can use in your web browser without having to install an app.
Compared to Canva, Kittl “offers a broader feature set and it allows you to have access to the actual graphic file,” Heymann said.
The company is trying to build a design tool that is more powerful than Canva and that doesn’t carry all the legacy of Adobe’s applications.
The company lets you create an account for free, but you have to subscribe to access all product features.
The trouble is, many of these models — if not most — were trained on artwork without artists’ knowledge or permission.
And while some vendors have begun compensating artists or offering ways to “opt out” of model training, many haven’t.
Another, Kin.art, uses image segmentation (i.e., concealing parts of artwork) and tag randomization (swapping an art piece’s image metatags) to interfere with the model training process.
“We prevent your artwork from being inserted in the first place.”

Now, Kin.art has a product to sell.
While the tool is free, artists have to upload their artwork to Kin.art’s portfolio platform in order to use it.
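To make the two ideas concrete, here is a minimal sketch of what tag randomization and image segmentation could look like in principle. The function names, the decoy-tag scheme, and the block-masking pattern are all hypothetical illustrations for this article, not Kin.art’s actual implementation.

```python
import random

def randomize_tags(tags, decoy_pool, rng=random):
    """Replace an artwork's descriptive tags with decoys, so that any
    scraped image/caption pair no longer describes the image."""
    return rng.sample(decoy_pool, k=len(tags))

def segment_mask(image, block=2):
    """Zero out alternating horizontal bands of a 2D pixel grid, so the
    full composition is never exposed intact to a scraper."""
    out = [row[:] for row in image]
    for y in range(0, len(out), block * 2):
        for yy in range(y, min(y + block, len(out))):
            out[yy] = [0] * len(out[yy])
    return out

tags = ["fantasy", "watercolor", "dragon"]
pool = ["car", "city", "food", "tree", "sky"]
print(randomize_tags(tags, pool))  # three decoy tags unrelated to the art
```

Either transform on its own corrupts the image-to-caption pairing that text-to-image training depends on; applying both makes the scraped sample close to useless without visibly altering what a human visitor sees on the portfolio page.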
In 2019, then-President Trump tweeted a detailed image of a heavily damaged Iranian launch pad captured by a classified military satellite.
The image, which was declassified in 2022, revealed what many in the commercial Earth observation industry suspected: that U.S. defense had the ability to capture images at a staggeringly sharp 10-centimeter resolution.
(In comparison, the biggest optical imagery providers today collect images at a 30-centimeter resolution, which is algorithmically improved to 15 centimeters.)
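That “algorithmically improved” step is, at heart, super-resolution. Commercial providers use learned models for it; the sketch below shows only the simplest form of the idea, doubling a grid’s resolution with plain bilinear interpolation, and is not any vendor’s actual pipeline.

```python
def upsample_2x(image):
    """Double the resolution of a 2D grid of pixel values
    via bilinear interpolation."""
    h, w = len(image), len(image[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # Map the output coordinate back into source space.
            sy, sx = y / 2, x / 2
            y0, x0 = min(int(sy), h - 1), min(int(sx), w - 1)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

coarse = [[0, 100], [100, 0]]   # 2x2 grid of "30 cm" pixels
fine = upsample_2x(coarse)      # 4x4 grid, nominally "15 cm" pixels
print(len(fine), len(fine[0]))  # 4 4
```

Interpolation like this only smooths existing samples; learned super-resolution models go further by hallucinating plausible high-frequency detail, which is why the improved figure is a nominal rather than a native resolution.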
Now, the company says it has closed $35 million in Series A-1 financing, at an up round valuation.
Right now, Albedo is working toward launch of its first commercial satellite in the first half of 2025.
Samsung’s Galaxy S24 line arrives with camera improvements and generative AI tricks. Starting at $800, the new flagships offer brighter screens and a slew of new photo editing tools. No awards for correctly guessing that today’s big Unpacked news is all about the Samsung Galaxy S24 series.
As anticipated, the Korean hardware giant just unveiled its latest flagship line, including the Galaxy S24, Galaxy S24+ and Galaxy S24 Ultra.
Writes Samsung, “After great shots are captured, innovative Galaxy AI editing tools enable simple edits like erase, recompose, and remaster.”
On the new Galaxy S24 series, Samsung’s Notes, Voice Recorder and Keyboard apps will use Gemini Pro to deliver better summarization features.
The S24, S24+ and S24 Ultra sport 6.2-, 6.7- and 6.8-inch displays, respectively.
Not uncommonly, KYC authentication involves “ID images,” or cross-checked selfies used to confirm a person is who they say they are.
There’s no evidence that gen AI tools have been used to fool a real KYC system — yet.
But the ease with which relatively convincing deepfaked ID images can be created is cause for alarm.
Feeding deepfaked KYC images to an app is even easier than creating them.
The takeaway is that KYC, which was already hit-or-miss, could soon become effectively useless as a security measure.
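The cross-check at the heart of an ID-image flow can be sketched as comparing face embeddings of the ID photo and the live selfie. Real KYC systems use learned face-recognition models plus liveness detection; the vectors, function names, and threshold below are made-up stand-ins for illustration only.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(id_embedding, selfie_embedding, threshold=0.9):
    """Accept the check if the two face embeddings are close enough."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

id_vec = [0.1, 0.8, 0.3]           # embedding of the ID photo
selfie_vec = [0.12, 0.79, 0.31]    # embedding of the submitted selfie
print(faces_match(id_vec, selfie_vec))  # True
```

The weakness the deepfake threat exploits is visible here: a selfie generated *from* the ID photo will, by construction, produce an embedding close to the target’s, so similarity alone cannot distinguish a live customer from a synthetic one.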
Google’s making the second generation of Imagen, its AI model that can create and edit images given a text prompt, more widely available — at least to Google Cloud customers using Vertex AI who’ve been approved for access.
Text and logo generation brings Imagen in line with other leading image-generating models, like OpenAI’s DALL-E 3 and Amazon’s recently launched Titan Image Generator.
These techniques also enhance Imagen 2’s multilingual understanding, Google says, allowing the model to translate a prompt in one language into an output in another.
Google didn’t reveal the data that it used to train Imagen 2, which — while disappointing — doesn’t exactly come as a surprise.
Instead, Google offers an indemnification policy that protects eligible Vertex AI customers from copyright claims related both to Google’s use of training data and Imagen 2 outputs.