Adobe Also Joining the Race: Developing Generative Video Technology

Offered as an answer of sorts to OpenAI’s Sora, Google’s Imagen 2 and models from the growing number of startups in the nascent generative AI video space, Adobe’s model is part of the company’s expanding Firefly family of generative AI products. Given the mixed reception of earlier Firefly models, the lack of a release time frame for the video model doesn’t instill a lot of confidence that it will avoid the same fate. And that, I’d say, captures the overall tone of Adobe’s generative video presser: the company is clearly trying to signal that it’s thinking about generative video, if only in a preliminary sense.

Adobe is sharing few specifics about its entry into the AI video generation market. The new model, set to launch later this year as part of the Firefly series, will let users create footage from prompts or reference images, and will power new features in Premiere Pro, Adobe’s flagship video editing software: object addition, object removal and generative extend.

Object addition enables users to insert objects within a specified segment of a video clip, while object removal allows for the removal of unwanted objects like boom mics or coffee cups in the background. The generative extend feature adds extra frames to the beginning or end of a clip, helping to sync visuals with sound or evoke a stronger emotional response.

These tools also respond to growing concerns around deepfakes: Adobe plans to attach Content Credentials to its AI-generated media, embedding metadata that identifies the footage as AI-generated and credits the model that produced it. As for the training data behind the model, Adobe declines to disclose its sources or its compensation policies for content contributors.

  • It has been reported that Adobe is paying photographers and artists on its stock media platform, Adobe Stock, for short video clips to train its AI model, with higher-quality footage commanding higher rates.
  • This approach stands in stark contrast to that of AI video competitors such as OpenAI, which has been accused of scraping publicly available web data without credit or payment.

To address these concerns and position itself as a responsible, trustworthy vendor, Adobe has implemented an IP indemnity policy and will use a generative credits system for its AI video features. Customers with a paid Adobe Creative Cloud subscription will receive a set number of credits each month, with additional credits needed for more complex tasks.

The big question remains: will Adobe’s AI-powered video features be worth the cost? Previous models in the Firefly series have drawn criticism as underwhelming and flawed, fueling doubts about the effectiveness and reliability of the upcoming video model. Adobe declined to give live demos of the new features, and it plans to collaborate with third-party vendors, including OpenAI, on future developments; discussions are reportedly already underway with other potential partners such as Pika and Runway.

Despite these prospective partnerships, Adobe’s generative video efforts fail to impress, amounting so far to ideas and demos rather than concrete products. As OpenAI’s Sora and DALL-E 3, among other innovations, continue to shape the market, Adobe has much to prove if it is to retain its competitive edge and capitalize on this emerging technology.

Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.

