Adobe is previewing AI video tools coming later this year
On Wednesday, Adobe unveiled Firefly AI video production tools that will arrive in beta later this year. As is often the case with AI, the examples are equal parts impressive and unsettling as the company slowly integrates tools designed to automate much of the creative work its paying user base is hired to do today. Echoing the AI marketing found elsewhere in the tech industry, Adobe pitches it all as an assistive technology that “helps take the tedium out of post-production.”
Adobe describes the new Firefly-powered text-to-video, Generative Extend (which will be available in Premiere Pro), and AI image-to-video tools as helping editors with tasks such as “navigating gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions, and searching for the perfect b-roll.” The company says the tools will give video editors “more time to explore new creative ideas, the part of the job they love.” (To take Adobe at face value, you’d have to believe employers won’t simply increase their output demands on editors once the industry fully embraces these AI tools. Or pay them less. Or hire fewer of them. But I digress.)
Firefly Text-to-Video lets you – you guessed it – create AI-generated videos from text prompts. But it also includes tools to control camera angle, movement and zoom. It can take a shot with gaps in its timeline and fill them in. It can even take a still reference image and turn it into a convincing AI video. Adobe says the video model is strongest with “videos of the natural world,” which can help create establishing shots or b-roll without a huge budget.
For an example of how convincing the technology looks, check out Adobe’s examples in the promo video:
Although these are samples hand-picked by a company trying to sell you its products, their quality is undeniable. Detailed prompts asking for a shot of an erupting volcano, a relaxed dog in a field of wildflowers or (showing it can get stranger still) little furry monsters having a dance party produce just that. If these results are representative of the tools’ typical output (by no means guaranteed), TV, film and commercial production will soon have powerful shortcuts available – for better or worse.
Meanwhile, Adobe’s image-to-video example starts with an uploaded galaxy image. A text prompt instructs it to transform the scene into a video that pulls back from the star system to reveal the inside of a human eye. The company’s Generative Extend demo shows two people walking across a forest stream; an AI-generated segment fills a gap in the footage. (It was convincing enough that I couldn’t tell which part of the clip was AI-generated.)
Reuters reports that the tool will only generate five-second clips, at least initially. To Adobe’s credit, it says its Firefly Video Model is designed to be commercially safe and trained only on content the company has permission to use. “We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters,” Adobe’s VP of Generative AI, Alexandru Costin, told Reuters. The company also emphasized that it never trains on users’ content. Whether it puts those users out of work, however, is another matter entirely.
Adobe says its new video models will be available in beta later this year. You can sign up for the waitlist to try them.