Adobe Firefly adds new video tools, AI models & sound effects

Adobe announced enhancements to its Firefly Video Model, introducing improved motion fidelity and advanced video controls designed to accelerate workflows and deliver the precision and style creators need to elevate their storytelling. The platform also added new generative AI partner models within Generate Video on Firefly, giving users the flexibility to choose the best model for their creative needs across image, video, and sound.
Additionally, the platform rolled out new workflow tools that give creators greater control over a video’s composition and style. Users can now layer in custom-generated sound effects directly within the Firefly web app and begin experimenting with AI-powered, avatar-led videos to expand their creative possibilities.
Improving the Firefly model and expanding options
Among the strengths of the Firefly Video Model is its ability to generate dynamic landscapes, from natural vistas to urban environments. The model also handles animal motion and behaviour, atmospheric elements such as weather patterns and particle effects, and both 2D and 3D animation. The Video Model’s newly improved motion fidelity means video generations now move more naturally, with smoother transitions and lifelike accuracy.
Creators enjoy experimenting with different styles, so the platform is continuously expanding the models offered inside the Firefly app. Recently, it added Runway’s Gen-4 Video and Google’s Veo3 with Audio to Firefly Boards, and Veo3 with Audio to Generate Video. More partner models are coming soon to the Firefly app:
- Topaz Labs’ Image and Video Upscalers and Moonvalley’s Marey will be launching soon in Firefly Boards.
- Luma AI’s Ray 2 and Pika 2.2, which are already available in Boards, will soon be added to Generate Video.
From storyboarding to fully animated video outputs, Firefly web and mobile apps let users choose the AI model that best fits each part of their workflows, while keeping their style consistent and creative control intact.
Advanced video controls
Whether users are creating for mobile, widescreen, or social, they can easily select vertical, horizontal, or square aspect ratios to match their format, with no extra editing required.
The platform is also introducing other powerful new features designed to save time and elevate creative output:
Composition Reference for Video
This will bring structure and consistency to video creations. Users can upload a reference video, describe their vision, and Firefly will generate a new video that transfers the original composition to the new generation, making it easier to maintain visual flow across scenes or repurpose existing content with a fresh look.
Style Presets
It will be possible to apply a distinct visual style to videos with a single click. Users can choose from presets like claymation, anime, line art, or 2D to instantly set the tone. Whether they’re pitching a concept, building a creative brief, or finalising output, Style Presets help them stay consistent and speed up their workflow.
Keyframe Cropping
Users can stay in their creative flow with intuitive cropping tools. They can upload first and last frames, select how their image will be cropped, describe scenes, and Firefly will generate a video that fits their format.
Composition Reference, Style Presets, and Keyframe Cropping are built to give creators more control, more speed, and more creative freedom. And they’re just the beginning.
Bring your stories to life with sound effects and virtual avatars
Whether users are building a tutorial, pitching a concept, or creating content at scale, Firefly now gives them even more ways to enhance their storytelling with two new features. With Adobe's commercially safe Generate Sound Effects (beta), they can layer in custom audio using just a prompt or their voice, adding emotion, energy, and cinematic polish to every scene. And with Text to Avatar (beta), users can turn scripts into avatar-led videos with just a few clicks.
Generate Sound Effects (beta)
Sound is a powerful storytelling tool that adds emotion and depth to videos. With Generate Sound Effects (beta), users can create custom sounds, like a lion’s roar or ambient nature sounds, that enhance their visuals. Like other Firefly generative AI models, Generate Sound Effects (beta) is commercially safe, so users can create with confidence.
Once a video is complete, users can export it directly to Adobe Express to create polished, share-ready content for all their social channels, or export to Premiere Pro to add the video to an existing timeline.
Text to Avatar (beta)
With Text to Avatar (beta), users can turn their scripts into engaging, avatar-led videos in just a few clicks.
They can choose from a diverse library of avatars, customise their background with a colour, image, or video, and select the accent that best fits their video. Firefly handles the rest.
Here are a few ways creators are using Text to Avatar:
- Delivering clear, engaging video lessons or FAQs with a virtual presenter
- Transforming blog posts or articles into scalable video content for social media
- Pitching ideas or building internal training materials with a human touch
The platform is also introducing Enhance Prompt within the Generate Video module on Firefly web, a new feature designed to help users get better results faster. Enhance Prompt takes the original input and adds language that helps Firefly better understand creative intent, removing ambiguity, sharpening direction, and speeding up the workflow.