
Create customized sounds with Project Super Sonic
Adobe presents a prototype that combines text, object recognition and voice input to simplify the generation of audio effects in videos
Isabella V, 15 October 2024

Adobe has unveiled Project Super Sonic, a prototype that combines text, object recognition and voice input to generate custom audio effects. This innovative tool promises to simplify and enrich sound production in video projects.

Key Points:

  •  Adobe unveils Project Super Sonic, an innovative prototype for creating audio effects.
  •  The technology integrates text, object recognition and voice input to generate sounds.
  •  Users can imitate sounds with their own voice to achieve customized results.
  •  The project currently remains a demo, but has potential for the future.

In the world of video creation, audio is as crucial as the images. Aiming to simplify sound production, Adobe presented Project Super Sonic at its annual Max conference, a demo that offers a glimpse of future possibilities in audio effects generation. The prototype combines several technologies to let users quickly create soundtracks and sound effects using text, object recognition and, perhaps most surprisingly, their own voice.

Generating audio from a text prompt is not new; it is already available in solutions such as those from ElevenLabs. What makes this project distinctive is its use of object recognition: users can click on specific elements in a video frame and the system automatically generates the corresponding sound, an interesting integration of different models into a cohesive workflow.

The real twist, however, is the ability to record voice sounds or other acoustic input while watching the video. This lets users provide expressive input that the system can interpret and transform into appropriate audio, making the creative process far more interactive. Justin Salamon, head of Sound Design AI at Adobe, emphasized that the main goal is to give users more control over sound production: by analyzing the vocal characteristics and the sound spectrum of the input, the system guides audio generation accordingly. Users are also expected to be able to use other sound sources, such as clapping or playing an instrument, to achieve unique results.

It is worth noting that Project Super Sonic is currently one of many “previews” presented by Adobe, designed to show potential future developments, with no guarantee of inclusion in commercial versions of the software. However, the team’s previous work on features such as Generative Extend for the audio side of Firefly suggests a good chance of future implementation. For now, Project Super Sonic remains a promising idea focused on customization and creative control, hinting at interesting developments for video professionals.
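To make the voice-driven idea more concrete, here is a minimal sketch, not Adobe's implementation, of how a recorded vocal imitation could be reduced to simple control signals (a loudness envelope, spectral brightness and event timing) that a generative audio model might then follow. The feature extraction uses standard librosa calls; the file name and the generate_sfx interface are purely hypothetical placeholders.

```python
# Sketch only: turn a vocal imitation into rough control signals for
# a (hypothetical) conditioned sound-effect generator.
import numpy as np
import librosa

def describe_vocal_input(path: str, sr: int = 22050) -> dict:
    """Summarize a vocal imitation as simple control signals."""
    y, sr = librosa.load(path, sr=sr)
    rms = librosa.feature.rms(y=y)[0]                             # loudness envelope
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]   # spectral "brightness"
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time") # event timing
    return {
        "energy_envelope": rms / (rms.max() + 1e-8),
        "brightness": centroid,
        "event_times": onsets,
        "duration": len(y) / sr,
    }

def generate_sfx(prompt: str, controls: dict) -> np.ndarray:
    """Hypothetical interface: a real system would condition a generative
    audio model on the text prompt and the extracted control signals."""
    raise NotImplementedError("placeholder for a conditioned audio generator")

# Example usage (assumes a short recording of a vocal imitation on disk):
# controls = describe_vocal_input("whoosh_imitation.wav")
# audio = generate_sfx("a heavy sword whoosh", controls)
```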

Innovation in this field could mark a new era in audio production for content creators.