With the help of Google Labs, Google DeepMind unveiled a new version of MusicFX DJ that allows anyone to create music reactively with live controls. The team also announced new features for its Music AI toolkit, Music AI Sandbox, and demonstrated new AI music technologies in Dream Track, a set of tools creators can use to generate professional-sounding instrumentals for YouTube Shorts and videos.
The new MusicFX DJ
The new version of MusicFX DJ includes a set of more powerful controls, a redesigned graphical interface, finer-grained sound tuning, and changes to the model's behavior. These capabilities let players create and steer a continuous stream of music, pass a session to friends, and join in a session together.
Together with Jacob Collier, a six-time GRAMMY Award-winning singer, songwriter, producer, and multi-instrumentalist, we developed these improvements to enrich the DJ experience and expand the possibilities of the MusicFX DJ application.
Unlike other DJ tools that combine two or more existing music tracks, MusicFX DJ creates new tracks from text prompts supplied by the player. With MusicFX DJ, players can choose favorite genres, instruments, and feels, perform a live DJ set, or explore melodies, timbres, and rhythms to use in the studio.
Two novel approaches underpin MusicFX DJ. First, Google adapted a generative music model, originally designed for offline generation, to produce music in a real-time, streaming fashion. This was achieved by training it to generate the next chunk of music conditioned on the previously generated audio and the player's text input.
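The streaming idea described above can be sketched as an autoregressive loop: each step feeds the audio generated so far back into the model together with the current text prompt. The sketch below is purely illustrative; `next_chunk`, the sample rate, and the chunk size are assumptions standing in for the real trained model, which the article does not detail.

```python
import zlib
import numpy as np

SAMPLE_RATE = 16_000         # samples per second (assumed for illustration)
CHUNK_SAMPLES = SAMPLE_RATE  # one-second chunks (assumed for illustration)

def next_chunk(context: np.ndarray, prompt: str) -> np.ndarray:
    """Stand-in for the trained model: emit the next chunk of audio
    conditioned on everything generated so far plus the text prompt.
    Here it is just seeded noise; the real system uses a neural network."""
    seed = (zlib.crc32(prompt.encode()) + len(context)) % (2**32)
    return np.random.default_rng(seed).standard_normal(CHUNK_SAMPLES)

def stream(prompt_at, n_chunks: int) -> np.ndarray:
    """Autoregressive streaming loop: each step feeds back the audio
    generated so far, and the text prompt may change mid-stream."""
    audio = np.zeros(0)
    for step in range(n_chunks):
        chunk = next_chunk(audio, prompt_at(step))
        audio = np.concatenate([audio, chunk])
    return audio

# Example: switch the prompt halfway through a four-chunk stream.
audio = stream(lambda s: "ambient synths" if s < 2 else "drum and bass", 4)
```

Because each chunk is conditioned on the full history, the model can stay musically coherent while the player changes the prompt live.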
Second, unlike traditional text-to-music models, which condition on a single, more or less fixed prompt, this model can mix multiple text prompts and vary the mixture over time.
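One plausible way to realize this prompt mixing is to blend the embeddings of several prompts with normalized, time-varying weights. The sketch below is an assumption, not the published method: `embed` is a toy stand-in for a real text encoder, and the prompt strings are invented examples.

```python
import zlib
import numpy as np

EMBED_DIM = 8  # toy embedding size (assumed)

def embed(prompt: str) -> np.ndarray:
    """Toy stand-in for a text encoder: a deterministic pseudo-random
    vector derived from the prompt string."""
    seed = zlib.crc32(prompt.encode())
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

def mix_prompts(weighted_prompts: dict) -> np.ndarray:
    """Blend several prompt embeddings using normalized weights."""
    total = sum(weighted_prompts.values())
    mixed = np.zeros(EMBED_DIM)
    for prompt, weight in weighted_prompts.items():
        mixed += (weight / total) * embed(prompt)
    return mixed

def conditioning_at(t: float) -> np.ndarray:
    """Crossfade the conditioning from one prompt to another as the
    performance progresses (t runs from 0.0 to 1.0)."""
    return mix_prompts({"lo-fi beats": 1.0 - t, "orchestral strings": t})
```

At `t = 0.0` the conditioning is entirely the first prompt, at `t = 1.0` entirely the second, and intermediate values of `t` produce a smooth blend, which is the kind of live control a DJ-style interface needs.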
Inspired by Jacob's collaborative approach with other artists, listeners, and viewers, we made it easier to share music created with MusicFX DJ. Friends can watch a playback of a session and even DJ 60 seconds of the MusicFX audio themselves, steering the music in a completely different direction.
Music AI Sandbox
Music AI Sandbox is an innovative set of music AI tools designed to boost the productivity of musicians, producers, and songwriters who work with us through YouTube's Music AI Lab. Although the program remains relatively small, it has been a valuable way to trial new and innovative generative music products and services and to gather early feedback from artists of all kinds and other stakeholders in the music business.
Since our first live demonstration of Music AI Sandbox at this year's I/O, we have been collaborating with Google's Technology & Society team to improve the user interface and engaging with artists to gather valuable feedback. This work has enabled major enhancements to the models that underpin this set of tools.
Built on state-of-the-art techniques, the new version of Music AI Sandbox also employs the models behind MusicFX DJ, alongside familiar functions such as loop generation, sound transformation, and in-painting for connecting sections of a track.
Building on our ongoing partnership with YouTube, we have expanded our Dream Track experiment, letting U.S. creators try out a wider range of genres and prompts with sophisticated text-to-music models.
Recent updates to our music generation models use a new reinforcement learning process, resulting in higher-quality audio and closer adherence to the user's specific text input.