Meta Launches an Open-Source AI Music Generator
Meta is giving us a glimpse of the future with MusicGen, a new open-source AI model that can spit out short songs based on your input. While it’s far from a finished product, MusicGen is impressive, and you can condition the AI on an existing melody to improve its results.
To test the MusicGen AI, simply visit the Hugging Face website, wait for the model to load, and start punching away. MusicGen relies on text input, so you need to describe exactly what you want to hear—a 90s R&B track with a vibraphone, for example, or a metal song with a cumbia rhythm.
The AI produces a track that’s just 12 seconds long, and of course, the results are wildly inconsistent. Simple prompts seem to work best. And if you want to get very specific with the AI, you can give it an audio file of an existing song. The AI will then “condition” itself on that song’s melody, though it will not save any uploaded songs to its database.
“We present MusicGen: a simple and controllable music generation model. MusicGen can be prompted by both text and melody. We release code (MIT) and models (CC-BY-NC) for open research, reproducibility, and for the music community: https://t.co/OkYjL4xDN7”
Felix Kreuk (@FelixKreuk), June 9, 2023
Like Google’s MusicLM AI, which we tested last month, MusicGen’s output can sound a bit watery, foggy, or smeared. Instruments don’t have a ton of definition, especially when you give the AI an ambitious prompt. In my testing, the AI rarely rendered every “instrument” in a track cleanly, though one or two usually came through crisp and well defined.