Magenta RealTime: An Open-Weights Live Music Model (via) Fun new "live music model" release from Google DeepMind:
Today, we’re happy to share a research preview of Magenta RealTime (Magenta RT), an open-weights live music model that allows you to interactively create, control and perform music in the moment. [...]
As an open-weights model, Magenta RT is targeted towards eventually running locally on consumer hardware (currently runs on free-tier Colab TPUs). It is an 800 million parameter autoregressive transformer model trained on ~190k hours of stock music from multiple sources, mostly instrumental.
No details yet on what that training data is; hopefully they'll describe it in the forthcoming paper.
For the moment, the code is on GitHub under an Apache 2.0 license and the weights are on Hugging Face under Creative Commons Attribution 4.0 International.
The easiest way to try the model out is using the provided Magenta_RT_Demo.ipynb Colab notebook.
It takes about ten minutes to set up, but once you've run the first few cells you can start generating and steering music interactively.
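Under the hood, the notebook drives the model in a loop: it generates short audio chunks one at a time, carrying state forward so each chunk continues from the last, while a text-prompt style embedding conditions the output. Here's a minimal sketch of that pattern based on the examples in the Magenta RT repository; the specific names (`system.MagentaRT`, `embed_style`, `generate_chunk`, `audio.concatenate`) are my assumptions and may not match the actual API exactly:

```python
# A rough sketch of chunked, stateful generation with Magenta RT.
# Module and function names below are assumptions based on the repo's
# examples, not a verified API reference.
from magenta_rt import audio, system

mrt = system.MagentaRT()  # load the 800M-parameter model

# Embed a text prompt into the style space used to condition generation.
style = system.embed_style("warm acid bassline")

# Generate audio incrementally, passing state forward so each new
# chunk picks up where the previous one left off.
state = None
chunks = []
for _ in range(8):
    state, chunk = mrt.generate_chunk(state=state, style=style)
    chunks.append(chunk)

# Crossfade the chunks into one continuous clip.
clip = audio.concatenate(chunks, crossfade_time=mrt.crossfade_length)
```

The Colab demo wraps a loop like this in interactive widgets, so you can adjust the style prompts while the audio streams.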