
A year ago, Stability AI, the London-based startup behind the open source image-generating AI model Stable Diffusion, quietly released Dance Diffusion, a model that can generate songs and sound effects from a text description.
Dance Diffusion was Stability AI’s first foray into generative audio, and it signaled a meaningful investment — and acute interest, seemingly — from the company in the nascent field of AI music creation tools. But for nearly a year after Dance Diffusion was announced, all seemed quiet on the generative audio front — at least as far as it concerned Stability’s efforts.
The research organization Stability funded to create the model, Harmonai, stopped updating Dance Diffusion sometime last year. (Historically, Stability has provided resources and compute to outside groups rather than build models entirely in-house.) And Dance Diffusion never gained a more polished release; even today, installing it requires working directly with the source code, as there’s no user interface to speak of.
Now, under pressure from investors to translate over $100 million in capital into revenue-generating products, Stability is recommitting to audio in a big way.
Today marks the release of Stable Audio, a tool that Stability claims is the first capable of creating “high-quality,” 44.1 kHz music for commercial use via a technique called latent diffusion. Trained on audio metadata as well as audio files’ durations — and start times — Stability says that Stable Audio’s underlying, roughly 1.2-billion-parameter model affords greater control over the content and length of synthesized audio than the generative music tools released before it.
“Stability AI is on a mission to unlock humanity’s potential by building foundational AI models across a number of content types or ‘modalities,’” Ed Newton-Rex, VP of audio for Stability AI, told technewss in an email interview. “We started with Stable Diffusion and have grown to include languages, code and now music. We believe the future of generative AI is multimodality.”
Stable Audio wasn’t developed by Harmonai — or, rather, it wasn’t developed by Harmonai alone. Stability’s audio team, formalized in April, created a new model inspired by Dance Diffusion to underpin Stable Audio, which Harmonai then trained.
Harmonai now serves as Stability’s AI music research arm, Newton-Rex, who joined Stability last year after tenures at TikTok and Snap, tells me.
“Dance Diffusion generated short, random audio clips from a limited sound palette, and the user had to fine-tune the model themselves if they wanted any control. Stable Audio can generate longer audio, and the user can guide generation using a text prompt and by setting the desired duration,” Newton-Rex said. “Some prompts work fantastically, like EDM and more beat-driven music, as well as ambient music, and some generate audio that's a bit more ‘out there,’ like more melodic music, classical and jazz.”
Stability turned down our repeated requests to try Stable Audio ahead of its launch. For now, and perhaps in perpetuity, Stable Audio can only be used through a web app, which wasn’t live until this morning. In a move that’s sure to irk supporters of its open research mission, Stability hasn’t announced plans to release the model behind Stable Audio in open source.
But Stability was amenable to sending samples showcasing what the model can accomplish across a range of genres, mainly EDM, given brief prompts.
While they very well could’ve been cherry-picked, the samples sound — at least to this reporter’s ears — more coherent, melodic and, for lack of a better word, musical than many of the “songs” from the audio generation models released so far. (See Meta’s AudioGen and MusicGen, Riffusion, OpenAI’s Jukebox, Google’s MusicLM and so on.) Are they perfect? Clearly not — they’re lacking in creativity, for one. But if I heard the ambient techno track below playing in a hotel lobby somewhere, I probably wouldn’t assume AI was the creator.
As with generative image, speech and video tools, yielding the best output from Stable Audio requires engineering a prompt that captures the nuances of the song you’re attempting to generate — including the genre and tempo, prominent instruments and even the feelings or emotions the song evokes.
For the techno track, Stability tells me they used the prompt “Ambient Techno, meditation, Scandinavian Forest, 808 drum machine, 808 kick, claps, shaker, synthesizer, synth bass, Synth Drones, beautiful, peaceful, Ethereal, Natural, 122 BPM, Instrumental”; for the track below, “Trance, Ibiza, Beach, Sun, 4 AM, Progressive, Synthesizer, 909, Dramatic Chords, Choir, Euphoric, Nostalgic, Dynamic, Flowing.”
And this sample was generated with “Disco, Driving, Drum, Machine, Synthesizer, Bass, Piano, Guitars, Instrumental, Clubby, Euphoric, Chicago, New York, 115 BPM”:
For comparison, I ran the prompt above through MusicLM via Google’s AI Test Kitchen app on the web. The result wasn’t bad, necessarily. But MusicLM interpreted the prompt in a very obviously repetitive, reductive way:
One of the most striking things about the songs Stable Audio produces is how long they stay coherent — about 90 seconds. Other AI models can generate long songs, but beyond a short duration — a few seconds at most — they often devolve into random, discordant noise.
The secret is the aforementioned latent diffusion, a technique similar to that used by Stable Diffusion to generate images. The model powering Stable Audio learns how to gradually subtract noise from a starting song made almost entirely of noise, moving it closer — slowly but surely, step by step — to the text description.
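The denoising loop at the heart of that process can be sketched in a few lines. This is a toy illustration only, assuming a made-up `predict_noise` stand-in for the trained denoising network; the real Stable Audio model is far more elaborate, and none of these names come from Stability’s code.

```python
import numpy as np

def predict_noise(latent, text_embedding, step):
    # Toy stand-in for a trained denoiser: it treats the gap between the
    # current latent and the text conditioning as the "noise" to remove.
    return latent - text_embedding

def generate_latent(text_embedding, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    # Start from a latent made almost entirely of noise.
    latent = rng.standard_normal(text_embedding.shape)
    for step in range(steps, 0, -1):
        noise = predict_noise(latent, text_embedding, step)
        # Subtract a small fraction of the predicted noise each step,
        # nudging the latent toward the text description.
        latent = latent - noise / steps
    return latent

# Hypothetical usage: a fixed vector plays the role of the text embedding.
embedding = np.ones(8)
denoised = generate_latent(embedding, steps=200)
```

In the actual system, the denoised latent would then be decoded into 44.1 kHz audio by a separate decoder stage; the sketch above only shows the iterative noise-subtraction idea the article describes.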
It’s not just songs that Stable Audio can generate. The tool can replicate the sound of a car passing by, or of a drum solo.
Here’s the car:
And the drum solo:
Stable Audio is far from the first model to leverage latent diffusion in music generation, it’s worth pointing out. But it’s one of the more polished in terms of musicality — and fidelity.
To train Stable Audio, Stability AI partnered with the commercial music library AudioSparx, which supplied a collection of songs — around 800,000 in total — from its catalog of largely independent artists. Steps were taken to filter out vocal tracks, according to Newton-Rex — presumably over the potential ethical and copyright quandaries around “deepfaked” vocals.
Somewhat surprisingly, Stability isn’t filtering out prompts that could land it in legal crosshairs. While tools like Google’s MusicLM throw an error message if you type something like “along the lines of Barry Manilow,” Stable Audio doesn’t — at least not now.
When asked point blank if someone could use Stable Audio to generate songs in the style of popular artists like Harry Styles or The Eagles, Newton-Rex said that the tool’s limited by the music in its training data, which doesn’t include music from major labels. That may be so. But a cursory search of AudioSparx’s library turns up thousands of songs that themselves are “in the style of” artists like The Beatles, AC/DC and so on, which seems like a loophole to me.
“Stable Audio is designed primarily to generate instrumental music, so misinformation and vocal deepfakes aren’t likely to be an issue,” Newton-Rex said. “In general, however, we’re actively working to combat emerging risks in AI by implementing content authenticity standards and watermarking in our imaging models so that users and platforms can identify AI-assisted content generated through our hosted services … We plan to implement labeling of this nature in our audio models too.”
Increasingly, homemade tracks that use generative AI to conjure familiar sounds that can be passed off as authentic, or at least close enough, have been going viral. Just last month, a Discord community dedicated to generative audio released an entire album using an AI-generated copy of Travis Scott’s voice — attracting the wrath of the label representing him.
Music labels have been quick to flag AI-generated tracks to streaming partners like Spotify and SoundCloud, citing intellectual property concerns — and they've generally been victorious. But there's still a lack of clarity on whether “deepfake” music violates the copyright of artists, labels and other rights holders.
And unfortunately for artists, it’ll be a while before clarity arrives. A federal judge ruled last month that AI-generated art can’t be copyrighted. But the U.S. Copyright Office hasn’t taken a firm stance yet, only recently beginning to seek public input on copyright issues as they relate to AI.
Stability takes the view that Stable Audio users can monetize — but not necessarily copyright — their works, which is a step short of what other generative AI vendors have proposed. Last week, Microsoft announced that it would extend indemnification to protect commercial customers of its AI tools when they’re sued for copyright infringement based on the tools’ outputs.
Stability AI customers who pay $11.99 per month for the Pro tier of Stable Audio can generate 500 commercializable tracks up to 90 seconds long monthly. Free tier users are limited to 20 non-commercializable tracks at 20 seconds long per month. And users who wish to use AI-generated music from Stable Audio in apps, software or websites with more than 100,000 monthly active users have to sign up for an enterprise plan.
In the Stable Audio terms of service agreement, Stability makes it clear that it reserves the right to use both customers’ prompts and songs, as well as data like their activity on the tool, for a range of purposes, including developing future models and services. Customers agree to indemnify Stability in the event intellectual property claims are made against songs created with Stable Audio.
But, you might be wondering, will the creators of the audio on which Stable Audio was trained see even a small portion of that monthly fee? After all, Stability, like several of its generative AI rivals, has landed itself in hot water over training models on artists’ work without compensating or informing them.
As with Stability’s more recent image-generating models, Stable Audio does have an opt-out mechanism — although the onus for the most part lies on AudioSparx. Artists had the option to remove their work from the training dataset for the initial release of Stable Audio, and about 10% chose to do so, according to AudioSparx EVP Lee Johnson.
“We support our artists' decision to participate or not, and we're happy to provide them with this flexibility,” Johnson said via email.
Stability’s deal with AudioSparx covers revenue sharing between the two companies, with AudioSparx letting musicians on the platform share in the profits generated by Stable Audio if they opted into the initial training or decide to help train future versions of Stable Audio. It’s similar to the model being pursued by Adobe and Shutterstock with their generative AI tools, but Stability wasn’t forthcoming on the particulars of the deal, leaving unsaid how much artists can expect to be paid for their contributions.
Artists have reason to be wary, given Stability CEO Emad Mostaque’s propensity for exaggeration, dubious claims and outright mismanagement.
In April, Semafor reported that Stability AI was burning through cash, spurring an executive hunt to ramp up sales. According to Forbes, the company has repeatedly delayed or outright not paid wages and payroll taxes, leading AWS — which Stability uses for compute to train its models — to threaten to revoke Stability’s access to its GPU instances.
Stability AI recently raised $25 million through a convertible note (i.e., debt that converts to equity), bringing its total raised to over $125 million. But it hasn't closed new funding at a higher valuation; the startup was last valued at $1 billion. Stability was said to be seeking quadruple that within the next few months, despite stubbornly low revenues and a high burn rate.
Will Stable Audio turn the company’s fortunes around? Maybe. But considering the hurdles Stability has to clear, it’s safe to say it’s a bit of a long shot.