
Adobe is jumping into the generative AI game with the launch of a new family of AI models called Firefly.
Focused on bringing AI into Adobe’s suite of apps and services, specifically AI for generating media content, Firefly will be made up of multiple AI models “working across a variety of different use cases,” Adobe VP of generative AI Alexandru Costin told technewss in an email interview.
It’s an expansion of the generative AI tools Adobe introduced in Photoshop, Express and Lightroom during its annual Max conference last year, which let users create and edit objects, composites and effects by simply describing them. As the fervor around the tech grows, Adobe has raced to keep pace, for example allowing contributors to sell AI-generated artwork in its content marketplace.
“Firefly is the next step on our AI journey — bringing together our new ‘gentech’ models with decades of investment in imaging, typography, illustration and more to produce assets,” Costin said. “We'll bring this value to our customers' workflows where content is created across Creative Cloud, Experience Cloud and Document Cloud.”
Firefly as it exists today, in beta and without firm pricing (Adobe says that’s coming), offers a single model designed to generate images and text effects from descriptions. Developed using hundreds of millions of photos, the model will soon be able to create content across Adobe apps including Express, Photoshop, Illustrator and Adobe Experience Manager given a text prompt. (For now, you’ll have to visit a website to use it.)

Beyond basic text-to-image generation, Adobe’s first Firefly model can “transfer” different styles to existing images à la Prisma. Drawing on user-supplied descriptions, it also can apply styles or textures to lettering and fonts.
Adobe says that artwork created using Firefly models will contain metadata indicating that it’s partially — or wholly — AI-generated. That’s a practical consideration as well as a legal one; artists on platforms such as ArtStation have staged protests to voice their discontent with the torrent of new AI-generated art, while China recently became the first country to ban AI-generated media without watermarks.
“With Firefly, everyone who creates content — regardless of their experience or talent — will be able to use their own words to generate content the way they dream it up,” Costin said.
Building for creators
On a technical level, the first Firefly model isn’t dissimilar to text-to-image AI like OpenAI’s DALL-E 2 and Stable Diffusion. Like those models, it can transfer the style of one image to another and generate new images from text descriptions.
But Adobe claims that Firefly will avoid the ethical and logistical pitfalls to which many of its rivals have fallen victim. That’s a tall order.
AI systems like the first Firefly model “learn” to generate new images from text prompts by “training” on existing images, which often come from data sets that were scraped together by trawling public image hosting websites. Some experts suggest that training models using public images, even copyrighted ones, will be covered by fair use doctrine in the U.S. But it's a matter that's unlikely to be settled anytime soon — particularly in light of the contrasting laws being proposed overseas.
To wit, two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case that alleges they infringed on the rights of millions of artists by training their tools on web-scraped images. Stock image supplier Getty Images has taken Stability AI to court, separately, for reportedly using millions of images from its site without permission to train the art-generating model Stable Diffusion.
Beyond unresolved questions around artist and platform compensation, one of the more pressing issues with generative AI is its tendency to replicate images, text and more — including copyrighted content — from the data that was used to train it. Some image-hosting platforms have banned AI-generated content for fear of legal blowback, and experts have cautioned that generative AI tools could put companies at risk if they were to unwittingly incorporate copyrighted content generated by the tools into any of the products they sell.

Adobe’s solution, as it were, is training Firefly models exclusively on content from Adobe Stock, the company’s royalty-free media library, along with openly licensed and public domain content where the copyright has expired. In the future, users will be able to train and fine-tune Firefly models using their own content, Adobe says — steering the models’ outputs toward specific styles and design languages.
Adobe also says it’s exploring a compensation model for Stock contributors that’ll allow them to “monetize their talents” and benefit from any revenue Firefly generates. It might look something like Shutterstock’s recently launched Contributors Fund, which reimburses creators whose work is used to train AI art models.
But content creators will be able to opt out of training, Adobe says, by attaching a “do not train” credentials tag to their work.
“We understand there are questions about the impact generative AI will have on the ability of creators to benefit from their skills, maintain credit and control over their work, as well as questions about the viability of generated content in commercial settings,” Costin said. “We’re designing generative AI to support creators in benefiting from their skills and creativity.”
Artists have their reasons for opting out, like AI-generated artwork in their style that they believe doesn’t properly credit them. They also fear being associated with a model that can be used to generate objectionable content, such as ultra-violent images, biased depictions of gender, ethnicity and sexuality and nonconsensual deepfakes.
On the second point, Costin says that Firefly models were trained using “carefully curated” and “inclusive” image datasets and that Adobe employs a range of techniques to detect and block toxic content, including automated and human moderation and filters. History has shown that these sorts of measures can be bypassed, but Costin suggests that it’ll be a carefully guided — if imperfect — learning process.

“We've made a big investment in models to help prevent bias and harm in the content Firefly generates. Those models analyze both the prompts and the content to ensure Firefly generates a wide variety of images that represent a balance of cultures and ethnicities, as well as ensuring Firefly does not generate harmful images,” Costin said. “We will regularly update Firefly to improve its performance and mitigate harm and bias in its output. We also provide feedback mechanisms for our users to report potentially biased outputs or provide suggestions into our testing and development processes.”
The aforementioned opt-out mechanism — which comes after criticism from the creative community regarding Adobe’s AI policies — will be orchestrated through the Content Authenticity Initiative (CAI) and Coalition for Content Provenance and Authenticity, two associations founded to promote industry standard provenance metadata for media. (Adobe’s a member of both.) Adobe says that it’s pushing for industry adoption of the “do not train” tag so that it follows content wherever it’s used, published or stored, ensuring models aren’t trained on out-of-bounds content regardless of where the content ends up.
Adobe’s is but one of several efforts to afford artists more control over their art’s role in generative model training. In November, DeviantArt launched a new protection that relies on an HTML tag to prohibit the software robots that crawl pages for images from downloading those images for training sets. And AI startup Spawning, which has partnerships with platforms including ArtStation and Shutterstock, offers a tool that lets artists remove their images from datasets used to train AI models.
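HTML-tag-based protections like DeviantArt’s rely on crawlers voluntarily checking a robots-style meta directive before collecting images. As an illustrative sketch only — the `noai` token and the crawler behavior shown here are assumptions for demonstration, not DeviantArt’s documented specification — a well-behaved scraper might honor such a tag like this:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects comma-separated directives from <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            for token in attrs.get("content", "").lower().split(","):
                self.directives.add(token.strip())


def allows_ai_training(html: str) -> bool:
    """Return False if the page declares a 'noai' directive (hypothetical token)."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noai" not in parser.directives


page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
print(allows_ai_training(page))  # prints False: this page opts out of training
```

The obvious weakness — and the reason a metadata-level standard like Adobe’s proposed tag may be more durable — is that nothing forces a scraper to run a check like this at all.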
The CAI’s size — roughly 900 members — makes it more likely that Adobe’s proposals will gain some sort of traction. But there’s no guarantee. Artists could find themselves in a situation where they’re forced to use multiple opt-out tools to prevent their artwork from being trained on.
Copyright challenges
Adobe customers will have a different headache to deal with: figuring out whether they actually own the rights to Firefly-generated artwork.
In the U.S., the latest federal guidance isn’t especially clear on the copyright status of AI art. After initially rejecting copyrights for AI-generated images created by Midjourney, the U.S. Copyright Office said that copyright protection will “depend on the circumstances” — particularly “how the AI tool operates and how it was used to create the final work.”
Costin admits that the law on ownership of AI-generated art is a bit up in the air at the moment. But it’s Adobe’s belief, he says, that using its tools to add “creative input” to a generated image should be sufficient to allow a creator to obtain copyright.
“As always, creators will need to seek out the copyrights themselves and do what is necessary or required to obtain that ownership,” Costin said — hedging his bets somewhat.

Barring a major setback on the copyright or licensing front, Adobe plans to forge ahead with Firefly, eventually introducing models that generate not only images and text but illustrations, graphic designs, 3D models and more. Costin was adamant that it’s a major area of investment for Adobe, whose last big gamble — the $20 billion acquisition of startup Figma — is reportedly on the cusp of being blocked by a Department of Justice lawsuit.
With generative AI, Adobe’s playing for keeps. Firefly is nothing if not ambitious — if a little late to the party. Of course, Adobe has the benefit of a massive built-in customer base; Creative Cloud has 600 million monthly active users while Experience Cloud has 12,000 customers, including 87% of the Fortune 100.
That’s a lot of potential Firefly licenses to sell. And if the projections are right, it’d be a very lucrative new line of revenue from a per-customer perspective. Acumen Research and Consulting estimates that the market for generative AI will be worth more than $110 billion by 2030.
But only time will tell whether Adobe’s able to overcome the many hurdles (and competitors) standing in its way, not least of which is maintaining the costly compute necessary to continue developing and running new Firefly models. Legal and ethical roadblocks aside, Adobe has to make up for lost time — and mindshare, which is never an easy task in a hyper-competitive field.
Costin, all optimism, says that the company’s up for the challenge.
“Future Firefly models will leverage a variety of assets, tech and training data from Adobe and others,” he added. “We are designing generative AI to support creators in benefiting from their skills and creativity. By building Firefly directly into our customers' workflows, we can help creative professionals work more efficiently and spend their time on the higher value work that they love.”