Today, generative AI company Stability AI, which captured the public imagination last August with the open-source image generator Stable Diffusion, announced the beta release of Stable Diffusion XL (SDXL), its latest image-generation model. According to the press release, the model was built for enterprise clients and "excels at photorealism."
“SDXL brings a richness to image generation that is transformative across several industries, including graphic design and architecture,” said Stability AI CTO Tom Mason in the press release.
The SDXL beta is available through Stability's API and its DreamStudio suite, both targeted at enterprise developers. The company says SDXL produces more detailed imagery and composition than its predecessor, Stable Diffusion 2.1, including next-level photorealism, enhanced image composition and face generation, descriptive imagery from shorter prompts, and a greater capability to produce legible text.
SDXL also goes beyond text-to-image prompting to include image-to-image prompting (inputting one image to get variations of that image), inpainting (reconstructing missing parts of an image) and outpainting (constructing a seamless extension of an existing image).
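As a rough illustration of how an enterprise developer might exercise these modes, the sketch below assembles a text-to-image request for Stability's REST API. This is a minimal sketch, not official sample code: the endpoint paths and the SDXL engine id shown here are assumptions and should be checked against Stability's current API documentation.

```python
# Minimal sketch of preparing a request to Stability's generation API.
# ASSUMPTIONS: the endpoint paths and engine id below are illustrative and
# may not match the live API — consult Stability's API docs before use.

API_HOST = "https://api.stability.ai"
ENGINE = "stable-diffusion-xl-beta-v2-2-2"  # hypothetical SDXL beta engine id


def build_text_to_image_request(prompt: str, api_key: str):
    """Assemble the URL, headers, and JSON body for a text-to-image call."""
    url = f"{API_HOST}/v1/generation/{ENGINE}/text-to-image"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "application/json",
    }
    body = {
        "text_prompts": [{"text": prompt}],  # the prompt driving generation
        "width": 512,
        "height": 512,
        "samples": 1,  # number of images to generate
    }
    return url, headers, body


# The other modes follow the same pattern with different (assumed) paths and
# an image upload added to the request:
#   .../image-to-image          -> variations of an input image
#   .../image-to-image/masking  -> inpainting/outpainting guided by a mask

if __name__ == "__main__":
    url, headers, body = build_text_to_image_request(
        "a photorealistic alpine lake at dawn", "YOUR_API_KEY"
    )
    # An HTTP client such as `requests` would then send it, e.g.:
    # response = requests.post(url, headers=headers, json=body)
    print(url)
```

The actual send is left commented out since it requires a valid API key; the point is the shape of the request, which is shared across the text-to-image, image-to-image, and inpainting modes.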
Stable Diffusion 3.0 models are ‘still under development’
“We used the ‘XL’ label because this model is trained using 2.3 billion parameters whereas prior models were in the range of 900 million parameters,” Scott Draves, VP of engineering at Stability AI, told VentureBeat by email. Draves added that while the SDXL model is an improvement over the 2.0 model architecture, 3.0 models are still under development. “We will have more fundamental improvements when they are ready,” he said.
SDXL is only being released in beta to API and DreamStudio customers, he explained, because the company is still getting input from customers to refine the model. “We are interested in feedback on all aspects of the model’s capabilities and performance before we release it to the open-source community,” he said.
Stability AI faces challenges on several fronts
London-based Stability AI, founded in 2019, has been on a tear since exploding into the cultural zeitgeist last summer. Stable Diffusion 2.0 was released in November 2022, just three months after the initial model.
But the company has also been busy fending off a variety of challenges, including fierce competition from other AI image generators like Midjourney.
There has also been pushback from artists who object to the use of their works as training data for Stable Diffusion models. Last December, Spawning, an organization launched in September to build tools that give artists control over how their work is used in training data, announced that Stability AI would honor artists' requests to opt out of the training of Stable Diffusion 3.
That hasn’t stopped the lawsuits from starting, however: In January, three artists filed the first class-action copyright infringement lawsuit around AI art against Stability AI and Midjourney, while in February Getty Images filed a lawsuit claiming its images were misused by Stability AI.
And even though last month Stability AI CEO Emad Mostaque hinted at company plans to go public, last week Semafor reported that Stability AI “is burning through cash and has been slow to generate revenue, leading to an executive hunt to help ramp up sales.”