Stability AI is continuing its rapid pace of model development with today's debut of Stable LM 2 12B, a 12-billion-parameter update to its language model lineup.
While Stability AI is perhaps best known for its text-to-image Stable Diffusion generative AI model, the company has a much broader vision and set of models than just image generation.
Stable LM first launched in April 2023 as a large language model (LLM) for text generation, and it was updated with the Stable LM 2 1.6B model in January of this year.
The new Stable LM 2 12B dramatically expands Stable LM 2's capabilities, with more parameters and performance that the company claims surpasses larger models such as Llama 2 70B on certain benchmarks.
Stable LM 2 12B includes a base version and an instruction-tuned variant designed to enhance conversational skills across seven languages: English, Spanish, German, Italian, French, Portuguese, and Dutch. The new models are available for commercial use via a Stability AI membership, which is the company's model for generating revenue.
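To illustrate what working with the instruction-tuned variant might look like in practice, here is a minimal sketch using the Hugging Face transformers library. The repository name, precision setting, and German-language prompt are assumptions for illustration; the actual model identifier and access terms depend on Stability AI's membership and hosting arrangements.

```python
# A minimal sketch of chatting with the instruction-tuned variant via Hugging Face
# transformers. The repo name "stabilityai/stablelm-2-12b-chat" is an assumption;
# check Stability AI's model listings and membership terms for the real identifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-2-12b-chat"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 12B parameters; half precision keeps memory manageable
    device_map="auto",
)

# One of the seven supported languages (German) to illustrate the multilingual claim.
messages = [{"role": "user", "content": "Erkläre in zwei Sätzen, was ein Sprachmodell ist."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```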
"The instruct model is specifically designed and trained to interact with users in a conversational way," Carlos Riquelme, head of the language team at Stability AI told VentureBeat. "In addition, a significant effort has been made to make it safer."
The update comes less than a month after the resignation of co-founder and CEO Emad Mostaque amid accusations of mismanagement, but shows that the company continues to move forward and ship new model updates under new interim co-CEOs Shan Shan Wong and Christian Laforte.
Stability AI is aiming to balance performance and accessibility with Stable LM 2
Stability AI claims that Stable LM 2 12B strikes an optimal balance between power, accuracy, and accessibility.
With 12 billion parameters, it can handle a variety of tasks typically reserved for models with far more parameters and greater computational requirements. Benchmark results show Stable LM 2 12B achieving performance comparable to significantly larger models.

What's particularly noteworthy is that Stability AI used the same general approach to build the new 12B models as it did for the smaller 1.6B model.
"We foresee a future where models are not used in isolation but operate as part of broader systems where one, or maybe several, language models interact among themselves and use external software tools," Riquelme said. "Accordingly, in this direction, the 12B model has also been trained to be able to play this master role, by connecting to and calling a variety of functions and APIs that different users and organizations may find relevant to their needs."
The 1.6B model is getting better too
Not only is Stability AI adding more parameters to Stable LM 2, it's also improving the previously released 1.6B version.
Riquelme noted that smaller models like the 1.6B Stable LM can be useful for more specific, narrow tasks, while the larger 12B will still have more capability. That said, he emphasized that the updated Stable LM 2 1.6B model has an improved conversational style, is safer, and is better able to connect with other software tools.
"It’s the same size as before, so it’s equally fast and lightweight," Riquelme said. "On the other hand, the 12B is more performant, more reliable, while computationally heavier."
He explained that depending on the use case and constraints such as response time, memory and budget, different models will offer different tradeoffs.
"We don’t think there is a single optimal model size for every situation, and that’s why we offer two fairly different model sizes," he said.
