Stability AI, the startup behind the generative AI art tool Stable Diffusion, today open sourced a suite of text-generating AI models intended to compete against systems like OpenAI’s GPT-4.
Called StableLM and available in “alpha” on GitHub and Hugging Face, a platform for hosting AI models and code, the models can generate both text and code, Stability AI says, and “demonstrate how small and efficient models can deliver high performance with proper training.”
“Language models will be the backbone of our digital economy and we want everyone to have a say in their design,” the Stability AI team wrote in a blog post on the company’s website.
The models were trained on a dataset called The Pile, a mix of internet-scraped text samples from websites such as PubMed, StackExchange, and Wikipedia. But Stability AI claims it created a custom training set that is three times the size of the standard Pile.

Image Credits: Stability AI
Stability AI did not say in the blog post whether the StableLM models share the limitations of others, namely a tendency to generate toxic responses to certain prompts and to hallucinate (i.e., fabricate) facts. But given that The Pile contains profane, lewd, and otherwise fairly abrasive language, it wouldn’t be surprising if that were the case.
This reporter tried to test the models on Hugging Face, which provides a frontend to run them without having to configure the code from scratch. Unfortunately, I got an “out of capacity” error every time, which may have had to do with the models’ size, or with their popularity.
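For those who would rather skip the hosted demo and run the models themselves, a minimal sketch using the Hugging Face transformers library might look like the following. The checkpoint name and generation settings here are assumptions based on the alpha release, not official guidance, and the 7-billion-parameter model needs a fairly capable GPU:

```python
# A minimal sketch for running a StableLM alpha checkpoint locally via
# Hugging Face transformers. The model ID and generation parameters are
# assumptions for illustration, not official guidance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory use; the 7B model is large
    device_map="auto",          # requires the accelerate package
)

prompt = "Write a cover letter for a software developer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; temperature and max_new_tokens are arbitrary choices.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```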
“As is typical of any pre-trained large language model without additional refinement and reinforcement learning, the responses a user gets can be of varying quality and may contain offensive language and views,” Stability AI wrote in the repo for StableLM. “This is expected to improve with scale, better data, community feedback and optimization.”
Still, the StableLM models seem quite capable in terms of what they can accomplish, particularly the fine-tuned versions included in the alpha release. Fine-tuned using a Stanford-developed technique called Alpaca on open source datasets, including ones from AI startup Anthropic, the refined StableLM models behave like ChatGPT, responding to instructions (sometimes humorously) such as “write a cover letter for a software developer” or “write lyrics for an epic rap battle song.”
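For the curious, the tuned alpha checkpoints wrap conversations in special role tokens rather than taking raw prompts. The snippet below sketches the general pattern; the exact system prompt wording is an illustrative assumption, not Stability AI’s official text:

```python
# Illustrative sketch of the role-token prompt format used by the tuned
# StableLM alpha checkpoints. The system prompt wording here is an
# assumption for demonstration, not Stability AI's official text.
system_prompt = (
    "<|SYSTEM|>StableLM is a helpful and harmless open-source AI language "
    "model developed by Stability AI.\n"
)
user_instruction = "Write lyrics for an epic rap battle song."

# The assembled prompt can be fed to the generate() call shown earlier.
prompt = f"{system_prompt}<|USER|>{user_instruction}<|ASSISTANT|>"
```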
The number of open source text-generating models grows practically by the day, as companies large and small vie for visibility in the increasingly lucrative generative AI space. Over the past year, Meta, Nvidia, and independent groups such as the Hugging Face-backed BigScience project have released models roughly comparable to “closed” models available only via API, such as GPT-4 and Anthropic’s Claude.
Some researchers have criticized the release of open source models along the lines of StableLM in the past, arguing that they can be used for unsavory purposes such as creating phishing emails or aiding malware attacks. But Stability AI argues that open-sourcing is, in fact, the right approach.
“We open source our models to promote transparency and trust. Researchers can ‘look under the hood’ to verify performance, work on interpretability techniques, identify potential risks, and help develop security measures,” Stability AI wrote in the blog post. “Open, fine-grained access to our models enables the broad research and academic community to develop interpretation and security techniques beyond what is possible with closed models.”

Image Credits: Stability AI
There may be a grain of truth to that. Even commercialized models like GPT-4, which have filters and human moderation teams, have been shown to spew toxicity. On the other hand, open source models take more effort to modify and patch after the fact, especially if developers don’t keep up with the latest updates.
In any case, Stability AI has historically not shied away from controversy.
The company is in the crosshairs of legal cases alleging that it violated the rights of millions of artists by developing AI art tools using web-scraped, copyrighted images. And a few communities around the internet have used Stability’s tools to generate pornographic celebrity deepfakes and graphic depictions of violence.
Moreover, despite the philanthropic tone of its blog post, Stability AI is also under pressure to monetize its sprawling efforts, which range from art and animation to biomedicine and generative audio. Stability AI CEO Emad Mostaque has hinted at plans for an IPO, but Semafor recently reported that Stability AI, which raised more than $100 million in venture capital last October at a reported valuation of over $1 billion, “burns through cash and has been slow to generate revenue.”