Phi-2 is the latest small language model from Microsoft and is said to be significantly better than its predecessors. The company is now hosting the small models on Azure.
In June, Microsoft researchers presented Phi-1, a transformer-based language model optimized for code with only 1.3 billion parameters. The model was trained exclusively on high-quality data and outperformed models up to ten times larger in benchmarks.
Phi-1.5 followed a few months later, also with 1.3 billion parameters and trained on additional data consisting of various AI-generated texts. Phi-1.5 can compose poems, write emails and stories, and summarize texts. One variant can also analyze images. In benchmarks on common sense, language comprehension, and reasoning, the model was in some areas able to keep up with models with up to 10 billion parameters.
Microsoft has now announced Phi-2, which at 2.7 billion parameters is roughly twice the size of its predecessor, but still tiny compared to other language models. Compared to Phi-1.5, the model shows dramatic improvements in logical reasoning and safety, according to the company. With the right fine-tuning and customization, the small language model is a powerful tool for cloud and edge applications, the company said.
Microsoft’s Phi-2 shows improvements in math and coding
The company has not yet published any further details about the model. However, Sebastien Bubeck, head of the Machine Learning Foundations Group at Microsoft Research, published a screenshot on Twitter of the “MT-Bench” benchmark, which uses powerful language models such as GPT-4 as judges to test the real-world capabilities of large – and small – language models.
According to the results, Phi-2 outperforms Meta’s Llama-2-7B model in some areas. A chat version of Phi-2 is also in the pipeline and may address some of the base model’s remaining weaknesses.
Microsoft announces “Models as a Service”
Phi-2 and Phi-1.5 are now available in the Azure AI model catalog, along with Stable Diffusion, Falcon, CLIP, Whisper V3, BLIP, and SAM. Microsoft is also adding Meta’s Code Llama and Nvidia’s Nemotron.
Microsoft also announced “Models as a Service”: “Pro developers will soon be able to easily integrate the latest AI models such as Llama 2 from Meta, Command from Cohere, Jais from G42, and premium models from Mistral as an API endpoint to their applications. They can also fine-tune these models with their own data without needing to worry about setting up and managing the GPU infrastructure, helping eliminate the complexity of provisioning resources and managing hosting.”