Small Language Models (SLMs) are a class of language model distinguished from large language models (LLMs) by a much smaller parameter count. They are trained on less data of higher quality, with the goal of matching the performance of far larger models that require substantially more compute and data to train.
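
Since the distinction rests on parameter count, a back-of-the-envelope sketch can make the scale gap concrete. The configurations below are illustrative assumptions, not figures from any particular model, and the ~12·d² per-layer term is a common rough approximation for a standard decoder-only transformer block:

```python
# Rough parameter-count estimate for a decoder-only transformer.
# All configurations here are hypothetical, chosen only to illustrate scale.

def transformer_params(vocab_size: int, d_model: int, n_layers: int) -> int:
    """Approximate count: embeddings plus ~12 * d_model^2 per layer
    (attention projections ~4*d^2, feed-forward ~8*d^2); biases and
    layer norms are omitted from this estimate."""
    embedding = vocab_size * d_model
    per_layer = 12 * d_model ** 2
    return embedding + n_layers * per_layer

# Hypothetical SLM-scale vs. LLM-scale configurations.
slm = transformer_params(vocab_size=32_000, d_model=2048, n_layers=24)
llm = transformer_params(vocab_size=32_000, d_model=8192, n_layers=80)

print(f"SLM-scale: ~{slm / 1e9:.1f}B parameters")  # ~1.3B
print(f"LLM-scale: ~{llm / 1e9:.1f}B parameters")  # ~64.7B
```

Under these assumptions, the SLM is roughly fifty times smaller, which is the kind of gap the higher-quality training data is meant to compensate for.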

The concept was introduced in the seminal paper "Textbooks Are All You Need" by Gunasekar et al. [1]

Footnotes

  1. Gunasekar et al., "Textbooks Are All You Need", Microsoft Research, 2023.