Exploring Small Language Models

by Natalia Kuzminykh, Associate Data Science Content Editor

While large language models are well known for their ability to handle complex tasks, they also demand significant computational power and energy, making them less suitable for smaller organizations and for devices with limited processing capacity.

Small language models (SLMs) offer a practical alternative. Designed to be more lightweight and resource-efficient, they’re ideal for applications that need to operate within limited computational environments. With fewer resource demands, SLMs are easier and quicker to deploy, reducing the time and effort required for maintenance.

Throughout this article, we’ll explore the various use cases of SLMs and discuss their advantages over LLMs, focusing on their efficiency, speed, robustness and security. Along the way, we’ll aim to understand why this type of AI model is becoming a popular choice for applications where large-scale models aren’t feasible.

Defining Small Language Models (SLMs) and Exploring Their Use Cases

An SLM is a type of neural network that generates natural language content. The term “small” refers not only to the model’s size on disk, but also to the number of parameters it contains, its neural architecture, and the scope of the data used for its training.

Parameters are numerical values that guide a model’s analysis of input and creation of responses. A smaller number of parameters also means a simpler model, which requires less training data and consumes fewer computing resources.

Many researchers consider language models with fewer than 100 million parameters to be small, although the definition varies: some draw the line as low as one million to 10 million parameters, in contrast to today’s largest models, which can have hundreds of billions.
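
To make those parameter counts concrete, here’s a minimal sketch that compares two real checkpoints. The tooling choice (the Hugging Face transformers library, with a backend such as torch installed) is our assumption, not something the definitions above prescribe:

```python
# Compare the parameter counts of a small model and its larger
# counterpart; both names are real checkpoints on the Hugging Face Hub.
from transformers import AutoModel

for name in ["distilbert-base-uncased", "bert-base-uncased"]:
    model = AutoModel.from_pretrained(name)
    print(f"{name}: {model.num_parameters():,} parameters")
```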

[Image: The pros and cons of SLMs, highlighting the opportunities they offer compared with large language models.]

Summary of use cases of SLMs

Recent advancements in SLMs are driving their widespread adoption. With their ability to generate coherent responses within specific contexts, these models have numerous applications.

One notable use case is text completion, where SLMs predict and generate text, assisting with tasks such as sentence completion and conversational prompts. This technology is also valuable for language translation—bridging linguistic gaps in real-time interactions.
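
As a minimal illustration of text completion with a small model, here is a sketch using the transformers pipeline API with distilgpt2, a compact open checkpoint; the model choice and prompt are ours, purely for demonstration:

```python
# Text completion with a small open checkpoint (assumes the
# transformers and torch packages are installed).
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("The meeting has been rescheduled to", max_new_tokens=20)
print(result[0]["generated_text"])
```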

In customer service, SLMs power chatbots and virtual assistants, allowing them to conduct natural and engaging conversations. These applications are essential for providing end-to-end assistance and handling routine inquiries, which enhances the customer experience and operational efficiency. In content creation, SLMs generate text for emails, reports and marketing materials. This saves significant time and resources, while maintaining content relevance and quality.

SLMs also analyse data, performing sentiment analysis to gauge public opinion and customer feedback. They aid in identifying named entities for better information organization and analyse market trends to optimize sales and marketing strategies. These capabilities enable businesses to make informed decisions, tailor customer interactions and innovate effectively in product development.
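
Here is a hedged sketch of those analysis tasks using small, task-specific checkpoints; both model names are real Hub checkpoints we picked for illustration, and in production you would pin exact versions:

```python
# Sentiment analysis and named-entity recognition with compact,
# task-specific models rather than a general-purpose LLM.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")
print(sentiment("The new dashboard is a huge improvement!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
print(ner("Acme Corp opened a new office in Berlin."))
# e.g. grouped entities for the organization and the location
```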

The Issues with LLMs

Training an LLM is often feasible only for large organizations because of three significant challenges: data, hardware and legal concerns.

Resource and energy use

First of all, it’s no secret that training LLMs is an intensive process that requires powerful hardware. For example, training Google’s PaLM required a staggering 6,144 TPU v4 chips, while Meta AI’s OPT model, although comparatively more efficient, still used 992 80GB Nvidia A100 GPUs. Hardware deployments at this scale often suffer failures that necessitate manual restarts throughout the lengthy training process, which not only complicates development, but also adds to its cost. Some rough estimates put the price of training a single model as high as $23 million.

The energy consumption involved in training these models is equally immense. Although specific details about the training process of GPT-4 remain undisclosed, we can refer to GPT-3, whose training consumed nearly 1,300 MWh. That’s the equivalent of streaming Netflix for around 1.6 million hours (at roughly 0.0008 MWh per hour).

Such figures highlight the vast disparity in energy use between daily activities and training advanced AI models. Although subsequent processes like inference consume considerably less energy, the initial training phase is particularly power-hungry and carbon-intensive.

Additionally, environmental impacts extend beyond power usage. Recent estimates suggest that the carbon footprint of training these models is akin to the emissions from a US family’s electricity consumption over 120 years. While companies like Meta have taken steps to reduce this footprint, it remains a significant environmental concern.

Many details about the specific resource and energy requirements of LLMs still remain under wraps due to competitive secrecy among leading tech companies. This lack of transparency from major AI developers further complicates efforts to assess and address these impacts.

Data access and legal concerns

Access to vast datasets is another significant barrier for businesses outside tech giants like Google and Facebook, which dominate this field, making it difficult for smaller entities to compete. Many datasets, especially those scraped from the internet, contain copyrighted material, which raises ethical and legal concerns about the use of such data without proper authorization. Creators from various fields argue, for example, that their copyrighted works are being used to train AI without permission or compensation.

The conversation around copyright has evolved as AI technology has advanced, and some companies have sought exemptions from copyright laws in order to continue their operations. However, there is still a risk of litigation, as evidenced by discussions about potential lawsuits that could threaten the existence of AI models.

In contrast, SLMs present a more manageable solution regarding data handling and copyright issues. With SLMs, it’s easier to obtain licenses for training materials, ensuring that content creators are compensated for their work. This approach not only reduces legal risks, but also leads to better, more predictable model performance through the use of high-quality, ethically sourced data.

Data quality

Data quality is a crucial aspect of training LLMs, as it has a direct impact on the model’s performance. These models require vast amounts of data that are representative of various languages and contexts. However, the available datasets are often unevenly distributed, with a disproportionate amount of data in English and a lack of representation for other languages and cultures. This imbalance can lead to biased models that may not perform well for non-English speakers.

The process of curating and refining this data to ensure it’s of high quality is labour-intensive and complex. It involves extensive cleaning and the use of advanced algorithms to weed out irrelevant or low-quality content. The task is crucial because poorly curated datasets can lead to models that are ineffective or behave unpredictably.

Moreover, the process of acquiring and labeling this data raises ethical concerns. In particular, some of the data used to train LLMs comes from controversial and potentially damaging internet sources, such as texts describing extreme violence or abuse. Labeling such content for machine learning purposes raises questions not only about the psychological effect on data labelers, but also about the ethical implications of using these datasets.

These ethical and quality-related challenges underline the need for better data management practices in the development of LLMs. Ensuring high-quality, ethically sourced data not only improves the performance of the models, but also helps in building AI systems that are socially responsible and less harmful. It’s crucial for the AI community to address these issues head-on, developing standards that safeguard both the well-being of those involved in the data labeling process and the integrity of the data used.

How do SLMs stack up next to LLMs?

SLMs are streamlined counterparts to LLMs, characterized by smaller neural networks and simpler architectures. Let’s explore this further below:

  1. Resource usage

    Firstly, SLMs excel in terms of resource efficiency, which is crucial when deploying AI solutions in environments with limited computational power. Due to their smaller number of parameters, SLMs require less memory and processing power to train and operate compared to LLMs, making them ideal for use in smaller devices or situations where quick deployment is essential.

    The simplicity of SLMs greatly aids in their development and deployment. Their smaller size and more streamlined neural networks make them easier for developers to manage, opening the door for their use in remote or edge computing scenarios, where maintaining large-scale data processing infrastructure would be impractical. Faster training cycles due to fewer tunable parameters further reduce the time from development to deployment, enhancing the feasibility of using SLMs in time-sensitive applications.

  2. Speed

    When it comes to performance speed, SLMs often have the upper hand due to their compact size. They typically have lower latency and can make faster predictions, which is important for applications that need real-time processing, like interactive voice response systems and real-time language translation (a rough benchmark sketch follows this list).

    Additionally, SLMs benefit from faster cold-start times, meaning they can begin processing tasks more promptly after initialization compared to LLMs. This feature is particularly beneficial in environments where models need to be frequently restarted or deployed dynamically.

  3. Robustness

    Despite their smaller size, SLMs can be surprisingly robust, especially within their specific domains or tasks. Since they are often designed for particular applications, they can handle relevant data variations more effectively than LLMs, which might not perform as well when applied outside their primary training scenarios.

    The manageability of SLMs means they can be more easily monitored and modified to ensure they continue to operate reliably, which simplifies ongoing maintenance and enhances overall system stability.

  4. Security

    Security is another area where SLMs generally excel. With fewer parameters and a more contained operational scope, SLMs present a smaller attack surface than LLMs. This reduced complexity leaves fewer openings for malicious exploits and simplifies the process of securing the models. By focusing on specific functionalities and smaller datasets, SLMs can achieve a higher level of security hardening, making them suitable for applications where data privacy and security are paramount.

  5. Other advantages

    Beyond resource usage and security, SLMs are often easier to tune due to their simplicity. Adjustments and optimizations can be made more rapidly, which is advantageous in dynamic environments where user needs or data inputs frequently change. This agility also extends to security practices, where the ability to quickly refine and adapt the models contributes to maintaining robust protection measures.
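
Below is the rough latency sketch promised in the speed section. The model pairing (distilgpt2 versus gpt2-large) and the timing method are illustrative assumptions; a real benchmark would control for hardware, batching and sampling settings:

```python
# A crude single-query latency comparison between a small and a
# larger checkpoint (downloads both models on first run).
import time
from transformers import pipeline

for name in ["distilgpt2", "gpt2-large"]:
    generator = pipeline("text-generation", model=name)
    generator("Warm-up run", max_new_tokens=5)  # exclude load/warm-up cost
    start = time.perf_counter()
    generator("Real-time systems need answers", max_new_tokens=20)
    print(f"{name}: {time.perf_counter() - start:.2f}s per response")
```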

SLM Examples

Let’s explore some well-known SLMs:

  • DistilBERT: This model is a distilled version of the original BERT model, designed to retain around 95% of its predecessor’s capability on language comprehension tasks such as the GLUE benchmark. With roughly 40% fewer parameters than the BERT base model, DistilBERT offers a good balance between speed, efficiency and cost, making it suitable for resource-constrained environments (see the sketch after this list). Although it may be slightly less accurate than larger models, its performance is still commendable considering its reduced size.
  • GPT-Neo: GPT-Neo is an open-source alternative to GPT-3, with similar architecture and capabilities. It has 2.7 billion parameters and is designed to provide high-quality results for a variety of language tasks without the need for fine-tuning. While GPT-Neo may not always perform as well as larger models, its effectiveness remains strong across a wide range of applications.
  • GPT-J: Similar to GPT-3 in design, GPT-J has 6 billion parameters and uses Rotary Position Embeddings (RoPE). This model is effective for tasks such as translating from English to French, and it competes closely with the Curie version of GPT-3 (6.7 billion parameters). Interestingly, GPT-J outperforms the much larger GPT-3 Davinci model (175 billion parameters) in code generation.
  • Orca 2: Developed by Microsoft, Orca 2 has been fine-tuned on high-quality synthetic data to perform well in zero-shot reasoning tasks. Due to its smaller size, it may struggle with tasks that require extensive knowledge or contextual depth, but it is specifically designed for strong logical reasoning and punches above its weight.
  • Phi-2: Another innovative model from Microsoft, Phi-2 packs 2.7 billion parameters. It is optimized for efficient training and adaptability, making it well suited to a wide range of reasoning and understanding tasks. Despite its relatively small size, Phi-2 delivers impressive language-processing performance and outperforms much larger models in terms of training efficiency. One thing to watch out for is that its effectiveness can depend on how well the data used during the tuning phase represents your target domain.
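
To see DistilBERT in action, here is the short fill-mask sketch referenced above; distilbert-base-uncased is the real checkpoint discussed, while the prompt is our own:

```python
# Masked-word prediction with DistilBERT.
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")
for prediction in fill("Small language models are [MASK] to deploy."):
    print(prediction["token_str"], round(prediction["score"], 3))
```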

Optimization via Intelligent Routing

Although SLMs can provide decent results at a lower cost, there will still be times when you need the power of an LLM or the assistance of another data source. In these situations, you should employ a routing layer that redirects each query to the optimal source of information.

Routing modules manage and direct user queries within systems that involve multiple data sources and decision-making processes. Essentially, these routers function by receiving a user’s question along with a set of possible options, each tagged with specific metadata, then determining the most appropriate choice, or set of choices, in response. These routers are versatile and can be employed independently as selector modules, or integrated with other query engines or retrievers to enhance decision-making.

[Image: An example of routing a user query to small and large language models.]

These modules leverage the capabilities of LMs to analyze and select among varied options. This makes them powerful tools in scenarios like selecting the optimal data source from a diverse range, or deciding the best method for processing information—be it through summarization using a summary index query engine, or semantic search with a vector index query engine. They can also experiment with multiple choices simultaneously through multi-routing capabilities, effectively combining results for more comprehensive outputs.
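
Because router APIs differ between frameworks, here is a hand-rolled sketch of the selector idea rather than any particular library’s interface. Each option carries a metadata description, and a naive keyword-overlap score stands in for the LM-based selection described above:

```python
# A toy selector-style router: score each metadata-tagged option
# against the query and dispatch to the best match.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RouteOption:
    name: str
    description: str                    # metadata the selector scores against
    handler: Callable[[str], str]

def keyword_score(query: str, option: RouteOption) -> int:
    # Stand-in for an LM-based selector: count shared words.
    return len(set(query.lower().split()) & set(option.description.lower().split()))

def route(query: str, options: list[RouteOption]) -> str:
    best = max(options, key=lambda opt: keyword_score(query, opt))
    return best.handler(query)

options = [
    RouteOption("summaries", "summarize summary overview of documents",
                lambda q: f"[summary engine] {q}"),
    RouteOption("search", "semantic search lookup specific facts",
                lambda q: f"[vector search engine] {q}"),
]
print(route("Can you summarize this report?", options))  # -> summary engine
```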

Routing optimizations also include strategic approaches to handling queries, to maximize efficiency and reduce operational costs.

  • For simpler and more predictable queries, caching mechanisms can be used to store and quickly access data without repeatedly contacting the LLM provider, thereby reducing response times and costs.
  • Depending on the complexity of the question, the system may route the query to a different-sized model:
    • Straightforward inquiries may be directed to smaller, less complex models, which can reduce the processing load and operational demands.
    • More challenging questions can be directed towards more powerful, larger language models that are capable of handling complex queries (a combined caching-and-routing sketch follows this list).
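
The caching and size-based routing above can be combined in a few lines. In this sketch the complexity heuristic and the two model functions are hypothetical placeholders for real SLM and LLM clients:

```python
# Cost-aware dispatch: cache repeated queries and route by a
# simple complexity heuristic.
from functools import lru_cache

def small_model(query: str) -> str:     # hypothetical SLM client
    return f"[SLM] {query}"

def large_model(query: str) -> str:     # hypothetical LLM client
    return f"[LLM] {query}"

@lru_cache(maxsize=1024)                # repeated queries skip the models entirely
def answer(query: str) -> str:
    # Short, single-question queries go to the small model.
    if len(query.split()) <= 15 and query.count("?") <= 1:
        return small_model(query)
    return large_model(query)

print(answer("What are our support hours?"))  # routed to the SLM
print(answer("What are our support hours?"))  # served from the cache
```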

Conclusion

SLMs can offer significant advantages over LLMs for many applications, especially where resource constraints and rapid deployment are crucial concerns.

SLMs not only require less computing power and energy, making them more environmentally friendly and cost-effective, but also provide faster processing times and easier scalability across diverse environments.

Additionally, their custom nature allows for more secure and robust implementations tailored to specific tasks, reducing the risks associated with large models. While LLMs have their place in handling complex, wide-ranging tasks, SLMs offer a compelling alternative that can be just as effective, if not more so, in contexts that demand efficiency, agility and focus. This makes SLMs particularly valuable in today’s fast-paced, resource-conscious world, where the balance of performance and practicality is crucial.
