Optimizing Memory for Large Language Model Inference and Fine-Tuning


Large language models (LLMs) like GPT-4, BLOOM, and LLaMA have achieved remarkable capabilities by scaling up to billions of parameters. However, deploying these massive models for inference or fine-tuning is challenging due to their immense memory requirements. In this technical blog, we will explore techniques for estimating and optimizing memory consumption during LLM inference and fine-tuning across various hardware setups.

Understanding Memory Requirements

The memory required to load an LLM is primarily determined by the number of parameters and the numerical precision used to store the parameters. A simple rule of thumb is:

  • Loading a model with X billion parameters requires roughly 4X GB of VRAM in 32-bit float precision
  • Loading a model with X billion parameters requires roughly 2X GB of VRAM in 16-bit bfloat16/float16 precision

For example, loading the 175B parameter GPT-3 model would require approximately 350GB of VRAM in bfloat16 precision. As of today, the largest commercially available GPUs like the NVIDIA A100 and H100 offer only 80GB of VRAM, necessitating tensor parallelism and model parallelism techniques.
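
The rule of thumb above translates directly into a tiny calculator. The following is a minimal sketch with an illustrative helper name and dtype table; it deliberately ignores activations, KV cache, and framework overhead:

BYTES_PER_PARAM = {"float32": 4, "bfloat16": 2, "float16": 2, "int8": 1, "int4": 0.5}

def estimate_weight_memory_gb(num_params_billion, dtype):
    # Approximate VRAM (in decimal GB) needed just to hold the model weights
    return num_params_billion * BYTES_PER_PARAM[dtype]

print(estimate_weight_memory_gb(175, "float32"))   # GPT-3 175B -> 700 GB
print(estimate_weight_memory_gb(175, "bfloat16"))  # GPT-3 175B -> 350 GB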

During inference, the memory footprint is dominated by the model parameters and the temporary activation tensors produced. A high-level estimate for the peak memory usage during inference is the sum of the memory required to load the model parameters and the memory for activations.

Quantifying Inference Memory

Let’s quantify the memory requirements for inference using the OctoCoder model (bigcode/octocoder), which has around 15 billion parameters stored in bfloat16 format (~31GB). We’ll use the Transformers library to load the model and generate text:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Load OctoCoder in bfloat16; device_map="auto" places the weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/octocoder",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    pad_token_id=0,
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Question: Please write a Python function to convert bytes to gigabytes.\n\nAnswer:"
result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]

# Helper to report peak GPU memory in GB
def bytes_to_gigabytes(num_bytes):
    return num_bytes / 1024 / 1024 / 1024

bytes_to_gigabytes(torch.cuda.max_memory_allocated())

The peak GPU memory usage reported is around 29GB, which is in line with our estimate of roughly 31GB for loading the model parameters in bfloat16 format.
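
Before measuring other precisions in the same session, it is worth freeing the current model and resetting CUDA’s peak-memory counter so later readings are not inflated by this run. A small helper along these lines (a sketch built on standard PyTorch APIs):

import gc
import torch

def flush():
    # Release Python references, return cached blocks to the driver,
    # and reset the statistics reported by torch.cuda.max_memory_allocated()
    gc.collect()
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()

del pipe, model
flush()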

Optimizing Inference Memory with Quantization

While bfloat16 is the common precision used for training LLMs, researchers have found that quantizing the model weights to lower precision data types like 8-bit integers (int8) or 4-bit integers can significantly reduce memory usage with minimal accuracy loss for inference tasks like text generation.

Let’s see the memory savings from 8-bit and 4-bit quantization of OctoCoder:

# 8-bit quantization (requires the bitsandbytes library)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/octocoder",
    load_in_8bit=True,
    pad_token_id=0,
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
bytes_to_gigabytes(torch.cuda.max_memory_allocated())

# Free the 8-bit model and reset the peak-memory counter before the next measurement
del pipe, model
flush()

# 4-bit quantization
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/octocoder",
    load_in_4bit=True,
    low_cpu_mem_usage=True,
    pad_token_id=0,
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
bytes_to_gigabytes(torch.cuda.max_memory_allocated())


With 8-bit quantization, the memory requirement drops from 31GB to 15GB, while 4-bit quantization further reduces it to just 9.5GB! This allows running the 15B parameter OctoCoder model on consumer GPUs like the RTX 3090 (24GB VRAM).
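
Note that recent versions of Transformers favor passing quantization settings through a BitsAndBytesConfig object rather than the bare load_in_8bit/load_in_4bit flags. A hedged sketch of the 4-bit setup (the specific quantization options shown are illustrative):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute, configured explicitly
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/octocoder",
    quantization_config=bnb_config,
    device_map="auto",
)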

However, note that more aggressive quantization like 4-bit can sometimes lead to accuracy degradation compared to 8-bit or bfloat16 precision. There’s a trade-off between memory savings and accuracy that users should evaluate for their use case.

Quantization is a powerful technique that can enable LLM deployment on resource-constrained environments like cloud instances, edge devices, or even mobile phones by drastically reducing the memory footprint.

Estimating Memory for Fine-Tuning

While quantization is primarily used for efficient inference, techniques like tensor parallelism and model parallelism are crucial for managing memory requirements during the training or fine-tuning of large language models.

The peak memory consumption during fine-tuning is typically 3-4 times higher than inference due to additional memory requirements for:

  • Gradients
  • Optimizer states
  • Activations from the forward pass stored for backpropagation

A conservative estimate is that fine-tuning an LLM with X billion parameters requires around 4 * (2X) = 8X GB of VRAM in bfloat16 precision.

For example, fine-tuning the 7B-parameter LLaMA model would require approximately 7 * 8 = 56GB of VRAM in bfloat16 precision. That exceeds the capacity of most single GPUs and leaves little headroom even on an 80GB A100, which is why distributed fine-tuning techniques become necessary for larger models.
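
To make the 8X heuristic concrete, here is a rough back-of-the-envelope calculator. It is a sketch of the rule above, with Adam-style optimizer states and stored activations folded into a single 2x term, not a precise memory model:

def estimate_finetune_memory_gb(num_params_billion, bytes_per_param=2):
    # Rough peak-memory estimate for full fine-tuning in bfloat16 (decimal GB)
    weights = num_params_billion * bytes_per_param        # model parameters
    gradients = num_params_billion * bytes_per_param      # one gradient per parameter
    # Adam-style optimizers keep extra state per parameter; together with stored
    # activations this brings the total to roughly 4x the weight memory
    optimizer_and_activations = 2 * num_params_billion * bytes_per_param
    return weights + gradients + optimizer_and_activations

print(estimate_finetune_memory_gb(7))   # LLaMA-7B  -> ~56 GB
print(estimate_finetune_memory_gb(15))  # ~15B model -> ~120 GB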

Distributed Fine-Tuning Techniques

Several distributed fine-tuning methods have been proposed to overcome GPU memory constraints for large models:

  1. Data Parallelism: The classic data parallelism approach replicates the entire model across multiple GPUs while splitting and distributing the training data batches. It reduces training time roughly in proportion to the number of GPUs but does not reduce the peak memory requirement on each GPU.
  2. ZeRO Stage 3: An advanced form of data parallelism that partitions the model parameters, gradients, and optimizer states across GPUs. It reduces memory compared to classic data parallelism by keeping only the required partition on each GPU during the different phases of training (see the sketch after this list).
  3. Tensor Parallelism: Instead of replicating the model, tensor parallelism divides the model parameters into rows or columns and distributes them across GPUs. Each GPU operates on a partitioned set of parameters, gradients, and optimizer states, leading to substantial memory savings.
  4. Pipeline Parallelism: This technique partitions the model layers across different GPUs/workers, with each device executing a subset of the layers. Activations are passed between workers, reducing peak memory but increasing communication overhead.
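
ZeRO Stage 3 sharding is available through libraries such as DeepSpeed, and PyTorch’s FullyShardedDataParallel (FSDP) implements the same idea. The snippet below is a minimal sketch of FSDP-style sharding rather than a complete training script; it assumes a torchrun launch, and the model name and hyperparameters are illustrative:

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

# torchrun sets the rank/world size; each process binds to one GPU
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16)

# FSDP shards parameters, gradients, and optimizer state across all ranks,
# gathering full parameters only for the layers currently being computed
model = FSDP(model, device_id=torch.cuda.current_device())

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# ... standard forward / backward / optimizer.step() loop goes here ...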

Estimating memory usage for these distributed methods is non-trivial as the distribution of parameters, gradients, activations, and optimizer states varies across techniques. Moreover, different components like the transformer body and language modeling head may exhibit different memory allocation behaviors.

The LLMem Solution

Researchers recently proposed LLMem, a solution that accurately estimates GPU memory consumption when applying distributed fine-tuning methods to LLMs across multiple GPUs.

Figure: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLM

LLMem considers factors like recombining parameters before computation (ZeRO Stage 3), output gathering in the backward pass (tensor parallelism), and the different memory allocation strategies for the transformer body and language modeling head.

Experimental results show that LLMem can estimate peak GPU memory usage for fine-tuning LLMs on a single GPU with error rates of up to 1.6%, outperforming the state-of-the-art DNNMem’s average error rate of 42.6%. When applying distributed fine-tuning methods to LLMs with over a billion parameters on multiple GPUs, LLMem achieves an impressive average error rate of 3.0%.

By accurately estimating memory requirements upfront, LLMem can help users select the most efficient distributed fine-tuning method that avoids out-of-memory issues while minimizing training time.

Emerging Techniques

While quantization, tensor parallelism, and model parallelism are established techniques, researchers continue to explore novel methods to push the boundaries of efficient LLM training and deployment.

  1. LoRA and QLoRA: Rather than updating all of a model’s parameters, LoRA trains small low-rank adapter matrices injected into the pre-trained weights, and QLoRA combines this with a 4-bit quantized base model. Both can deliver substantial memory savings while retaining most of the model’s performance (see the sketch after this list).
  2. FlashAttention: The self-attention mechanism is a memory and compute bottleneck in transformer models. FlashAttention computes exact attention using tiling and recomputation in fast on-chip memory, reducing the memory requirement from quadratic to linear in the input sequence length.
  3. Mixture-of-Experts: This approach conditionally routes each token to a small subset of specialized expert sub-networks instead of passing it through the full dense model. Because only the selected experts are active per token, this dynamic sparsity reduces the compute and activation memory required for each sample.
  4. Model Surgery and Pruning: Researchers have explored surgically compressing models by iteratively removing less important components such as attention heads, trading a small amount of accuracy for memory and speed savings.
  5. Offloading: Finally, techniques that offload parameters, optimizer states, or activations to CPU RAM or disk can supplement limited GPU memory for large models.
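
As an illustration of the LoRA idea above, here is a minimal sketch using the Hugging Face PEFT library; the rank, scaling factor, and target modules are illustrative and depend on the model architecture:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
import torch

model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16)

# Train only small low-rank adapters; the base weights stay frozen
lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layers (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total parameters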

These cutting-edge methods illustrate the vibrant research ecosystem focused on democratizing efficient LLM training and deployment across diverse hardware environments.

Conclusion

The memory requirements of large language models pose significant challenges for their widespread adoption in real-world applications. By understanding memory estimation techniques and leveraging quantization, distributed training strategies, and emerging innovations, we can optimize LLM deployments on resource-constrained devices.

Tools like LLMem pave the way toward accurate memory estimation, enabling users to select the most suitable fine-tuning configuration. As hardware evolves and research advances, we can anticipate more efficient LLM training and inference, driving progress in natural language processing and artificial intelligence.

Striking the right balance between model capacity, accuracy, and resource utilization will be crucial for unlocking the full potential of large language models across diverse domains and use cases. By embracing memory optimization techniques, we move closer to a future where state-of-the-art language AI is accessible, scalable, and sustainable.
