Channel: Analytics India Magazine

Thanks to NVIDIA, Llama 3.1’s Context Window Went Up From 128k to 4M

LLM Systems Will Soon Have Infinite Context Length

LLMs have been pushing their context window limits to let users provide more information and receive accurate results. A new study appears to have found a way to go beyond the order of one million tokens.

Researchers from NVIDIA and the University of Illinois Urbana-Champaign (UIUC) have shared a research paper that discusses a technique to expand the context window of LLMs to about four million tokens.

They have also released UltraLong-8B, a new series of models – Llama-3.1-8B-UltraLong-1M-Instruct, Llama-3.1-8B-UltraLong-2M-Instruct, and Llama-3.1-8B-UltraLong-4M-Instruct – all available on Hugging Face. These models are based on Llama-3.1-8B-Instruct.

“In this work, we introduce an efficient training recipe for building ultra-long context LLMs from aligned instruct model, pushing the boundaries of context lengths from 128K to 1M, 2M, and 4M tokens,” the researchers stated. 

“Our approach leverages efficient continued pretraining strategies to extend the context window and employs effective instruction tuning to maintain the instruction-following and reasoning abilities,” they added.

The approach involves two main stages. The first extends the context window using a specially curated corpus in which long documents are upsampled. The researchers applied YaRN-based RoPE scaling to improve the model’s ability to process long sequences, and favoured a one-step continued-pretraining recipe over multi-step techniques.
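To give a sense of what YaRN-based RoPE scaling does, here is a minimal NumPy sketch of the core idea: blend the original rotary frequencies with position-interpolated ones using a linear ramp, so high-frequency dimensions are left untouched while low-frequency dimensions are fully interpolated. All parameter values below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def yarn_inv_freq(dim=128, base=500000.0, factor=8.0,
                  orig_max_pos=131072, beta_fast=32.0, beta_slow=1.0):
    """YaRN-style scaled RoPE inverse frequencies (illustrative sketch)."""
    # Standard RoPE inverse frequencies, one per pair of hidden dims.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))

    # Dimension index at which a component completes n_rot full rotations
    # over the original context window (YaRN's "correction" boundary).
    def corr_dim(n_rot):
        return (dim * np.log(orig_max_pos / (n_rot * 2 * np.pi))
                / (2 * np.log(base)))

    low = max(np.floor(corr_dim(beta_fast)), 0)
    high = min(np.ceil(corr_dim(beta_slow)), dim // 2 - 1)

    # Linear ramp: 0 for high-frequency dims (kept as-is),
    # 1 for low-frequency dims (fully position-interpolated).
    ramp = np.clip((np.arange(dim // 2) - low) / max(high - low, 1e-3),
                   0.0, 1.0)

    # Blend original and interpolated frequencies per dimension.
    return inv_freq * (1.0 - ramp) + (inv_freq / factor) * ramp
```

With `factor=8.0`, the lowest-frequency dimensions rotate eight times more slowly, stretching the effective positional range, while the fastest dimensions keep their original resolution.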

The second stage deals with instruction tuning, which refines the model’s instruction-following and reasoning capabilities using a high-quality, short-context supervised fine-tuning (SFT) dataset across general, mathematical, and coding domains.
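The paper does not spell out its exact data mixture here, but the idea of sampling short-context SFT examples across general, mathematical, and coding domains can be sketched as follows; the domain pools and weights are hypothetical.

```python
import random

# Hypothetical domain pools; real SFT data would be (prompt, response) pairs.
GENERAL = ["Summarise this paragraph.", "Translate this sentence to French."]
MATH = ["Solve 3x + 5 = 20.", "What is 17 * 24?"]
CODE = ["Write a function that reverses a string.", "Fix this off-by-one bug."]

def blend_sft(weights=(0.6, 0.2, 0.2), n=10, seed=0):
    """Sample a mixed SFT batch across the three domains by weight."""
    rng = random.Random(seed)
    pools = {"general": GENERAL, "math": MATH, "code": CODE}
    domains = rng.choices(list(pools), weights=weights, k=n)
    return [(d, rng.choice(pools[d])) for d in domains]
```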

As per the paper, benchmark evaluations included RULER, LV-Eval, InfiniteBench, HumanEval, and more. The UltraLong-8B models were found to outperform existing Llama-based long-context models on both long-context and standard tasks. The researchers also ran a Needle in a Haystack (NIAH) test, in which the models achieved 100% accuracy.
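A NIAH test plants a "needle" sentence at varying depths inside a long filler context and checks whether the model can retrieve it. A toy, self-contained harness illustrating the setup (the stub model, filler text, and scoring here are assumptions for demonstration, not the paper's evaluation code):

```python
import random

def build_haystack(needle, n_filler=2000, depth=0.5, seed=0):
    """Insert `needle` at relative `depth` into a long run of filler text."""
    rng = random.Random(seed)
    filler = [f"Fact {i}: the sky over city {rng.randint(0, 999)} is blue."
              for i in range(n_filler)]
    pos = int(depth * len(filler))
    return " ".join(filler[:pos] + [needle] + filler[pos:])

def niah_accuracy(answer_fn, needle="The secret passcode is 7421.",
                  question="secret passcode",
                  depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Fraction of insertion depths at which the model recovers the needle."""
    hits = 0
    for d in depths:
        context = build_haystack(needle, depth=d)
        if "7421" in answer_fn(context, question):
            hits += 1
    return hits / len(depths)

# Stub "model": scans the context for the sentence mentioning the query.
def toy_model(context, question):
    for sentence in context.split("."):
        if question in sentence.lower():
            return sentence
    return ""
```

Here `niah_accuracy(toy_model)` returns 1.0 at every depth; a real evaluation would replace `toy_model` with a call to the LLM over contexts up to the full window length.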

The researchers acknowledged that the technique relies on supervised fine-tuning and does not explore reinforcement learning, which they leave for future study. They also note that the work does not address the safety alignment of the extended models.


