The Llama 4 series represents a major shift in open-source artificial intelligence, moving toward natively multimodal capabilities and Mixture-of-Experts (MoE) architectures.
Massive Context Window: A defining feature is the 10 million token context window available in some variants, allowing the model to "read" over 7,500 pages of text or process 20+ hours of video in a single prompt.
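To sanity-check the page figure: assuming roughly 1,300 tokens per dense page of English text (an assumed average; real counts vary by content and tokenizer), a 10-million-token window works out to well over 7,500 pages:

```python
# Back-of-the-envelope conversion from context-window tokens to pages.
# TOKENS_PER_PAGE is an assumption; actual counts depend on the tokenizer.
CONTEXT_WINDOW_TOKENS = 10_000_000
TOKENS_PER_PAGE = 1_300  # assumed average for a dense page of prose

pages = CONTEXT_WINDOW_TOKENS / TOKENS_PER_PAGE
print(f"~{pages:,.0f} pages")  # ~7,692 pages
```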
Mixture-of-Experts Architecture: The models use a "mixture of experts" design, in which only a subset of the total parameters (e.g., 17 billion active parameters in the Scout model) is activated for any given token. This significantly reduces computational cost and latency while maintaining high performance.
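To make the sparse-activation idea concrete, here is a minimal top-k MoE layer in PyTorch. This is an illustrative sketch, not Llama 4's actual implementation: the expert count, hidden sizes, and top_k value are assumptions chosen for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    """Minimal sparse MoE layer: each token is routed to k of n experts."""

    def __init__(self, dim: int, num_experts: int = 16, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.router(x)                             # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # pick k experts/token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run; the rest of the layer's parameters
        # contribute no compute for this batch of tokens.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


layer = TopKMoELayer(dim=64)
tokens = torch.randn(8, 64)
print(layer(tokens).shape)  # torch.Size([8, 64])
```

The key point is in the routing loop: each token's input passes through only its k selected experts, which is why a model can hold a very large total parameter count while spending compute on only a small active fraction per token.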
Key Models in the Series

Llama 4 Scout: Designed for efficiency, this model has 17 billion active parameters and fits on a single H100 GPU. It is optimized for high-speed performance (up to 460+ tokens per second) and long-document reasoning.
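For experimentation, a loading sketch with the Hugging Face transformers library might look like the following. The repository id is illustrative (assumed to follow Meta's naming on the Hub), downloading requires accepting Meta's license, and this assumes the checkpoint exposes a standard causal-LM interface:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative repo id; check the Hugging Face Hub for the exact name.
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPU(s)
    torch_dtype="auto",  # keep the checkpoint's native precision
)

prompt = "Summarize the key features of mixture-of-experts models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```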