
Meta: Llama 4 Maverick

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.

Quality Score: 89/100 (composite of price, context, and capability)
Input Price: $0.15 per 1M tokens
Output Price: $0.60 per 1M tokens
Context Window: 1,048,576 tokens
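As a quick sanity check on the rates above, per-request cost is just tokens × rate. A minimal sketch (the function name and token counts are illustrative, not part of this listing):

```python
# Published rates for meta-llama/llama-4-maverick, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.15
OUTPUT_PRICE_PER_M = 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from input/output token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token completion.
print(round(estimate_cost(10_000, 1_000), 6))  # 0.0021 USD
```

Even a prompt using most of the 1M-token context stays under about $0.16 of input cost at these rates.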
Model ID: meta-llama/llama-4-maverick
Vendor: meta-llama
Tokenizer: Llama4
Input Modalities: text, image
Output Modalities: text
Max Output: 16,384 tokens
Tool Calling: not supported
Structured Output: supported
Reasoning Mode: not supported
Vision: accepts images
Audio: not supported
Moderated: no
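Given the modality and limit fields above, a request can be sketched against an OpenAI-compatible chat-completions API, a common way this model is served. The endpoint shape and image URL are assumptions for illustration, not part of this listing:

```python
# Sketch of a chat-completions payload for meta-llama/llama-4-maverick,
# assuming an OpenAI-compatible API. The image URL is a placeholder.
payload = {
    "model": "meta-llama/llama-4-maverick",
    "max_tokens": 16_384,  # model's maximum output length
    "messages": [
        {
            "role": "user",
            # Mixed text + image content, matching the model's
            # input modalities (text, image).
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}
print(payload["model"])
```

Note that tool-calling fields (`tools`, `tool_choice`) should be omitted, since the listing marks tool calling as not supported.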
