Meta: Llama 4 Scout
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109 billion. It supports native multimodal input...
- Quality Score: 99/100 (composite of price, context, capability)
- Input Price: $0.08 per 1M tokens
- Output Price: $0.30 per 1M tokens
- Context Window: 327,680 tokens
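The per-token prices above make per-request costs easy to estimate: multiply each token count by its per-1M rate and divide by one million. A minimal sketch (the example token counts are illustrative assumptions, not figures from this card):

```python
# Listed prices for Llama 4 Scout, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.08
OUTPUT_PRICE_PER_M = 0.30

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical request: a 10,000-token prompt with a 1,000-token reply.
cost = request_cost(10_000, 1_000)  # about $0.0011
```

At these rates, output tokens cost roughly 3.75x as much as input tokens, so long completions dominate the bill even when prompts are large.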
- Model ID: meta-llama/llama-4-scout
- Vendor: meta-llama
- Tokenizer: Llama4
- Input Modalities: text, image
- Output Modalities: text
- Max Output: 16,384 tokens
- Tool Calling: ✓ supported
- Structured Output: ✓ supported
- Reasoning Mode: not supported
- Vision: ✓ accepts images
- Audio: not supported
- Moderated: no
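Since the card lists text and image input modalities and a 16,384-token output cap, a request to this model would pair the Model ID with a mixed text/image message. A minimal sketch of such a payload, assuming an OpenAI-compatible chat-completions schema (the endpoint, prompt text, and image URL are illustrative assumptions):

```python
import json

# Hypothetical chat-completions payload for Llama 4 Scout.
payload = {
    "model": "meta-llama/llama-4-scout",  # Model ID from the card
    "max_tokens": 16_384,                 # listed Max Output
    "messages": [
        {
            "role": "user",
            "content": [
                # Text plus image, matching the listed input modalities.
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.png"}},
            ],
        }
    ],
}

# Serialize to the JSON body an HTTP client would POST.
body = json.dumps(payload)
```

Tool calling and structured output are also listed as supported, so `tools` or a response-format field could be added to the same payload where the serving API accepts them.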
Similar models

| Model | Input / Output (per 1M tokens) | Context (tokens) | Score |
|-------|--------------------------------|------------------|-------|
| Meta: Llama 4 Maverick | $0.15 / $0.60 | 1,048,576 | 89 |
| Meta: Llama 3.3 70B Instruct | $0.10 / $0.32 | 131,072 | 86 |
| Meta: Llama 3.1 70B Instruct | $0.40 / $0.40 | 131,072 | 86 |
| Meta: Llama Guard 4 12B | $0.18 / $0.18 | 163,840 | 85 |
| Meta: Llama 3.2 11B Vision Instruct | $0.24 / $0.24 | 131,072 | 81 |
| Meta: Llama 3.1 8B Instruct | $0.02 / $0.05 | 16,384 | 72 |