Mixtral 8x22B Instruct Pricing — $0.90/M input, $0.90/M output
Fireworks.ai / April 2026
- Input: $0.90/M tokens
- Output: $0.90/M tokens
- Context window: 65K tokens
- Large sparse mixture-of-experts (MoE) model served via Fireworks.ai
Typical use cases
General-purpose inference, chat, extraction, and structured output
Estimated monthly cost at scale
Assumes a 50/50 input/output token split at the stated daily volume (see the sketch below the table).
| Daily Volume | Monthly Tokens | Estimated Monthly Cost |
|---|---|---|
| 1M tokens/day | 30M tokens | $27.00 |
| 5M tokens/day | 150M tokens | $135.00 |
| 10M tokens/day | 300M tokens | $270.00 |
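The table values follow directly from the per-token rates. Here is a minimal sketch of the arithmetic, assuming the $0.90/M rates above and a 30-day month; the function and variable names are illustrative, not part of any Fireworks.ai SDK:

```python
# Estimate monthly spend for Mixtral 8x22B Instruct on Fireworks.ai.
# Rates are the $0.90/M input and output prices quoted above; because both
# rates are equal, the 50/50 input/output split does not change the total.

INPUT_PRICE_PER_M = 0.90   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.90  # USD per 1M output tokens
DAYS_PER_MONTH = 30        # assumption used by the table above


def monthly_cost(daily_tokens: float, input_share: float = 0.5) -> float:
    """Estimated monthly cost in USD for a given daily token volume."""
    monthly_tokens = daily_tokens * DAYS_PER_MONTH
    input_tokens = monthly_tokens * input_share
    output_tokens = monthly_tokens * (1 - input_share)
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


for daily in (1_000_000, 5_000_000, 10_000_000):
    print(f"{daily:>12,} tokens/day -> ${monthly_cost(daily):,.2f}/month")
# 1,000,000 tokens/day -> $27.00/month
# 5,000,000 tokens/day -> $135.00/month
# 10,000,000 tokens/day -> $270.00/month
```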
Compared with other Fireworks.ai models
| Model | Input ($/M) | Output ($/M) | Context |
|---|---|---|---|
| Llama 3.3 70B Instruct | $0.90 | $0.90 | 128K |
| DeepSeek R1 | $3.00 | $8.00 | 128K |
| Qwen 2.5 72B Instruct | $0.90 | $0.90 | 32K |
| Llama 3.1 8B Instruct | $0.10 | $0.10 | 128K |
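To see how the comparison translates into spend, this sketch prices the same 1M-tokens/day workload (50/50 split, 30-day month) under each model's listed rates. The figures are the Fireworks.ai prices from the table above as of April 2026 and may change:

```python
# Price an identical monthly workload under each model's listed rates.
# (input, output) rates in USD per 1M tokens, copied from the table above.
PRICES = {
    "Mixtral 8x22B Instruct": (0.90, 0.90),
    "Llama 3.3 70B Instruct": (0.90, 0.90),
    "DeepSeek R1": (3.00, 8.00),
    "Qwen 2.5 72B Instruct": (0.90, 0.90),
    "Llama 3.1 8B Instruct": (0.10, 0.10),
}

MONTHLY_TOKENS = 30_000_000  # 1M tokens/day over 30 days
INPUT_SHARE = 0.5            # same 50/50 split as the cost table

for model, (in_rate, out_rate) in PRICES.items():
    in_tok = MONTHLY_TOKENS * INPUT_SHARE
    out_tok = MONTHLY_TOKENS * (1 - INPUT_SHARE)
    cost = (in_tok * in_rate + out_tok * out_rate) / 1_000_000
    print(f"{model:<24} ${cost:,.2f}/month")
# Mixtral 8x22B Instruct   $27.00/month
# Llama 3.3 70B Instruct   $27.00/month
# DeepSeek R1              $165.00/month
# Qwen 2.5 72B Instruct    $27.00/month
# Llama 3.1 8B Instruct    $3.00/month
```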
Not sure if Mixtral 8x22B Instruct is the right fit for your workload? Clawback tests cheaper alternatives against your actual prompts and tells you exactly where you're overpaying.
Test if a cheaper model matches Mixtral 8x22B Instruct quality →