# Mixtral 8x7B Instruct Pricing — $0.60/M input, $0.60/M output
*Together.ai / April 2026*
- **Input:** $0.60/M tokens
- **Output:** $0.60/M tokens
- **Context window:** 32K tokens
- **Architecture:** sparse mixture-of-experts (MoE), suited to general-purpose tasks
## Typical use cases
General-purpose inference, chat, extraction, structured output
## Estimated monthly cost at scale
Assumes a 50/50 input/output token split at the stated daily volume, over a 30-day month.
| Daily Volume | Monthly Tokens | Estimated Monthly Cost |
|---|---|---|
| 1M tokens/day | 30M tokens | $18.00 |
| 5M tokens/day | 150M tokens | $90.00 |
| 10M tokens/day | 300M tokens | $180.00 |
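The figures above follow directly from the per-token rates. A minimal sketch of the arithmetic, using the Mixtral prices from this page (the 50/50 split and 30-day month are the same assumptions as the table):

```python
def monthly_cost(daily_tokens_m: float, input_price: float, output_price: float,
                 input_share: float = 0.5, days: int = 30) -> float:
    """Estimated monthly cost in USD.

    daily_tokens_m: daily volume in millions of tokens
    input_price / output_price: $ per million tokens
    input_share: fraction of tokens that are input (0.5 = 50/50 split)
    """
    monthly_m = daily_tokens_m * days
    blended_rate = input_share * input_price + (1 - input_share) * output_price
    return monthly_m * blended_rate

# Mixtral 8x7B Instruct at $0.60/M both ways:
print(monthly_cost(1, 0.60, 0.60))   # 1M tokens/day  -> 18.0
print(monthly_cost(10, 0.60, 0.60))  # 10M tokens/day -> 180.0
```

Because Mixtral charges the same rate for input and output, the split doesn't matter here; the `input_share` parameter only changes the result for models with asymmetric pricing.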
## vs. other Together.ai models
| Model | Input ($/M) | Output ($/M) | Context |
|---|---|---|---|
| Llama 3.3 70B Instruct Turbo | $0.88 | $0.88 | 128K |
| Qwen 2.5 72B Instruct Turbo | $1.20 | $1.20 | 32K |
| Llama 3.1 8B Instruct Turbo | $0.18 | $0.18 | 128K |
| Llama 3.1 405B Instruct Turbo | $3.50 | $3.50 | 128K |
| Gemma 2 27B | $0.80 | $0.80 | 8K |
| DeepSeek R1 (Together) | $3.00 | $7.00 | 64K |
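To compare models on a like-for-like basis, apply the same blended-rate arithmetic to each row of the table. A short sketch at an assumed 10M tokens/day with a 50/50 split (prices are the Together.ai figures above; note DeepSeek R1's asymmetric pricing is why the split is weighted explicitly):

```python
DAYS = 30
DAILY_M = 10  # assumed workload: millions of tokens per day

# (input $/M, output $/M) from the comparison table
models = {
    "Mixtral 8x7B Instruct": (0.60, 0.60),
    "Llama 3.1 8B Instruct Turbo": (0.18, 0.18),
    "Llama 3.3 70B Instruct Turbo": (0.88, 0.88),
    "DeepSeek R1 (Together)": (3.00, 7.00),
}

# Blend input/output rates 50/50, then scale to monthly volume.
costs = {name: DAILY_M * DAYS * (0.5 * inp + 0.5 * out)
         for name, (inp, out) in models.items()}

for name, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.2f}/month")
```

At this volume the spread is wide: Llama 3.1 8B comes in at $54/month, Mixtral at $180, and DeepSeek R1 at $1,500 — which is why testing whether a cheaper model holds quality on your prompts can pay off quickly.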
Not sure whether Mixtral 8x7B Instruct is the right fit for your workload? Clawback tests cheaper alternatives against your actual prompts and shows exactly where you're overpaying.
Test if a cheaper model matches Mixtral 8x7B Instruct quality →