How to Use DeepSeek R1 Reasoning API: Chain-of-Thought Guide
April 17, 2026 · 7 min read
DeepSeek R1 is unique among AI models: it shows its reasoning process. Unlike GPT or Claude, which reason internally, R1 exposes a visible chain-of-thought (CoT) that lets you see exactly how it arrives at an answer. This transparency is invaluable for debugging, verification, and building trust in AI-generated solutions.
What Makes DeepSeek R1 Different?
Most AI models produce a final answer directly. DeepSeek R1 generates a reasoning trace first, then produces the answer. This means you can:
- Verify logic: See each step the model took to reach its conclusion
- Debug errors: Identify exactly where reasoning went wrong
- Build trust: Show users the reasoning behind AI decisions
- Improve prompts: Understand how the model interprets your instructions
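Because the trace is part of the model output, you can separate it from the final answer programmatically. A minimal sketch, assuming the open-weight R1 convention of wrapping the trace in <think>...</think> tags (hosted APIs may instead return the trace in a separate reasoning_content field, so check your provider):

```python
import re

def split_reasoning(text):
    """Split an R1-style completion into (reasoning, answer).

    Assumes the open-weight <think>...</think> convention; if your API
    already returns the trace in a separate field, this is unnecessary.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text  # no visible trace; treat everything as the answer
    return match.group(1).strip(), text[match.end():].strip()

raw = "<think>2 + 2 is 4 because...</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # The answer is 4.
```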
Quick Start: DeepSeek R1 via API
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aipower.me/v1",
    api_key="YOUR_AIPOWER_KEY",
)

# DeepSeek R1 — reasoning model with visible chain-of-thought
response = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",
    messages=[
        {"role": "user", "content": "Prove that there are infinitely many prime numbers."}
    ],
)
# Note: on DeepSeek's own API the reasoning trace arrives separately in
# message.reasoning_content; check your gateway's docs for where it surfaces.
print(response.choices[0].message.content)

R1 for Math and Logic Problems
DeepSeek R1 excels at competition-level math. It scored 97.3% on MATH-500 and 79.8% on AIME 2024, beating GPT-4o on both benchmarks.
# Solve a competition math problem
response = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",
    messages=[{
        "role": "user",
        "content": "Find all positive integers n such that n^2 + 2n + 4 is divisible by 7."
    }],
)
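Whatever trace R1 produces, this particular answer is cheap to verify yourself: divisibility by 7 depends only on n mod 7, so brute-forcing the seven residues settles the problem independently of the API.

```python
# n^2 + 2n + 4 is divisible by 7 iff the same holds for n mod 7,
# so checking residues 0..6 covers every positive integer n
solutions = [r for r in range(7) if (r * r + 2 * r + 4) % 7 == 0]
print(solutions)  # [1, 4] -> n works exactly when n ≡ 1 or 4 (mod 7)
```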
# R1 will show step-by-step: modular arithmetic, case analysis, verification

R1 for Code Debugging
buggy_code = """
def binary_search(arr, target):
    left, right = 0, len(arr)
    while left < right:
        mid = (left + right) / 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid
        else:
            right = mid
    return -1
"""
response = client.chat.completions.create(
    model="deepseek/deepseek-reasoner",
    messages=[{
        "role": "user",
        "content": f"Find and fix all bugs in this code. Explain each bug:\n{buggy_code}"
    }],
)
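For reference, here is one correct half-open-interval fix the model should converge on (R1 may phrase its repairs differently):

```python
def binary_search(arr, target):
    # Half-open interval [left, right): right starts at len(arr)
    left, right = 0, len(arr)
    while left < right:
        mid = (left + right) // 2   # integer division, not /
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1          # advance past mid so the loop always makes progress
        else:
            right = mid
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```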
# R1 reasons through each line, identifying the float mid produced by /
# (should be //), the infinite loop from left = mid, and the inconsistent
# right boundary

DeepSeek R1 vs Claude Opus 4.6 for Reasoning
| Feature | DeepSeek R1 | Claude Opus 4.6 |
|---|---|---|
| Chain-of-thought | Visible (in output) | Internal (hidden) |
| MATH-500 | 97.3% | 96.4% |
| AIME 2024 | 79.8% | 83.3% |
| Input cost (per M tokens) | $0.34 | $7.50 |
| Output cost (per M tokens) | $0.50 | $37.50 |
| Best for | Transparent reasoning, math | Complex multi-step, nuance |
When to Use R1 vs Claude Opus
- Use R1 when you need visible reasoning, math/logic tasks, or budget-friendly reasoning ($0.34/M vs $7.50/M)
- Use Claude Opus for complex multi-domain reasoning, creative problem-solving, or when raw accuracy is paramount
- Use both: Draft with R1 (cheap), verify critical results with Opus
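The last bullet's draft-and-verify pattern takes only a few lines. A sketch using the model IDs from this article (the exact Opus ID on your gateway may differ), demonstrated offline with a stub in place of real API calls:

```python
def draft_and_verify(ask, problem):
    """Draft with a cheap reasoning model, then verify with a stronger one.

    ask(model, prompt) -> str is any function that calls your gateway,
    e.g. a thin wrapper over client.chat.completions.create.
    """
    draft = ask("deepseek/deepseek-reasoner", problem)  # cheap first pass
    verdict = ask("anthropic/claude-opus",              # verify the critical result
                  "Check this solution for errors:\n\n"
                  f"Problem: {problem}\n\nSolution: {draft}")
    return draft, verdict

# Offline demo: a stub stands in for the real API call
canned = {"deepseek/deepseek-reasoner": "n ≡ 1 or 4 (mod 7)",
          "anthropic/claude-opus": "Looks correct."}
problem = "Find all positive integers n such that n^2 + 2n + 4 is divisible by 7."
draft, verdict = draft_and_verify(lambda model, prompt: canned[model], problem)
print(draft, "/", verdict)  # n ≡ 1 or 4 (mod 7) / Looks correct.
```

In production, replace the lambda with a wrapper around your gateway client; the pattern itself stays the same.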
Try DeepSeek R1 with 10 free API calls at aipower.me. See the chain-of-thought reasoning for yourself.
GET STARTED WITH AIPOWER
16 AI models. One API. OpenAI SDK compatible.
Who should use AIPower?
- Developers needing both Chinese and Western AI models
- Chinese teams that can't access OpenAI / Anthropic directly
- Startups wanting multi-model redundancy through one API
- Anyone tired of paying grey-market intermediary premiums
3 steps to first API call
- Sign up — email only, 10 free trial calls, no card
- Copy your API key from the dashboard
- Change base_url in your OpenAI SDK → done
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aipower.me/v1",  # ← only change
    api_key="sk-your-aipower-key",
)

response = client.chat.completions.create(
    model="auto-cheap",  # or anthropic/claude-opus, deepseek/deepseek-chat, openai/gpt-5, etc.
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

+100 bonus calls on first $5 top-up · WeChat Pay + Alipay + card accepted · docs · security