The completions endpoint is the primary interface for interacting with Crucible models. Send a prompt, get a response.
## Endpoint

```
POST https://api.crucible.dev/v1/completions
```
## Request body

```json
{
  "model": "crucible-1",
  "prompt": "Your input here.",
  "reasoning_mode": "standard",
  "max_tokens": 2048,
  "temperature": 0.2
}
```
## Parameters

- `model` (required): The model ID to use. See Available Models for the full list.
- `prompt` (required): The input text, either a plain string or a structured messages array for multi-turn contexts.
- `reasoning_mode` (optional): Controls how the model approaches the task. One of `standard`, `deep`, or `fast`. Defaults to `standard`.
- `max_tokens` (optional): Maximum number of tokens in the output. Defaults to 1024.
- `temperature` (optional): Controls output variability; lower values produce more deterministic outputs. Range 0–1. For reasoning tasks, values between 0.1 and 0.3 are recommended.
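The parameters above can be assembled into a request like this. This is a minimal sketch using only the Python standard library; the `Authorization: Bearer` header and the `CRUCIBLE_API_KEY` environment variable are assumptions, not part of the spec above, so check your account's authentication docs before relying on them.

```python
import json
import os
import urllib.request

API_URL = "https://api.crucible.dev/v1/completions"

def build_request(prompt: str,
                  model: str = "crucible-1",
                  reasoning_mode: str = "standard",
                  max_tokens: int = 1024,
                  temperature: float = 0.2) -> urllib.request.Request:
    """Assemble a completions request; defaults mirror the parameter list above."""
    if reasoning_mode not in ("standard", "deep", "fast"):
        raise ValueError("reasoning_mode must be standard, deep, or fast")
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be in the range 0-1")
    payload = {
        "model": model,
        "prompt": prompt,
        "reasoning_mode": reasoning_mode,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Bearer auth is an assumption; substitute your real auth scheme.
            "Authorization": f"Bearer {os.environ.get('CRUCIBLE_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Summarize the key obligations in this contract.")
# urllib.request.urlopen(req) would send it; omitted here so the sketch
# runs without credentials.
```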
## Response

```json
{
  "id": "cmpl_abc123",
  "model": "crucible-1",
  "output": "The key obligations in this contract are...",
  "reasoning_trace": null,
  "usage": {
    "prompt_tokens": 312,
    "completion_tokens": 148,
    "total_tokens": 460
  }
}
```
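Reading the response is plain JSON handling. A minimal sketch, using the sample body above; the only invariant assumed here is the one visible in the sample, that `total_tokens` is the sum of `prompt_tokens` and `completion_tokens`.

```python
import json

# Sample response body, matching the schema shown above.
raw = """
{
  "id": "cmpl_abc123",
  "model": "crucible-1",
  "output": "The key obligations in this contract are...",
  "reasoning_trace": null,
  "usage": {
    "prompt_tokens": 312,
    "completion_tokens": 148,
    "total_tokens": 460
  }
}
"""

resp = json.loads(raw)
usage = resp["usage"]
# Usage accounting: total is prompt plus completion tokens.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
# reasoning_trace decodes as None when the API returns null, as in this sample.
trace = resp.get("reasoning_trace")
print(resp["output"])
```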