
Today we are releasing Reasoning v2, the new engine powering Deep Mode across all Crucible models. This is not an incremental update. It is a ground-up redesign of how our models approach multi-step inference.
What Is Different
Reasoning v1 used a fixed-depth search strategy. The model would always generate the same number of candidate reasoning steps regardless of problem complexity. Simple problems wasted compute on unnecessary steps. Hard problems sometimes ran out of depth before reaching a sound conclusion.
Reasoning v2 is adaptive. The model dynamically allocates reasoning depth based on problem complexity. A simple extraction task resolves in 2-3 steps. A complex multi-document synthesis may use 12-15. The result is better accuracy on hard tasks and lower latency on easy ones.
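To make the idea concrete, here is a minimal sketch of complexity-based depth allocation. All names are illustrative (this is not the actual Reasoning v2 implementation): we assume a complexity score in [0, 1] and map it linearly onto the 2-15 step range described above.

```python
# Illustrative sketch only: hypothetical function and parameter names,
# not the real Reasoning v2 internals.

def allocate_depth(complexity: float, min_depth: int = 2, max_depth: int = 15) -> int:
    """Map a complexity estimate in [0, 1] to a reasoning-depth budget."""
    depth = round(min_depth + complexity * (max_depth - min_depth))
    # Clamp to the allowed range in case the estimate falls outside [0, 1].
    return max(min_depth, min(max_depth, depth))

# A simple extraction task (low complexity) gets a shallow budget...
print(allocate_depth(0.0))  # 2
# ...while a hard multi-document synthesis gets a deep one.
print(allocate_depth(1.0))  # 15
```

The contrast with v1 is the key point: instead of a constant depth for every request, the budget is a function of the problem, so easy requests finish early and hard ones are allowed to go deeper.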
Performance Gains
14-point improvement on DOCLENS-v2 multi-document benchmark
22% reduction in median Deep Mode latency
31% reduction in reasoning trace length on simple tasks
No Migration Required
Reasoning v2 is live for all users now. Your existing API calls will automatically use the new engine. No parameter changes, no reintegration work.
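For illustration, a request that worked against Reasoning v1 keeps working unchanged. The field names below are hypothetical (we are not documenting the real API schema here); the point is simply that the same request body opts you in to the new engine with no edits.

```python
import json

# Hypothetical request body; field names are illustrative, not the real
# Crucible API schema. The same payload works before and after the upgrade.
def build_deep_mode_payload(prompt: str) -> bytes:
    """Build a Deep Mode request body. No new parameters are required
    to use Reasoning v2; engine selection happens server-side."""
    return json.dumps({"mode": "deep", "prompt": prompt}).encode("utf-8")

payload = build_deep_mode_payload("Summarize these contracts.")
```

If your integration already sends Deep Mode requests, it now runs on Reasoning v2 with no code changes.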
