AI Models

Choose the right model for each block based on your needs.

Model Selection

Every block in Deep Notebook lets you select which AI model executes your instructions. This appears as a dropdown next to the Run button — click it to see available options.


Different models excel at different tasks. Selecting the right one improves both speed and quality.

Available Models

Deep Notebook supports models from multiple providers, giving you flexibility for different use cases:

  • Claude 4.5 Sonnet (Anthropic): Latest Sonnet model with improved capabilities
  • Claude 4 Sonnet (Anthropic): Most capable for complex reasoning and analysis
  • Claude 4.5 Haiku (Anthropic): Fastest Claude model with solid capabilities
  • GPT-5 (OpenAI): Latest OpenAI flagship model
  • Gemini 2.5 Flash (Google): Fast model with improved capabilities
  • Gemini 2.5 Flash Lite (Google): Lightweight version optimized for speed
  • Cerebras GPT-OSS 120B (Cerebras): Large parameter model running on specialized hardware
  • Groq Kimi K2 (Groq): Fast model with reasoning capabilities

Choosing the Right Model

Use a faster model when:

  • Running simple lookups or data extractions
  • Processing high volumes quickly
  • Iterating on prompt structure and testing
  • Handling tasks with straightforward, well-defined outputs

Use a more capable model when:

  • Synthesizing information from many sources
  • Applying nuanced judgment or careful reasoning
  • Producing output where quality is critical (external reports, important communications)
  • Working through complex multi-step reasoning chains
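The guidelines above can be sketched as a simple decision heuristic. This is an illustrative example only, not part of Deep Notebook's interface; the task flags are hypothetical, though the model names match the list above.

```python
# Illustrative sketch: map task characteristics to a model choice.
# The flags below are hypothetical, not Deep Notebook settings.

def pick_model(high_volume: bool, needs_deep_reasoning: bool,
               output_is_critical: bool) -> str:
    """Apply the speed-versus-capability guidelines as a heuristic."""
    if needs_deep_reasoning or output_is_critical:
        return "Claude 4 Sonnet"    # most capable for complex reasoning
    if high_volume:
        return "Claude 4.5 Haiku"   # fastest Claude model
    return "Claude 4.5 Sonnet"      # balanced default

# A bulk extraction job favors speed; a final report favors capability.
print(pick_model(high_volume=True, needs_deep_reasoning=False,
                 output_is_critical=False))
print(pick_model(high_volume=False, needs_deep_reasoning=True,
                 output_is_critical=True))
```

In practice the boundaries are fuzzier than any heuristic, so treat this as a starting point and adjust per workflow.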

Model Persistence

Your model selection persists per block. If you choose one model for Block 1 and a different model for Block 2, each remembers its setting. This lets you optimize each step independently — fast extraction in early blocks, careful synthesis in later ones.
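The per-block behavior works like a mapping from block to model, with unconfigured blocks falling back to a default. This sketch is a conceptual illustration, not Deep Notebook's actual implementation; the block identifiers and default model are assumptions.

```python
# Conceptual sketch of per-block model persistence (not Deep Notebook's API).
block_models: dict[str, str] = {}
DEFAULT_MODEL = "Claude 4.5 Sonnet"  # assumed default for illustration

def set_model(block_id: str, model: str) -> None:
    """Remember the model chosen for one block."""
    block_models[block_id] = model

def get_model(block_id: str) -> str:
    """A block that was never configured uses the default."""
    return block_models.get(block_id, DEFAULT_MODEL)

set_model("block-1", "Claude 4.5 Haiku")  # fast extraction early
set_model("block-2", "Claude 4 Sonnet")   # careful synthesis later
print(get_model("block-1"), get_model("block-2"), get_model("block-3"))
```

The point is independence: changing one block's model never touches another's.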

Performance & Cost Considerations

More capable models generally:

  • Provide higher quality outputs for complex tasks
  • Take longer to respond
  • Consume more resources from your plan

Faster models generally:

  • Return results quickly
  • Work well for straightforward tasks
  • Are more resource-efficient

The right choice depends on your specific workflow. Experiment with different models to find the best balance of speed, quality, and resource usage for each step.