
OpenAI & Anthropic

Frontier LLMs — GPT-4 class and Claude — production-wired

What it is

The two model providers I default to for any new build. OpenAI for the broadest tool-use ecosystem and structured outputs; Anthropic Claude for long-context reasoning, prompt caching, and safety.

How Vaaani uses it

  • GPT-4o for multimodal tool-use agents (vision, function calling)
  • Claude Opus / Sonnet for long-context analysis (100k+ tokens)
  • Prompt caching to cut input-token costs by up to 90% on repeated system prompts
  • Streaming + structured JSON outputs for typed UI rendering
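The structured-outputs bullet above can be sketched without a network call: the request shape below uses OpenAI's `json_schema` response format, while the schema name and fields (`ui_card`, `title`, `bullets`) are illustrative placeholders, not part of any Vaaani project.

```python
import json

# Illustrative schema for a typed UI card; field names are made up.
CARD_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "bullets": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "bullets"],
    "additionalProperties": False,
}

def structured_request(prompt: str) -> dict:
    """Build kwargs for client.chat.completions.create with strict JSON output."""
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "ui_card", "schema": CARD_SCHEMA, "strict": True},
        },
    }

def render_card(raw: str) -> dict:
    """Parse the model's JSON reply into a dict the UI layer can type-check."""
    card = json.loads(raw)
    missing = {"title", "bullets"} - set(card)
    if missing:
        raise ValueError(f"model omitted required fields: {missing}")
    return card
```

With `strict: True` the model is constrained to the schema, so `render_card` can parse the reply straight into typed UI props instead of regex-scraping prose.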

Why it makes the cut

Most Vaaani agents run a hybrid: Claude for the heavy reasoning, GPT-4o-mini for the cheap routing. Choose the right model per call, not per project.
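"Per call, not per project" can be as simple as a routing function. A minimal sketch, with illustrative model IDs and a rough 4-characters-per-token heuristic; the 50k-token threshold is an assumed cutoff, not a fixed rule.

```python
def pick_model(prompt: str, needs_tools: bool = False) -> str:
    """Route each call to the cheapest model that can handle it."""
    approx_tokens = len(prompt) // 4      # crude 4-chars-per-token estimate
    if approx_tokens > 50_000:
        return "claude-opus-4-1"          # heavy long-context reasoning
    if needs_tools:
        return "gpt-4o"                   # multimodal tool use / function calling
    return "gpt-4o-mini"                  # cheap routing and classification
```

The router runs before every call, so a single agent can mix an expensive reasoning pass with dozens of cheap classification calls in one workflow.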

Sample code

# pip install anthropic; expects ANTHROPIC_API_KEY in the environment
from anthropic import Anthropic

client = Anthropic()

resp = client.messages.create(
    model="claude-opus-4-1",   # check Anthropic's models list for current IDs
    max_tokens=1024,
    messages=[{"role": "user",
               "content": "Summarize this 80-page contract."}],
)
print(resp.content[0].text)
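The prompt-caching point above comes down to marking the big, repeated system prompt with a `cache_control` breakpoint in the Messages API. A minimal sketch that builds the call kwargs; the model ID and system-prompt text are illustrative.

```python
LONG_SYSTEM_PROMPT = (
    "You are a contract-analysis assistant. "
    "Follow the firm's review checklist..."  # imagine several thousand tokens here
)

def cached_call_kwargs(user_msg: str) -> dict:
    """Build kwargs for client.messages.create that mark the long system
    prompt as cacheable, so repeat calls hit the discounted cached rate."""
    return {
        "model": "claude-opus-4-1",   # illustrative; check current model IDs
        "max_tokens": 1024,
        "system": [{
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},   # cache breakpoint
        }],
        "messages": [{"role": "user", "content": user_msg}],
    }
```

Only the user message changes between calls, so everything up to the breakpoint is read from cache on subsequent requests within the cache lifetime.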


Have a project that needs OpenAI?

30-min discovery call. You describe the busywork; I map it to an AI worker and a budget.