Query GPT, Claude, Grok, and Gemini in parallel with one TypeScript function
February 26, 2026
Most multi-model AI setups involve the same boilerplate: initialise four SDK clients, fire four parallel requests, catch four different error shapes, normalise four different response formats. The Decision Memos SDK collapses that into a single typed function call.
Install
npm install decisionmemos

Query four models in parallel
import { createMultiModelQuery } from 'decisionmemos';
const query = createMultiModelQuery();
const result = await query.ask(
  'Should we migrate to microservices or adopt a modular monolith?'
);

for (const r of result.responses) {
  console.log(`[${r.modelName}] ${r.response.slice(0, 200)}`);
}
// → 4 typed responses from 4 models, in parallel

The SDK reads provider API keys from environment variables and gracefully skips any provider whose key isn't present. You can use all four models or just two — whatever you have keys for.
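The panel is configured entirely through environment variables. OPENAI_API_KEY and ANTHROPIC_API_KEY match the client examples later in this post; the Grok and Gemini variable names below are assumptions — check the SDK reference for the exact names.

```shell
# Set keys only for the providers you want on the panel; the SDK skips the rest.
# XAI_API_KEY and GEMINI_API_KEY are assumed names -- check the SDK docs.
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export XAI_API_KEY="xai-..."        # Grok (assumed variable name)
export GEMINI_API_KEY="AI..."       # Gemini (assumed variable name)
```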
What you get back
Each response in result.responses is fully typed: modelName, provider, response (the text), timestamp, latency, and an optional error field if that model failed. The top-level result also gives you successCount, errorCount, and totalLatency across the panel.
// result shape
{
  question: string;
  responses: ModelResponse[]; // one per model
  successCount: number;
  errorCount: number;
  totalLatency: number;
  timestamp: Date;
}

Individual model failures don't throw — they're captured in the error field of that response and the rest of the panel continues. This makes the SDK resilient to transient provider outages.
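Because failures surface as data rather than exceptions, you can split a panel into successes and failures with plain filters. This is a sketch: partitionResponses is a hypothetical helper, and the ModelResponse interface is reconstructed from the fields listed above rather than copied from the package — prefer importing the real type from 'decisionmemos'.

```typescript
// Reconstructed from the fields described above; the package exports the
// real ModelResponse type -- import that one in production code.
interface ModelResponse {
  modelName: string;
  provider: string;
  response: string;
  timestamp: Date;
  latency: number;
  error?: string; // present only when that model failed
}

// Hypothetical helper: separate successful answers from failed ones.
function partitionResponses(responses: ModelResponse[]) {
  const ok = responses.filter((r) => r.error === undefined);
  const failed = responses.filter((r) => r.error !== undefined);
  return { ok, failed };
}
```

A typical use is logging `failed` for observability while synthesising only over `ok`.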
Use individual clients directly
You can also use the model clients independently if you only want one or two providers, or want to manage them yourself.
import { OpenAIClient, AnthropicClient } from 'decisionmemos';
const gpt = new OpenAIClient(process.env.OPENAI_API_KEY!, 'gpt-4o');
const claude = new AnthropicClient(process.env.ANTHROPIC_API_KEY!);
const [a, b] = await Promise.all([
  gpt.query('What are the risks of a big-bang rewrite?'),
  claude.query('What are the risks of a big-bang rewrite?'),
]);

Attribution
If you ship the SDK in your product, the package exports a DECISION_MEMOS_ATTRIBUTION constant with a label and URL you can use to show a 'Powered by Decision Memos' link — no configuration required.
import { DECISION_MEMOS_ATTRIBUTION } from 'decisionmemos';
// { label: 'Powered by Decision Memos', url: 'https://decisionmemos.com' }

When you need more: the hosted API
The free SDK gives you raw parallel responses. The Decision Memos hosted API adds the orchestration layer: advisor personas with tuned system prompts, dynamic briefing questions, synthesis with consensus scoring (strong / moderate / weak), and a structured Decision Memo artifact — verdict, trade-offs, risks, and next steps.
The SDK and the hosted API share the same TypeScript types. Upgrading is a one-line change — swap createMultiModelQuery for an API call to POST /v1/deliberate and the response shape is the same DecisionMemo interface already exported from the package.
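The swap can be sketched as a small fetch wrapper. Only the POST /v1/deliberate path comes from the description above; the base URL, bearer-token auth scheme, and { question } body are assumptions made for illustration — check the API reference before relying on them.

```typescript
// Sketch of calling the hosted API. Everything except the /v1/deliberate
// path is an assumption: base URL, auth header, and body field names.
const BASE_URL = 'https://api.decisionmemos.com'; // assumed base URL

function buildDeliberateRequest(question: string, apiKey: string) {
  return {
    url: `${BASE_URL}/v1/deliberate`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      },
      body: JSON.stringify({ question }), // assumed body shape
    },
  };
}

// The response is expected to parse into the same DecisionMemo interface
// already exported from the package.
async function deliberate(question: string, apiKey: string) {
  const { url, init } = buildDeliberateRequest(question, apiKey);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`deliberate failed: ${res.status}`);
  return res.json();
}
```

Keeping the request construction in a separate function makes the swap testable without network access.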
The SDK source is on GitHub and the package is on npm. Full SDK reference at decisionmemos.com/docs/sdk.