Quickstart: Build a Chat App

Send your first message and stream a response with the OpenRouter SDK

Goal: Learn the fundamentals of OpenRouter by building a TypeScript chat app that sends messages and streams responses.

Outcome: A working multi-turn conversation loop that can talk to any of the 600+ models available on the platform by changing a single string.


Prerequisites

You need Node.js installed and an OpenRouter API key, exported as OPENROUTER_API_KEY.

1. Create a project and install the SDK

Set up a new Node.js project and add the OpenRouter client SDK. The SDK is ESM-only, so set the package type to module. Install tsx so you can run the TypeScript examples directly.

$ mkdir openrouter-chat && cd openrouter-chat
$ npm init -y
$ npm pkg set type=module
$ npm install @openrouter/sdk
$ npm install --save-dev tsx

2. Send your first message

Create chat.ts with a client instance and a single chat completion request. The apiKey is read from the environment so you never hard-code credentials.

import { OpenRouter } from '@openrouter/sdk';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const completion = await client.chat.send({
  chatRequest: {
    model: 'google/gemini-3.1-flash-lite',
    messages: [
      { role: 'user', content: 'Say hello in one sentence.' },
    ],
  },
});

console.log(completion.choices[0]?.message.content);
console.log({
  promptTokens: completion.usage?.promptTokens,
  completionTokens: completion.usage?.completionTokens,
});

Run it with your API key:

$ OPENROUTER_API_KEY=sk-or-v1-... npx tsx chat.ts

You should see a single text response printed to the console. The SDK returns token usage in camelCase fields such as promptTokens and completionTokens. The completion.choices array follows the same shape as the Chat Completions response.

3. Stream the response

Streaming returns text as it is generated instead of waiting for the full response. Pass stream: true and iterate over the returned async iterable. Each chunk contains a delta with the new text fragment.

import { OpenRouter } from '@openrouter/sdk';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const stream = await client.chat.send({
  chatRequest: {
    model: 'google/gemini-3.1-flash-lite',
    messages: [
      { role: 'user', content: 'Explain how routers work in three sentences.' },
    ],
    stream: true,
  },
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
console.log();

Text now prints incrementally. See the Streaming reference for the full SSE event format.
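The chunk-handling pattern above can be exercised offline. Here is a minimal sketch where a mock async generator stands in for the SDK's stream (the mock and its chunk type are illustrative, not part of the SDK), showing how the delta fragments accumulate into the full text:

```typescript
// Minimal chunk shape matching what the loop above reads.
type Chunk = { choices: { delta?: { content?: string } }[] };

// Mock stream standing in for the SDK's async iterable (illustrative only).
async function* mockStream(): AsyncGenerator<Chunk> {
  for (const piece of ['Routers ', 'forward ', 'packets.']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Same accumulation logic as the real streaming loop.
let text = '';
for await (const chunk of mockStream()) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) text += delta;
}
console.log(text); // Routers forward packets.
```

Because the SDK's stream is a plain async iterable, any `for await` consumer works: write to stdout, buffer into a string, or both, as the multi-turn example below does.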

4. Add multi-turn conversation

Multi-turn works by sending the full message history with each request. The model uses all previous messages as context. Append each user input and assistant response to a messages array before the next call.

import { OpenRouter } from '@openrouter/sdk';
import * as readline from 'node:readline';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const messages: { role: 'user' | 'assistant'; content: string }[] = [];

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

function ask(): void {
  rl.question('You: ', async (input) => {
    if (input.toLowerCase() === 'exit') {
      rl.close();
      return;
    }

    messages.push({ role: 'user', content: input });

    const stream = await client.chat.send({
      chatRequest: {
        model: 'google/gemini-3.1-flash-lite',
        messages,
        stream: true,
      },
    });

    let response = '';
    process.stdout.write('Assistant: ');
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) {
        process.stdout.write(delta);
        response += delta;
      }
    }
    console.log();

    messages.push({ role: 'assistant', content: response });
    ask();
  });
}

ask();

Run the file and type messages. The model remembers prior turns because the full messages array is sent with each request. Type exit to quit.
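The context carryover is nothing more than array growth. A sketch of the bookkeeping in isolation (no SDK calls; appendTurn is an illustrative helper name, not an SDK function):

```typescript
type Msg = { role: 'user' | 'assistant'; content: string };

const history: Msg[] = [];

// Each completed turn appends the user input and the model's reply,
// so the next request carries the whole conversation as context.
function appendTurn(userInput: string, assistantReply: string): void {
  history.push({ role: 'user', content: userInput });
  history.push({ role: 'assistant', content: assistantReply });
}

appendTurn('What is OpenRouter?', 'A unified API for many models.');
appendTurn('Which models?', 'Hundreds, across several providers.');

// By the third user turn, the request would already include
// four messages of prior context.
console.log(history.length); // 4
```

This is also why long conversations grow in token cost: every prior message is re-sent and re-billed as prompt tokens on each request.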

5. Swap models

OpenRouter gives you access to hundreds of models through one API. Change the model string to switch providers — no other code changes needed.

// Use OpenAI's latest chat model
model: 'openai/gpt-chat-latest',

// Use Anthropic Claude Sonnet latest
model: 'anthropic/claude-sonnet-latest',

// Use a free model
model: 'openrouter/free',

Browse all available models at openrouter.ai/models or query the Models API programmatically.
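If you want to discover model ids from code, a hedged sketch of consuming the Models API follows. The endpoint URL and the { data: [...] } response shape are assumptions based on OpenRouter's public docs; the extractIds helper is illustrative:

```typescript
// Assumed shape of one entry in the Models API response.
type ModelInfo = { id: string };

// Pull just the model ids out of a Models API response body.
function extractIds(body: { data: ModelInfo[] }): string[] {
  return body.data.map((m) => m.id);
}

// Usage against the live API (requires network; URL is an assumption):
// const res = await fetch('https://openrouter.ai/api/v1/models');
// const ids = extractIds(await res.json());

// Offline demonstration with a mock response body.
const ids = extractIds({ data: [{ id: 'openrouter/free' }, { id: 'google/gemini-3.1-flash-lite' }] });
console.log(ids.join(', '));
```

Any id returned this way can be dropped straight into the model field of the chat examples above.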

Check your work

  • npx tsx chat.ts prints a streamed response to the console
  • A multi-turn conversation maintains context across turns (ask a follow-up that references a previous answer)
  • Changing the model string switches to a different provider with no other code changes
  • The non-streaming response includes a usage object with promptTokens and completionTokens

Next steps

  • Connect a coding agent to OpenRouter
  • Explore the Agent SDK for built-in multi-turn loops and tool execution