What it is
JavaScript on the server. Single-threaded event loop, non-blocking I/O: ideal for orchestrating LLM calls, database writes, and webhook fan-out without thread overhead.
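To make that concrete, here is a minimal sketch of the fan-out pattern on Node 18+ (which ships a global fetch); the endpoint URLs and payload are hypothetical placeholders, not a real API:

```js
// Fan out independent network calls concurrently on one thread.
// The URLs and payload are hypothetical placeholders.
const endpoints = [
  "https://api.example.com/llm",
  "https://api.example.com/db-write",
  "https://api.example.com/webhook",
];

// Each fetch yields to the event loop while it waits on the network,
// so all three requests are in flight at the same time.
const responses = await Promise.all(
  endpoints.map((url) =>
    fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ task: "demo" }),
    })
  )
);

for (const res of responses) console.log(res.status);
```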
How Vaaani uses it
- Streaming LLM responses to clients via SSE / WebSockets (see the sketch after this list)
- Background workers that drain a queue of AI tasks
- BFF (backend-for-frontend) layers in front of Python AI services
- Edge functions on Cloudflare Workers / Vercel Edge
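A minimal sketch of the SSE pattern from the first bullet, using Node's built-in http module; streamTokens is a hypothetical stand-in for a real LLM stream:

```js
import { createServer } from "node:http";

// Hypothetical stand-in for an LLM token stream.
async function* streamTokens() {
  for (const token of ["Hello", ", ", "world", "!"]) {
    await new Promise((r) => setTimeout(r, 100)); // simulated model latency
    yield token;
  }
}

createServer(async (req, res) => {
  // SSE headers: keep the connection open and push events as they arrive.
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  for await (const token of streamTokens()) {
    res.write(`data: ${JSON.stringify({ token })}\n\n`);
  }
  res.end("data: [DONE]\n\n");
}).listen(3000);
```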
Why it makes the cut
Most AI workers spend something like 95% of their time waiting on a network call. Node's event loop makes that waiting cheap. Python can do this too (asyncio), but in Node it feels native.
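To see why the waiting is cheap, here is a sketch of the queue-drain worker from the list above; the in-memory queue and the handle stub are placeholders for a real broker (e.g. BullMQ) and a real LLM call:

```js
// In-memory queue of AI tasks — a placeholder for a real broker.
const queue = [
  { id: "1", prompt: "Summarize ticket 42" },
  { id: "2", prompt: "Draft a reply" },
  { id: "3", prompt: "Classify intent" },
];

// Stand-in for an LLM call: the worker mostly just awaits the network here.
async function handle(task) {
  await new Promise((r) => setTimeout(r, 500));
  console.log(`done: ${task.id}`);
}

// N "lanes" share one thread; each parks on await while it waits,
// so idle time costs no threads and almost no memory.
async function drain(concurrency) {
  const lanes = Array.from({ length: concurrency }, async () => {
    let task;
    while ((task = queue.shift())) await handle(task);
  });
  await Promise.all(lanes);
}

await drain(2);
```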
Sample code
```js
import { ChatOpenAI } from "@langchain/openai";

// Stream a completion token-by-token instead of waiting for the full reply.
const model = new ChatOpenAI({ model: "gpt-4o" });
const stream = await model.stream("Hello");

// Each chunk is printed as soon as the model emits it.
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```
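To run it: install @langchain/openai, set OPENAI_API_KEY in the environment (ChatOpenAI reads it by default), and execute the file as an ES module, since top-level await requires ESM.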
Have a project that needs Node?
30-min discovery call. You describe the busywork; I map it to an AI worker and a budget.