QUIC Streams: Sending and Receiving Data
This is Part 3 in a series on the new node:quic module in Node.js. In
Part 2, we covered endpoints, sessions,
and the connection lifecycle. In this post, we focus on the core unit of data
transfer in QUIC: the stream.
Reminder: the node:quic module is highly experimental (Stability 1.0). The
APIs described here may change in future releases.
Two kinds of streams
QUIC defines two types of streams:
- Bidirectional streams: can carry data in both directions. Either side of the connection can read from and write to the stream. These are the most common type and are what you will use for request/response patterns.
- Unidirectional streams: carry data in one direction only. The side that creates the stream can write to it; the other side can only read. These are useful for server-push patterns or one-way data feeds.
Both types are multiplexed within a single QUIC session. Each stream has its own flow control, so a slow consumer on one stream does not block data on other streams.
Creating streams
Streams are created from a session:
create-streams.mjs
// Create a bidirectional stream.
const bidiStream = await session.createBidirectionalStream();
// Create a unidirectional stream.
const uniStream = await session.createUnidirectionalStream();
Both methods return a promise that resolves to a QuicStream object. The
promise resolves immediately if the session has capacity for a new stream. If
the peer's stream limit has been reached, the stream enters a "pending" state
and the promise resolves once the peer grants capacity.
You can inspect a stream's properties:
console.log(stream.id); // BigInt stream ID.
console.log(stream.direction); // 'bidi' or 'uni'.
console.log(stream.session); // The parent QuicSession.
console.log(stream.destroyed); // Boolean.
console.log(stream.early); // True if 0-RTT data (server side).
Receiving streams
When the remote peer creates a stream, it is delivered via the session's
onstream callback:
receive-streams.mjs
import { listen } from 'node:quic';
const endpoint = await listen(async (session) => {
session.onstream = async (stream) => {
console.log('New stream:', stream.id, stream.direction);
if (stream.direction === 'bidi') {
// Can read from and write to this stream.
} else {
// Unidirectional: can only read.
}
};
}, options);
Sending data: body sources
The simplest way to send data on a stream is by providing a body when
creating the stream. The body option is remarkably flexible -- it accepts
many different types:
String
const stream = await session.createBidirectionalStream({
body: 'Hello, world!',
});
Strings are encoded as UTF-8. You can also use setBody() after creation:
const stream = await session.createBidirectionalStream();
stream.setBody('Hello, world!');
ArrayBuffer and TypedArrays
const encoder = new TextEncoder();
const data = encoder.encode('binary data');
const stream = await session.createBidirectionalStream({
body: data,
});
// Or with a raw ArrayBuffer:
const buffer = new ArrayBuffer(1024);
const stream2 = await session.createBidirectionalStream({
body: buffer,
});
SharedArrayBuffer
const shared = new SharedArrayBuffer(256);
const view = new Uint8Array(shared);
view.set([1, 2, 3, 4, 5]);
const stream = await session.createBidirectionalStream({
body: shared,
});
Blob
const blob = new Blob(['Hello from a Blob!'], { type: 'text/plain' });
const stream = await session.createBidirectionalStream({
body: blob,
});
Blobs are especially useful for sending large data because they can be read in chunks internally, avoiding the need to load the entire contents into memory at once.
FileHandle
import { open } from 'node:fs/promises';
const file = await open('/path/to/large-file.bin', 'r');
const stream = await session.createBidirectionalStream({
body: file,
});
// The file handle is consumed and closed automatically.
This is one of the most efficient body sources for sending file data. The QUIC implementation reads directly from the file descriptor in the send loop without buffering the entire file in JavaScript memory.
Async iterable
async function* generateData() {
const encoder = new TextEncoder();
for (let i = 0; i < 10; i++) {
yield encoder.encode(`chunk ${i}\n`);
}
}
const stream = await session.createBidirectionalStream({
body: generateData(),
});
Async iterables provide a natural pull-based data source. The QUIC send loop pulls chunks from the iterator as the congestion window allows, providing automatic backpressure.
Sync iterable
const chunks = [
new Uint8Array([1, 2, 3]),
new Uint8Array([4, 5, 6]),
new Uint8Array([7, 8, 9]),
];
const stream = await session.createBidirectionalStream({
body: chunks,
});
ReadableStream
const readable = new ReadableStream({
pull(controller) {
controller.enqueue(new Uint8Array([1, 2, 3]));
controller.close();
},
});
const stream = await session.createBidirectionalStream({
body: readable,
});
Promise
You can even provide a Promise that resolves to any of the above types:
const stream = await session.createBidirectionalStream({
body: fetch('https://example.com/data').then((r) => r.body),
});
null (empty body)
Pass null to send a FIN immediately with no body data. This is useful for
signaling intent to the peer without sending any payload:
const stream = await session.createBidirectionalStream({
body: null,
});
// The write side is closed immediately.
The Writer API
For scenarios where you need incremental control over writing -- sending data
in multiple steps, applying backpressure, or writing conditionally -- the
stream.writer property provides a low-level writing interface.
Important: the Writer API and the body / setBody() approach are mutually
exclusive. Once you access stream.writer, you cannot call setBody(), and
vice versa.
Basic writing
writer-basic.mjs
const stream = await session.createBidirectionalStream();
const writer = stream.writer;
const encoder = new TextEncoder();
// Synchronous write: returns true if there is room in the buffer,
// false if the high-water mark has been reached (backpressure).
const ok = writer.writeSync(encoder.encode('first chunk'));
// Write more data.
writer.writeSync(encoder.encode('second chunk'));
// Signal that no more data will be written.
writer.endSync();
Async writing
The writer also supports async versions that return promises:
const writer = stream.writer;
// Async write: resolves when the data has been accepted.
await writer.write(encoder.encode('async chunk'));
// Async end: resolves when the FIN has been sent.
await writer.end();
The async methods accept an optional AbortSignal for cancellation:
const ac = new AbortController();
setTimeout(() => ac.abort(), 5000);
try {
await writer.write(encoder.encode('data'), { signal: ac.signal });
} catch (err) {
if (err.name === 'AbortError') {
console.log('Write was aborted');
}
}
Batch writing
For sending multiple chunks at once:
const chunks = [
encoder.encode('chunk 1'),
encoder.encode('chunk 2'),
encoder.encode('chunk 3'),
];
// Synchronous batch.
writer.writevSync(chunks);
// Or async batch.
await writer.writev(chunks);
Backpressure with desiredSize
The writer tracks how much room is left in the internal buffer via the
desiredSize property:
backpressure.mjs
const stream = await session.createBidirectionalStream();
const writer = stream.writer;
const encoder = new TextEncoder();
// The highWaterMark controls the internal buffer size.
// Default is 65536 bytes. You can adjust it:
stream.highWaterMark = 16384;
const chunk = encoder.encode('x'.repeat(4096));
while (someCondition) {
const ok = writer.writeSync(chunk);
if (!ok) {
// Buffer is full. Wait for it to drain.
// desiredSize is negative when over the high-water mark.
console.log('Backpressure, desiredSize:', writer.desiredSize);
// In practice, you would yield to the event loop here
// and resume writing when there is capacity.
break;
}
}
writer.endSync();
Failing a writer
To abort the writable side of a stream with an error:
writer.fail(new Error('upload cancelled'));
// Sends RESET_STREAM to the peer.
This is different from stream.destroy(): fail() only resets the writable
side (sends RESET_STREAM), while destroy() resets both sides (sends both
RESET_STREAM and STOP_SENDING).
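A common use of fail() is to make sure an error in your data source resets the writable side instead of leaving the peer waiting for a FIN. The helper below is a sketch, not part of the node:quic API -- pipeToWriter is a hypothetical name, and it assumes only a writer with the write(), end(), and fail() methods described above plus any async iterable source:

```javascript
// Hypothetical helper: pump an async iterable into a writer.
// On success the peer sees a normal FIN; if the source throws,
// the writable side is failed (RESET_STREAM) and the error rethrown.
async function pipeToWriter(writer, source) {
  try {
    for await (const chunk of source) {
      await writer.write(chunk);
    }
    await writer.end();
  } catch (err) {
    writer.fail(err);
    throw err;
  }
}
```

This keeps the stream's terminal state unambiguous: the peer either receives everything plus a FIN, or a reset -- never a stream that silently stops mid-body.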
Writer with Symbol.dispose
The writer supports Symbol.dispose for use with using:
{
using writer = stream.writer;
writer.writeSync(encoder.encode('data'));
// If the block exits without calling end(), the writer
// calls fail() automatically to clean up.
}
Reading data: async iteration
QUIC streams implement the async iterable protocol. The simplest way to read
data from a stream is with for await...of:
reading.mjs
const stream = await session.createBidirectionalStream({
body: encoder.encode('ping'),
});
// Each iteration yields a Uint8Array[] (an array of chunks).
for await (const chunks of stream) {
for (const chunk of chunks) {
console.log('Received chunk:', chunk.byteLength, 'bytes');
}
}
console.log('Stream ended');
The iterator yields arrays of Uint8Array chunks rather than individual chunks.
This batching reduces the number of async iterations needed when data arrives
quickly, improving throughput.
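If you would rather process one chunk at a time, you can flatten the batches with a small adapter layered on the iteration protocol above. This is a sketch -- eachChunk is a hypothetical helper name, not part of the module:

```javascript
// Flatten batched iteration (arrays of Uint8Array) into
// individual chunks, preserving order.
async function* eachChunk(stream) {
  for await (const batch of stream) {
    yield* batch;
  }
}

// Usage:
// for await (const chunk of eachChunk(stream)) {
//   console.log('chunk of', chunk.byteLength, 'bytes');
// }
```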
Collecting the full body
The stream/iter module provides helpers for collecting an entire stream's
contents:
import { bytes, text } from 'stream/iter';
// Collect as a single Uint8Array.
const data = await bytes(stream);
// Collect as a UTF-8 string.
const str = await text(stream);
Breaking out of iteration
You can exit the iterator early with break. This cleans up the iterator:
for await (const chunks of stream) {
// Process the first batch and stop.
console.log('Got', chunks.length, 'chunks');
break;
}
Non-readable streams
The creating side of a unidirectional stream is not readable -- iterating it
completes immediately with { done: true }:
const uniStream = await session.createUnidirectionalStream({
body: encoder.encode('one-way data'),
});
// The creator of a uni stream can only write, not read.
for await (const _ of uniStream) {
// This loop body never executes.
}
Flow control
QUIC has two levels of flow control:
- Connection-level flow control limits the total amount of data that can be in flight across all streams on a session.
- Stream-level flow control limits the amount of data that can be in flight on a single stream.
Both are configured via transport parameters:
flow-control.mjs
import { readFileSync } from 'node:fs';
import { createPrivateKey } from 'node:crypto';
import { listen, connect } from 'node:quic';
import { bytes } from 'stream/iter';
const key = createPrivateKey(readFileSync('key.pem'));
const cert = readFileSync('cert.pem');
const encoder = new TextEncoder();
const endpoint = await listen(async (session) => {
session.onstream = async (stream) => {
const data = await bytes(stream);
console.log('Server received', data.byteLength, 'bytes');
stream.writer.endSync();
await stream.closed;
session.close();
};
}, {
sni: { '*': { keys: [key], certs: [cert] } },
alpn: ['flow-test'],
transportParams: {
// Connection-level: max total data in flight.
initialMaxData: 65536n,
// Stream-level: max data in flight on a single bidi stream
// that the peer opened (inbound from local perspective).
initialMaxStreamDataBidiRemote: 16384n,
// Stream-level: max data in flight on a bidi stream that
// we opened (outbound from local perspective).
initialMaxStreamDataBidiLocal: 16384n,
// Max data in flight on a unidirectional stream.
initialMaxStreamDataUni: 8192n,
// Max number of concurrent bidi/uni streams the peer can open.
initialMaxStreamsBidi: 100n,
initialMaxStreamsUni: 100n,
},
});
const session = await connect(endpoint.address, {
servername: 'localhost',
alpn: 'flow-test',
transportParams: {
initialMaxData: 65536n,
initialMaxStreamDataBidiRemote: 16384n,
initialMaxStreamDataBidiLocal: 16384n,
},
});
await session.opened;
// Send data that exceeds the stream window. The QUIC stack
// handles the flow control automatically -- the send loop
// pauses when the window is exhausted and resumes when the
// peer sends WINDOW_UPDATE frames.
const largeData = encoder.encode('x'.repeat(100_000));
const stream = await session.createBidirectionalStream({
body: largeData,
});
for await (const _ of stream) { /* drain response */ }
await stream.closed;
await session.close();
await endpoint.close();
You can also control the maximum flow control window that the session will advertise:
const session = await connect(address, {
maxStreamWindow: 1048576n, // Max per-stream window: 1 MB.
maxWindow: 16777216n, // Max connection window: 16 MB.
});
The onblocked callback
When a stream's send buffer reaches the flow control limit, the onblocked
callback fires:
const stream = await session.createBidirectionalStream();
stream.onblocked = () => {
console.log('Stream is blocked by flow control');
};
This is primarily an informational signal. The QUIC stack handles flow control
automatically -- you do not need to take action in onblocked. It is useful
for diagnostics and understanding when flow control is limiting throughput.
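For example, you could record how often and how recently a stream hit its flow-control limit. A minimal sketch, assuming only that the stream exposes an assignable onblocked callback; trackBlocked is a hypothetical helper, not a node:quic API:

```javascript
// Record each flow-control block for later inspection.
function trackBlocked(stream) {
  const info = { count: 0, lastBlockedAt: null };
  stream.onblocked = () => {
    info.count += 1;
    info.lastBlockedAt = Date.now();
  };
  return info;
}
```

A consistently high count suggests the peer's stream window (or your connection window) is the throughput bottleneck rather than the network itself.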
Stream lifecycle
A QUIC stream goes through several phases:
- Pending -- the stream has been requested but the peer's stream limit has been reached. During this phase, the stream's id is not yet assigned. You can still configure the stream (set body, headers, priority) and those settings will be applied once the stream becomes active.
- Open -- the stream ID has been assigned and data can flow in both directions (for bidi) or one direction (for uni).
- Half-closed -- one side has sent a FIN (end of stream). For a bidirectional stream, this means one direction is closed while the other remains open.
- Closed -- both sides have finished. The stream.closed promise resolves.
- Destroyed -- the stream's resources have been released.
The stream.closed promise
// Clean close: resolves when both directions are finished.
await stream.closed;
// Error: rejects if the stream is destroyed with an error.
try {
await stream.closed;
} catch (err) {
console.error('Stream error:', err.code, err.errorCode);
}
Destroying a stream
// Destroy without error: sends RESET_STREAM + STOP_SENDING
// with NO_ERROR code.
stream.destroy();
// Destroy with error: the error is forwarded to the peer.
stream.destroy(new Error('request cancelled'));
// Destroy with an explicit QUIC error code.
stream.destroy(new Error('cancelled'), { code: 8n });
Reset and stop-sending
When a stream is destroyed with an error, the QUIC stack sends two frames:
- RESET_STREAM -- tells the peer that the local side will not send any more data on this stream.
- STOP_SENDING -- tells the peer that the local side does not want to receive any more data on this stream.
The peer is notified via the onreset callback:
// On the receiving side:
session.onstream = (stream) => {
stream.onreset = (err) => {
console.log('Peer reset the stream:', err.errorCode);
};
};
Half-close pattern
With bidirectional streams, one side can close its write direction while still reading from the other side:
halfclose.mjs
import { listen, connect } from 'node:quic';
import { bytes } from 'stream/iter';
const encoder = new TextEncoder();
const decoder = new TextDecoder();
// Server: receive data, then send a response.
const endpoint = await listen(async (session) => {
session.onstream = async (stream) => {
// Read everything the client sends.
const request = await bytes(stream);
console.log('Server got:', decoder.decode(request));
// The client has closed its write side (FIN sent), but
// we can still write back on the same stream.
const writer = stream.writer;
writer.writeSync(encoder.encode('response data'));
writer.endSync();
await stream.closed;
session.close();
};
}, options);
// Client: send data, then read the response.
const session = await connect(endpoint.address, clientOptions);
await session.opened;
const stream = await session.createBidirectionalStream({
body: encoder.encode('request data'),
// body closes the write side after sending.
});
// Read the server's response.
const response = await bytes(stream);
console.log('Client got:', decoder.decode(response));
await stream.closed;
await session.close();
await endpoint.close();
Stream limits and pending streams
The number of concurrent streams is controlled by transport parameters. When the limit is reached, new streams enter a pending state:
pending-streams.mjs
import { readFileSync } from 'node:fs';
import { createPrivateKey } from 'node:crypto';
import { listen, connect } from 'node:quic';
import { bytes } from 'stream/iter';
const key = createPrivateKey(readFileSync('key.pem'));
const cert = readFileSync('cert.pem');
// Server: only allow 2 concurrent bidi streams from the peer.
const endpoint = await listen(async (session) => {
session.onstream = async (stream) => {
const data = await bytes(stream);
stream.writer.endSync();
await stream.closed;
};
}, {
sni: { '*': { keys: [key], certs: [cert] } },
alpn: ['limit-test'],
transportParams: {
initialMaxStreamsBidi: 2n,
},
});
const session = await connect(endpoint.address, {
servername: 'localhost',
alpn: 'limit-test',
});
await session.opened;
// Open 5 streams. The first 2 open immediately; the remaining 3
// enter a pending state until earlier streams close.
const streams = await Promise.all([
session.createBidirectionalStream({ body: 'stream 0' }),
session.createBidirectionalStream({ body: 'stream 1' }),
session.createBidirectionalStream({ body: 'stream 2' }),
session.createBidirectionalStream({ body: 'stream 3' }),
session.createBidirectionalStream({ body: 'stream 4' }),
]);
// All 5 promises resolve, but streams 2-4 may have waited for
// earlier streams to close before becoming active.
for (const s of streams) {
for await (const _ of s) { /* drain */ }
await s.closed;
}
await session.close();
await endpoint.close();
The key point is that createBidirectionalStream() does not fail when the
stream limit is reached. Instead, it returns a pending stream that activates
once the peer grants more stream capacity. This means you can open streams
eagerly without worrying about the remote peer's limits -- the QUIC stack
handles the queuing automatically.
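One caveat: each pending stream still holds its body source in memory on your side. If you are opening many streams with large bodies, you may want to bound how many you start locally, even though the QUIC stack would happily queue them. A sketch of a tiny counting semaphore -- an illustration for this post, not a node:quic feature:

```javascript
// Minimal counting semaphore to bound concurrent work.
function semaphore(max) {
  let active = 0;
  const waiters = [];
  return async function run(task) {
    // Re-check after each wakeup in case another caller won the slot.
    while (active >= max) {
      await new Promise((resolve) => waiters.push(resolve));
    }
    active += 1;
    try {
      return await task();
    } finally {
      active -= 1;
      const next = waiters.shift();
      if (next) next();
    }
  };
}

// Hypothetical usage -- sendFile is a placeholder for your own code:
// const limit = semaphore(4);
// await Promise.all(files.map((f) => limit(() => sendFile(session, f))));
```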
Stream statistics
Each stream tracks its own statistics:
const stats = stream.stats;
console.log({
bytesReceived: stats.bytesReceived,
bytesSent: stats.bytesSent,
createdAt: stats.createdAt,
openedAt: stats.openedAt,
destroyedAt: stats.destroyedAt,
receivedAt: stats.receivedAt,
ackedAt: stats.ackedAt,
finalSize: stats.finalSize,
maxOffset: stats.maxOffset,
maxOffsetAcknowledged: stats.maxOffsetAcknowledged,
maxOffsetReceived: stats.maxOffsetReceived,
});
All values are BigInts. The ackedAt timestamp is particularly useful -- it
tells you when the peer acknowledged receipt of all data on the stream.
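One practical use is measuring end-to-end delivery time for a stream. The helper below is a sketch: it assumes openedAt and ackedAt share a unit (this post does not pin the unit down) and returns null if the stream has not been acknowledged yet:

```javascript
// Time from stream open to full acknowledgment, as a BigInt in
// whatever unit the stats timestamps use (assumption: same unit).
function deliveryTime(stats) {
  if (stats.openedAt == null || stats.ackedAt == null) {
    return null; // Not yet opened or not yet fully acknowledged.
  }
  return stats.ackedAt - stats.openedAt;
}
```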
A complete file transfer example
Here is a more realistic example: a client that sends a file to a server, and the server writes it to disk:
file-transfer.mjs
import { readFileSync } from 'node:fs';
import { open } from 'node:fs/promises';
import { createPrivateKey } from 'node:crypto';
import { listen, connect } from 'node:quic';
import { bytes } from 'stream/iter';
const key = createPrivateKey(readFileSync('key.pem'));
const cert = readFileSync('cert.pem');
const encoder = new TextEncoder();
const decoder = new TextDecoder();
// Server: receive file data and write to disk.
const endpoint = await listen(async (session) => {
session.onstream = async (stream) => {
const data = await bytes(stream);
console.log(`Received ${data.byteLength} bytes`);
const out = await open('/tmp/received-file.bin', 'w');
await out.writeFile(data);
await out.close();
// Send acknowledgment.
const writer = stream.writer;
writer.writeSync(encoder.encode(`received ${data.byteLength} bytes`));
writer.endSync();
await stream.closed;
session.close();
};
}, {
sni: { '*': { keys: [key], certs: [cert] } },
alpn: ['file-transfer'],
});
// Client: send a file and read the acknowledgment.
const session = await connect(endpoint.address, {
servername: 'localhost',
alpn: 'file-transfer',
});
await session.opened;
// Open the file as a body source -- efficient, no full buffering.
const fileHandle = await open('/path/to/source-file.bin', 'r');
const stream = await session.createBidirectionalStream({
body: fileHandle,
});
// Read the server's acknowledgment.
const ack = await bytes(stream);
console.log('Server says:', decoder.decode(ack));
await stream.closed;
await session.close();
await endpoint.close();
The use of a FileHandle as a body source is significant: the QUIC
implementation reads directly from the file descriptor in the send loop,
avoiding the need to buffer the entire file in JavaScript memory. For large
files, this is substantially more efficient than reading the file into a
Buffer first.
What is next?
In Part 4, we will layer HTTP/3 on top of QUIC: request/response semantics with pseudo-headers, informational headers, trailing headers, stream priority, GOAWAY, and ORIGIN frames.