# Stream Examples
## Upload directly to S3

Use the AWS SDK v3 `Upload` helper with the Node.js `Readable` returned by `toNodeReadable()`. The upload streams the XLSX directly to S3 without writing a temporary file to disk.
```ts
import { createExcelSchema, createWorkbookStream } from "typed-xlsx";

// Minimal structural stubs standing in for @aws-sdk/client-s3 and
// @aws-sdk/lib-storage, so the example is self-contained.
declare class S3Client {
  constructor(config: { region: string });
}

declare class Upload {
  constructor(input: {
    client: unknown;
    params: {
      Bucket: string;
      Key: string;
      Body: NodeJS.ReadableStream;
      ContentType: string;
    };
  });
  done(): Promise<void>;
}

declare function fetchRowsCursor(): AsyncIterable<Array<{ id: string; amount: number }>>;

const schema = createExcelSchema<{ id: string; amount: number }>()
  .column("id", { accessor: "id" })
  .column("amount", { accessor: "amount", style: { numFmt: "#,##0.00" } })
  .build();

const s3 = new S3Client({ region: "us-east-1" });

async function uploadReportToS3(key: string) {
  const workbook = createWorkbookStream({ tempStorage: "file", memoryProfile: "low-memory" });
  const table = await workbook.sheet("Report").table("data", { schema });

  for await (const batch of fetchRowsCursor()) {
    await table.commit({ rows: batch });
  }

  const upload = new Upload({
    client: s3,
    params: {
      Bucket: "my-reports-bucket",
      Key: key,
      Body: workbook.toNodeReadable(),
      ContentType: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    },
  });
  await upload.done();
}
```
For most S3 use cases, prefer the SDK `Upload` helper above instead of implementing multipart upload logic yourself.
## Stream with progress tracking

Wrap `.commit()` calls with a counter to report progress back to a job queue, SSE connection, or log.
```ts
import { createExcelSchema, createWorkbookStream } from "typed-xlsx";

declare function getRowCount(): Promise<number>;
declare function fetchRowsCursor(): AsyncIterable<Array<{ id: string; value: number }>>;

const schema = createExcelSchema<{ id: string; value: number }>()
  .column("id", { accessor: "id" })
  .column("value", { accessor: "value" })
  .build();

async function exportWithProgress(jobId: string, onProgress: (pct: number) => void) {
  const workbook = createWorkbookStream({ tempStorage: "file" });
  const table = await workbook.sheet("Export").table("data", { schema });

  const totalRows = await getRowCount();
  let committed = 0;

  for await (const batch of fetchRowsCursor()) {
    await table.commit({ rows: batch });
    committed += batch.length;
    // Guard against totalRows === 0 so onProgress never receives NaN.
    onProgress(Math.round((committed / Math.max(totalRows, 1)) * 100));
  }

  await workbook.writeToFile(`./exports/${jobId}.xlsx`);
}
```
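The percentage calculation can also be factored into a small pure helper so it is easy to test in isolation; a minimal sketch (the `progressPct` name is illustrative, not part of typed-xlsx):

```typescript
// Clamped integer progress percentage. Guards against a zero total
// (e.g. an empty export) so callers never receive NaN, and caps the
// result at 100 in case the committed count drifts past the estimate.
export function progressPct(committed: number, totalRows: number): number {
  if (totalRows <= 0) return 0;
  return Math.min(100, Math.round((committed / totalRows) * 100));
}
```

Calling `onProgress(progressPct(committed, totalRows))` then keeps the loop body free of arithmetic edge cases.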
## Parallel schema preparation, serial commit

Schema building is pure TypeScript and does no async I/O. You can build schemas and even begin creating stream tables concurrently, as long as rows are committed to each individual table serially.
```ts
import { createExcelSchema, createWorkbookStream } from "typed-xlsx";

declare function fetchOrders(): AsyncIterable<Array<{ id: string; total: number }>>;
declare function fetchReturns(): AsyncIterable<Array<{ orderId: string; reason: string }>>;

const ordersSchema = createExcelSchema<{ id: string; total: number }>()
  .column("id", { accessor: "id" })
  .column("total", { accessor: "total" })
  .build();

const returnsSchema = createExcelSchema<{ orderId: string; reason: string }>()
  .column("orderId", { accessor: "orderId" })
  .column("reason", { accessor: "reason" })
  .build();

const workbook = createWorkbookStream();

// Create both tables concurrently; this is safe because each lives on its own sheet.
const [ordersTable, returnsTable] = await Promise.all([
  workbook.sheet("Orders").table("orders", { schema: ordersSchema }),
  workbook.sheet("Returns").table("returns", { schema: returnsSchema }),
]);

// Run both fetch-and-commit pipelines concurrently; within each pipeline,
// commits to its own table remain strictly serial.
await Promise.all([
  (async () => {
    for await (const batch of fetchOrders()) {
      await ordersTable.commit({ rows: batch });
    }
  })(),
  (async () => {
    for await (const batch of fetchReturns()) {
      await returnsTable.commit({ rows: batch });
    }
  })(),
]);

await workbook.writeToFile("./report.xlsx");
```
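When more than one producer feeds the same table, the serial-commit rule still applies. A minimal promise-chain queue can enforce it; `makeSerialQueue` is an illustrative helper, not part of typed-xlsx:

```typescript
// Per-table commit queue: chains each task onto the previous one so the
// table never sees overlapping commit() calls, even with many producers.
export function makeSerialQueue(): <T>(task: () => Promise<T>) => Promise<T> {
  let tail: Promise<unknown> = Promise.resolve();
  return <T>(task: () => Promise<T>): Promise<T> => {
    const next = tail.then(task);
    tail = next.catch(() => {}); // keep the chain alive after a rejection
    return next;
  };
}
```

Each producer would then call `enqueue(() => ordersTable.commit({ rows: batch }))` instead of committing directly.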
For HTTP response streaming, use the output-target examples in the streaming docs. The underlying pattern is the same: commit rows in batches, then finalize with exactly one of `pipeToNode()` or `toNodeReadable()` once the workbook is ready.
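As a rough sketch of that HTTP pattern, assuming a plain Node HTTP handler (the `sendXlsx` helper and its signature are illustrative, not part of typed-xlsx), the stream returned by `toNodeReadable()` can be piped into any response-like writable:

```typescript
import { pipeline } from "node:stream/promises";
import { Writable } from "node:stream";

// Pipe a finalized workbook stream into an HTTP response. The response is
// modeled as a Writable with setHeader(), matching http.ServerResponse.
export async function sendXlsx(
  body: NodeJS.ReadableStream,
  res: Writable & { setHeader(name: string, value: string): void },
  filename: string
): Promise<void> {
  res.setHeader(
    "Content-Type",
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
  );
  res.setHeader("Content-Disposition", `attachment; filename="${filename}"`);
  // pipeline handles backpressure and ends the response when the source ends.
  await pipeline(body, res);
}
```

In a handler this would be `await sendXlsx(workbook.toNodeReadable(), res, "report.xlsx")`.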