# Commit API
The workbook → sheet → table hierarchy, and how to feed rows in batches via `.commit()`.

The stream builder follows a three-level hierarchy: workbook → sheet → table. Each table receives rows through successive `.commit()` calls.
## Basic flow
```ts
import { createExcelSchema, createWorkbookStream } from "@chronicstone/typed-xlsx";

type SaleRow = {
  date: string;
  region: string;
  revenue: number;
};

const schema = createExcelSchema<SaleRow>()
  .column("date", { accessor: "date", header: "Date" })
  .column("region", { accessor: "region", header: "Region" })
  .column("revenue", {
    accessor: "revenue",
    header: "Revenue",
    style: { numFmt: "#,##0.00" },
    summary: (summary) => [
      summary.cell({
        init: () => 0,
        step: (acc, row) => acc + row.revenue,
        finalize: (acc) => acc,
        style: { numFmt: "#,##0.00", font: { bold: true } },
      }),
    ],
  })
  .build();

const workbook = createWorkbookStream({ tempStorage: "file" });
const sheet = workbook.sheet("Sales", { freezePane: { rows: 1 } });
const table = await sheet.table({ id: "sales", schema });

// Fetch and commit rows in batches
const batches: SaleRow[][] = [
  [{ date: "2024-01-01", region: "North", revenue: 4200 }],
  [{ date: "2024-01-02", region: "South", revenue: 3100 }],
];

for (const batch of batches) {
  await table.commit({ rows: batch });
}

await workbook.writeToFile("./sales.xlsx");
```
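The summary cell above behaves like a reducer over every committed row: `init` seeds an accumulator, `step` folds each row in, and `finalize` produces the written value. As a stand-alone sketch (the `runSummary` helper below is ours for illustration, not part of typed-xlsx), the revenue total is equivalent to:

```typescript
// Stand-alone sketch of the summary-cell fold (init/step/finalize).
// `runSummary` is a hypothetical helper, not a typed-xlsx API.
type SaleRow = { date: string; region: string; revenue: number };

function runSummary<A, R>(
  rows: SaleRow[],
  cell: { init: () => A; step: (acc: A, row: SaleRow) => A; finalize: (acc: A) => R },
): R {
  let acc = cell.init();
  for (const row of rows) acc = cell.step(acc, row); // fold each committed row
  return cell.finalize(acc); // value written into the summary cell
}

const total = runSummary(
  [
    { date: "2024-01-01", region: "North", revenue: 4200 },
    { date: "2024-01-02", region: "South", revenue: 3100 },
  ],
  { init: () => 0, step: (acc, row) => acc + row.revenue, finalize: (acc) => acc },
);
// total === 7300
```

Because the accumulator carries across `.commit()` calls, the summary reflects all batches, not just the last one.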
## Multiple sheets
Each call to `.sheet()` returns a new sheet builder. Sheets are written in the order they were created.
```ts
import { createExcelSchema, createWorkbookStream } from "@chronicstone/typed-xlsx";

const orderSchema = createExcelSchema<{ id: string; total: number }>()
  .column("id", { accessor: "id" })
  .column("total", { accessor: "total", style: { numFmt: "#,##0.00" } })
  .build();

const logSchema = createExcelSchema<{ ts: string; message: string }>()
  .column("ts", { accessor: "ts", header: "Timestamp" })
  .column("message", { accessor: "message", header: "Message" })
  .build();

const workbook = createWorkbookStream();

// fetchOrders() and fetchLogs() stand in for any source that yields
// row batches, e.g. a database cursor or a paginated API.

// Sheet 1 — Orders
const ordersSheet = workbook.sheet("Orders", { freezePane: { rows: 1 } });
const ordersTable = await ordersSheet.table({ id: "orders", schema: orderSchema });
for await (const batch of fetchOrders()) {
  await ordersTable.commit({ rows: batch });
}

// Sheet 2 — Audit log
const logSheet = workbook.sheet("Audit Log");
const logTable = await logSheet.table({ id: "log", schema: logSchema });
for await (const batch of fetchLogs()) {
  await logTable.commit({ rows: batch });
}

await workbook.writeToFile("./report.xlsx");
```
## Multiple tables per sheet
You can open multiple tables on the same sheet. The tables are laid out according to the sheet's `tablesPerRow` option.
```ts
import { createExcelSchema, createWorkbookStream } from "@chronicstone/typed-xlsx";

const schema = createExcelSchema<{ label: string; value: number }>()
  .column("label", { accessor: "label" })
  .column("value", { accessor: "value" })
  .build();

// Sample data; in practice these come from your aggregation queries.
const regionData = [{ label: "North", value: 4200 }];
const categoryData = [{ label: "Hardware", value: 3100 }];

const workbook = createWorkbookStream();
const sheet = workbook.sheet("Summary", { tablesPerRow: 2 });

const tableA = await sheet.table({ id: "by-region", schema });
const tableB = await sheet.table({ id: "by-category", schema });

await tableA.commit({ rows: regionData });
await tableB.commit({ rows: categoryData });

await workbook.writeToFile("./summary.xlsx");
```
## Column selection in stream mode
Pass `select` to `.table()` to restrict which columns are emitted. The schema itself is unchanged.
```ts
import { createExcelSchema, createWorkbookStream } from "@chronicstone/typed-xlsx";

const schema = createExcelSchema<{ id: string; name: string; internalNote: string }>()
  .column("id", { accessor: "id" })
  .column("name", { accessor: "name" })
  .column("internalNote", { accessor: "internalNote" })
  .build();

const workbook = createWorkbookStream();
const table = await workbook
  .sheet("Export")
  .table({ id: "data", schema, select: { exclude: ["internalNote"] } });

await table.commit({ rows: [{ id: "1", name: "Ada", internalNote: "internal" }] });
await workbook.writeToFile("./export.xlsx");
```
## Batch size guidance
There is no hard limit on batch size. Practical guidance:

- Too small (< 100 rows): per-commit spool write overhead dominates throughput.
- Too large (> 50k rows per batch): peak heap spikes temporarily while the batch itself is held in memory.
- Sweet spot: 1,000–10,000 rows per batch for most schemas, balancing memory use against spool overhead.

The right number depends on your row shape (wide vs. narrow) and available memory. For pipelines backed by a database cursor or paginated API, using whatever page size the source produces is usually fine.
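If your source's page size is far outside that range, a small re-batching step can bridge the gap. The sketch below is independent of typed-xlsx (the `rebatch` helper name is ours): it regroups any iterable or async iterable of rows into fixed-size batches, which can then be fed to `table.commit()` as in the examples above.

```typescript
// Re-batch a row source into fixed-size arrays.
// `rebatch` is a hypothetical helper, not part of typed-xlsx.
async function* rebatch<T>(
  source: AsyncIterable<T> | Iterable<T>,
  size: number,
): AsyncGenerator<T[]> {
  let batch: T[] = [];
  for await (const row of source) {
    batch.push(row);
    if (batch.length >= size) {
      yield batch;
      batch = [];
    }
  }
  // Flush the final partial batch, if any.
  if (batch.length > 0) yield batch;
}

// Example: 7 rows regrouped into batches of 3 → sizes [3, 3, 1].
async function demo(): Promise<number[]> {
  const rows = Array.from({ length: 7 }, (_, i) => ({ id: i }));
  const sizes: number[] = [];
  for await (const batch of rebatch(rows, 3)) {
    sizes.push(batch.length);
  }
  return sizes;
}
```

With a table handle from `.table()`, each yielded batch would then go through `await table.commit({ rows: batch })`.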