@nestbolt/excel

# Storage Drivers

Configure local, S3, Google Cloud Storage, and Azure Blob Storage backends for reading and writing Excel files.

By default, `ExcelService.store()` and `ExcelService.import()` work with the local filesystem. The storage driver system lets you configure named backends ("disks") to read and write files from cloud storage providers, all through a unified interface.

## Configuration

Define storage backends in the `disks` option of `ExcelModule.forRoot()` or `forRootAsync()`. Each disk has a name (the object key) and a driver-specific configuration:
```ts
import { Module } from "@nestjs/common";
import { ExcelModule } from "@nestbolt/excel";

@Module({
  imports: [
    ExcelModule.forRoot({
      disks: {
        local: {
          driver: "local",
          root: "./storage",
        },
        s3: {
          driver: "s3",
          bucket: "my-reports",
          region: "us-east-1",
          prefix: "excel",
        },
        gcs: {
          driver: "gcs",
          bucket: "my-reports",
          keyFilename: "/path/to/service-account.json",
        },
        azure: {
          driver: "azure",
          container: "reports",
          connectionString: process.env.AZURE_STORAGE_CONNECTION_STRING,
        },
      },
      defaultDisk: "local",
    }),
  ],
})
export class AppModule {}
```

## Using the Disk Parameter
All `ExcelService` methods that interact with storage accept an optional `disk` parameter as their last argument. When omitted, the `defaultDisk` is used. If no `defaultDisk` is configured, an implicit local driver handles the operation.
```ts
// Store to S3
await this.excelService.store(new UsersExport(), "reports/users.xlsx", undefined, "s3");

// Store using the decorator API to GCS
await this.excelService.storeFromEntity(
  UserEntity,
  users,
  "exports/users.xlsx",
  undefined,
  "gcs",
);

// Import from S3
const result = await this.excelService.import(
  new UsersImport(),
  "uploads/data.xlsx",
  undefined,
  "s3",
);

// Shorthand reads also support disk
const rows = await this.excelService.toArray("data.xlsx", undefined, "s3");
const objects = await this.excelService.toCollection("data.xlsx", undefined, "gcs");
```

## Local Driver
The default driver. Reads and writes files on the local filesystem. No additional packages are required.
### Configuration

```ts
{
  driver: "local",
  root: "./storage",
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `driver` | `"local"` | -- | Required. Must be `"local"`. |
| `root` | `string` | `"."` | Base directory. All paths are resolved relative to this directory. |
### Behavior

- `put(path, buffer)` writes the file, creating intermediate directories as needed.
- `get(path)` reads the file and returns its contents as a `Buffer`.
- `delete(path)` removes the file. No-op if the file does not exist.
- `exists(path)` returns `true` if the file exists.
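The four operations above map closely onto Node's `fs/promises` API. The sketch below is illustrative only, not the library's actual `LocalDriver` source; the class name `LocalDriverSketch` is invented here:

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Hypothetical sketch of the documented local-driver behavior.
class LocalDriverSketch {
  constructor(private readonly root: string) {}

  private resolve(p: string): string {
    // Absolute paths pass through; relative paths resolve against root.
    return path.resolve(this.root, p);
  }

  async put(p: string, buffer: Buffer): Promise<void> {
    const full = this.resolve(p);
    // Create intermediate directories as needed.
    await fs.mkdir(path.dirname(full), { recursive: true });
    await fs.writeFile(full, buffer);
  }

  async get(p: string): Promise<Buffer> {
    return fs.readFile(this.resolve(p));
  }

  async delete(p: string): Promise<void> {
    // force: true makes this a no-op when the file does not exist.
    await fs.rm(this.resolve(p), { force: true });
  }

  async exists(p: string): Promise<boolean> {
    try {
      await fs.access(this.resolve(p));
      return true;
    } catch {
      return false;
    }
  }
}
```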
### Example

```ts
ExcelModule.forRoot({
  disks: {
    local: { driver: "local", root: "./storage/excel" },
  },
  defaultDisk: "local",
});

// Stores to ./storage/excel/reports/monthly.xlsx
await this.excelService.store(exportable, "reports/monthly.xlsx");
```

When no disk configuration is provided at all, the library creates an implicit local driver with `root: "."` (the process working directory).
## S3 Driver
Works with AWS S3 and any S3-compatible service, including MinIO, Cloudflare R2, and DigitalOcean Spaces.
### Install the SDK

```shell
npm install @aws-sdk/client-s3
```

### Configuration
```ts
{
  driver: "s3",
  bucket: "my-bucket",
  region: "us-east-1",
  prefix: "excel",
  credentials: {
    accessKeyId: "AKIA...",
    secretAccessKey: "secret...",
    sessionToken: "optional...",
  },
  endpoint: "https://s3.us-east-1.amazonaws.com",
  client: existingS3Client,
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `driver` | `"s3"` | -- | Required. Must be `"s3"`. |
| `bucket` | `string` | -- | Required. S3 bucket name. |
| `region` | `string` | SDK default | AWS region. Uses SDK default resolution when omitted. |
| `prefix` | `string` | -- | Key prefix prepended to all object keys (e.g., `"excel"` turns `"report.xlsx"` into `"excel/report.xlsx"`). |
| `credentials` | `object` | SDK default chain | Inline credentials. Overrides the default credential chain (env vars, IAM roles, `~/.aws/credentials`). |
| `endpoint` | `string` | AWS default | Endpoint URL override for S3-compatible services. |
| `client` | `S3Client` | -- | A pre-configured `S3Client` instance. When provided, `region`, `credentials`, and `endpoint` are ignored. |
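As the table notes, `prefix` is plain key concatenation. Conceptually it behaves like the helper below (`applyPrefix` is a name invented for illustration; the library performs the equivalent join internally):

```typescript
// Hypothetical helper showing how a configured prefix maps a logical
// path to the final S3 object key.
function applyPrefix(prefix: string | undefined, key: string): string {
  if (!prefix) return key;
  // Trim slashes at the join point to avoid doubled separators.
  return `${prefix.replace(/\/+$/, "")}/${key.replace(/^\/+/, "")}`;
}

// With prefix "excel", "report.xlsx" becomes "excel/report.xlsx".
```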
### Authentication Strategies

**SDK default credentials** (recommended for AWS deployments):

```ts
{
  driver: "s3",
  bucket: "my-bucket",
  region: "us-east-1",
}
```

The AWS SDK automatically resolves credentials from environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`), IAM roles, EC2 instance profiles, or the `~/.aws/credentials` file.
**Inline credentials** (useful for development or non-AWS environments):

```ts
{
  driver: "s3",
  bucket: "my-bucket",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
}
```

**S3-compatible endpoint** (MinIO, R2, DigitalOcean Spaces):
```ts
// MinIO
{
  driver: "s3",
  bucket: "my-bucket",
  endpoint: "http://localhost:9000",
  credentials: {
    accessKeyId: "minioadmin",
    secretAccessKey: "minioadmin",
  },
}

// Cloudflare R2
{
  driver: "s3",
  bucket: "my-bucket",
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
}
```

**Pre-configured client** (for custom middleware, retries, or shared clients):
```ts
import { S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  region: "us-east-1",
  maxAttempts: 5,
});

// In module config:
{
  driver: "s3",
  bucket: "my-bucket",
  client: s3Client,
}
```

## GCS Driver
Reads and writes files to Google Cloud Storage.
### Install the SDK

```shell
npm install @google-cloud/storage
```

### Configuration
```ts
{
  driver: "gcs",
  bucket: "my-bucket",
  prefix: "excel",
  keyFilename: "/path/to/service-account.json",
  credentials: {
    client_email: "...",
    private_key: "...",
    project_id: "...",
  },
  client: existingStorageInstance,
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `driver` | `"gcs"` | -- | Required. Must be `"gcs"`. |
| `bucket` | `string` | -- | Required. GCS bucket name. |
| `prefix` | `string` | -- | Object name prefix prepended to all paths. |
| `keyFilename` | `string` | -- | Path to a service account JSON keyfile. |
| `credentials` | `object` | ADC | Inline service account credentials (`client_email`, `private_key`, optional `project_id`). |
| `client` | `Storage` | -- | A pre-configured `@google-cloud/storage` `Storage` instance. |
### Authentication Strategies

**Application Default Credentials** (recommended for GCP deployments):

```ts
{
  driver: "gcs",
  bucket: "my-bucket",
}
```

ADC automatically resolves credentials from the environment (Cloud Run, GKE, Compute Engine, or the `GOOGLE_APPLICATION_CREDENTIALS` environment variable).
**Service account keyfile:**

```ts
{
  driver: "gcs",
  bucket: "my-bucket",
  keyFilename: "/path/to/service-account.json",
}
```

**Inline credentials:**
```ts
{
  driver: "gcs",
  bucket: "my-bucket",
  credentials: {
    client_email: "excel-export@my-project.iam.gserviceaccount.com",
    private_key: process.env.GCS_PRIVATE_KEY!,
    project_id: "my-project",
  },
}
```

**Pre-configured client:**
```ts
import { Storage } from "@google-cloud/storage";

const storage = new Storage({ projectId: "my-project" });

// In module config:
{
  driver: "gcs",
  bucket: "my-bucket",
  client: storage,
}
```

## Azure Driver
Reads and writes files to Azure Blob Storage.
### Install the SDK

```shell
npm install @azure/storage-blob
```

### Configuration
```ts
{
  driver: "azure",
  container: "reports",
  prefix: "excel",
  connectionString: "DefaultEndpointsProtocol=https;...",
  accountName: "myaccount",
  accountKey: "mykey",
  client: existingContainerClient,
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `driver` | `"azure"` | -- | Required. Must be `"azure"`. |
| `container` | `string` | -- | Required. Azure Blob Storage container name. |
| `prefix` | `string` | -- | Blob name prefix prepended to all paths. |
| `connectionString` | `string` | -- | Azure Storage connection string. |
| `accountName` | `string` | -- | Storage account name (used with `accountKey`). |
| `accountKey` | `string` | -- | Storage account key (used with `accountName`). |
| `client` | `ContainerClient` | -- | A pre-configured `ContainerClient` instance. |
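A connection string bundles the account name, key, and endpoint settings into a single `;`-separated value, which is why it is an alternative to the `accountName`/`accountKey` pair. A hypothetical parser (not part of the Azure SDK or this library) makes that structure explicit:

```typescript
// Illustrative only: split an Azure Storage connection string into its
// key/value segments. The real SDK parses this internally.
function parseConnectionString(cs: string): Record<string, string> {
  const parts: Record<string, string> = {};
  for (const segment of cs.split(";")) {
    // Split on the first "=" only, since values (e.g. base64 keys) may contain "=".
    const i = segment.indexOf("=");
    if (i > 0) parts[segment.slice(0, i)] = segment.slice(i + 1);
  }
  return parts;
}
```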
### Authentication Strategies

**Connection string:**

```ts
{
  driver: "azure",
  container: "reports",
  connectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
}
```

**Account name and key:**
```ts
{
  driver: "azure",
  container: "reports",
  accountName: process.env.AZURE_ACCOUNT_NAME!,
  accountKey: process.env.AZURE_ACCOUNT_KEY!,
}
```

**Pre-configured ContainerClient** (for managed identity, SAS tokens, or shared clients):
```ts
import { ContainerClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

const containerClient = new ContainerClient(
  "https://myaccount.blob.core.windows.net/reports",
  new DefaultAzureCredential(),
);

// In module config:
{
  driver: "azure",
  container: "reports",
  client: containerClient,
}
```

## Pre-configured Clients via forRootAsync
For full control over authentication -- secrets managers, vaults, custom middleware, or shared clients from other modules -- use `forRootAsync()`:
```ts
import { Module } from "@nestjs/common";
import { ExcelModule } from "@nestbolt/excel";
import { S3Client } from "@aws-sdk/client-s3";

@Module({
  imports: [
    ExcelModule.forRootAsync({
      imports: [AwsModule],
      inject: [S3Client],
      useFactory: (s3Client: S3Client) => ({
        disks: {
          s3: { driver: "s3", bucket: "my-reports", client: s3Client },
        },
        defaultDisk: "s3",
      }),
    }),
  ],
})
export class AppModule {}
```

This pattern ensures the S3 client is created and configured by `AwsModule` and shared across your application.
## The StorageDriver Interface
All drivers implement the `StorageDriver` interface:

```ts
interface StorageDriver {
  put(path: string, buffer: Buffer): Promise<void>;
  get(path: string): Promise<Buffer>;
  delete(path: string): Promise<void>;
  exists(path: string): Promise<boolean>;
}
```

The meaning of the `path` argument depends on the driver:

- `LocalDriver` -- filesystem path, absolute or relative to `root`.
- `S3Driver` -- S3 object key (bucket is configured at the driver level).
- `GCSDriver` -- GCS object name (bucket is configured at the driver level).
- `AzureDriver` -- blob name (container is configured at the driver level).

## Using DiskManager Directly
Inject `DiskManager` for direct storage operations outside of the export/import workflow:

```ts
import { Injectable } from "@nestjs/common";
import { DiskManager } from "@nestbolt/excel";

@Injectable()
export class ReportService {
  constructor(private readonly diskManager: DiskManager) {}

  async getReport(path: string): Promise<Buffer> {
    const driver = this.diskManager.disk("s3");
    return driver.get(path);
  }

  async deleteReport(path: string): Promise<void> {
    const driver = this.diskManager.disk("s3");
    if (await driver.exists(path)) {
      await driver.delete(path);
    }
  }

  async copyToArchive(path: string): Promise<void> {
    const s3 = this.diskManager.disk("s3");
    const archive = this.diskManager.disk("archive");
    const buffer = await s3.get(path);
    await archive.put(`archive/${path}`, buffer);
    await s3.delete(path);
  }
}
```

`DiskManager.disk(name?)` accepts an optional disk name. When omitted, it returns the driver for the `defaultDisk`. If the requested disk is not configured and the name is not `"local"`, an error is thrown listing the available disks.
The `DiskManager` caches driver instances. The first call to `disk("s3")` creates the driver; subsequent calls return the cached instance.
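Because every driver honors the same four-method contract, an in-memory stand-in is straightforward to write for unit tests of services that use a driver. The `MemoryDriver` below is a hypothetical test double, not part of the library, and whether custom drivers can be registered as disks is outside the scope of this page:

```typescript
// Hypothetical in-memory test double for the StorageDriver interface.
interface StorageDriver {
  put(path: string, buffer: Buffer): Promise<void>;
  get(path: string): Promise<Buffer>;
  delete(path: string): Promise<void>;
  exists(path: string): Promise<boolean>;
}

class MemoryDriver implements StorageDriver {
  private files = new Map<string, Buffer>();

  async put(path: string, buffer: Buffer): Promise<void> {
    this.files.set(path, buffer);
  }

  async get(path: string): Promise<Buffer> {
    const buf = this.files.get(path);
    if (buf === undefined) throw new Error(`File not found: ${path}`);
    return buf;
  }

  async delete(path: string): Promise<void> {
    // No-op if absent, matching the local driver's documented behavior.
    this.files.delete(path);
  }

  async exists(path: string): Promise<boolean> {
    return this.files.has(path);
  }
}
```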