A streaming interface for archive generation
The archiver package provides a streaming interface for creating archive files (ZIP and TAR) in Node.js applications. With over 18 million weekly downloads, it's become the de facto standard for programmatic archive generation on the server side. Unlike traditional archiving approaches that load entire file trees into memory, archiver pipes data directly to output streams, making it suitable for handling large file sets in production environments.
Archiver solves a common problem in Node.js applications: efficiently bundling files for download, backup, or deployment. It abstracts away the complexity of archive format specifications while exposing control over compression levels, file permissions, and custom entry metadata. The package supports multiple input types (individual files, readable streams, buffers, strings, entire directories, and glob patterns), making it flexible enough for everything from simple file bundling to complex build pipelines.
The package is commonly found in build tools, backup systems, content management platforms, and serverless functions that need to generate archives on the fly. Its streaming architecture means you can generate multi-gigabyte archives without exhausting server memory, and its event-driven API integrates cleanly with Node.js async patterns. While archiver only handles archive creation (not extraction), this focused scope has made it remarkably stable and performant for its primary use case.
const fs = require('fs');
const archiver = require('archiver');
const path = require('path');

function createArchive() {
  const output = fs.createWriteStream(path.join(__dirname, 'output.zip'));
  const archive = archiver('zip', {
    zlib: { level: 9 } // maximum compression
  });

  return new Promise((resolve, reject) => {
    // 'close' fires once the output file has been fully written and closed.
    output.on('close', () => {
      console.log(`Archive created: ${archive.pointer()} bytes`);
      resolve();
    });

    archive.on('error', (err) => {
      reject(err);
    });

    // Warnings are non-fatal by default (e.g. a listed file is missing);
    // treat anything other than ENOENT as an error.
    archive.on('warning', (err) => {
      if (err.code === 'ENOENT') {
        console.warn('Warning:', err.message);
      } else {
        reject(err);
      }
    });

    archive.pipe(output);

    // One archive can mix input types: a file on disk, a string,
    // a buffer, a whole directory, and a glob pattern.
    archive.file('package.json', { name: 'metadata.json' });
    archive.append('Generated on: ' + new Date().toISOString(), { name: 'timestamp.txt' });
    archive.append(Buffer.from('binary data here'), { name: 'data.bin' });
    archive.directory('src/', 'source-code');
    archive.glob('*.md', { cwd: __dirname });

    // Signal that no more entries will be appended.
    archive.finalize();
  });
}
createArchive().catch(console.error);

Common use cases

Build and deployment pipelines: Bundle application assets, source code, or compiled artifacts into ZIP files for distribution or deployment to cloud services. Common in CI/CD workflows where you need to package Lambda functions or container build artifacts.
On-demand file downloads: Generate archives dynamically when users request downloads of selected files or folders from web applications. Particularly useful in CMS platforms, file managers, or SaaS applications where users export their data (first sketch after this list).
Backup systems: Create incremental or full backups of directories with proper compression, then stream them directly to S3, network storage, or other backup destinations without temporary local storage (second sketch after this list).
Log aggregation: Compress and archive application logs on rotation schedules, either as TAR for Unix systems or ZIP for cross-platform compatibility, before shipping them to long-term storage (third sketch after this list).
Asset delivery: Package multiple static assets (images, CSS, JavaScript) into a single archive for batch downloads in development tools, theme distributors, or content delivery workflows.
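For the on-demand download case, the archive can be piped straight into an HTTP response instead of a file. A minimal sketch assuming Express; the route path and the exports/ directory are placeholders:

const express = require('express');
const archiver = require('archiver');

const app = express();

// Hypothetical route: streams a ZIP of ./exports to the client.
app.get('/export.zip', (req, res) => {
  res.setHeader('Content-Type', 'application/zip');
  res.setHeader('Content-Disposition', 'attachment; filename="export.zip"');

  const archive = archiver('zip', { zlib: { level: 6 } });
  archive.on('error', (err) => {
    console.error(err);
    res.destroy(); // headers may already be sent; abort the response
  });
  archive.pipe(res);
  archive.directory('exports/', false); // false = place contents at archive root
  archive.finalize();
});

app.listen(3000);

Nothing is buffered to disk, so the download begins as soon as the first entry is written.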
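For the backup case, the same pattern targets object storage: route the archive through a PassThrough stream into a multipart upload. A sketch assuming AWS SDK v3 (@aws-sdk/client-s3 plus @aws-sdk/lib-storage); the bucket, key, and directory names are hypothetical:

const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');
const { PassThrough } = require('stream');
const archiver = require('archiver');

async function backupToS3(dir, bucket, key) {
  const body = new PassThrough();
  const archive = archiver('tar', { gzip: true });

  archive.on('error', (err) => body.destroy(err));
  archive.pipe(body);
  archive.directory(dir, false);

  // lib-storage handles multipart uploads for streams of unknown length.
  const upload = new Upload({
    client: new S3Client({}),
    params: { Bucket: bucket, Key: key, Body: body },
  });

  await Promise.all([archive.finalize(), upload.done()]);
}

backupToS3('data/', 'my-backup-bucket', `backup-${Date.now()}.tar.gz`).catch(console.error);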
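For log aggregation, archiver's TAR format with gzip enabled produces the .tar.gz files typical on Unix hosts. A sketch; the logs/ directory and the file-naming scheme are assumptions:

const fs = require('fs');
const archiver = require('archiver');

// Hypothetical rotation target, e.g. logs-2024-01-31.tar.gz
const output = fs.createWriteStream(`logs-${new Date().toISOString().slice(0, 10)}.tar.gz`);
const archive = archiver('tar', {
  gzip: true,
  gzipOptions: { level: 9 } // forwarded to zlib.createGzip()
});

archive.on('error', (err) => { throw err; });
archive.pipe(output);
archive.glob('*.log', { cwd: 'logs' }); // every .log file under logs/
archive.finalize();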
Installation

npm install archiver
pnpm add archiver
bun add archiver