Working with large JSON files (100MB+) requires special techniques. Standard JSON.parse() loads the entire file into memory, which can crash your application.
The Problem with Large JSON Files
When you call JSON.parse() on a 500MB file, you must first hold the entire 500MB string in memory, and the parsed object graph typically takes several times that again because of per-object overhead. The result is out-of-memory crashes, or long pauses while the event loop is blocked.
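To make the failure mode concrete, this is the naive approach the rest of this section avoids (large-file.json is a placeholder path):

const fs = require('fs');
// Buffers the whole file as one string, then builds the whole object graph;
// both live in memory at the same time.
const data = JSON.parse(fs.readFileSync('large-file.json', 'utf8'));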
Solution 1: Streaming Parsers
Streaming parsers process JSON token by token without loading the full file:
// Using the stream-json library (npm install stream-json)
const fs = require('fs');
const {chain} = require('stream-chain');
const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');
const pipeline = chain([
  fs.createReadStream('large-file.json'),
  parser(),
  streamArray(),
  data => {
    // streamArray emits {key, value} pairs, one per top-level array element
    console.log(data.value);
  }
]);
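Since chain() returns an ordinary Node stream, you can attach the standard stream events to detect completion and failures; a minimal sketch:

pipeline.on('error', (err) => console.error('Parsing failed:', err));
pipeline.on('end', () => console.log('All items processed'));
pipeline.resume(); // keep the stream flowing so 'end' actually fires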
Solution 2: Chunked Processing
Instead of one giant document, store the data as many small records and handle them one at a time. When the file is line-delimited (NDJSON, described below), Node's readline module does this with very little code:
const fs = require('fs');
const readline = require('readline');
// Works when the JSON is line-delimited (NDJSON): one object per line
const rl = readline.createInterface({
  input: fs.createReadStream('data.ndjson'),
  crlfDelay: Infinity // treat \r\n as a single line break
});
rl.on('line', (line) => {
  const obj = JSON.parse(line);
  // Process one object at a time
});
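One practical caveat: a single malformed line would throw and abort the whole run. A sketch of a defensive handler (skipping bad lines with a warning is just one possible policy):

rl.on('line', (line) => {
  if (!line.trim()) return; // ignore blank lines
  try {
    const obj = JSON.parse(line);
    // Process obj here
  } catch (err) {
    console.warn('Skipping malformed line:', err.message);
  }
});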
Solution 3: Node.js Streams
Use Node.js streams for memory-efficient processing:
const {Transform} = require('stream');
// processChunk is a placeholder for your own per-record logic
const jsonTransform = new Transform({
  objectMode: true, // pass JavaScript objects through, not Buffers
  transform(chunk, encoding, callback) {
    // Transform each chunk, then hand the result downstream
    callback(null, processChunk(chunk));
  }
});
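To show where such a transform sits, here is a sketch that wires it between an object-mode source (for example, the chain from Solution 1) and a file on disk; sourceStream is a placeholder, and stream.pipeline handles error propagation and cleanup:

const fs = require('fs');
const {pipeline, Transform} = require('stream');
// Serialize each object back to NDJSON so it can be written to a file
const toNdjson = new Transform({
  writableObjectMode: true,
  transform(obj, encoding, callback) {
    callback(null, JSON.stringify(obj) + '\n');
  }
});
pipeline(
  sourceStream, // placeholder: any object-mode readable
  jsonTransform,
  toNdjson,
  fs.createWriteStream('out.ndjson'),
  (err) => {
    if (err) console.error('Pipeline failed:', err);
  }
);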
NDJSON Format
Newline Delimited JSON (NDJSON) is ideal for large datasets. Each line is a separate JSON object:
{"id":1,"name":"Alice"}
{"id":2,"name":"Bob"}
{"id":3,"name":"Charlie"}
Browser-Side Tips
For large JSON in the browser, move JSON.parse into a Web Worker so the main thread (and the UI) stays responsive, and prefer streaming the response over buffering the entire payload at once.
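A minimal sketch of the Web Worker approach, assuming a worker file named parse-worker.js and an endpoint /large-file.json (both hypothetical):

// main thread
const worker = new Worker('parse-worker.js');
worker.onmessage = (e) => {
  // e.data is the parsed value, structured-cloned back from the worker
  console.log('Parsed result:', e.data);
};
fetch('/large-file.json')
  .then((res) => res.text())
  .then((text) => worker.postMessage(text));

// parse-worker.js
onmessage = (e) => {
  // JSON.parse runs here, off the main thread
  postMessage(JSON.parse(e.data));
};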
Before processing, use our JSON Minifier to reduce file size and JSON Validator to check for errors.