Create an instance of NoFilter.
Optional input: string | Buffer&lt;ArrayBufferLike&gt; | NoFilterOptions. Source data.
Optional inputEncoding: BufferEncoding | NoFilterOptions. Encoding name for input; ignored if input is not a string.
Optional options: NoFilterOptions = {}. Other options.
If false then the stream will automatically end the writable side when the
readable side ends. Set initially by the allowHalfOpen constructor option,
which defaults to true.
This can be changed manually to change the half-open behavior of an existing
Duplex stream instance, but must be changed before the 'end' event is emitted.
Readonly closed: Is true after 'close' has been emitted.
Is true after readable.destroy() has been called.
Readonly errored: Returns error if the stream has been destroyed with an error.
Is true if it is safe to call read, which means
the stream has not been destroyed or emitted 'error' or 'end'.
Readonly readableAborted: Returns whether the stream was destroyed or errored before emitting 'end'.
Readonly readableDidRead: Returns whether 'data' has been emitted.
Readonly readableEncoding: Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
Readonly readableEnded: Becomes true when the 'end' event is emitted.
Readonly readableFlowing: This property reflects the current state of a Readable stream as described
in the Three states section.
Readonly readableHighWaterMark: Returns the value of highWaterMark passed when creating this Readable.
Readonly readableLength: This property contains the number of bytes (or objects) in the queue
ready to be read. The value provides introspection data regarding
the status of the highWaterMark.
Readonly readableObjectMode: Getter for the property objectMode of a given Readable stream.
Is true if it is safe to call writable.write(), which means
the stream has not been destroyed, errored, or ended.
Readonly writableAborted: Returns whether the stream was destroyed or errored before emitting 'finish'.
Readonly writableCorked: Number of times writable.uncork() needs to be
called in order to fully uncork the stream.
Readonly writableEnded: Is true after writable.end() has been called. This property
does not indicate whether the data has been flushed; for this use writable.writableFinished instead.
Readonly writableFinished: Is set to true immediately before the 'finish' event is emitted.
Readonly writableHighWaterMark: Returns the value of highWaterMark passed when creating this Writable.
Readonly writableLength: This property contains the number of bytes (or objects) in the queue
ready to be written. The value provides introspection data regarding
the status of the highWaterMark.
Readonly writableNeedDrain: Is true if the stream's buffer has been full and the stream will emit 'drain'.
Readonly writableObjectMode: Getter for the property objectMode of a given Writable stream.
Current readable length, in bytes.
Length of the contents.
Total number of bytes that have been read at the current position.
Optional [captureRejectionSymbol]: The Symbol.for('nodejs.rejection') method is called in case a
promise rejection happens when emitting an event and
captureRejections is enabled on the emitter.
It is possible to use events.captureRejectionSymbol in
place of Symbol.for('nodejs.rejection').
import { EventEmitter, captureRejectionSymbol } from 'node:events';
class MyClass extends EventEmitter {
constructor() {
super({ captureRejections: true });
}
[captureRejectionSymbol](err, event, ...args) {
console.log('rejection happened for', event, 'with', err, ...args);
this.destroy(err);
}
destroy(err) {
// Tear the resource down here.
}
}
Returns a number indicating whether this comes before, comes after, or is the same as the other NoFilter in sort order.
The other object to compare.
import { Readable } from 'node:stream';
async function* splitToWords(source) {
for await (const chunk of source) {
const words = String(chunk).split(' ');
for (const word of words) {
yield word;
}
}
}
const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
const words = await wordsStream.toArray();
console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
See stream.compose for more information.
Optional options: Abortable.
Returns a stream composed with the passed stream.
The writable.cork() method forces all written data to be buffered in memory.
The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which
several small chunks are written to the stream in rapid succession. Instead of
immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them
all to writable._writev(), if present. This prevents a head-of-line blocking
situation where data is being buffered while waiting for the first small chunk
to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
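A minimal sketch of this behavior using Node's built-in PassThrough (the chunk contents here are arbitrary):

```javascript
import { PassThrough } from 'node:stream';
import { once } from 'node:events';

const stream = new PassThrough();
stream.cork();
stream.write('some ');
stream.write('data');
// Nothing is forwarded while corked; uncork() flushes the buffered writes.
process.nextTick(() => stream.uncork());
const [chunk] = await once(stream, 'data');
console.log(String(chunk)); // 'some '
```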
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable
stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no
further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
Optional error: Error. Error which will be passed as payload in 'error' event.
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments
to each.
Returns true if the event had listeners, false otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
const parameters = args.join(', ');
console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
Calling the writable.end() method signals that no more data will be written
to the Writable. The optional chunk and encoding arguments allow one
final additional chunk of data to be written immediately before closing the
stream.
Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
Optional cb: () => void. Callback for when the stream is finished.
Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer},
{TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
Optional cb: () => void. Callback for when the stream is finished.
Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer},
{TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
The encoding if chunk is a string
Optional cb: () => void. Callback for when the stream is finished.
Do these NoFilters contain the same bytes? Doesn't work if either is in object mode.
Other NoFilter to compare against.
Equal?
Returns an array listing the events for which the emitter has registered listeners.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
This method is similar to Array.prototype.every and calls fn on each chunk in the stream
to check if all awaited return values are truthy value for fn. Once an fn call on a chunk
awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false.
If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
a function to call on each chunk of the stream. Async or not.
Optional options: Pick<ReadableOperatorOptions, "concurrency" | "signal">.
Returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
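For instance, illustrated with Node's built-in Readable (these operator helpers are inherited by any Readable and require a recent Node.js):

```javascript
import { Readable } from 'node:stream';

// Resolves true only if the predicate holds for every chunk.
const allPositive = await Readable.from([1, 2, 3]).every((x) => x > 0);
// A falsy result destroys the stream early and resolves false.
const allEven = await Readable.from([1, 2, 3]).every((x) => x % 2 === 0);
console.log(allPositive, allEven); // true false
```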
This method allows filtering the stream. For each chunk in the stream the fn function will be called
and if it returns a truthy value, the chunk will be passed to the result stream.
If the fn function returns a promise - that promise will be awaited.
a function to filter chunks from the stream. Async or not.
Optional options: ReadableOperatorOptions.
Returns a stream filtered with the predicate fn.
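For example, with Node's built-in Readable (the predicate here may also be async):

```javascript
import { Readable } from 'node:stream';

// Keep only the even chunks.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((x) => x % 2 === 0)
  .toArray();
console.log(evens); // [2, 4]
```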
This method is similar to Array.prototype.find and calls fn on each chunk in the stream
to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy,
the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value.
If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
a function to call on each chunk of the stream. Async or not.
Optional options: Pick<ReadableOperatorOptions, "concurrency" | "signal">.
Returns a promise evaluating to the first chunk for which fn evaluated with a truthy value,
or undefined if no element was found.
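For example, with Node's built-in Readable:

```javascript
import { Readable } from 'node:stream';

// Resolves with the first matching chunk, destroying the stream early.
const found = await Readable.from([1, 2, 3]).find((x) => x > 1);
// Resolves undefined when no chunk matches.
const missing = await Readable.from([1, 2, 3]).find((x) => x > 9);
console.log(found, missing); // 2 undefined
```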
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
a function to map over every chunk in the stream. May be async. May be a stream or generator.
Optional options: Pick<ReadableOperatorOptions, "concurrency" | "signal">.
Returns a stream flat-mapped with the function fn.
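For example, with Node's built-in Readable, where each chunk maps to an iterable that is flattened into the output stream:

```javascript
import { Readable } from 'node:stream';

const letters = await Readable.from(['ab', 'c'])
  .flatMap((s) => s.split(''))
  .toArray();
console.log(letters); // ['a', 'b', 'c']
```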
This method allows iterating a stream. For each chunk in the stream the fn function will be called.
If the fn function returns a promise - that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently.
In addition, a forEach iteration can only be stopped by having passed a signal option
and aborting the related AbortController while for await...of can be stopped with break or return.
In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event
in the underlying machinery and can limit the number of concurrent fn calls.
a function to call on each chunk of the stream. Async or not.
Optional options: Pick<ReadableOperatorOptions, "concurrency" | "signal">.
Returns a promise for when the stream has finished.
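For example, with Node's built-in Readable:

```javascript
import { Readable } from 'node:stream';

let total = 0;
// The promise resolves once every chunk has been visited.
await Readable.from([1, 2, 3]).forEach((x) => {
  total += x;
});
console.log(total); // 6
```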
Get a byte by offset. I didn't want to get into metaprogramming
to give you the NoFilter[0] syntax.
The byte to retrieve.
0-255.
The readable.isPaused() method returns the current operating state of the Readable.
This is used primarily by the mechanism that underlies the readable.pipe() method.
In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable();
readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
The iterator created by this method gives users the option to cancel the destruction
of the stream if the for await...of loop is exited by return, break, or throw,
or if the iterator should destroy the stream if the stream emitted an error during iteration.
Optional options: ReadableIteratorOptions.
Returns the number of listeners listening for the event named eventName.
If listener is provided, it will return how many times the listener is found
in the list of the listeners of the event.
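For example (the optional listener argument requires a recent Node.js):

```javascript
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const fn = () => {};
ee.on('ping', fn);
ee.on('ping', fn); // the same listener may be added more than once
console.log(ee.listenerCount('ping')); // 2
console.log(ee.listenerCount('ping', fn)); // 2, counting each occurrence of fn
```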
Optional listener: (...args: any[]) => void.
Returns a copy of the array of listeners for the event named eventName.
server.on('connection', (stream) => {
console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
This method allows mapping over the stream. The fn function will be called for every chunk in the stream.
If the fn function returns a promise - that promise will be awaited before being passed to the result stream.
a function to map over every chunk in the stream. Async or not.
Optional options: ReadableOperatorOptions.
Returns a stream mapped with the function fn.
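For example, with Node's built-in Readable:

```javascript
import { Readable } from 'node:stream';

// Each chunk is transformed before being passed downstream.
const doubled = await Readable.from([1, 2, 3])
  .map((x) => x * 2)
  .toArray();
console.log(doubled); // [2, 4, 6]
```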
Adds the listener function to the end of the listeners array for the
event named eventName. No checks are made to see if the listener has
already been added. Multiple calls passing the same combination of eventName
and listener will result in the listener being added, and called, multiple
times.
server.on('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The
emitter.prependListener() method can be used as an alternative to add the
event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
Adds a one-time listener function for the event named eventName. The
next time eventName is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener() method can be used as an alternative to add the
event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The readable.pause() method will cause a stream in flowing mode to stop
emitting 'data' events, switching out of flowing mode. Any data that
becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
console.log(`Received ${chunk.length} bytes of data.`);
readable.pause();
console.log('There will be no additional data for 1 second.');
setTimeout(() => {
console.log('Now data will start flowing again.');
readable.resume();
}, 1000);
});
The readable.pause() method has no effect if there is a 'readable' event listener.
Adds the listener function to the beginning of the listeners array for the
event named eventName. No checks are made to see if the listener has
already been added. Multiple calls passing the same combination of eventName
and listener will result in the listener being added, and called, multiple
times.
server.prependListener('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
Adds a one-time listener function for the event named eventName to the
beginning of the listeners array. The next time eventName is triggered, this
listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
Return a promise fulfilled with the full contents, after the 'finish' event fires. Errors on the stream cause the promise to be rejected.
Optional cb: Function. Finished/error callback used in addition to the promise.
Fulfilled when complete.
Optional encoding: BufferEncoding.
Returns a copy of the array of listeners for the event named eventName,
including any wrappers (such as those created by .once()).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
Pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.
If you pass in a size argument, then it will return that many bytes. If size bytes are not available, then it will return null, unless we've ended, in which case it will return the data remaining in the buffer.
If you do not specify a size argument, then it will return all the data in the internal buffer.
Fires NoFilter#read when read from.
Optional size: number. Number of bytes to read.
If no data or not enough data, null. If decoding output a string, otherwise a Buffer.
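The same semantics can be seen on any Node Readable; a sketch with the built-in PassThrough:

```javascript
import { PassThrough } from 'node:stream';

const stream = new PassThrough();
stream.end('abcdef');
// Wait until the data is buffered and readable.
await new Promise((resolve) => stream.once('readable', resolve));
const first = stream.read(4); // exactly 4 bytes
const rest = stream.read();   // no size: everything remaining
console.log(String(first), String(rest)); // 'abcd' 'ef'
```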
Read a variable-sized JavaScript signed BigInt from the stream in 2's complement format.
Optional len: number. Number of bytes to read, or all remaining if null.
A BigInt.
Read a signed 64-bit big-endian BigInt from the stream. Consumes 8 bytes.
Value read.
Read a signed 64-bit little-endian BigInt from the stream. Consumes 8 bytes.
Value read.
Read an unsigned 64-bit big-endian BigInt from the stream. Consumes 8 bytes.
Value read.
Read an unsigned 64-bit little-endian BigInt from the stream. Consumes 8 bytes.
Value read.
Read a 64-bit big-endian float from the stream. Consumes 8 bytes.
Value read.
Read a 64-bit little-endian float from the stream. Consumes 8 bytes.
Value read.
Read a 32-bit big-endian float from the stream. Consumes 4 bytes.
Value read.
Read a 32-bit little-endian float from the stream. Consumes 4 bytes.
Value read.
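These byte layouts match Node's Buffer read methods; for illustration with Buffer (NoFilter consumes the bytes from the stream rather than reading at an offset):

```javascript
const buf = Buffer.from([0, 0, 0, 0, 0, 0, 0, 1]);
// Big-endian: most significant byte first.
console.log(buf.readBigInt64BE(0)); // 1n
// Little-endian: the same bytes decode as 2**56.
console.log(buf.readBigInt64LE(0)); // 72057594037927936n

const fbuf = Buffer.alloc(8);
fbuf.writeDoubleBE(1.5, 0);
console.log(fbuf.readDoubleBE(0)); // 1.5
```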
Read the full number of bytes asked for, no matter how long it takes. Fail if an error occurs in the meantime, or if the stream finishes before enough data is available.
Note: This function won't work fully correctly if you are using stream-browserify (for example, on the Web).
The number of bytes to read.
A promise for the data read.
Read a big-endian signed 16-bit integer from the stream. Consumes 2 bytes.
Value read.
Read a little-endian signed 16-bit integer from the stream. Consumes 2 bytes.
Value read.
Read a big-endian signed 32-bit integer from the stream. Consumes 4 bytes.
Value read.
Read a little-endian signed 32-bit integer from the stream. Consumes 4 bytes.
Value read.
Read a signed 8-bit integer from the stream. Consumes 1 byte.
Value read.
Read a variable-sized JavaScript unsigned BigInt from the stream.
Optional len: number. Number of bytes to read, or all remaining if null.
A BigInt.
Read a big-endian unsigned 16-bit integer from the stream. Consumes 2 bytes.
Value read.
Read a little-endian unsigned 16-bit integer from the stream. Consumes 2 bytes.
Value read.
Read a big-endian unsigned 32-bit integer from the stream. Consumes 4 bytes.
Value read.
Read a little-endian unsigned 32-bit integer from the stream. Consumes 4 bytes.
Value read.
Read an unsigned 8-bit integer from the stream. Consumes 1 byte.
Value read.
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value.
If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter
or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
a reducer function to call over every chunk in the stream. Async or not.
a promise for the final value of the reduction.
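For example, with Node's built-in Readable:

```javascript
import { Readable } from 'node:stream';

// With no initial value, the first chunk seeds the accumulator.
const sum = await Readable.from([1, 2, 3, 4]).reduce((acc, x) => acc + x);
// With an initial value, reduction starts from it.
const joined = await Readable.from(['a', 'b']).reduce((acc, x) => acc + x, '>');
console.log(sum, joined); // 10 '>ab'
```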
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code,
particularly when the EventEmitter instance was created by some other
component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
Optional eventName: E.
Optional eventName: string | symbol.
Removes the specified listener from the listener array for the event named
eventName.
const callback = (stream) => {
console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from the
listener array. If any single listener has been added multiple times to the
listener array for the specified eventName, then removeListener() must be
called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the
time of emitting are called in order. This implies that any
removeListener() or removeAllListeners() calls after emitting and
before the last listener finishes execution will not remove them from
emit() in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => {
console.log('A');
myEmitter.removeListener('event', callbackB);
};
const callbackB = () => {
console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this will
change the position indexes of any listener registered after the listener
being removed. This will not impact the order in which listeners are called,
but it means that any copies of the listener array as returned by
the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single
event (as in the example below), removeListener() will remove the most
recently added instance. In the example the once('ping')
listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained.
The readable.resume() method causes an explicitly paused Readable stream to
resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a
stream without actually processing any of that data:
getReadableStreamSomehow()
.resume()
.on('end', () => {
console.log('Reached the end, but did not read anything.');
});
The readable.resume() method has no effect if there is a 'readable' event listener.
The readable.setEncoding() method sets the character encoding for
data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data
to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the
output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal
string format.
The Readable stream will properly handle multi-byte characters delivered
through the stream that would otherwise become improperly decoded if simply
pulled from the stream as Buffer objects.
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
assert.equal(typeof chunk, 'string');
console.log('Got %d characters of string data:', chunk.length);
});
The encoding to use.
By default EventEmitters will print a warning if more than 10 listeners are
added for a particular event. This is a useful default that helps finding
memory leaks. The emitter.setMaxListeners() method allows the limit to be
modified for this specific EventEmitter instance. The value can be set to
Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
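For example:

```javascript
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
// Raise the limit for this specific emitter instance.
ee.setMaxListeners(20);
console.log(ee.getMaxListeners()); // 20
```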
Read bytes or objects without consuming them. Useful for diagnostics. Note: as a side effect, this concatenates multiple writes together into what looks like a single write, so that the concatenation doesn't have to happen multiple times when you're futzing with the same NoFilter.
Optional start: number. Beginning offset.
Optional end: number. Ending offset.
If in object mode, an array of objects. Otherwise, concatenated array of contents.
This method is similar to Array.prototype.some and calls fn on each chunk in the stream
until the awaited return value is true (or any truthy value). Once an fn call on a chunk
awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true.
If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
a function to call on each chunk of the stream. Async or not.
Optional options: Pick<ReadableOperatorOptions, "concurrency" | "signal">.
Returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
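For example, with Node's built-in Readable:

```javascript
import { Readable } from 'node:stream';

// Resolves true as soon as any chunk matches, destroying the stream early.
const hasBig = await Readable.from([1, 2, 3]).some((x) => x > 2);
const hasNegative = await Readable.from([1, 2, 3]).some((x) => x < 0);
console.log(hasBig, hasNegative); // true false
```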
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
Optional options: Abortable.
Returns a promise containing an array with the contents of the stream.
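For example, with Node's built-in Readable:

```javascript
import { Readable } from 'node:stream';

// The entire stream is read into memory and returned as an array of chunks.
const contents = await Readable.from([1, 2, 3]).toArray();
console.log(contents); // [1, 2, 3]
```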
Return an object compatible with Buffer's toJSON implementation, so that round-tripping will produce a Buffer.
If in object mode, the objects. Otherwise, JSON text.
Decodes and returns a string from buffer data encoded using the specified character set encoding. If encoding is undefined or null, then encoding defaults to 'utf8'. The start and end parameters default to 0 and NoFilter.length when undefined.
Optional encoding: BufferEncoding. Which encoding to use for decoding.
Optional start: number. Start offset.
Optional end: number. End offset.
String version of the contents.
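The semantics mirror Buffer's toString; for illustration with Node's Buffer:

```javascript
const buf = Buffer.from('hello');
// Decode a byte range as UTF-8.
console.log(buf.toString('utf8', 0, 4)); // 'hell'
// Decode the whole contents as hexadecimal.
console.log(buf.toString('hex')); // '68656c6c6f'
```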
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering
of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event
loop phase.
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
If the writable.cork() method is called multiple times on a stream, the
same number of calls to writable.uncork() must be called to flush the buffered
data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
stream.uncork();
// The data will not be flushed until uncork() is called a second time.
stream.uncork();
});
See also: writable.cork().
The readable.unpipe() method detaches a Writable stream previously attached
using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then
the method does nothing.
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
console.log('Stop writing to file.txt.');
readable.unpipe(writable);
console.log('Manually close the file stream.');
writable.end();
}, 1000);
Optional destination: WritableStream. Optional specific stream to unpipe.
Passing chunk as null signals the end of the stream (EOF) and behaves the
same as readable.push(null), after which no more data can be written. The EOF
signal is put at the end of the buffer and any buffered data will still be
flushed.
The readable.unshift() method pushes a chunk of data back into the internal
buffer. This is useful in certain situations where a stream is being consumed by
code that needs to "un-consume" some amount of data that it has optimistically
pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event
has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to
use of a Transform stream instead. See the API for stream implementers section for more information.
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
Unlike push, stream.unshift(chunk) will not
end the reading process by resetting the internal reading state of the stream.
This can cause unexpected results if readable.unshift() is called during a
read (i.e. from within a _read() implementation on a
custom stream). Following the call to readable.unshift() with an immediate push() will reset the reading state appropriately;
however, it is best simply to avoid calling readable.unshift() while in the
process of performing a read.
Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must
be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
Optional encoding: BufferEncoding. Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
Wait for the full number of bytes asked for, no matter how long it takes. Fail if an error occurs in the meantime, or if the stream finishes before enough data is available.
Note: This function won't work fully correctly if you are using stream-browserify (for example, on the Web).
The number of bytes to read.
A promise for the data read.
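A minimal sketch of how such a wait-for-N-bytes helper could be built on top of a standard Readable. The name readFull and its exact shape are assumptions for illustration, not this package's API:

```javascript
// Hypothetical helper (not part of any published API): resolve once `n`
// bytes can be read from a standard Readable; reject if the stream
// errors or ends before enough data is available.
function readFull(stream, n) {
  return new Promise((resolve, reject) => {
    function tryRead() {
      // read(n) returns null until at least n bytes are buffered
      const chunk = stream.read(n);
      if (chunk !== null && chunk.length >= n) {
        cleanup();
        resolve(chunk);
      }
    }
    function onEnd() {
      cleanup();
      reject(new Error(`stream ended before ${n} bytes were available`));
    }
    function onError(err) {
      cleanup();
      reject(err);
    }
    function cleanup() {
      stream.removeListener('readable', tryRead);
      stream.removeListener('end', onEnd);
      stream.removeListener('error', onError);
    }
    stream.on('readable', tryRead);
    stream.on('end', onEnd);
    stream.on('error', onError);
    tryRead(); // data may already be buffered
  });
}
```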
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more
information.)
When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable
stream that uses
the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been
provided as a convenience for interacting with older Node.js applications and
libraries.
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);
myReader.on('readable', () => {
  myReader.read(); // etc.
});
An "old style" readable stream
The writable.write() method writes some data to the stream, and calls the
supplied callback once the data has been fully handled. If an error
occurs, the callback will be called with the error as its
first argument. The callback is called asynchronously and before 'error' is
emitted.
The return value is true if, after admitting chunk, the internal buffer remains below the highWaterMark configured when the stream was created.
If false is returned, further attempts to write data to the stream should
stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and
return false. Once all currently buffered chunks are drained (accepted for
delivery by the operating system), the 'drain' event will be emitted.
Once write() returns false, do not write more chunks
until the 'drain' event is emitted. While calling write() on a stream that
is not draining is allowed, Node.js will buffer all written chunks until
maximum memory usage occurs, at which point it will abort unconditionally.
Even before it aborts, high memory usage will cause poor garbage collector
performance and high RSS (which is not typically released back to the system,
even after the memory is no longer required). Since TCP sockets may never
drain if the remote peer does not read the data, writing a socket that is
not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly
problematic for a Transform, because the Transform streams are paused
by default until they are piped or a 'data' or 'readable' event handler
is added.
If the data to be written can be generated or fetched on demand, it is
recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is
possible to respect backpressure and avoid memory issues using the 'drain' event:
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}
// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
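The same backpressure pattern can also be expressed with async/await using Node's built-in events.once() helper. A sketch, not part of this package's API:

```javascript
// Sketch: the drain-aware write loop above, expressed with async/await.
// Assumes only Node built-ins (node:events).
import { once } from 'node:events';

async function writeAll(stream, chunks) {
  for (const chunk of chunks) {
    // write() returns false once the internal buffer passes highWaterMark
    if (!stream.write(chunk)) {
      await once(stream, 'drain'); // pause until the buffer empties
    }
  }
  stream.end();
}
```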
A Writable stream in object mode will always ignore the encoding argument.
Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer},
{TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
Optional callback: (error: Error) => void. Callback for when this chunk of data is flushed.
false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
An overload of writable.write() that accepts an explicit encoding; it is otherwise identical in behavior and return value to the form described above. A Writable stream in object mode will always ignore the encoding argument.
Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer},
{TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
The encoding, if chunk is a string.
Optional callback: (error: Error) => void. Callback for when this chunk of data is flushed.
false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
Write a JavaScript BigInt to the stream. Negative numbers will be written in their two's complement representation.
The value to write.
True on success.
Write a signed big-endian 64-bit BigInt to the stream. Adds 8 bytes.
BigInt.
True on success.
Write a signed little-endian 64-bit BigInt to the stream. Adds 8 bytes.
BigInt.
True on success.
Write an unsigned big-endian 64-bit BigInt to the stream. Adds 8 bytes.
Non-negative BigInt.
True on success.
Write an unsigned little-endian 64-bit BigInt to the stream. Adds 8 bytes.
Non-negative BigInt.
True on success.
Write a big-endian 64-bit double to the stream. Adds 8 bytes.
64-bit double.
True on success.
Write a little-endian 64-bit double to the stream. Adds 8 bytes.
64-bit double.
True on success.
Write a big-endian 32-bit float to the stream. Adds 4 bytes.
32-bit float.
True on success.
Write a little-endian 32-bit float to the stream. Adds 4 bytes.
32-bit float.
True on success.
Write a signed big-endian 16-bit integer to the stream. Adds 2 bytes.
(-32768)..32767.
True on success.
Write a signed little-endian 16-bit integer to the stream. Adds 2 bytes.
(-32768)..32767.
True on success.
Write a signed big-endian 32-bit integer to the stream. Adds 4 bytes.
(-2**31)..(2**31-1).
True on success.
Write a signed little-endian 32-bit integer to the stream. Adds 4 bytes.
(-2**31)..(2**31-1).
True on success.
Write a signed 8-bit integer to the stream. Adds 1 byte.
(-128)..127.
True on success.
Write a big-endian 16-bit unsigned integer to the stream. Adds 2 bytes.
0..65535.
True on success.
Write a little-endian 16-bit unsigned integer to the stream. Adds 2 bytes.
0..65535.
True on success.
Write a big-endian 32-bit unsigned integer to the stream. Adds 4 bytes.
0..2**32-1.
True on success.
Write a little-endian 32-bit unsigned integer to the stream. Adds 4 bytes.
0..2**32-1.
True on success.
Write an 8-bit unsigned integer to the stream. Adds 1 byte.
0..255.
True on success.
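These writer methods follow the same naming and byte layout as Node's Buffer write* family. A sketch of the layouts they produce, using plain Buffers rather than a NoFilter instance:

```javascript
// Sketch with plain Buffers (no NoFilter dependency assumed):
const buf = Buffer.alloc(7);
buf.writeUInt16BE(0xCAFE, 0); // 2 bytes, big-endian: CA FE
buf.writeInt32LE(-1, 2);      // 4 bytes, little-endian two's complement: FF FF FF FF
buf.writeUInt8(0x7F, 6);      // 1 byte

// Reading back recovers the same values:
console.log(buf.readUInt16BE(0)); // 51966 (0xCAFE)
console.log(buf.readInt32LE(2));  // -1
```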
Static compare. The same as nf1.compare(nf2). Useful for sorting an Array of NoFilters.
Static concat. Returns a buffer which is the result of concatenating all the NoFilters in the list together. If the list has no items, or if the totalLength is 0, then it returns a zero-length buffer.
If length is not provided, it is read from the buffers in the list. However, this adds an additional loop to the function, so it is faster to provide the length explicitly if you already know it.
Inputs. Must either all be in object mode, or all not in object mode.
Optional length: number. Number of bytes or objects to read.
The concatenated values as an array if in object mode, otherwise a Buffer.
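For byte-mode inputs, concat parallels Buffer.concat, whose length semantics can be sketched with plain Buffers (no NoFilter instance assumed):

```javascript
// Sketch of the concat length semantics using Node's Buffer.concat:
const parts = [Buffer.from('foo'), Buffer.from('bar')];

// Without a length, it is summed from the inputs (an extra loop):
const joined = Buffer.concat(parts);

// Passing the total length up front skips that loop:
const sized = Buffer.concat(parts, 6);

// A totalLength of 0 yields a zero-length buffer:
const empty = Buffer.concat(parts, 0);
```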
Static from. A utility method for creating duplex streams.
Stream converts a writable stream into a writable Duplex and a readable stream into a readable Duplex.
Blob converts into a readable Duplex.
string converts into a readable Duplex.
ArrayBuffer converts into a readable Duplex.
AsyncIterable converts into a readable Duplex. Cannot yield null.
AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as its first parameter. Cannot yield null.
AsyncFunction converts into a writable Duplex. Must return either null or undefined.
Object ({ writable, readable }) converts readable and writable into streams and then combines them into a Duplex, where the Duplex will write to the writable and read from the readable.
Promise converts into a readable Duplex. The value null is ignored.
Static from.
Static is. Is the given object a {NoFilter}?
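A sketch of the AsyncGeneratorFunction case using Node's built-in Duplex.from: the generator receives the writable side as an AsyncIterable, and its yields feed the readable side.

```javascript
// Sketch: Duplex.from with an async generator function yields a
// readable/writable transform. The `source` parameter is the writable
// side as an AsyncIterable; yielded values (never null) are readable.
import { Duplex } from 'node:stream';

const upper = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
});
```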
The object to test.
True if obj is a NoFilter.
Static to.
NoFilter stream. Can be used to sink or source data to and from other node streams. Implemented as the "identity" Transform stream (hence the name), but allows for inspecting data that is in-flight.
Allows passing in source data (input, inputEncoding) at creation time. Source data can also be passed in the options object.
Example: source and sink
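Since NoFilter is essentially the identity Transform, the source-and-sink idea can be sketched with Node's built-in PassThrough (substituted here so the example stays dependency-free):

```javascript
// Sketch using PassThrough in place of NoFilter (both are identity
// transforms): bytes written in come back out unchanged.
import { PassThrough } from 'node:stream';

const nf = new PassThrough();
nf.write('abc');         // sink: accept bytes from a producer
nf.end('def');
const chunk = nf.read(); // source: hand the same bytes back out
```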