minizlib

A tiny fast zlib stream built on minipass and Node.js's zlib binding.

This module was created to serve the needs of node-tar v2. If your needs are different, then it may not be for you.

How does this differ from the streams in require('zlib')?

First, there are no convenience methods to compress or decompress a buffer. If you want those, use the built-in zlib module. This is only streams.
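For example, here is a minimal sketch of compressing a buffer with the stream interface alone, assuming the exported Gzip class (the class names mirror the built-in zlib module):

```js
// Sketch: compress a Buffer using only streams, no one-shot helpers.
// Assumes minizlib exports a Gzip class mirroring require('zlib').Gzip.
const { Gzip } = require('minizlib')

const gzip = new Gzip({})
const chunks = []
gzip.on('data', chunk => chunks.push(chunk))
gzip.on('end', () => {
  const compressed = Buffer.concat(chunks)
  console.log('compressed to %d bytes', compressed.length)
})
// Write and end in one call; output arrives via 'data' events.
gzip.end(Buffer.from('hello, world'))
```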

This module compresses and decompresses data as fast as you feed it in. It is synchronous and runs on the main process thread. Zlib operations can be CPU-intensive, but they're very fast, and doing it this way means much less bookkeeping and artificial deferral.

Node's built-in zlib streams are built on top of stream.Transform. They do the maximally safe thing with respect to consistent asynchrony, buffering, and backpressure.

This module does support backpressure, and will buffer output chunks that are not consumed, but is less of a mediator between the input and output. There are no high or low watermarks, no state objects, and no artificial async deferrals. It will not protect you from Zalgo.
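A short sketch of that buffering behavior, assuming the minipass semantics of write() returning false when nothing is draining the stream and read() draining the internal buffer:

```js
const { Gzip } = require('minizlib')

const gzip = new Gzip({})
// No consumer is attached, so write() signals backpressure...
const more = gzip.write(Buffer.from('some data'))
console.log(more) // false: nothing is draining the stream yet
gzip.end()
// ...and the compressed output stays buffered until something reads it.
const compressed = gzip.read()
console.log('drained %d buffered bytes', compressed.length)
```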

If you write, data will be emitted right away. If you write everything synchronously in one tick, and you are listening to the data event to consume it, then it'll all be emitted right away in that same tick. If you want data to be emitted in the next tick, then write it in the next tick.
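That same-tick behavior can be observed directly; this sketch assumes that attaching a data listener puts the stream into flowing mode:

```js
const { Gzip } = require('minizlib')

const gzip = new Gzip({})
let emitted = false
gzip.on('data', () => { emitted = true })
// Written (and flushed, since this ends the stream) in the current tick.
gzip.end(Buffer.from('hello'))
console.log(emitted) // true: 'data' fired synchronously, in this same tick
```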

It is thus the responsibility of the reader and writer to manage their own consumption and process execution flow.

The goal is to compress and decompress as fast as possible, even for files that are too large to store all in one buffer.

The API is very similar to the built-in zlib module. There are classes that you instantiate with new and they are streams that can be piped together.
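A round-trip sketch of that pipe-style API, with class names assumed to mirror the built-in zlib module:

```js
const { Gzip, Gunzip } = require('minizlib')

const gzip = new Gzip({})
const gunzip = new Gunzip({})
gzip.pipe(gunzip) // streams compose with .pipe(), like built-in zlib

const out = []
gunzip.on('data', chunk => out.push(chunk))
gunzip.on('end', () => {
  console.log(Buffer.concat(out).toString()) // 'hello'
})
gzip.end(Buffer.from('hello'))
```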