dataset-processing-multiprocessing
Community
Process large datasets in parallel with ease.
Data & Analytics
#pipeline #preprocessing #tokenization #datasets #huggingface #multiprocessing #data-preparation
Author: anhvth
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
Preprocessing large datasets often becomes a bottleneck due to memory constraints and sequential execution. This Skill enables safe, parallelized tokenization, packing, and merging for large HuggingFace datasets.
Core Features & Use Cases
- Distributed sharding across CPU cores to maximize throughput while keeping workers isolated.
- End-to-end data prep pipeline: load, shard, tokenize, pack, and merge into a final dataset.
- Use Case: preprocess and tokenize multi-GB datasets for model pretraining, with robust error handling and incremental saves (see the sketch after this list).
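The sketch below shows how a shard → tokenize → pack → merge flow maps onto the standard datasets and transformers APIs. It is illustrative only, not the Skill's actual implementation: the shard count, block size, tokenizer, and file paths are all assumptions.

```python
import multiprocessing as mp
import os

from datasets import concatenate_datasets, load_dataset, load_from_disk
from transformers import AutoTokenizer

# Avoid fork-related warnings from the fast tokenizers' internal threading.
os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")

NUM_SHARDS = 8      # one shard per worker keeps processes isolated (assumed)
BLOCK_SIZE = 2048   # packed sequence length (assumed)


def process_shard(index: int) -> str:
    """Tokenize and pack one shard, saving it to its own directory."""
    # Each worker re-opens the dataset; Arrow files are memory-mapped,
    # so this is cheap and keeps workers fully independent.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative tokenizer
    shard = load_dataset("text", data_files="corpus.txt", split="train").shard(
        num_shards=NUM_SHARDS, index=index
    )

    def tokenize(batch):
        return tokenizer(batch["text"])

    def pack(batch):
        # Concatenate token ids across examples, then cut fixed-length blocks.
        ids = sum(batch["input_ids"], [])
        usable = (len(ids) // BLOCK_SIZE) * BLOCK_SIZE
        return {"input_ids": [ids[i : i + BLOCK_SIZE] for i in range(0, usable, BLOCK_SIZE)]}

    shard = shard.map(tokenize, batched=True, remove_columns=shard.column_names)
    shard = shard.map(pack, batched=True, remove_columns=shard.column_names)
    out = f"shards/shard_{index}"
    shard.save_to_disk(out)  # incremental save: one directory per shard
    return out


if __name__ == "__main__":
    with mp.Pool(NUM_SHARDS) as pool:
        paths = pool.map(process_shard, range(NUM_SHARDS))
    # Merge the per-shard results into one final dataset.
    merged = concatenate_datasets([load_from_disk(p) for p in paths])
    merged.save_to_disk("packed_dataset")
```

Saving each shard to its own directory before merging is what makes incremental saves useful: if one worker crashes, only its shard needs to be reprocessed, and already-finished shards are reused on the next run.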
Quick Start
Run the example_tokenize_pack.py script with your source dataset path and a tokenizer to start end-to-end processing.
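The exact command-line flags are defined by the script's own argument parser and are not documented here, so the invocation below is only a guess; inspecting the merged output, however, uses the standard datasets API.

```python
# Hypothetical invocation; the real flags depend on example_tokenize_pack.py:
#   python example_tokenize_pack.py --dataset /path/to/source --tokenizer gpt2
#
# Afterwards, inspect the merged result with the standard datasets API:
from datasets import load_from_disk

packed = load_from_disk("path/to/packed_output")  # assumed output location
print(packed)                        # row count and features
print(packed[0]["input_ids"][:10])   # first tokens of the first packed block
```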
Dependency Matrix
Required Modules: None required
Components: Standard package
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: dataset-processing-multiprocessing
Download link: https://github.com/anhvth/speedy_utils/archive/main.zip#dataset-processing-multiprocessing
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.