model-pruning
Community
Compress LLMs, accelerate inference.
Software Engineering
Tags: llm, sparsity, model compression, inference acceleration, model pruning, wanda, sparsegpt
Author: DoanNgocCuong
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill addresses the cost of running large, computationally expensive Large Language Models (LLMs): it compresses them and accelerates inference while keeping accuracy loss minimal.
Core Features & Use Cases
- Model Compression: Reduce LLM size by 40-60% using techniques like Wanda and SparseGPT.
- Inference Acceleration: Achieve 2-4x speedup on hardware accelerators through structured and semi-structured sparsity.
- Deployment on Constrained Hardware: Enable LLM deployment on edge devices and systems with limited memory.
- Use Case: You have a 7-billion-parameter LLM that is too slow for real-time applications. Use this Skill to prune it to 50% sparsity, making it run roughly 2x faster on your edge device with less than 1% accuracy degradation.
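The Wanda criterion behind these numbers is simple: score each weight by its magnitude times the L2 norm of the corresponding input activation (gathered from a small calibration set), then zero the lowest-scoring half of each output row. A minimal sketch of that scoring step; the `wanda_prune` helper and the random stand-in calibration norms are illustrative, not part of the Skill itself:

```python
import torch

def wanda_prune(weight: torch.Tensor, act_norm: torch.Tensor,
                sparsity: float = 0.5) -> torch.Tensor:
    """Zero the lowest-importance weights per output row.

    Wanda importance: |W[i, j]| * ||X[:, j]||_2, i.e. weight magnitude
    scaled by the L2 norm of the matching input activation column.
    """
    scores = weight.abs() * act_norm.unsqueeze(0)   # (out, in) importance scores
    k = int(weight.shape[1] * sparsity)             # weights to drop per row
    # indices of the k smallest scores in each row
    _, idx = torch.topk(scores, k, dim=1, largest=False)
    mask = torch.ones_like(weight)
    mask.scatter_(1, idx, 0.0)                      # zero out the pruned positions
    return weight * mask

W = torch.randn(4, 8)
norms = torch.rand(8) + 0.1   # stand-in for calibration activation norms
Wp = wanda_prune(W, norms, sparsity=0.5)
```

Because pruning is per-row, every output neuron keeps exactly half of its inputs, which is what makes the resulting sparsity pattern hardware-friendlier than a global threshold.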
Quick Start
Use the model-pruning skill to prune the 'meta-llama/Llama-2-7b-hf' model to 50% sparsity using the Wanda method.
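Under the hood, a run like this amounts to loading the checkpoint, collecting calibration statistics, and zeroing 50% of each weight matrix. As a rough stand-in, PyTorch's built-in magnitude pruning shows the mechanics (the Skill itself uses Wanda's activation-aware scores, and the tiny `nn.Linear` here stands in for a full Llama-2 layer):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(16, 16)                                # stand-in for one LLM weight matrix
prune.l1_unstructured(layer, name="weight", amount=0.5)  # mask the smallest 50% by |w|
prune.remove(layer, "weight")                            # bake the mask into the weight tensor
sparsity = (layer.weight == 0).float().mean().item()
```

`prune.remove` makes the sparsity permanent, so the saved checkpoint carries the zeros and downstream sparse kernels can exploit them.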
Dependency Matrix
Required Modules
transformers, torch, accelerate, datasets, sparsegpt
Components
scripts, references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: model-pruning
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#model-pruning
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.