awq-quantization

Community

Compress LLMs with minimal accuracy loss.

Author: DoanNgocCuong
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill addresses the challenge of deploying large language models (LLMs) on hardware with limited GPU memory by compressing them using Activation-aware Weight Quantization (AWQ).

Core Features & Use Cases

  • 4-bit Quantization: Compresses 16-bit weights to 4-bit precision, cutting the weight memory footprint to roughly a quarter while preserving accuracy (see the worked numbers after this list).
  • Speedup: Achieves up to 3x faster inference compared to FP16.
  • Use Case: Deploying large instruction-tuned or multimodal models (7B-70B parameters) on edge devices or servers with constrained GPU memory, where fast and efficient inference is required.
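To put the 4-bit figure in concrete terms: a 7B-parameter model stored in FP16 needs about 7B × 2 bytes ≈ 14 GB for the weights alone, whereas the same weights at 4 bits occupy roughly 7B × 0.5 bytes ≈ 3.5 GB, plus a small overhead for quantization scales and zero points.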

Quick Start

Use the awq-quantization skill to quantize the 'mistralai/Mistral-7B-Instruct-v0.2' model to 4-bit precision.
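Below is a minimal sketch of what such a quantization run looks like, following the autoawq project's documented workflow; the model ID comes from the Quick Start above, while the output path and quantization settings are illustrative defaults, not values mandated by this Skill.

```python
# Sketch: quantize Mistral-7B-Instruct-v0.2 to 4-bit with AWQ.
# Assumes the required modules listed below are installed, e.g.:
#   pip install autoawq transformers torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # model named in the Quick Start
quant_path = "mistral-7b-instruct-v0.2-awq"        # illustrative output directory

# Common AWQ settings: 4-bit weights, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run activation-aware calibration and quantize the weights; autoawq
# uses a default calibration dataset unless one is passed explicitly.
model.quantize(tokenizer, quant_config=quant_config)

# Persist the 4-bit checkpoint alongside its tokenizer.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The saved checkpoint can then be loaded for inference on a GPU with far less memory; again a sketch based on autoawq's documented API:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "mistral-7b-instruct-v0.2-awq"  # directory written by the script above

# fuse_layers=True enables autoawq's fused kernels for faster decoding.
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

inputs = tokenizer("Explain AWQ in one sentence.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```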

Dependency Matrix

Required Modules

  • autoawq
  • transformers
  • torch

Components

  • scripts
  • references

💻 Claude Code Installation

Recommended: Let Claude install the Skill automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: awq-quantization
Download link: https://github.com/DoanNgocCuong/continuous-training-pipeline_T3_2026/archive/main.zip#awq-quantization

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
