qlora
Community
Advance QLoRA tuning and multi-adapter workflows.
Author: atrawog
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
QLoRA enables efficient fine-tuning of large language models by training low-rank adapters on top of a quantized base model, reducing memory and compute while largely preserving performance. This skill supports advanced QLoRA experiments and multi-adapter workflows.
Core Features & Use Cases
- Experiment with alpha scaling, LoRA rank, and target modules to tailor adapters for different task requirements.
- Compare multi-adapter hot-swapping and continual learning workflows to support sequential domain adaptation.
- Evaluate quantization strategies (e.g., 4-bit NF4 vs. BF16) to balance memory usage against model quality; a sketch combining both of these points follows this list.
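The last two items can be exercised together: load the base model once with 4-bit NF4 quantization (or in plain BF16 for comparison), register several adapters under distinct names, and hot-swap between them without reloading the base weights. The sketch below is illustrative only; the base model name and the adapter paths (adapters/domain-a, adapters/domain-b) are placeholders, not part of the skill.

```python
# Minimal sketch: 4-bit NF4 base model with two adapters that can be hot-swapped.
# Model name and adapter paths are placeholders; replace them with your own.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization; to compare against BF16, load the model with
# torch_dtype=torch.bfloat16 instead of passing quantization_config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)

# Load a first adapter, then register a second one under its own name.
model = PeftModel.from_pretrained(base, "adapters/domain-a", adapter_name="domain_a")
model.load_adapter("adapters/domain-b", adapter_name="domain_b")

# Hot-swap: only the named adapter is active for subsequent forward passes.
model.set_adapter("domain_a")
# ... evaluate on domain A ...
model.set_adapter("domain_b")
# ... evaluate on domain B ...
```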
Quick Start
Load a PEFT-compatible base model and attach a QLoRA adapter with the defaults r=16 and lora_alpha=16, then vary target_modules across experiments to compare results. A minimal sketch follows.
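The sketch below assumes a placeholder base model and the common attention projections q_proj/v_proj as target_modules; both should be adjusted for your model and task.

```python
# Quick-start sketch: 4-bit base model plus a LoRA adapter with the default sizes.
# The model name and target_modules are assumptions; swap in your own.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder; any causal LM supported by PEFT
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                 # default rank
    lora_alpha=16,                        # default alpha scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # vary this list across experiments
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights are trainable
```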
Dependency Matrix
Required Modules
None required
Components
Standard package
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: qlora
Download link: https://github.com/atrawog/overthink-plugins/archive/main.zip#qlora
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.