vla-training

Community

Streamline VLA model training.

Author: WangJie-cn
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill simplifies the complex process of setting up and managing the training of Vision-Language-Action (VLA) models, particularly for autonomous driving applications.

Core Features & Use Cases

  • End-to-End Workflow: Covers data preparation, distributed training configuration, and hyperparameter tuning.
  • Multi-Modal Data Handling: Supports standard datasets like nuScenes and custom data formats.
  • Distributed Training: Integrates with DeepSpeed and PyTorch FSDP for efficient large-model training.
  • Use Case: When you need to train a VLA model for autonomous driving, this Skill provides the necessary scripts and configurations to handle data loading, set up distributed training across multiple GPUs, and define optimal training recipes.
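To make the distributed-training bullet concrete, here is a minimal sketch of generating a DeepSpeed ZeRO configuration file of the kind such a pipeline would consume. All values (batch sizes, ZeRO stage, precision) are illustrative assumptions, not settings taken from this Skill:

```python
import json

# Illustrative DeepSpeed config for large VLA models.
# These values are assumptions for demonstration, not the
# Skill's actual recipe: ZeRO stage 2 sharding, bf16 mixed
# precision, and gradient accumulation to enlarge the
# effective batch size across GPUs.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                    # shard optimizer state + gradients
        "overlap_comm": True,          # overlap all-reduce with backward pass
        "contiguous_gradients": True,  # reduce memory fragmentation
    },
    "gradient_clipping": 1.0,
}

# DeepSpeed reads this JSON via the --deepspeed_config launcher flag.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

A training script would then pass this file to `deepspeed.initialize(...)` (or the `deepspeed` launcher) alongside the model and optimizer; the Skill is described as producing configurations of this shape for you.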

Quick Start

Use the vla-training skill to set up a DeepSpeed training pipeline for autonomous driving VLA models.

Dependency Matrix

Required Modules

None required

Components

references

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: vla-training
Download link: https://github.com/WangJie-cn/clawdbot-skills/archive/main.zip#vla-training

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
