vla-patterns

Category: Community

VLA patterns for cognitive robotics.

Author: uneezaismail
Version: 1.0.0

System Documentation

What problem does it solve?

This Skill provides Vision-Language-Action (VLA) integration patterns for cognitive robotics with ROS 2, enabling end-to-end perception-to-action flows that fuse vision, language, and control.

Core Features & Use Cases

  • VLA Pipeline: Fuse Whisper/STT transcription, LLM planning, and ROS 2 actions to execute robot tasks (see the sketch after this list).
  • Vision-Language Grounding: Resolve deictic references and commands with visual context.
  • Use Case: A service robot translates "bring me the cup" into a pickup and delivery sequence.
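Below is a minimal sketch of how such a pipeline could be wired together, assuming a ROS 2 (rclpy) installation with Nav2's `nav2_msgs` available. The STT and LLM planning steps are hypothetical stubs (`transcribe_audio`, `plan_with_llm`) standing in for Whisper and an LLM client, and the node and action names are illustrative rather than part of this Skill.

```python
# Minimal VLA pipeline sketch: STT -> LLM plan -> ROS 2 action dispatch.
# Assumes ROS 2 + Nav2; STT and LLM planning are stubbed with hypothetical helpers.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from nav2_msgs.action import NavigateToPose


def transcribe_audio(wav_path: str) -> str:
    """Stub for a Whisper/STT call; returns the spoken command as text."""
    return "bring me the cup"


def plan_with_llm(command: str) -> dict:
    """Stub for LLM planning; maps a command to a goal (hypothetical schema)."""
    return {"x": 1.0, "y": 2.0, "object": "cup_01"}


class VlaPipeline(Node):
    def __init__(self):
        super().__init__("vla_pipeline")
        # Nav2's standard NavigateToPose action server
        self._nav_client = ActionClient(self, NavigateToPose, "navigate_to_pose")

    def execute(self, wav_path: str):
        command = transcribe_audio(wav_path)   # language input
        plan = plan_with_llm(command)          # LLM grounds it to a navigation goal
        goal = NavigateToPose.Goal()
        goal.pose = PoseStamped()
        goal.pose.header.frame_id = "map"
        goal.pose.pose.position.x = plan["x"]
        goal.pose.pose.position.y = plan["y"]
        self._nav_client.wait_for_server()
        return self._nav_client.send_goal_async(goal)  # dispatch ROS 2 action


def main():
    rclpy.init()
    node = VlaPipeline()
    node.execute("command.wav")
    rclpy.spin_once(node, timeout_sec=1.0)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```

In a real deployment, the grasping step would be dispatched the same way through a manipulation action server once navigation completes.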

Quick Start

Try a sample prompt: "Plan to navigate to (1,2) and grasp object cup_01 using VLA."
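For illustration only, here is a hypothetical step list the sample prompt might be grounded into; the field names are assumptions, not the Skill's actual output schema.

```python
# Hypothetical plan for "Plan to navigate to (1,2) and grasp object cup_01 using VLA".
plan = [
    {"action": "navigate_to_pose", "x": 1.0, "y": 2.0},  # move the base to the target location
    {"action": "grasp", "object_id": "cup_01"},          # pick up the grounded object
]
```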

Dependency Matrix

Required Modules: None
Components: Standard package

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.

Please help me install this Skill:
Name: vla-patterns
Download link: https://github.com/uneezaismail/Physical-AI-Humanoid-Robotics/archive/main.zip#vla-patterns

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.