avatar

Community

Control a VTuber avatar with lip sync.

Author: Castrozan
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill orchestrates lip-sync, expression changes, and audio routing for a VTuber avatar, reducing manual setup and latency during live performances and troubleshooting.

Core Features & Use Cases

  • Lip-sync driven avatar animation and mouth movement synchronized with spoken text.
  • Real-time expression control and idle behavior coordination to reflect mood.
  • Renderer synchronization via a WebSocket API and HTTP audio endpoints, enabling live demos, Meet calls, and streams.
  • Use Case: During a live stream, drive the avatar to speak with lip-sync, switch expressions on the fly, and route audio to both room speakers and the virtual mic.
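The dual audio routing in the use case above can be sketched with pw-link, PipeWire's port-linking tool. All node and port names below are placeholders, not names confirmed by this Skill (list the real ones with `pw-link -o` and `pw-link -i` first); the function only prints the commands so it can be dry-run safely.

```shell
# Link one playback stream to several destinations (speakers + virtual mic).
# Port names are hypothetical examples; substitute your actual PipeWire ports.
route_audio() {
  src=$1            # e.g. "avatar-player:output_FL"
  shift
  for dest in "$@"; do
    echo pw-link "$src" "$dest"   # drop `echo` to actually create the links
  done
}

# Example (placeholder ports): route to speakers and the virtual mic.
route_audio "avatar-player:output_FL" "alsa_output:playback_FL" "virtual-mic:input_FL"
```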

Quick Start

  1. Ensure the Avatar system is installed and start it: start-avatar.sh
  2. Speak with the avatar: avatar-speak.sh "Hello world" neutral speakers
  3. Change expression: avatar-speak.sh "Hello!" happy speakers
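The steps above can be wrapped in a small shell function. The argument order (text, expression, output target) follows the examples in this Quick Start; the set of valid expressions is an assumption — only neutral and happy appear in this document — so adjust the case list to match your avatar. The real call is commented out so the sketch runs safely without the Avatar system installed.

```shell
# Hypothetical wrapper around avatar-speak.sh (assumed CLI:
# avatar-speak.sh TEXT EXPRESSION OUTPUT).
avatar_say() {
  text=${1:?usage: avatar_say TEXT [EXPRESSION] [OUTPUT]}
  expr=${2:-neutral}   # default expression
  out=${3:-speakers}   # default audio target
  case "$expr" in
    neutral|happy) ;;  # only these two appear in this document
    *) echo "unknown expression: $expr" >&2; return 1 ;;
  esac
  # Swap `echo` for the real call once start-avatar.sh is running:
  echo avatar-speak.sh "$text" "$expr" "$out"
}

avatar_say "Hello world"   # → avatar-speak.sh Hello world neutral speakers
```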

Dependency Matrix

Required Modules

  • edge-tts
  • ffmpeg
  • pactl
  • xdotool
  • pw-link
  • curl
  • node
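A quick way to confirm the modules above are available is to probe each with `command -v`. This sketch only reports what is missing; it does not install anything.

```shell
# Check that each required tool is on PATH; print any that are missing.
check_deps() {
  missing=0
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_deps edge-tts ffmpeg pactl xdotool pw-link curl node \
  || echo "install the missing modules before running start-avatar.sh"
```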

Components

scripts

💻 Claude Code Installation

Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: avatar
Download link: https://github.com/Castrozan/.dotfiles/archive/main.zip#avatar

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
