grpo-finetuning

Official

GRPO fine-tuning for vision-language models

Author: aws-solutions-library-samples
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This skill provides data-efficient fine-tuning of vision-language models with Group Relative Policy Optimization (GRPO), enabling stronger performance when labeled data is scarce.

Core Features & Use Cases

  • Reward-based fine-tuning: Samples multiple completions per prompt, scores them with reward functions, and optimizes the policy accordingly.
  • Data-efficient training: Effective on small datasets (<1000 examples) to improve performance.
  • Use Case: When you have limited labeled data and need robust vision-language alignment, apply GRPO fine-tuning to improve model quality with minimal data.
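The reward-based approach above hinges on comparing completions within a group rather than against an absolute baseline. The sketch below illustrates the group-relative advantage computation at the heart of GRPO; it is a simplified illustration, not the skill's actual implementation, and production trainers add clipping, KL penalties, and other details.

```python
def group_relative_advantages(rewards):
    """Normalize rewards within one prompt's group of sampled completions.

    GRPO's core idea (simplified): each completion's advantage is its
    reward standardized against the group mean and standard deviation,
    so no separate value network is needed.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0:
        # All completions scored identically: no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

Completions scoring above the group mean receive positive advantages and are reinforced; those below are discouraged, which is why even small datasets can yield a useful training signal.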

Quick Start

Run the GRPO fine-tuning workflow with your dataset and configured reward functions to start training a vision-language model.
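Reward functions are the main thing you configure before starting a run. The example below sketches one possible accuracy-style reward for a vision-language QA task; the function name, answer-tag format, and signature are illustrative assumptions, not the skill's actual API.

```python
import re

def accuracy_reward(completion, reference):
    """Hypothetical reward: 1.0 if the completion's final answer matches
    the reference, else 0.0.

    Assumes the model is prompted to wrap its answer in <answer> tags;
    adapt the parsing to whatever output format your prompts enforce.
    """
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if not match:
        return 0.0  # Malformed output earns no reward.
    return 1.0 if match.group(1).strip() == reference.strip() else 0.0
```

In practice you would combine several such functions (e.g. answer accuracy plus a format reward) and pass them to the training workflow alongside your dataset.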

Dependency Matrix

Required Modules

None required

Components

Standard package

💻 Claude Code Installation

Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.

Please help me install this Skill:
Name: grpo-finetuning
Download link: https://github.com/aws-solutions-library-samples/guidance-for-claude-code-with-amazon-bedrock/archive/main.zip#grpo-finetuning

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
