torchserve

Community

Deploy PyTorch models with ease.

Author: cuba6112
Version: 1.0.0
Installs: 0

System Documentation

What problem does it solve?

This Skill simplifies the deployment of PyTorch models into production-ready inference servers, handling packaging, scaling, and custom request processing.

Core Features & Use Cases

  • Model Packaging: Creates self-contained .mar files for consistent deployment.
  • Custom Handlers: Allows Python code for pre/post-processing and custom inference logic.
  • Multi-GPU Scaling: Manages worker distribution across available GPUs for efficient load balancing.
  • Use Case: Deploy a custom image classification model with specific pre-processing steps by packaging it with a custom handler and serving it via TorchServe's REST API.
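The custom-handler flow above can be sketched in plain Python. In a real deployment you would subclass `ts.torch_handler.base_handler.BaseHandler` from the TorchServe package; this self-contained sketch only mirrors that contract's four stages (initialize / preprocess / inference / postprocess), with a stubbed model and a placeholder label set, so the request flow is clear without TorchServe installed:

```python
class ImageClassifierHandler:
    """Sketch of a TorchServe-style handler (not the real BaseHandler)."""

    def __init__(self):
        self.model = None
        self.labels = ["cat", "dog"]  # illustrative placeholder labels

    def initialize(self, context=None):
        # In TorchServe, `context` carries the model directory and you
        # would load the serialized weights here. We stub a callable that
        # returns one score vector per batch item.
        self.model = lambda batch: [[0.2, 0.8] for _ in batch]

    def preprocess(self, requests):
        # Each request body would normally be raw image bytes; here we
        # just collect them into a batch list.
        return [req.get("data") for req in requests]

    def inference(self, batch):
        return self.model(batch)

    def postprocess(self, outputs):
        # Map each raw score vector to its top label.
        return [self.labels[max(range(len(o)), key=o.__getitem__)]
                for o in outputs]


handler = ImageClassifierHandler()
handler.initialize()
batch = handler.preprocess([{"data": b"fake-image-bytes"}])
preds = handler.postprocess(handler.inference(batch))
print(preds)  # → ['dog']
```

A real handler would replace the stubbed `initialize` with `torch.load(...)` of the `.pt` file packaged into the `.mar` archive, and `preprocess` with image decoding and tensor transforms.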

Quick Start

Use the torchserve skill to package the model 'my_model.pt' with the handler 'my_handler.py' into a MAR file named 'my_model.mar'.
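Under the hood, that request maps onto the standard TorchServe CLI workflow. A minimal sketch, assuming the `torchserve` and `torch-model-archiver` tools are installed and that `my_model.pt`, `my_handler.py`, and the `model_store/` directory exist (the `sample_input.jpg` test file is illustrative):

```shell
# Package the model and handler into a .mar archive.
torch-model-archiver \
  --model-name my_model \
  --version 1.0 \
  --serialized-file my_model.pt \
  --handler my_handler.py \
  --export-path model_store/

# Serve the archive and send a test inference request.
torchserve --start --model-store model_store --models my_model=my_model.mar
curl http://localhost:8080/predictions/my_model -T sample_input.jpg
```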

Dependency Matrix

Required Modules

None required

Components

scripts, references

💻 Claude Code Installation

Recommended: let Claude install it automatically. Copy and paste the text below into Claude Code.

Please help me install this Skill:
Name: torchserve
Download link: https://github.com/cuba6112/skillfactory/archive/main.zip#torchserve

Please download this .zip file, extract it, and install it in the .claude/skills/ directory.
