onnx-inference
Official
Deploy ML models with ONNX Runtime.
Author: JNZader-Vault
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill streamlines the deployment of machine learning models by leveraging ONNX Runtime for efficient, cross-platform inference, reducing complexity and improving performance.
Core Features & Use Cases
- Cross-Platform Deployment: Deploy models to Python, Rust, and Go environments seamlessly.
- Optimized Inference: Utilizes ONNX Runtime for high-performance execution on various hardware.
- Model Optimization: Includes quantization and optimization techniques for edge deployment.
- Use Case: Deploy a trained PyTorch image classification model to a Rust application for real-time analysis on an edge device.
Quick Start
Use the onnx-inference skill to deploy the model located at 'models/image_classifier.onnx' for Python inference.
Dependency Matrix
Required Modules
None required
Components
scripts, references
💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below to Claude Code.
Please help me install this Skill:
Name: onnx-inference
Download link: https://github.com/JNZader-Vault/project-starter-framework/archive/main.zip#onnx-inference
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.