fiftyone-model-evaluation
Official
Evaluate model predictions against ground truth.
Author: voxel51
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill enables you to evaluate model predictions against ground truth across multiple evaluation protocols (COCO, Open Images, and custom methods) within FiftyOne.
Core Features & Use Cases
- Interactive Evaluation: Run COCO, Open Images, or custom metric evaluations from the Model Evaluation Panel.
- Programmatic Evaluation: Use the Python SDK to evaluate detection, classification, segmentation, and regression predictions with configurable evaluation keys and metrics (see the sketch after this list).
- Real-world Scenario: Compare two object-detection models by computing mAP, precision, recall, and per-class metrics, then inspect failures via evaluation patches.
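As a sketch of the programmatic path, FiftyOne's `evaluate_detections()` covers the detection workflow above. The snippet below uses the `quickstart` zoo dataset, which ships with `ground_truth` and `predictions` fields; the eval key and field names are otherwise your choice:

```python
import fiftyone as fo
import fiftyone.zoo as foz

# Sample dataset with ground truth and model predictions included
dataset = foz.load_zoo_dataset("quickstart")

# COCO-style evaluation; pass method="open-images" for the Open Images protocol
results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    method="coco",
    compute_mAP=True,  # needed for results.mAP()
)

print(results.mAP())    # dataset-level mAP
results.print_report()  # per-class precision/recall/F1

# Inspect failures: one patch per TP/FP/FN match under this eval key
patches = dataset.to_evaluation_patches("eval")
session = fo.launch_app(dataset)
session.view = patches
```

Comparing a second model is just a second call with its prediction field and a distinct `eval_key` (e.g. "eval2"), after which the two runs can be compared in the Model Evaluation Panel.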
Quick Start
- set_context(dataset_name="my-dataset")
- dataset_summary(name="my-dataset")
- launch_app(dataset_name="my-dataset")
- execute_operator(
      operator_uri="@voxel51/evaluation/evaluate_model",
      params={
          "pred_field": "predictions",
          "gt_field": "ground_truth",
          "eval_key": "eval",
          "method": "coco",
          "iou": 0.5,
          "compute_mAP": True,
      },
  )
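If you are working directly in Python rather than through these tool calls, a minimal sketch of the same flow (assuming the dataset and field names from the Quick Start) is:

```python
import fiftyone as fo

# Load the existing dataset by name and print a summary
dataset = fo.load_dataset("my-dataset")
print(dataset)

# Launch the App for interactive review
session = fo.launch_app(dataset)

# The same COCO-style evaluation the Quick Start targets, via the SDK
dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    method="coco",
    iou=0.5,
    compute_mAP=True,
)
```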
Dependency Matrix
Required Modules: None required
Components: Standard package

💻 Claude Code Installation
Recommended: Let Claude install automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: fiftyone-model-evaluation
Download link: https://github.com/voxel51/fiftyone-skills/archive/main.zip#fiftyone-model-evaluation
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.