haipipe-data-1-source
Community skill: standardize raw data into structured SourceSets.
Category: Education & Research
Tags: data pipeline, etl, research data, data standardization, source processing, data wrangling
Author: jluo41
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill transforms raw data files from various formats into a standardized, structured format called a SourceSet, making data ready for downstream processing in research and analytics pipelines.
Core Features & Use Cases
- Data Ingestion: Reads diverse raw data formats (CSV, XML, Parquet, JSON).
- Standardization: Converts data into a consistent SourceSet format (a dictionary of DataFrames).
- Pipeline Orchestration: Manages the execution of data transformation logic via Source_Pipeline.
- Use Case: Processes a collection of raw patient data files (e.g., EHR extracts, sensor logs) into a unified SourceSet that can be consistently fed into a patient record-building pipeline.
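The features above can be sketched in a few lines. This is a minimal, illustrative sketch of what a "dictionary of DataFrames" SourceSet might look like; the `build_source_set` helper and the column-normalization rule are assumptions for demonstration, not the skill's actual implementation.

```python
import pandas as pd

def build_source_set(raw_tables: dict) -> dict:
    """Standardize each raw table (a list of record dicts) into a DataFrame.

    Hypothetical helper: the returned dict-of-DataFrames mirrors the
    SourceSet shape described above; names and normalization are assumed.
    """
    source_set = {}
    for name, records in raw_tables.items():
        df = pd.DataFrame(records)
        # Normalize column names to lowercase snake_case for consistency
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        source_set[name] = df
    return source_set

# Two toy "raw files" already parsed into records (e.g., from CSV or XML)
raw = {
    "glucose": [{"Timestamp": "2025-01-01T00:00", "Value MgDl": 110}],
    "insulin": [{"Timestamp": "2025-01-01T00:05", "Dose": 1.5}],
}
source_set = build_source_set(raw)
print(sorted(source_set.keys()))            # → ['glucose', 'insulin']
print(list(source_set["glucose"].columns))  # → ['timestamp', 'value_mgdl']
```

Downstream stages can then look up each table by name without caring whether it originated as CSV, XML, Parquet, or JSON.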
Quick Start
Run the haipipe-data-1-source skill to process the OhioT1DM dataset using the OhioT1DMxmlv250302 SourceFn.
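To make the Quick Start concrete, here is a hedged sketch of how a SourceFn such as OhioT1DMxmlv250302 might be registered and dispatched by a pipeline driver. The registry, decorator, and `run_source_pipeline` function are assumptions based on the wording above, not the skill's real API, and the parser body is a stub.

```python
# Minimal SourceFn registry and pipeline driver (illustrative only).
SOURCE_FNS = {}

def register(name):
    """Register a SourceFn under a string name (hypothetical mechanism)."""
    def deco(fn):
        SOURCE_FNS[name] = fn
        return fn
    return deco

@register("OhioT1DMxmlv250302")
def parse_ohio_xml(raw_text):
    # A real SourceFn would parse OhioT1DM XML into records;
    # this stub just wraps the input to show the dispatch path.
    return [{"raw": raw_text}]

def run_source_pipeline(source_fn_name, raw_inputs):
    """Apply the named SourceFn to each raw input, keyed by source name."""
    fn = SOURCE_FNS[source_fn_name]
    return {name: fn(text) for name, text in raw_inputs.items()}

out = run_source_pipeline("OhioT1DMxmlv250302", {"patient_559": "<xml/>"})
print(out["patient_559"])  # → [{'raw': '<xml/>'}]
```

Registering parsers by name lets the same driver handle any dataset: swapping datasets means supplying a different SourceFn key, not rewriting the pipeline.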
Dependency Matrix
Required Modules: None
Components: scripts, references
💻 Claude Code Installation
Recommended: Let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill:
Name: haipipe-data-1-source
Download link: https://github.com/jluo41/research-skills/archive/main.zip#haipipe-data-1-source
Please download this .zip file, extract it, and install it in the .claude/skills/ directory.