Documentation
Installation guides, data formats, and code examples
Getting Started
Installation
1. Clone Repository
# Clone repository
git clone https://github.com/aim-lab/AFtoolkit.git
cd AFtoolkit
2. Create Conda Environment
# Create conda environment
conda create -n aftoolkit python=3.10 -y
conda activate aftoolkit
3. Install Dependencies
# Install dependencies
pip install poetry
poetry install
4. Verify Installation
# Verify installation
python -c "import tensorflow as tf; print(tf.__version__)"
Quick Start
import wfdb
import numpy as np
# Load ECG data
record = wfdb.rdrecord('04015', pn_dir='afdb/1.0.0')
beats = wfdb.rdann('04015', 'qrs', pn_dir='afdb/1.0.0')
# Extract RR intervals
rr_intervals = np.diff(beats.sample) / record.fs
# Run AF detection (example - adjust to actual API)
# results = arnet2.detect(rr_intervals)

Data Formats
Supported Input Formats
WFDB Format
Files: .dat, .hea, .atr
Standard PhysioNet format
CSV Format
Comma-separated RR intervals
One value per line
EDF Format
European Data Format
Continuous ECG signals
JSON Format
Structured ECG data
With metadata
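For the CSV format above (one RR interval per line), a minimal loading sketch with NumPy might look like this; the file name and the assumption that values are stored in seconds are illustrative, not part of the toolkit API:

import numpy as np

# Hypothetical CSV file: one RR interval per line, assumed to be in seconds
rr_intervals = np.loadtxt('rr_intervals.csv')
print(f"Loaded {len(rr_intervals)} RR intervals, mean RR = {rr_intervals.mean():.3f} s")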
Required Fields
- RR intervals (in seconds or samples)
- Sampling rate (Hz)
- Recording duration
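As an illustration, the required fields can be derived from a WFDB record. The ecg_input container and its key names below are hypothetical, not part of the toolkit API:

import wfdb
import numpy as np

# Load a PhysioNet record and its beat annotations
record = wfdb.rdrecord('04015', pn_dir='afdb/1.0.0')
beats = wfdb.rdann('04015', 'qrs', pn_dir='afdb/1.0.0')

# Hypothetical container for the required fields; key names are illustrative
ecg_input = {
    'rr_intervals': np.diff(beats.sample) / record.fs,  # seconds
    'sampling_rate': record.fs,                          # Hz
    'duration': record.sig_len / record.fs,              # seconds
}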
Output Format
{
  "af_burden_percent": 9.8,
  "episodes": [
    {
      "start": "02:14:32",
      "duration": "28m 15s",
      "confidence": 0.985
    },
    ...
  ],
  "phenotype": "Type I: Nocturnal"  // if applicable
}
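Assuming results are saved in the JSON structure above, a short post-processing sketch could summarise the detected episodes; the file name af_results.json is illustrative:

import json

# Load a result file in the output format shown above
with open('af_results.json') as f:
    results = json.load(f)

print(f"AF burden: {results['af_burden_percent']}%")
for episode in results['episodes']:
    print(f"  {episode['start']}  {episode['duration']}  confidence={episode['confidence']:.2f}")

# The phenotype field is only present when phenotyping was run
if 'phenotype' in results:
    print(f"Phenotype: {results['phenotype']}")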
Recommended Datasets

1. MIT-BIH Atrial Fibrillation Database
URL: physionet.org/content/afdb/1.0.0/
- 25 long-term recordings (10 hours each)
- Annotated AF episodes
- Free download
2. CPSC 2021
URL: physionet.org/content/cpsc2021/1.0.0/
- 730+ recordings
- Paroxysmal AF focus
- PhysioNet access
Download Instructions
Using wget:
wget -r -N -c -np https://physionet.org/files/afdb/1.0.0/
Using Python:
import wfdb
record = wfdb.rdrecord('04015', pn_dir='afdb/1.0.0')
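The wfdb package can also fetch an entire PhysioNet database in one call; the local target directory below is illustrative:

import wfdb

# Download all records of the MIT-BIH AF Database to a local folder
wfdb.dl_database('afdb', dl_dir='afdb')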
Code Examples

Running AF Detection
# Example: Running ArNet2 on test data
from AFtoolkit.detection import run_ArNet2
import wfdb
import numpy as np
# Load sample ECG data
record = wfdb.rdrecord('path/to/record')
annotations = wfdb.rdann('path/to/record', 'qrs')
# Convert beat locations to RR intervals in seconds
rr_intervals = np.diff(annotations.sample) / record.fs
# Run detection
results = run_ArNet2.detect(
    rr_intervals=rr_intervals,
    sampling_rate=record.fs
)
# Print results
print(f"AF Burden: {results['af_burden_percent']}%")
print(f"Episodes detected: {len(results['episodes'])}")

Running Phenotype Clustering
# Example: Phenotype prediction
from AFtoolkit.phenotyping import run_prediction
# Assuming you have a 24-hour burden profile
hourly_burden = [...]  # 24 values, one per hour
# Predict phenotype
phenotype_result = run_prediction.predict(
    burden_profile=hourly_burden
)
print(f"Predicted phenotype: {phenotype_result['phenotype']}")
print(f"Cluster assignment: {phenotype_result['cluster']}")Need More Help?
Check out our research papers for detailed methodology, or explore the demo.