# Frequently Asked Questions

## General Questions

### What is BatteryML?

BatteryML is a modular machine learning platform for battery degradation modeling, designed for research on the LG M50T dataset from Oxford University's Battery Intelligence Lab.
### What Python version is required?

Python 3.8 or higher is required.

### Do I need a GPU?

No. BatteryML works on CPU, but GPU acceleration is recommended for the neural network models (MLP, LSTM, Neural ODE).

### How do I cite BatteryML?

See Citation for citation information.
## Data Questions

### Where do I get the dataset?

The LG M50T dataset is from Oxford University's Battery Intelligence Lab. Contact them for dataset access.

### What experiments are supported?

Experiments 1-5 are supported:

- Experiment 1: Si-based Degradation
- Experiment 2: C-based Degradation
- Experiment 3: Cathode Degradation and Li-Plating
- Experiment 4: Drive Cycle Aging (Control)
- Experiment 5: Standard Cycle Aging (Control)
### How do I load data from a different experiment?

Pass the experiment number via `experiment_id`:

```python
from pathlib import Path

from src.data.tables import SummaryDataLoader

loader = SummaryDataLoader(experiment_id=1, base_path=Path("Raw Data"))
df = loader.load_all_cells(...)
```
## Model Questions

### Which model should I use?

- LightGBM: Fast baseline, good for tabular data
- MLP: Neural baseline, flexible architecture
- LSTM: For sequential data
- Neural ODE: For continuous-time modeling

See Model Selection Guide for details.

### How do I add a new model?

See Adding Models for step-by-step instructions.

### Can I use my own model?

Yes! See Custom Model for examples.
## Pipeline Questions

### How do I add a new pipeline?

See Adding Pipelines for step-by-step instructions.

### What is the Sample dataclass?

The Sample dataclass is the universal format that all pipelines produce and all models consume. See Core Concepts for details.
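As a rough sketch of what such a pipeline-to-model contract looks like (the field names below are illustrative assumptions, not BatteryML's actual definition — see Core Concepts for the real one):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Sample:
    """Illustrative only; the real fields are defined in the codebase."""
    cell_id: str          # which cell the sample came from
    features: np.ndarray  # model inputs produced by a pipeline
    target: float         # label to predict, e.g. remaining capacity


s = Sample(cell_id="G1", features=np.zeros(4), target=0.95)
```

Because every pipeline emits this one shape, any model can consume any pipeline's output without adapter code.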
### How does caching work?

Expensive computations (especially ICA) are cached to disk with automatic invalidation based on input parameters. See Core Concepts for details.
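A common way to implement parameter-based invalidation is to derive the cache filename from a deterministic hash of the inputs. This is a generic sketch of the idea, not BatteryML's actual code:

```python
import hashlib
import json
from pathlib import Path


def cache_path(cache_dir: Path, name: str, params: dict) -> Path:
    # Hash the parameters deterministically: changing any parameter
    # yields a new filename, so a stale cache entry is never reused.
    blob = json.dumps(params, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()[:16]
    return cache_dir / f"{name}_{digest}.pkl"


p1 = cache_path(Path("cache"), "ica", {"smooth": 0.1, "cell": "G1"})
p2 = cache_path(Path("cache"), "ica", {"cell": "G1", "smooth": 0.1})
p3 = cache_path(Path("cache"), "ica", {"smooth": 0.2, "cell": "G1"})
```

Note the `sort_keys=True`: the same parameters in a different order hash to the same path, while any changed value hashes to a new one.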
## Training Questions

### How do I monitor training?

Use TensorBoard or MLflow.
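Assuming logs are written under `artifacts/runs/` (the exact log directory may differ in your setup), the standard launch commands are:

```shell
# TensorBoard: point --logdir at the directory containing event files
tensorboard --logdir artifacts/runs

# MLflow: serve the local tracking UI (default http://127.0.0.1:5000)
mlflow ui
```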
### How do I resume training?

Load the checkpoint and restore the model state:

```python
import torch

checkpoint = torch.load("artifacts/runs/{run_id}/checkpoints/best.pt")
model.load_state_dict(checkpoint['model_state_dict'])
```
### How do I tune hyperparameters?

See Neural ODE Tuning for a hyperparameter tuning guide.

## Configuration Questions

### How do I use Hydra configs?

See Configuration Guide for details.

### How do I override config parameters?
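Hydra reads `key=value` pairs from the command line; the script name and config keys below are illustrative, not necessarily BatteryML's actual config schema:

```shell
# Override a single value
python train.py training.lr=0.001

# Swap a config group and override several values at once
python train.py model=lstm training.batch_size=64 training.epochs=100
```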
## Troubleshooting Questions

### My model isn't learning. What should I do?

- Check the learning rate
- Verify data normalization
- Check model capacity
- Verify data quality

See Training Issues for details.
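For the normalization check above, a quick sanity test is to standardize the features and confirm near-zero mean and unit variance per column; a minimal numpy sketch:

```python
import numpy as np


def standardize(X: np.ndarray) -> np.ndarray:
    # Per-feature z-score; guard against zero-variance columns.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / np.where(sigma == 0, 1.0, sigma)


rng = np.random.default_rng(0)
X = standardize(rng.normal(5.0, 3.0, size=(100, 4)))
```

If a model sees raw features on wildly different scales (e.g. voltage in volts next to cycle counts in the thousands), gradient-based training often stalls; standardizing usually fixes this.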
### I'm getting out-of-memory errors. How do I fix them?

- Reduce the batch size
- Enable mixed precision
- Use gradient accumulation
- Fall back to CPU if GPU memory is insufficient

See Training Issues for details.
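Gradient accumulation trades memory for time: run several small micro-batches, sum or average their gradients, then take one optimizer step. For a loss that is a mean over examples, equal-sized micro-batches reproduce the full-batch gradient exactly; a framework-free sketch of that equivalence:

```python
import numpy as np


def grad_mse(x: np.ndarray, y: np.ndarray, w: float) -> float:
    # Gradient of mean squared error for the linear model y_hat = w * x.
    return float(np.mean(2.0 * (w * x - y) * x))


rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = rng.normal(size=8)

full = grad_mse(x, y, w=0.5)
# Accumulate over 4 micro-batches of size 2, then average.
micro = np.mean([grad_mse(x[i:i + 2], y[i:i + 2], 0.5) for i in range(0, 8, 2)])
```

In a deep learning framework the same idea becomes: call `backward()` on each micro-batch (gradients accumulate by default in PyTorch), then step and zero the gradients once per effective batch.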
### How do I debug training issues?

- Monitor training with TensorBoard
- Check gradients
- Validate the data
- Profile the training loop

See Training Issues for details.
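"Check gradients" can mean comparing an analytic gradient against a central finite difference; a minimal sketch for a scalar parameter:

```python
import numpy as np


def loss(w: float, x: np.ndarray, y: np.ndarray) -> float:
    return float(np.mean((w * x - y) ** 2))


def analytic_grad(w: float, x: np.ndarray, y: np.ndarray) -> float:
    return float(np.mean(2.0 * (w * x - y) * x))


def numeric_grad(w: float, x: np.ndarray, y: np.ndarray, eps: float = 1e-6) -> float:
    # Central difference: O(eps^2) error, good enough for a sanity check.
    return (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)


rng = np.random.default_rng(1)
x, y = rng.normal(size=16), rng.normal(size=16)
a = analytic_grad(0.3, x, y)
n = numeric_grad(0.3, x, y)
```

If the two disagree by more than a small tolerance, the hand-written gradient (or the loss itself) has a bug.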
## Contributing Questions

### How do I contribute?

See Contributing Guide for details.

### How do I add a new feature?

1. Create a feature branch
2. Implement the feature
3. Add tests
4. Update the documentation
5. Create a pull request

See Contributing Guide for details.
## Next Steps

- Getting Started - Installation guide
- User Guide - Usage documentation
- Troubleshooting - Common issues