Lessons Learned from Developing Automated Machine Learning on HPC
This presentation by Romain EGELE explores the development of automated machine learning on High-Performance Computing (HPC) systems. Topics covered include multi-fidelity optimization, hyperparameter search, model evaluation methods, and learning curve extrapolation, along with practical insights for efficient machine learning development.
Presentation Transcript
Lessons Learned from Developing Automated Machine Learning on HPC. Romain EGELE, romain.egele@universite-paris-saclay.fr
Selecting the Baseline: Multi-Fidelity Hyperparameter Optimization. Paper: "Is One Epoch All You Need for Multi-Fidelity Hyperparameter Optimization?" (arXiv: 2307.15422)
Example hyperparameters: augmentation, normalization, optimizer, learning rate, number of layers, type of layer.
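To make the kinds of hyperparameters listed above concrete, here is a minimal sketch of a search space and a random sampler. All names, choices, and ranges are illustrative assumptions, not the actual space used in the presentation.

```python
import math
import random

# Hypothetical search space mirroring the slide's hyperparameter list:
# categorical choices are lists, integer ranges are (int, int) tuples,
# and continuous ranges are (float, float) tuples sampled log-uniformly.
SEARCH_SPACE = {
    "augmentation": [True, False],
    "normalization": ["batch", "layer", "none"],
    "optimizer": ["sgd", "adam"],
    "learning_rate": (1e-4, 1e-1),   # continuous, log-uniform
    "num_layers": (1, 8),            # integer range, inclusive
    "layer_type": ["dense", "conv"],
}

def sample_configuration(space, rng=random):
    """Draw one random configuration from the search space."""
    config = {}
    for name, domain in space.items():
        if isinstance(domain, list):
            config[name] = rng.choice(domain)
        elif all(isinstance(b, int) for b in domain):
            config[name] = rng.randint(*domain)
        else:
            lo, hi = domain
            config[name] = math.exp(rng.uniform(math.log(lo), math.log(hi)))
    return config
```

Sampling log-uniformly for the learning rate is a common choice because plausible values span several orders of magnitude.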
Multi-Fidelity Example with Successive Halving. From: https://amueller.github.io/aml/04-model-evaluation/parameter_tuning_automl.html#successive-halving-different-example
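The successive-halving idea referenced above can be sketched in a few lines: evaluate all candidates at a small budget, keep the best fraction, increase the budget, and repeat. The `evaluate` callable and the parameter names are hypothetical placeholders.

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2, max_budget=100):
    """Toy successive halving.

    configs:  candidate configurations.
    evaluate: hypothetical callable (config, budget) -> loss, lower is better.
    eta:      fraction kept per round (keep 1/eta) and budget growth factor.
    """
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1 and budget <= max_budget:
        # Score every survivor at the current (cheap) budget.
        scores = sorted((evaluate(c, budget), c) for c in survivors)
        # Keep the best 1/eta fraction, then grow the budget.
        survivors = [c for _, c in scores[: max(1, len(scores) // eta)]]
        budget *= eta
    return survivors[0]
```

Because early rounds are cheap, most of the total compute is spent only on configurations that already look promising.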
Learning Curve Extrapolation (LCE). [Figure: probability of performing worse, given the observations so far.]
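One simple way to turn a partial learning curve into a "probability of performing worse" estimate, as the slide suggests, is to fit a parametric curve to the observed prefix and extrapolate with uncertainty. The sketch below uses a power-law fit in log-log space and Monte-Carlo sampling from the fit residuals; this is an illustrative stand-in, not the LCE method from the presentation.

```python
import math
import random
import statistics

def p_worse_than_best(partial_losses, best_loss, horizon,
                      n_samples=1000, rng=random):
    """Estimate P(final loss > best_loss) by extrapolating a power law
    loss ~ a * t^b, fit by least squares in log-log space."""
    xs = [math.log(t + 1) for t in range(len(partial_losses))]
    ys = [math.log(l) for l in partial_losses]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # Residual spread serves as a crude predictive uncertainty.
    resid_sd = statistics.pstdev(
        [y - (intercept + slope * x) for x, y in zip(xs, ys)]) or 1e-8
    mean_end = intercept + slope * math.log(horizon + 1)
    worse = sum(math.exp(rng.gauss(mean_end, resid_sd)) > best_loss
                for _ in range(n_samples))
    return worse / n_samples
```

A configuration whose extrapolated curve is very likely to end up worse than the current best can be stopped early, which is the point of combining LCE with multi-fidelity search.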
RoBER versus Weighted Probability Mixture. [Figure: failure and success cases.]
Baselines for budget: bounds on the training-step budget, from a minimum number of training steps (1-Epoch) to a maximum number of training steps (100-Epoch).
Low-fidelity evaluations: 1-Epoch scores can be accurate predictors for model selection.
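The claim above suggests a simple two-stage selection scheme: rank all configurations with a single cheap epoch, then spend the full budget only on the top candidates. A minimal sketch, where `train` is a hypothetical callable returning a validation loss:

```python
def select_with_low_fidelity(configs, train, top_k=2,
                             low_budget=1, full_budget=100):
    """Two-stage model selection: cheap 1-epoch screening, then
    full-budget training of the top_k survivors.

    train: hypothetical callable (config, epochs) -> validation loss,
    lower is better."""
    # Stage 1: rank every configuration by its 1-epoch score.
    cheap = sorted(configs, key=lambda c: train(c, low_budget))[:top_k]
    # Stage 2: fully train only the survivors and pick the best.
    return min(cheap, key=lambda c: train(c, full_budget))
```

This only works when low-fidelity and full-fidelity rankings correlate well, which is exactly what the cited paper investigates.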
Search flowchart: starting from the Hyperparameter Search Space, while the search continues (True), Suggest Configuration; while training continues (True), Execute Training Step; when training stops (False), perform Model Selection; when the search stops (False), output the Trained Model with Estimated Best Hyperparameters.
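The flowchart above can be sketched as a pair of nested loops. Every callable in this sketch (`suggest`, `train_step`, `training_done`, `score`) is a hypothetical placeholder for the corresponding box in the diagram.

```python
def automl_search(suggest, search_budget, train_step, training_done, score):
    """Control flow of the flowchart: an outer 'Search continue?' loop
    suggesting configurations, an inner 'Training continue?' loop
    executing training steps, and a final model-selection step."""
    results = []
    for _ in range(search_budget):               # Search continue?
        config = suggest()                       # Suggest Configuration
        model, step = None, 0
        while not training_done(config, step):   # Training continue?
            model = train_step(config, model)    # Execute Training Step
            step += 1
        results.append((score(model), config, model))
    # Model Selection: trained model with estimated best hyperparameters.
    return min(results, key=lambda r: r[0])
```

The inner loop is where multi-fidelity methods plug in: `training_done` can stop a configuration early when learning-curve extrapolation predicts it will perform worse than the current best.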