Tutorials
1. Tutorial - automatic quantization
This notebook shows how to apply the enot-autodl framework for automatic quantization, creating a quantized model for the enot-lite framework.
2. Tutorial - pruning
This experimental notebook shows how to apply the enot-autodl framework for automatic network pruning and fine-tuning.
3. Tutorial - Ultralytics YOLOv8 quantization
This notebook shows how to apply the enot-autodl framework for automatic network quantization of Ultralytics YOLOv8.
4. Tutorial - ENOT baseline optimizer
This notebook describes how to use the ENOT optimizer.
5. Tutorial - label selector starting points
This notebook demonstrates how to add additional starting points to OptimalPruningLabelSelector.
Distributed (multi-gpu / multi-node) pretrain example
In this folder you can find a .sh script for running multi-GPU pretraining. You can change its configuration to run on a single GPU, on multiple GPUs within a single node, or on multiple compute nodes with multiple GPUs each.
The second file in this folder is a .py script, which is launched by the .sh script. It uses functions from the other tutorials but is adapted to run in a distributed manner. This script should be viewed as a reference point for user-defined distributed pretraining scripts; a minimal sketch of the typical structure follows below.
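As a rough orientation only, here is a minimal sketch of the distributed boilerplate such a script typically contains. The file name, launch command, and structure are illustrative assumptions based on standard PyTorch distributed training, not the actual files shipped in this folder:

    # distributed_pretrain_sketch.py -- illustrative only, not the shipped script.
    # Typical launch on one node with 4 GPUs (hypothetical command):
    #   torchrun --nnodes=1 --nproc_per_node=4 distributed_pretrain_sketch.py
    import os

    import torch
    import torch.distributed as dist


    def main():
        # torchrun exports LOCAL_RANK / RANK / WORLD_SIZE for each worker.
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # NCCL is the usual backend for GPU training.
        dist.init_process_group(backend="nccl")

        # ... build the model, optimizer, and dataloaders here as in the
        # single-GPU tutorials, using a DistributedSampler for the data ...

        dist.destroy_process_group()


    if __name__ == "__main__":
        main()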
Distributed search is not recommended, as it is still under development; moreover, the search procedure is usually relatively fast. At the tuning stage you will have a regular model without any ENOT specifics, so it is your responsibility to write correct distributed code (most likely by wrapping the found model with the DistributedDataParallel module), as sketched below.
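As a minimal sketch of that wrapping step, assuming a standard PyTorch setup with an already initialized process group (the function and variable names here are hypothetical):

    import os

    import torch
    from torch.nn.parallel import DistributedDataParallel as DDP


    def wrap_for_tuning(found_model: torch.nn.Module) -> DDP:
        # "found_model" (hypothetical name) is the regular torch.nn.Module
        # produced by the search stage; the default process group is assumed
        # to be initialized already via torch.distributed.init_process_group.
        local_rank = int(os.environ["LOCAL_RANK"])
        found_model = found_model.to(local_rank)
        return DDP(found_model, device_ids=[local_rank])

From here on, the wrapped model can be trained with an ordinary distributed training loop.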