.. _ENOT Tutorials:

#########
Tutorials
#########

.. _E1 ref:

`1. Tutorial - automatic quantization`_
=======================================

.. _1. Tutorial - automatic quantization: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/1.%20Tutorial%20-%20automatic%20quantization.ipynb

This notebook shows how to apply the enot-autodl framework for automatic quantization, producing a quantized model for the enot-lite framework.

.. _E2 ref:

`2. Tutorial - pruning`_
========================

.. _2. Tutorial - pruning: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/2.%20Tutorial%20-%20pruning.ipynb

This experimental notebook shows how to apply the enot-autodl framework for automatic network pruning and fine-tuning.

.. _E3 ref:

`3. Tutorial - Ultralytics YOLOv8 quantization`_
================================================

.. _3. Tutorial - Ultralytics YOLOv8 quantization: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/3.%20Tutorial%20-%20Ultralytics%20YOLOv8%20quantization.ipynb

This notebook shows how to apply the enot-autodl framework for automatic quantization of Ultralytics YOLOv8 networks.

.. _E4 ref:

`4. Tutorial - ENOT baseline optimizer`_
========================================

.. _4. Tutorial - ENOT baseline optimizer: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/4.%20Tutorial%20-%20ENOT%20baseline%20optimizer.ipynb

This notebook describes how to use the ENOT optimizer.

.. _E5 ref:

`5. Tutorial - label selector starting points`_
===============================================

.. _5. Tutorial - label selector starting points: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/5.%20Tutorial%20-%20label%20selector%20starting%20points.ipynb

This notebook demonstrates how to add additional starting points to :class:`~enot.pruning.OptimalPruningLabelSelector`.

.. _Distributed (multi-gpu / multi-node) pretrain example reference:

`Distributed (multi-gpu / multi-node) pretrain example`_
========================================================

.. _Distributed (multi-gpu / multi-node) pretrain example: https://github.com/ENOT-AutoDL/ENOT_Tutorials/tree/%ENOT_VERSION%/multigpu_pretrain

.. _.sh script: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/multigpu_pretrain/run_multigpu_pretrain.sh

.. _.py script: https://github.com/ENOT-AutoDL/ENOT_Tutorials/blob/%ENOT_VERSION%/multigpu_pretrain/multigpu_pretrain.py

This folder contains a `.sh script`_ for running multi-GPU pretrain. You can change its configuration to run on a single GPU, on multiple GPUs within a single node, or on multiple compute nodes with multiple GPUs each.

The second file in this folder is a `.py script`_, which is launched by the .sh script. It uses functions from the other tutorials, but is adapted to run in a distributed manner. Treat this script as a reference point for your own distributed pretrain scripts.

Distributed search is not recommended, as it is still under development; moreover, the search procedure is usually relatively fast. At the tuning stage you have a regular model without any ENOT specifics, so it is your responsibility to write correct distributed code (most likely by wrapping the found model with the `DistributedDataParallel module <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`_).
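Since the tuned model is a regular PyTorch model at this stage, wrapping it follows the standard PyTorch recipe rather than anything ENOT-specific. Below is a minimal sketch, assuming the script is launched with ``torchrun`` and that ``model`` stands in for the model found by ENOT (both are illustrative assumptions, not part of the tutorials):

.. code-block:: python

    import os

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel

    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for the regular model obtained after ENOT optimization;
    # replace it with your own model.
    model = torch.nn.Linear(128, 10).cuda(local_rank)

    # DistributedDataParallel averages gradients across processes during
    # backward(), keeping all replicas in sync.
    ddp_model = DistributedDataParallel(model, device_ids=[local_rank])

    # ...run your usual training loop on ddp_model, feeding each process
    # its own shard of data (e.g. via torch.utils.data.DistributedSampler)...

    dist.destroy_process_group()

A script like this is started with ``torchrun --nproc_per_node=<number of GPUs> your_script.py`` on each node; see the `.sh script`_ above for the launch configuration used in the example.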