.. ENOT reference documentation master file.

############################
ENOT Reference Documentation
############################

This is the ENOT reference documentation. If you find the documentation lacking, please contact the ENOT team so that we can complete or clarify it.

**Before proceeding to the examples or the documentation, we recommend that you read this page carefully.**

The ENOT framework supports `neural architecture search (NAS) `_, `quantization `_ and `pruning `_.

**Neural network quantization** makes the weights of a neural network discrete instead of storing them in their floating-point representation (usually the float32 format). By using the int8 data type, quantization decreases model size by a factor of 4. On NVIDIA GPUs and Intel CPUs, quantization may also noticeably decrease inference time.

* :ref:`Tutorial - automatic quantization for enot-lite `
* :ref:`Tutorial - Ultralytics YOLOv5 quantization `

**Neural network pruning** removes redundant (or least important) channels (or features) from a neural network. Our framework implements structured pruning: it removes whole channels or neurons, not individual connections.

* :ref:`Tutorial - pruning `
* :ref:`Tutorial - pruning (manual) `

To improve the metrics of a baseline or to fine-tune a model, it may be helpful to try our optimizer.

* :ref:`Tutorial - ENOT baseline optimizer `

**Neural architecture search** is a procedure that aims to find a suitable architecture within a specific search space, i.e. the set of architectures to consider. Of the methods listed above, this procedure is the most resource-intensive and requires the most programming and experimentation.
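The search-space idea can be sketched in plain Python, independently of the ENOT API. Everything below is invented for illustration: the candidate operations and their latency/quality proxies are hypothetical, and real NAS evaluates candidates by training, not by table lookup.

.. code-block:: python

    from itertools import product

    # Toy search space: for each of 3 layers, pick one candidate operation.
    # The operations and their latency/quality numbers are made up for this sketch.
    CANDIDATES = {
        "conv3x3": {"latency_ms": 3.0, "quality": 0.90},
        "conv5x5": {"latency_ms": 5.0, "quality": 0.93},
        "skip": {"latency_ms": 0.1, "quality": 0.70},
    }

    def search(latency_budget_ms: float):
        """Score every architecture in the space and return the best one
        whose total (proxy) latency fits the budget."""
        best_arch, best_quality = None, float("-inf")
        for arch in product(CANDIDATES, repeat=3):
            latency = sum(CANDIDATES[op]["latency_ms"] for op in arch)
            quality = sum(CANDIDATES[op]["quality"] for op in arch) / len(arch)
            if latency <= latency_budget_ms and quality > best_quality:
                best_arch, best_quality = arch, quality
        return best_arch, best_quality

    best_arch, best_quality = search(latency_budget_ms=10.0)

Even this toy version shows why NAS is resource-intensive: the space grows exponentially with the number of layers, which is why practical search needs the dedicated tooling covered in the tutorials below.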
* :ref:`Tutorial - getting started `
* :ref:`Tutorial - search space autogeneration `
* :ref:`Tutorial - custom model `
* :ref:`Tutorial - latency calculation `
* :ref:`Tutorial - search with the specified latency `
* :ref:`Tutorial - resolution search for image classification `
* :ref:`Tutorial - search space autogeneration (EfficientNet-V2 S) `
* :ref:`Tutorial - metric learning `
* :ref:`Tutorial - evolution search `

Before reading the documentation, we recommend that you look through the :ref:`ENOT Tutorials` to clarify the basic notions and concepts of the framework.

*****************
Table of contents
*****************

Packages are listed in descending order of importance.

.. toctree::
   :maxdepth: 1

   Autogeneration
   Autogeneration transforms
   Models
   Optimize
   Distributed pretrain
   Latency
   Distributed
   Quantization
   Distillation
   Pruning
   Utils
   Visualization
   Experimental
   Converting dataloader items to PyTorch model inputs
   Modules
   Model builders
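As a closing illustration of the int8 quantization arithmetic mentioned at the top of this page, the following framework-independent sketch maps float weights onto 256 integer levels and back. The helper names and the sample weights are hypothetical; they only demonstrate the 4x size reduction (4-byte floats stored as 1-byte integers) and the small rounding error quantization introduces.

.. code-block:: python

    def quantize(weights):
        """Map a list of floats onto int8 levels; return (values, scale, zero_point)."""
        lo, hi = min(weights), max(weights)
        scale = (hi - lo) / 255 or 1.0          # guard against constant input
        zero_point = round(-lo / scale) - 128   # int8 range is [-128, 127]
        return (
            [max(-128, min(127, round(w / scale) + zero_point)) for w in weights],
            scale,
            zero_point,
        )

    def dequantize(q, scale, zero_point):
        """Recover approximate float weights from the int8 representation."""
        return [(v - zero_point) * scale for v in q]

    weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
    q, scale, zp = quantize(weights)
    restored = dequantize(q, scale, zp)

Each restored weight differs from the original by at most one quantization step (``scale``), which is why int8 models usually keep their accuracy close to the float32 baseline.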