Model construction functionality

The enot.models package contains classes and functions for search space and regular model construction. It also defines the basic classes which encapsulate search-related logic.

We highly recommend using search variants autogeneration, as it is the easiest way to create a search space for most projects. For search variants autogeneration, please see Search space auto generation.

You can also build models with search variants yourself, using our pre-defined model builders (see Model builders) and standard blocks with search variants (see ENOT modules).

ENOT search space construction depends on these two main classes: enot.models.SearchSpaceModel and enot.models.SearchVariantsContainer.
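A minimal construction sketch is shown below. It defines a toy model with a single SearchVariantsContainer and wraps it into a SearchSpaceModel; the convolutional branches, tensor shapes, and the TinyModel name are illustrative assumptions, not part of the library.

    import torch
    from torch import nn

    from enot.models import SearchSpaceModel, SearchVariantsContainer


    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.stem = nn.Conv2d(3, 16, kernel_size=3, padding=1)
            # Searchable block: architecture search will choose one of these operations.
            self.block = SearchVariantsContainer(
                [
                    nn.Conv2d(16, 16, kernel_size=3, padding=1),
                    nn.Conv2d(16, 16, kernel_size=5, padding=2),
                    nn.Identity(),
                ],
                default_operation_index=0,  # makes the model callable before wrapping
            )
            self.head = nn.Linear(16, 10)

        def forward(self, x):
            x = self.block(self.stem(x))
            return self.head(x.mean(dim=(2, 3)))


    model = TinyModel()
    search_space = SearchSpaceModel(model)            # move the model into the search space
    output = search_space(torch.randn(1, 3, 32, 32))  # forward pass through the search space

The snippets in the method descriptions below continue from this sketch.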

class SearchSpaceModel(original_model, **kwargs)

Search space model class.

This class takes a regular PyTorch model (with one or more SearchVariantsContainer modules in it) and moves it into the search space. SearchSpaceModel is responsible for all necessary preparations to move a regular model into the search space and to extract the best model from an already pre-trained search space.

Parameters

original_model (Module) – Model with search variants containers, which will be moved to the search space.

__init__(original_model, **kwargs)
Parameters
  • original_model (torch.nn.Module) – Model with search variants containers, which will be moved to the search space.

  • kwargs – Experimental options (should be ignored by the user).

apply_latency_container(latency_container)

Applies latencies from a SearchSpaceLatencyContainer to the search space.

Parameters

latency_container (SearchSpaceLatencyContainer) – Latency container to use for search space latency initialization.

Return type

None

property constant_latency: float

Search space constant latency value.

Returns

The total latency of the search space's constant modules (those outside of any SearchVariantsContainer).

Return type

float

forward(*args, **kwargs)

Executes search space forward pass.

Parameters
  • args – Network input arguments. They are passed directly to the original model.

  • kwargs – Network input keyword arguments. They are passed directly to the original model.

Returns

User network execution result.

Return type

Any

property forward_latency: torch.Tensor

Current forward latency of the currently selected search space architecture.

Returns

Latency of the selected search space sub-network, stored in a tensor with a single float value.

Return type

torch.Tensor
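A common use of forward_latency is adding it as a penalty term to the task loss during the search phase. The sketch below continues the TinyModel example above; the criterion, penalty weight, and data shapes are illustrative assumptions, and latencies must already be initialized (for example via apply_latency_container).

    criterion = nn.CrossEntropyLoss()
    latency_weight = 1e-3  # assumed latency/accuracy trade-off coefficient

    inputs = torch.randn(8, 3, 32, 32)
    targets = torch.randint(0, 10, (8,))

    outputs = search_space(inputs)
    loss = criterion(outputs, targets)
    loss = loss + latency_weight * search_space.forward_latency  # single-value tensor
    loss.backward()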

get_latency_container()

Extracts search space latencies as a SearchSpaceLatencyContainer.

A search space consists of two parts: dynamic (search blocks with their operations) and static (constant). A SearchSpaceLatencyContainer represents the latency information of a search space as the latencies of its static and dynamic parts.

You can save SearchSpaceLatencyContainer instances to your hard drive, load them back, and apply them later to your search space models.

Returns

Container with the necessary search space latency information.

Return type

SearchSpaceLatencyContainer

See also

SearchSpaceLatencyContainer() – latency container documentation.

enot.latency.search_space_latency_statistics() – module with search space latency statistics.
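A hedged round-trip sketch, continuing the example above: extract the latency container, persist it, and apply it back to a search space. The section above states that containers can be saved to and loaded from disk; using torch.save/torch.load and the chosen file name are assumptions about how to do that.

    latency_container = search_space.get_latency_container()
    torch.save(latency_container, 'latency_container.pt')  # persistence mechanism is an assumption

    restored_container = torch.load('latency_container.pt')
    search_space.apply_latency_container(restored_container)

    print(search_space.latency_type)      # name of the applied latency type
    print(search_space.constant_latency)  # latency of modules outside any SearchVariantsContainer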

get_network_by_indexes(selected_op_index)

Extracts a regular model with a fixed architecture.

Parameters

selected_op_index (tuple of int or list of int) – Indices of the selected architecture. The i-th value is the operation index for the i-th SearchVariantsContainer.

Returns

Model with the fixed architecture.

Return type

torch.nn.Module
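For example, the TinyModel search space above contains a single SearchVariantsContainer, so a single index selects the architecture (here 1, the 5x5 convolution branch); the index choice is illustrative.

    fixed_model = search_space.get_network_by_indexes([1])  # one index per SearchVariantsContainer
    fixed_output = fixed_model(torch.randn(1, 3, 32, 32))   # regular nn.Module forward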

get_network_with_best_arch()

Extracts the model with the best architecture.

Returns

Model with the best architecture.

Return type

torch.nn.Module
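For example, after the search phase the best found architecture can be extracted as a regular module and its weights saved; the file name below is an illustrative assumption.

    best_model = search_space.get_network_with_best_arch()
    torch.save(best_model.state_dict(), 'best_model.pth')  # assumed file name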

initialize_output_distribution_optimization(*sample_input_args, **sample_input_kwargs)

Initializes “output distribution” optimization.

Output distribution optimization is highly recommended for the "pretrain" step.

Parameters
  • sample_input_args – Input arguments used in initialization forward pass.

  • sample_input_kwargs – Input keyword arguments used in initialization forward pass.

Raises

RuntimeError – If output distribution optimization is already enabled.

Return type

None
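A short sketch, continuing the example above: enable the optimization before the pretrain phase using a sample input of the expected shape (the shape is an assumption); checking the status property first avoids the RuntimeError described above.

    sample_input = torch.randn(1, 3, 32, 32)  # assumed input shape
    if not search_space.output_distribution_optimization_enabled:
        search_space.initialize_output_distribution_optimization(sample_input)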

property latency_type: Optional[str]

Selected latency type.

Returns

Name of the latency type or None if latency is not initialized.

Return type

str or None

property output_distribution_optimization_enabled: bool

“output distribution” optimization status.

Returns

True if “output distribution” optimization is enabled, and False otherwise.

Return type

bool

property search_variants_containers: List[enot.models.operations.search_variants_container.SearchVariantsContainer]

Finds all SearchVariantsContainer instances in the original model.

Returns

List of all SearchVariantsContainer instances in the original model.

Return type

list of SearchVariantsContainer
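For example, the property can be used as a quick structural check of how many searchable blocks were found in the original model:

    containers = search_space.search_variants_containers
    print(f'found {len(containers)} SearchVariantsContainer(s) in the original model')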

class SearchVariantsContainer(search_variants, default_operation_index=None, **kwargs)

Bases: torch.nn.modules.module.Module

Container class which keeps searchable operations.

If you want to perform Neural Architecture Search, your model should have at least one SearchVariantsContainer.

You can add any modules as search operations to this container. Nested search variants containers are currently not allowed.

Notes

This module keeps the choice options as an nn.ModuleList in the search_variants attribute. After search space initialization, this attribute is replaced with an ENOT search container.

__init__(search_variants, default_operation_index=None, **kwargs)
Parameters
  • search_variants (iterable of torch.nn.Module) – Iterable object containing the search variants of the current graph node.

  • default_operation_index (int or None, optional) – Index of the operation used as the default operation in forward before the SearchVariantsContainer is wrapped with SearchSpaceModel. The default value is None, in which case calling the SearchVariantsContainer before wrapping raises a ValueError (see the sketch below).
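A standalone sketch of the default_operation_index behavior: with an index set, the container is callable on its own before being wrapped with SearchSpaceModel; with the default None, such a call would raise a ValueError. The activation choices and tensor shape are illustrative.

    import torch
    from torch import nn

    from enot.models import SearchVariantsContainer

    block = SearchVariantsContainer(
        [nn.ReLU(), nn.SiLU(), nn.Identity()],
        default_operation_index=0,  # forward dispatches to nn.ReLU before wrapping
    )
    y = block(torch.randn(2, 8))  # works only because a default operation index is set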

forward(*args, **kwargs)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type

Any