ENOT modules¶
This page contains operation definitions that can be useful for search space and regular model construction.
We do not recommend using this functionality directly, as in most cases it can be replaced by automatic generation.
operations¶
This module contains classes for searchable model construction, including blocks from popular architectures such as MobileNetV2 and ResNet.
conv_blocks¶
- class SearchableConv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
Sequence of (conv2d, [activation_function], batch_norm) layers.
- __init__(in_channels, out_channels, kernel_size=3, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels produced by the convolution.
kernel_size (int) – Size of the convolving kernel. Default value: 3
stride (Union[int, Tuple[int, int]]) – Stride of the convolution. Default value: 1
padding (int, optional) – Padding added to both sides of the input. Default value: None (which means 'same' padding)
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu'
use_skip_connection (bool) – Add a skip connection (y += x) if this flag is True and the output produced by the convolution has the same shape as the input. Default value: True
- forward(x)¶
Perform (conv2d, [activation_function if not None], batch_norm) sequentially.
- Parameters
x (torch.Tensor) – Input tensor.
- Returns
output – Tensor with the results of the computation.
- Return type
torch.Tensor
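A minimal usage sketch; the import path follows the Bases entry of SearchableFuseableSkipConv below (enot.models.operations.conv_blocks) and may differ between versions:

import torch

from enot.models.operations.conv_blocks import SearchableConv2d

# conv2d + ReLU + batch norm; with stride=1, padding=None ('same') and equal
# channel counts the skip connection (y += x) stays active.
block = SearchableConv2d(
    in_channels=32,
    out_channels=32,
    kernel_size=3,
    stride=1,
    activation='relu',
)

x = torch.randn(1, 32, 56, 56)
y = block(x)
print(y.shape)  # torch.Size([1, 32, 56, 56])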
- class SearchableFuseableSkipConv(in_channels, out_channels, stride=1, use_skip_connection=True)¶
Bases:
enot.models.operations.conv_blocks.SearchableConv2d
SearchableConv2d without an activation function and with kernel_size=1. Used to match input and output channels of search blocks.
- __init__(in_channels, out_channels, stride=1, use_skip_connection=True)¶
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels produced by the convolution.
stride (Union[int, Tuple[int, int]]) – Stride of the convolution. Default value: 1
use_skip_connection (bool) – Add a skip connection (y += x) if this flag is True and the output produced by the convolution has the same shape as the input. Default value: True
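A short channel-matching sketch (same assumed import path as above):

import torch

from enot.models.operations.conv_blocks import SearchableFuseableSkipConv

# 1x1 convolution without activation, used to match channel counts between search blocks.
match = SearchableFuseableSkipConv(in_channels=32, out_channels=64, stride=1)
y = match(torch.randn(1, 32, 28, 28))
print(y.shape)  # torch.Size([1, 64, 28, 28])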
mobilenet_blocks¶
- class SearchableMobileInvertedBottleneck(in_channels, out_channels, dw_channels=None, expand_ratio=None, kernel_size=3, stride=1, padding=None, affine=True, track=True, activation='relu6', use_skip_connection=True)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
Searchable block from MobileNetV2 (original paper: https://arxiv.org/pdf/1801.04381.pdf).
- __init__(in_channels, out_channels, dw_channels=None, expand_ratio=None, kernel_size=3, stride=1, padding=None, affine=True, track=True, activation='relu6', use_skip_connection=True)¶
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels of output tensor.
dw_channels (int, optional) – Number of channels in the depthwise convolution. Only one of dw_channels and expand_ratio can be set. If both are None, there is no expand operation, and only the depthwise and squeeze blocks are computed.
expand_ratio (float, optional) – Used for computing dw_channels: dw_channels = round(in_channels * expand_ratio).
kernel_size (int) – Size of the convolving kernel in the depthwise convolution. Default value: 3
stride (int) – Stride of the depthwise convolution. Default value: 1
padding (int, optional) – Padding added to both sides of the input. Default value: None (which means 'same' padding)
affine (bool) – Flag for using affine parameters in all batch normalization layers of the block.
track (bool) – Flag for using track_running_stats in all batch normalization layers of the block.
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu6'
use_skip_connection (bool) – Add a skip connection (y += x) if this flag is True and the output produced by the convolution has the same shape as the input. Default value: True
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
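A usage sketch, assuming the block is importable from enot.models.operations.mobilenet_blocks (the path mirrors the section name and may differ):

import torch

from enot.models.operations.mobilenet_blocks import SearchableMobileInvertedBottleneck

# MobileNetV2-style block: expand by 6x, 3x3 depthwise convolution, squeeze back.
mib = SearchableMobileInvertedBottleneck(
    in_channels=32,
    out_channels=32,
    expand_ratio=6,      # dw_channels = round(32 * 6) = 192; do not pass dw_channels as well
    kernel_size=3,
    stride=1,
    activation='relu6',
)
y = mib(torch.randn(1, 32, 56, 56))
print(y.shape)  # torch.Size([1, 32, 56, 56]); shapes match, so the skip connection applies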
resnet_blocks¶
- class SearchableResNetD(in_channels, out_channels, hidden_channels=None, expand_ratio=None, squeeze_kernel_size=3, expand_kernel_size=3, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
D type of ResNet block (original paper: https://arxiv.org/pdf/1603.05027.pdf, fig. 4(d)).
- __init__(in_channels, out_channels, hidden_channels=None, expand_ratio=None, squeeze_kernel_size=3, expand_kernel_size=3, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels of output tensor.
hidden_channels (int, optional) – Number of channels in the convolution inside the ResNet block. Only one of hidden_channels and expand_ratio can be set. If both are None, hidden_channels = in_channels // 2.
expand_ratio (float, optional) – Used for computing hidden_channels: hidden_channels = round(in_channels * expand_ratio).
squeeze_kernel_size (int) – Kernel size for squeeze convolution from in_channels to hidden_channels.
expand_kernel_size (int) – Kernel size for expand convolution from hidden_channels to out_channels.
stride (int) – Stride of the convolution. Default value: 1
padding (int, optional) – Padding added to both sides of the input. Default value: None (which means 'same' padding)
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu'
use_skip_connection (bool) – Add skip connection (y+=x) if this flag is True, in_channels==out_channels and stride==1. Default value: True
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
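A usage sketch, assuming the block is importable from enot.models.operations.resnet_blocks (the path mirrors the section name and may differ):

import torch

from enot.models.operations.resnet_blocks import SearchableResNetD

# Squeeze from in_channels to hidden_channels, then expand back to out_channels.
block = SearchableResNetD(
    in_channels=64,
    out_channels=64,
    hidden_channels=32,     # alternatively pass expand_ratio instead (not both)
    squeeze_kernel_size=3,
    expand_kernel_size=3,
    stride=1,
)
y = block(torch.randn(1, 64, 28, 28))
print(y.shape)  # torch.Size([1, 64, 28, 28])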
- class SearchableResNetE(in_channels, out_channels, hidden_channels=None, expand_ratio=None, squeeze_kernel_size=3, expand_kernel_size=3, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
E type of ResNet block (original paper: https://arxiv.org/pdf/1603.05027.pdf, fig. 4(e)).
- __init__(in_channels, out_channels, hidden_channels=None, expand_ratio=None, squeeze_kernel_size=3, expand_kernel_size=3, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels of output tensor.
hidden_channels (int, optional) – Number of channels in the convolution inside the ResNet block. Only one of hidden_channels and expand_ratio can be set. If both are None, hidden_channels = in_channels // 2.
expand_ratio (float, optional) – Used for computing hidden_channels: hidden_channels = round(in_channels * expand_ratio).
squeeze_kernel_size (int) – Kernel size for squeeze convolution from in_channels to hidden_channels.
expand_kernel_size (int) – Kernel size for expand convolution from hidden_channels to out_channels.
stride (int) – Stride of the convolution. Default value: 1
padding (int, optional) – Padding added to both sides of the input. Default value: None (which means 'same' padding)
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu'
use_skip_connection (bool) – Add skip connection (y+=x) if this flag is True and output produced by convolution has the same shape as input. Default value: True
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
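The signature matches SearchableResNetD; a short sketch of the expand_ratio form (same assumed import path):

import torch

from enot.models.operations.resnet_blocks import SearchableResNetE

# hidden_channels is derived as round(in_channels * expand_ratio) = round(64 * 0.5) = 32.
block = SearchableResNetE(in_channels=64, out_channels=64, expand_ratio=0.5)
y = block(torch.randn(1, 64, 28, 28))
print(y.shape)  # torch.Size([1, 64, 28, 28])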
resnext_blocks¶
- class SearchableResNext(in_channels, out_channels, hidden_channels=None, expand_ratio=None, kernel_size=3, cardinality=8, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
ResNeXt block with group convolution (original paper: https://arxiv.org/pdf/1611.05431.pdf, fig. 3(c)).
- __init__(in_channels, out_channels, hidden_channels=None, expand_ratio=None, kernel_size=3, cardinality=8, stride=1, padding=None, activation='relu', use_skip_connection=True)¶
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels of output tensor.
hidden_channels (int, optional) – Number of channels in the convolution inside the block. Only one of hidden_channels and expand_ratio can be set. If both are None, hidden_channels = in_channels // 2.
expand_ratio (float, optional) – Used for computing hidden_channels: hidden_channels = round(in_channels * expand_ratio).
kernel_size (int) – Kernel size for middle group convolution in the block.
cardinality (int) – Number of groups for middle group convolution.
stride (int) – Stride of the convolution. Default value: 1
padding (int, optional) – Padding added to both sides of the input. Default value: None (which means 'same' padding)
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu'
use_skip_connection (bool) – Add a skip connection (y += x) if this flag is True and the output produced by the convolution has the same shape as the input. Default value: True
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
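A usage sketch, assuming the block is importable from enot.models.operations.resnext_blocks (the path mirrors the section name and may differ):

import torch

from enot.models.operations.resnext_blocks import SearchableResNext

# The middle convolution is a group convolution with `cardinality` groups, so
# hidden_channels should be divisible by cardinality.
block = SearchableResNext(
    in_channels=64,
    out_channels=64,
    hidden_channels=32,
    kernel_size=3,
    cardinality=8,
)
y = block(torch.randn(1, 64, 28, 28))
print(y.shape)  # torch.Size([1, 64, 28, 28])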
mobilenet¶
This package contains classes for MobileNetV2-like model construction.
heads¶
- class MobileNetBaseHead(bottleneck_channels, *, activation='relu6', last_channels=1280, dropout_rate=0.0, num_classes=1000, width_multiplier=1.0)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
- __init__(bottleneck_channels, *, activation='relu6', last_channels=1280, dropout_rate=0.0, num_classes=1000, width_multiplier=1.0)¶
Builds the last layers of the network.
- Parameters
bottleneck_channels (int) – Number of input channels for the convolution before the FC layer.
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu6'
last_channels (int) – Number of output channels for the convolution before the FC layer. Default value: 1280
dropout_rate (float) – Default value: 0.0
num_classes (int) – Number of predicted classes. Default value: 1000
width_multiplier (float) – Adjusts the number of channels in each layer (conv2d and fc) by this amount.
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
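A usage sketch, assuming the head is importable from enot.models.mobilenet.heads (the exact path is not shown on this page); the printed shape is illustrative:

import torch

from enot.models.mobilenet.heads import MobileNetBaseHead

# Convolution from bottleneck_channels to last_channels before the FC layer,
# then an FC classifier with num_classes outputs.
head = MobileNetBaseHead(
    bottleneck_channels=320,
    last_channels=1280,
    dropout_rate=0.2,
    num_classes=1000,
)
logits = head(torch.randn(1, 320, 7, 7))
print(logits.shape)  # expected: torch.Size([1, 1000])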
- class ArcfaceHead(bottleneck_channels, *, radius=64, angle_margin=0.5, cos_margin=0.0, angle_scale=1.0, last_channels=1280, num_classes=1000)¶
Bases:
torch.nn.modules.linear.Linear
- __init__(bottleneck_channels, *, radius=64, angle_margin=0.5, cos_margin=0.0, angle_scale=1.0, last_channels=1280, num_classes=1000)¶
ArcFace layer with parameters from the original paper (https://arxiv.org/pdf/1801.07698.pdf).
- Parameters
bottleneck_channels (int) – Number of input channels for convolution before FC layer.
radius (float) – Radius of sphere for embeddings. Default value: 64.0
angle_margin (float) – Value for m2 parameter from original paper. Default value: 0.5
cos_margin (float) – Value for m3 parameter from original paper. Default value: 0.0
angle_scale (float) – Value for m1 parameter from original paper. Default value: 1.0
last_channels (int) – Number of output channels for convolution before FC layer. Default value: 1280
num_classes (int) – Number of predicted classes. Default value: 1000.
- forward(inputs, labels=None)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
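A construction-only sketch, assuming the same enot.models.mobilenet.heads import path; the input layout expected by forward is not documented on this page, so no forward call is shown:

from enot.models.mobilenet.heads import ArcfaceHead

# Margins follow the defaults from the original paper: m1 (angle_scale) = 1.0,
# m2 (angle_margin) = 0.5, m3 (cos_margin) = 0.0.
head = ArcfaceHead(
    bottleneck_channels=320,
    radius=64,
    angle_margin=0.5,
    num_classes=1000,
)

# forward(inputs, labels=None): pass the target labels during training so the angular
# margin can be applied; at inference time call head(inputs) with labels left as None.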
stems¶
- class MobileNetBaseStem(*, activation='relu6', in_channels=3, strides=(2, 1), output_channels=(32, 16), kernel_sizes=(3, 3), width_multiplier=1.0, min_channels=8)¶
Bases: torch.nn.modules.module.Module, enot.latency.latency_mixin.LatencyMixin
- __init__(*, activation='relu6', in_channels=3, strides=(2, 1), output_channels=(32, 16), kernel_sizes=(3, 3), width_multiplier=1.0, min_channels=8)¶
Builds the first layers of the network.
- Parameters
activation (str, optional) – Name of the activation function to use. This function must be registered in GLOBAL_ACTIVATION_FUNCTION_REGISTRY from enot.operations.operations_registry. If activation is None, no activation function is applied. Default value: 'relu6'
in_channels (int) – Number of channels in the input tensor. Default value: 3
strides (tuple) – Strides for the two convolution layers in the stem. Default value: (2, 1)
output_channels (tuple) – Number of output channels for the two convolution layers in the stem. Default value: (32, 16)
kernel_sizes (tuple) – Kernel sizes for the two convolution layers in the stem. Default value: (3, 3)
width_multiplier (float) – Adjusts the number of channels in each layer by this amount.
min_channels (int) – min_value parameter for the _make_divisible function from the original TensorFlow repository; see https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py.
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
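A usage sketch, assuming the stem is importable from enot.models.mobilenet.stems (the exact path is not shown on this page); the printed shape assumes 'same'-style padding in the stem convolutions:

import torch

from enot.models.mobilenet.stems import MobileNetBaseStem

# Two convolution layers: stride 2 then stride 1, producing 32 and then 16 channels.
stem = MobileNetBaseStem(
    in_channels=3,
    strides=(2, 1),
    output_channels=(32, 16),
    kernel_sizes=(3, 3),
)
y = stem(torch.randn(1, 3, 224, 224))
print(y.shape)  # expected: torch.Size([1, 16, 112, 112])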
yolo¶
This package contains classes for YOLOv5-like model construction.
yolo blocks¶
- class ConvBNActivation(conv, bn, act)¶
Bases:
torch.nn.modules.module.Module
Default Conv+BN+Act block, but with module names as in YoloV5.
- Parameters
conv (Conv2d) –
bn (BatchNorm2d) –
act (Module) –
- __init__(conv, bn, act)¶
- Parameters
conv (torch.nn.Conv2d) – Convolution module.
bn (torch.nn.BatchNorm2d) – Batchnorm module.
act (torch.nn.Module) – Activation function.
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
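A usage sketch with plain PyTorch layers; the import path is an assumption, since this page does not show where the yolo blocks live:

import torch
from torch import nn

from enot.models.yolo import ConvBNActivation  # assumed path

# Wrap ordinary torch modules so that the attribute names (conv, bn, act) match YoloV5.
cba = ConvBNActivation(
    conv=nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1, bias=False),
    bn=nn.BatchNorm2d(32),
    act=nn.SiLU(),
)
y = cba(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 32, 32, 32])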
- class Bottleneck(conv1, conv2, skip)¶
Bases:
torch.nn.modules.module.Module
Default Bottleneck block, but with module names as in YoloV5.
- Parameters
conv1 (ConvBNActivation) –
conv2 (ConvBNActivation) –
skip (bool) –
- __init__(conv1, conv2, skip)¶
- Parameters
conv1 (ConvBNActivation) – First convolution in bottleneck.
conv2 (ConvBNActivation) – Second convolution in bottleneck.
skip (bool) – Add skip connection or not.
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class BottleNecksSequence(bottlenecks)¶
Bases:
torch.nn.modules.module.Module
Stack of Bottleneck blocks. We need this as a leaf module to find C3 and Conv+C3 blocks. It also stores expansion and depth for pruning.
- Parameters
bottlenecks (Union[Sequential, List[Bottleneck]]) –
- __init__(bottlenecks)¶
- Parameters
bottlenecks (torch.nn.Sequential or list of Bottleneck) – List or Sequential module with stacked Bottleneck blocks.
- property bottlenecks_count: int¶
Number of bottlenecks following each other.
- Return type
int
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class C3(cv1, cv2, cv3, bottlenecks)¶
Bases:
torch.nn.modules.module.Module
C3 block from Yolov5.
- Parameters
cv1 (ConvBNActivation) –
cv2 (ConvBNActivation) –
cv3 (ConvBNActivation) –
bottlenecks (BottleNecksSequence) –
- __init__(cv1, cv2, cv3, bottlenecks)¶
- Parameters
cv1 (ConvBNActivation) – The first input module with conv2d, batch norm2d and activation. Output goes to bottlenecks.
cv2 (ConvBNActivation) – The second input module with conv2d, batch norm2d and activation.
cv3 (ConvBNActivation) – conv2d, batch norm2d and activation module. Its input is the concatenation of the outputs of bottlenecks and cv2.
bottlenecks (BottleNecksSequence) – Sequence of bottlenecks.
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
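A sketch assembling a small C3 block from the classes above, under the same assumed yolo import path; the cba helper is hypothetical and exists only to keep the example short:

import torch
from torch import nn

from enot.models.yolo import Bottleneck, BottleNecksSequence, C3, ConvBNActivation  # assumed path


def cba(c_in, c_out, k=1, s=1):
    # Hypothetical helper (not part of enot) building a ConvBNActivation block.
    return ConvBNActivation(
        conv=nn.Conv2d(c_in, c_out, kernel_size=k, stride=s, padding=k // 2, bias=False),
        bn=nn.BatchNorm2d(c_out),
        act=nn.SiLU(),
    )


# Two residual bottlenecks (1x1 then 3x3) stacked into a BottleNecksSequence.
bottlenecks = BottleNecksSequence(
    [Bottleneck(conv1=cba(32, 32, k=1), conv2=cba(32, 32, k=3), skip=True) for _ in range(2)]
)

c3 = C3(
    cv1=cba(64, 32),   # output goes to the bottlenecks
    cv2=cba(64, 32),   # parallel 1x1 branch
    cv3=cba(64, 64),   # consumes the concatenated outputs of bottlenecks and cv2 (32 + 32 channels)
    bottlenecks=bottlenecks,
)
y = c3(torch.randn(1, 64, 40, 40))
print(y.shape)  # torch.Size([1, 64, 40, 40])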