Welcome back to the Tiny Giant series, where I share what I've learned about MobileNet architectures. In the previous two articles I covered MobileNetV1 and MobileNetV2; check out references [1] and [2] if you're interested in reading them. In today's article I want to continue with the next version of the model: MobileNetV3.
MobileNetV3 was first proposed in a paper titled "Searching for MobileNetV3," written by Howard et al. in 2019 [3]. Just a quick review: the main idea of the first MobileNet version was replacing full convolutions with depthwise separable convolutions, which reduced the number of parameters by nearly 90% compared to its standard CNN counterpart. In the second MobileNet version, the authors introduced the so-called inverted residual and linear bottleneck mechanisms, which they integrated into the original MobileNetV1 building blocks. Now, in the third MobileNet version, the authors tried to push the performance of the network even further by incorporating Squeeze-and-Excitation (SE) modules and hard activation functions into the building blocks. Additionally, the overall structure of MobileNetV3 itself is partially designed using NAS (Neural Architecture Search), which essentially works like parameter tuning at the architectural level, maximizing accuracy while minimizing latency. However, note that in this article I won't go into how NAS works in detail. Instead, I'll focus on the final design of MobileNetV3 proposed in the paper.
The Detailed MobileNetV3 Architecture
The authors propose two variants of this model, which they refer to as MobileNetV3-Large and MobileNetV3-Small. You can see the details of the two architectures in Figure 1 below.

Taking a closer look at the architecture, we can see that the two networks mainly consist of bneck (bottleneck) blocks. The configuration of the blocks is described in the columns exp size, #out, SE, NL, and s. The internal structure of these blocks as well as the corresponding parameter configurations will be discussed further in the following subsection.
The Bottleneck
MobileNetV3 uses a modified version of the building blocks used in MobileNetV2. As I mentioned earlier, what makes the two different is the presence of the SE module and the use of hard activation functions. You can see the two building blocks in Figure 2, with MobileNetV2 at the top and MobileNetV3 at the bottom.

Notice that the first two convolution layers in both building blocks are basically the same: a pointwise convolution followed by a depthwise convolution. The former is used to expand the number of channels to exp size (expansion size), while the latter is responsible for processing each channel of the resulting tensor independently. The only difference between the two building blocks here lies in the activation functions used, which the authors refer to as NL (Nonlinearity). In MobileNetV2, the activations placed after these two convolution layers are fixed to ReLU6, whereas in MobileNetV3 they can be either ReLU6 or hard-swish. The RE and HS you saw earlier in Figure 1 refer to these two types of activation.
Next, in MobileNetV3 we place the SE module after the depthwise convolution layer. If you're not yet familiar with the SE module, it's essentially a type of building block that can be attached to any CNN-based model. This component is useful for assigning weights to different channels, allowing the model to pay more attention to the important ones. I also have a separate article discussing the SE module in detail; click the link at reference [4] if you want to read it. It's important to note that the SE module used here is slightly different, in that the last FC layer uses hard-sigmoid rather than the standard sigmoid activation. (I'll talk more about the hard activations used in MobileNetV3 in the next subsection.) In fact, the SE module itself is not always included in every bottleneck block. If you go back to Figure 1, you'll notice that some of the bottleneck blocks have a checkmark in the SE column, indicating that the SE module is applied. Other blocks don't include the module, probably because the NAS process did not find any performance improvement from using SE modules in those blocks.
Once the SE module has been attached, we need to place another pointwise convolution, which is responsible for adjusting the number of output channels according to the #out column in Figure 1. This pointwise convolution does not include any activation function, in line with the linear bottleneck design originally introduced in MobileNetV2. I actually need to clarify something here. If you take a look at the MobileNetV2 building block in Figure 2 above, you'll notice that the last pointwise convolution has a ReLU6 placed after it. I believe this is a mistake made by the authors, because according to the MobileNetV2 paper [6], the ReLU6 should be in the first pointwise convolution at the beginning of the block instead.
Last but not least, notice that there is also a residual connection that skips across all layers in the bottleneck block. This connection is only present when the output tensor has exactly the same dimensions as the input, i.e., when the number of input and output channels is the same and the s (stride) is 1.
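To make this condition concrete, here is a tiny illustration of my own (not from the paper) of when the skip connection is applied; the same check appears later in the Bottleneck class.

def use_residual(in_channels, out_channels, stride):
    # the residual is used only when the spatial size is preserved (stride 1)
    # and the channel counts match
    return stride == 1 and in_channels == out_channels

print(use_residual(16, 16, 1))  # True  -> output can be added to the input
print(use_residual(16, 24, 2))  # False -> dimensions differ, no skip connection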
Hard-Sigmoid and Hard-Swish
The activation functions used in MobileNetV3 are not commonly found in other deep learning models. To start with, let's look at the hard-sigmoid activation, which is the one used in the SE module as a replacement for the regular sigmoid. Take a look at Figure 3 below to see the difference between the two.

Here you might be wondering: why don't we just use the regular sigmoid? Why do we really need a piecewise linear function that looks less smooth instead? To answer this question, we first need to understand the mathematical definition of the sigmoid function, which I show in Figure 4 below.

We can clearly see in the figure above that the sigmoid function involves an exponential term in the denominator. This term makes the function computationally expensive, which in turn makes the activation less suitable for low-power devices. Not only that, the output of the sigmoid function is a high-precision floating-point value, which is also not ideal for low-power devices due to their limited support for handling such values.
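For reference, the definition shown in Figure 4 is the standard logistic sigmoid, σ(x) = 1 / (1 + e^(−x)), so every single activation requires evaluating an exponential.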
If you look at Figure 3 again, you might think that the hard-sigmoid function is directly derived from the original sigmoid. That's actually not quite right. Despite having a similar shape, hard-sigmoid is built using ReLU6 instead, which can formally be expressed as in Figure 5 below. Here you can see that the equation is much simpler since it only consists of basic arithmetic operations and clipping, allowing it to be computed much faster.
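As a quick sanity check of my own (not from the paper), we can express hard-sigmoid with ReLU6 and compare it against PyTorch's built-in nn.Hardsigmoid, which uses the same definition:

import torch
import torch.nn as nn

def hard_sigmoid(x):
    # hard-sigmoid built from ReLU6: clip(x + 3, 0, 6) / 6
    return nn.functional.relu6(x + 3) / 6

x = torch.linspace(-6, 6, steps=25)
print(torch.allclose(hard_sigmoid(x), nn.Hardsigmoid()(x)))  # True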

The next activation function we're going to use in MobileNetV3 is the so-called hard-swish, which is applied after each of the first two convolution layers in the bottleneck block. Just like sigmoid and hard-sigmoid, the graph of the hard-swish function looks similar to the original one.

The original swish function itself can mathematically be expressed as in the equation in Figure 7. Again, since the equation involves sigmoid, it will definitely slow down computation. Hence, to speed things up, we can simply replace the sigmoid with the hard-sigmoid we just discussed. By doing so, we obtain the hard version of the swish activation function as shown in Figure 8.
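And similarly for hard-swish, here is a quick check of my own comparing the ReLU6-based formula with PyTorch's built-in nn.Hardswish:

import torch
import torch.nn as nn

def hard_swish(x):
    # hard-swish = x * hard-sigmoid(x) = x * ReLU6(x + 3) / 6
    return x * nn.functional.relu6(x + 3) / 6

x = torch.linspace(-6, 6, steps=25)
print(torch.allclose(hard_swish(x), nn.Hardswish()(x)))  # True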


Some Experimental Results
Before we get into the experimental results, you need to know that there are two parameters in MobileNetV3 that allow us to adjust the model size according to our needs. These two parameters are the width multiplier and the input resolution, which in MobileNetV1 are known as α and ρ, respectively. Although we can technically set these two values freely, the authors already provide several numbers we can use. For the width multiplier, we can set it to 0.35, 0.5, 0.75, 1.0, or 1.25, where using a value smaller than 1.0 causes the model to have fewer channels than those listed in Figure 1, effectively reducing the model size. For instance, if we set this parameter to 0.35, then the model will only have 35% of its default width (i.e., channel count) throughout the entire network.
Meanwhile, the input resolution can be 96, 128, 160, 192, 224, or 256, which, as the name suggests, directly controls the spatial dimension of the input image. It's worth noting that although using a small input size reduces the number of operations during inference, it doesn't affect the model size at all. So, if your objective is to reduce model size, you need to adjust the width multiplier, whereas if your goal is to lower computational cost, you can play around with both the width multiplier and the input resolution. The short sketch below illustrates this point.
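This is a tiny sketch of my own (not from the paper) using a single convolution layer: its parameter count is independent of the input resolution, while a rough multiply-add estimate scales with the output feature map size.

import torch
import torch.nn as nn

conv = nn.Conv2d(32, 64, kernel_size=3, padding=1)
print(sum(p.numel() for p in conv.parameters()))  # same number regardless of input size

for res in (96, 224):
    out = conv(torch.randn(1, 32, res, res))
    madds = 3 * 3 * 32 * 64 * out.shape[2] * out.shape[3]  # k*k*C_in*C_out*H_out*W_out
    print(res, madds)  # the estimate grows with the resolution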
Now, looking at the experimental results in Figure 9, we can clearly see that MobileNetV3 outperforms MobileNetV2 in terms of accuracy at similar latency. The MobileNetV3-Small with the default configuration (i.e., width multiplier 1.0 and input resolution 224×224) indeed has lower accuracy than the largest MobileNetV2 variant. But if you take the default MobileNetV3-Large into account, it wins easily over the largest MobileNetV2 both in terms of accuracy and latency. Furthermore, we can still push the accuracy of MobileNetV3 even further by enlarging the model size by 1.25 times (the blue data point at the top right), but keep in mind that doing so significantly sacrifices computational speed.

The authors also conducted a comparative analysis with other lightweight models, the results of which are shown in the table in Figure 10.

The rows of the table above are divided into two groups, where the upper group compares models with complexity similar to MobileNetV3-Large, while the lower group consists of models comparable to MobileNetV3-Small. Here you can see that both V3-Large and V3-Small achieved the highest ImageNet accuracy within their respective groups. It's worth noting that although MnasNet-A1 and V3-Large have exactly the same accuracy, the number of operations (MAdds) of the former is higher, which results in higher latency, as seen in columns P-1, P-2, and P-3 (measured in milliseconds). In case you're wondering, the labels P-1, P-2, and P-3 correspond to different Google Pixel phones used to measure actual computational speed. Next, it's important to acknowledge that both MobileNetV3 variants have the highest parameter count (the params column) compared to the other models in their group. However, this doesn't seem to be a major concern for the authors, as the primary goal of MobileNetV3 is to minimize computational latency, even if that means having a slightly larger model.
The next experiment the authors conducted concerned the effects of value quantization, i.e., a technique that reduces the precision of floating-point numbers to speed up computation. While the networks already incorporate hard activation functions, which are friendly to quantized values, this experiment takes quantization a step further by applying it to the entire network to see how much the speed improves. The experimental results with quantization applied are shown in Figure 11 below.

If you compare the results of V2 and V3 in Figure 11 with the corresponding models in Figure 10, you'll notice that there is a decrease in latency, proving that the use of low-precision numbers does improve computational speed. However, keep in mind that this also leads to a decrease in accuracy.
MobileNetV3 Implementation
I think the explanations above cover pretty much everything you need to know about the theory behind MobileNetV3. Now, in this section, I'm going to bring you to the most fun part of this article: implementing MobileNetV3 from scratch.
As always, the very first thing we do is import the required modules.
# Codeblock 1
import torch
import torch.nn as nn
Afterwards, we need to initialize the configurable parameters of the model, namely WIDTH_MULTIPLIER, INPUT_RESOLUTION, and NUM_CLASSES, as shown in Codeblock 2 below. I believe the first two variables are straightforward, as I've explained them thoroughly in the previous section. Here I decided to assign default values for the two; you can definitely change these numbers based on the values provided in the paper if you want to adjust the complexity of the model. Next, the third variable corresponds to the number of output neurons in the classification head. Here I set it to 1000 because the model was originally trained on the ImageNet-1K dataset. It's worth noting that the MobileNetV3 architecture is not limited to classification tasks only; it can also be used for object detection and semantic segmentation, as demonstrated in the paper. However, since the focus of this article is to implement the backbone, let's just use the standard classification head for the output layer to keep things simple.
# Codeblock 2
WIDTH_MULTIPLIER = 1.0
INPUT_RESOLUTION = 224
NUM_CLASSES = 1000
What we’re going to do subsequent is to wrap the repeating parts into separate lessons. By doing this, we’ll later be capable of merely instantiate them every time wanted as an alternative of rewriting the identical code time and again. Now let’s start with the Squeeze-and-Excitation module first.
The Squeeze-and-Excitation Module
The implementation of this component is shown in Codeblock 3. I'm not going to go very deep into the code since it's almost exactly the same as the one in my previous article [4]. Generally speaking, this code works by representing each input channel with a single number (line #(1)), processing the resulting vector with a sequence of linear layers (#(2–3)), and then converting it into a weight vector (#(4)). Keep in mind that in the original SE module we typically use the standard sigmoid activation to obtain the weight vector, but here in MobileNetV3 we use hard-sigmoid instead. This weight vector is then multiplied with the original tensor, which allows us to reduce the influence of channels that don't contribute to the final output (#(5)).
# Codeblock 3
class SEModule(nn.Module):
    def __init__(self, num_channels, r):
        super().__init__()
        self.global_pooling = nn.AdaptiveAvgPool2d(output_size=(1,1))
        self.fc0 = nn.Linear(in_features=num_channels,
                             out_features=num_channels//r,
                             bias=False)
        self.relu6 = nn.ReLU6()
        self.fc1 = nn.Linear(in_features=num_channels//r,
                             out_features=num_channels,
                             bias=False)
        self.hardsigmoid = nn.Hardsigmoid()

    def forward(self, x):
        print(f'original\t\t: {x.size()}')
        squeezed = self.global_pooling(x)    #(1)
        print(f'after avgpool\t\t: {squeezed.size()}')
        squeezed = torch.flatten(squeezed, 1)
        print(f'after flatten\t\t: {squeezed.size()}')
        excited = self.fc0(squeezed)         #(2)
        print(f'after fc0\t\t: {excited.size()}')
        excited = self.relu6(excited)
        print(f'after relu6\t\t: {excited.size()}')
        excited = self.fc1(excited)          #(3)
        print(f'after fc1\t\t: {excited.size()}')
        excited = self.hardsigmoid(excited)  #(4)
        print(f'after hardsigmoid\t: {excited.size()}')
        excited = excited[:, :, None, None]
        print(f'after reshape\t\t: {excited.size()}')
        scaled = x * excited                 #(5)
        print(f'after scaling\t\t: {scaled.size()}')
        return scaled
Now let’s verify if the above code works correctly by creating an SEModule occasion and passing a dummy tensor by means of it. See Codeblock 4 beneath for the main points. Right here I configure the SE module to just accept a 512-channel picture for the enter. In the meantime, the r (discount ratio) parameter is ready to 4, which means that the vector size between the 2 FC layers goes to be 4 instances smaller than that of its enter and output. It may be price understanding that this quantity is completely different from the one talked about within the authentic Squeeze-and-Excitation paper [7], the place r = 16 is claimed to be the candy spot for balancing accuracy and complexity.
# Codeblock 4
semodule = SEModule(num_channels=512, r=4)
x = torch.randn(1, 512, 28, 28)
out = semodule(x)
If the code above produces the following output, it confirms that our SE module implementation is correct, since the input tensor was successfully passed through all layers of the SE module.
# Codeblock 4 Output
original          : torch.Size([1, 512, 28, 28])
after avgpool     : torch.Size([1, 512, 1, 1])
after flatten     : torch.Size([1, 512])
after fc0         : torch.Size([1, 128])
after relu6       : torch.Size([1, 128])
after fc1         : torch.Size([1, 512])
after hardsigmoid : torch.Size([1, 512])
after reshape     : torch.Size([1, 512, 1, 1])
after scaling     : torch.Size([1, 512, 28, 28])
The Convolution Block
The next component I'm going to create is the one wrapped in the ConvBlock class, whose detailed implementation can be seen in Codeblock 5. This is actually just a standard convolution layer, but we don't simply use nn.Conv2d because in CNNs we typically use the Conv-BN-ReLU structure. Hence, it's convenient to group these three layers together inside a single class. However, instead of strictly following this standard structure, we're going to customize it to match the requirements of the MobileNetV3 architecture.
# Codeblock 5
class ConvBlock(nn.Module):
    def __init__(self,
                 in_channels,             #(1)
                 out_channels,            #(2)
                 kernel_size,             #(3)
                 stride,                  #(4)
                 padding,                 #(5)
                 groups=1,                #(6)
                 batchnorm=True,          #(7)
                 activation=nn.ReLU6()):  #(8)
        super().__init__()

        bias = False if batchnorm else True  #(9)

        self.conv = nn.Conv2d(in_channels=in_channels,
                              out_channels=out_channels,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              groups=groups,
                              bias=bias)
        self.bn = nn.BatchNorm2d(num_features=out_channels) if batchnorm else nn.Identity()  #(10)
        self.activation = activation

    def forward(self, x):  #(11)
        print(f'original\t\t: {x.size()}')
        x = self.conv(x)
        print(f'after conv\t\t: {x.size()}')
        x = self.bn(x)
        print(f'after bn\t\t: {x.size()}')
        x = self.activation(x)
        print(f'after activation\t: {x.size()}')
        return x
There are several parameters you need to pass when instantiating a ConvBlock. The first five (#(1–5)) are quite straightforward, as they are basically just the standard parameters of the nn.Conv2d layer. I make the groups parameter configurable (#(6)) so that this class can be used flexibly, not only for standard convolutions but also for depthwise convolutions. Next, at line #(7) I create a parameter called batchnorm, which determines whether or not a ConvBlock instance includes a batch normalization layer. This is done because there are some cases where we don't use this layer, i.e., in the last two convolutions labeled NBN (which stands for no batch normalization) in Figure 1. The last parameter here is the activation function (#(8)). Later on, there will be cases that require us to set it to nn.ReLU6(), nn.Hardswish(), or nn.Identity() (no activation).
Inside the __init__() method, two things happen depending on the argument we pass for the batchnorm parameter. When we set it to True, first the bias term of the convolution layer is deactivated (#(9)), and second, bn becomes an nn.BatchNorm2d() layer (#(10)). The bias term is not used in this case because applying batch normalization after a convolution cancels it out, so there is basically no point in using a bias in the first place. Meanwhile, if we set the batchnorm parameter to False, the bias variable will be True since in this situation it will not be canceled out. The bn itself will just be an identity layer, meaning it won't do anything to the tensor.
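If you want to verify this claim yourself, here is a small standalone check of my own (not part of the article's code): adding a constant bias before a BatchNorm layer in training mode changes nothing, because the mean subtraction removes it.

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(8)
bn.train()  # use batch statistics
x = torch.randn(4, 8, 16, 16)

# a constant shift is absorbed by the batch mean, so both outputs match
print(torch.allclose(bn(x), bn(x + 5.0), atol=1e-5))  # True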
Regarding the forward() method (#(11)), I don't think I need to explain anything, because all we do here is pass a tensor through the layers sequentially. Now let's move on to Codeblock 6 to see whether our ConvBlock implementation is correct. Here I create two ConvBlock instances, where the first one uses the default batchnorm and activation, while the second one omits the batch normalization layer (#(1)) and uses the hard-swish activation function (#(2)). Instead of passing a tensor through them, here I want you to see from the resulting output that our code correctly implements both structures according to the input arguments we pass.
# Codeblock 6
convblock1 = ConvBlock(in_channels=64,
                       out_channels=128,
                       kernel_size=3,
                       stride=2,
                       padding=1)

convblock2 = ConvBlock(in_channels=64,
                       out_channels=128,
                       kernel_size=3,
                       stride=2,
                       padding=1,
                       batchnorm=False,            #(1)
                       activation=nn.Hardswish())  #(2)

print(convblock1)
print('')
print(convblock2)
# Codeblock 6 Output
ConvBlock(
  (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (activation): ReLU6()
)

ConvBlock(
  (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  (bn): Identity()
  (activation): Hardswish()
)
The Bottleneck
Now that the SEModule and the ConvBlock are done, we can move on to the main component of the MobileNetV3 architecture: the bottleneck. What we essentially do in the bottleneck is place one layer after another, following the general structure shown earlier in Figure 2. In the case of MobileNetV2, it only consists of three convolution layers, whereas here in MobileNetV3 we have an additional SE block placed between the second and the third convolutions. Take a look at Codeblocks 7a and 7b to see how I implement the bottleneck block for MobileNetV3.
# Codeblock 7a
class Bottleneck(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 stride,
                 padding,
                 exp_size,    #(1)
                 se,          #(2)
                 activation):
        super().__init__()

        self.add = in_channels == out_channels and stride == 1  #(3)

        self.conv0 = ConvBlock(in_channels=in_channels,   #(4)
                               out_channels=exp_size,     #(5)
                               kernel_size=1,             #(6)
                               stride=1,
                               padding=0,
                               activation=activation)

        self.conv1 = ConvBlock(in_channels=exp_size,      #(7)
                               out_channels=exp_size,     #(8)
                               kernel_size=kernel_size,   #(9)
                               stride=stride,
                               padding=padding,
                               groups=exp_size,           #(10)
                               activation=activation)

        self.semodule = SEModule(num_channels=exp_size, r=4) if se else nn.Identity()  #(11)

        self.conv2 = ConvBlock(in_channels=exp_size,      #(12)
                               out_channels=out_channels, #(13)
                               kernel_size=1,             #(14)
                               stride=1,
                               padding=0,
                               activation=nn.Identity())  #(15)
The input parameters of the Bottleneck class look similar to those of the ConvBlock class at a glance. That makes sense, because we will indeed use them to instantiate ConvBlock instances inside the Bottleneck. However, if you take a closer look, you'll notice a couple of parameters you haven't seen before, namely exp_size (#(1)) and se (#(2)). Later on, the arguments for these parameters will be taken from the configuration provided in the table in Figure 1.
Inside the __init__() method, the first thing we do is check whether the input and output tensor dimensions are the same, using the code at line #(3). By doing this, our add variable will contain either True or False. This dimensionality check is important because we need to decide whether or not to perform element-wise summation between the two to implement the skip connection that goes around all layers within the bottleneck block.
Next, let's instantiate the layers themselves, of which the first two are a pointwise convolution (conv0) and a depthwise convolution (conv1). For conv0, we need to set the kernel size to 1×1 (#(6)), whereas for conv1 the kernel size should match the one given in the input argument (#(9)), which can be either 3×3 or 5×5. It's important to apply padding in the ConvBlock to prevent the feature map from shrinking after every convolution operation. For kernel sizes of 1×1, 3×3, and 5×5, the required padding values are 0, 1, and 2, respectively. Regarding the number of channels, conv0 is responsible for expanding it from in_channels to exp_size (#(4–5)). Meanwhile, the numbers of input and output channels of conv1 are exactly the same (#(7–8)). In addition, for the conv1 layer the groups parameter should be set to exp_size (#(10)) because we want each input channel to be processed independently of the others.
After the first two convolution layers are done, the next thing we need to instantiate is the Squeeze-and-Excitation module (#(11)). Here we set the input channel count to exp_size, matching the tensor size produced by the conv1 layer. Remember that the SE module is not always used, hence the instantiation of this component is placed inside a condition, where it is only actually instantiated when the se parameter is True. Otherwise, it will just be an identity layer.
Finally, the last convolution layer (conv2) is responsible for mapping the number of output channels from exp_size to out_channels (#(12–13)). Just like the conv0 layer, this one is also a pointwise convolution, hence we set the kernel size to 1×1 (#(14)) so that it only focuses on aggregating information along the channel dimension. The activation function of this layer is fixed to nn.Identity() (#(15)) because here we implement the idea of the linear bottleneck.
And that's pretty much everything for the layers within the bottleneck block. All we need to do afterwards is create the flow of the network in the forward() method, as shown in Codeblock 7b below.
# Codeblock 7b
    def forward(self, x):
        residual = x
        print(f'original\t\t: {x.size()}')

        x = self.conv0(x)
        print(f'after conv0\t\t: {x.size()}')

        x = self.conv1(x)
        print(f'after conv1\t\t: {x.size()}')

        x = self.semodule(x)
        print(f'after semodule\t\t: {x.size()}')

        x = self.conv2(x)
        print(f'after conv2\t\t: {x.size()}')

        if self.add:
            x += residual
            print(f'after summation\t\t: {x.size()}')

        return x
Now I want to test the Bottleneck class we just created by simulating the third row of the MobileNetV3-Large architecture from the table in Figure 1. Take a look at Codeblock 8 below to see how I do this. If you go back to the architectural details, you'll find that this bottleneck accepts a tensor of size 16×112×112 (#(7)). In this case, the bottleneck block is configured to expand the number of channels to 64 (#(3)) before eventually shrinking it to 24 (#(1)). The kernel size of the depthwise convolution is set to 3×3 (#(2)) and the stride is set to 2 (#(4)), which reduces the spatial dimension by half. Here we use ReLU6 as the activation function (#(6)) for the first two convolutions. Finally, the SE module is not implemented (#(5)) since there is no checkmark in the SE column of the table.
# Codeblock 8
bottleneck = Bottleneck(in_channels=16,
                        out_channels=24,        #(1)
                        kernel_size=3,          #(2)
                        exp_size=64,            #(3)
                        stride=2,               #(4)
                        padding=1,
                        se=False,               #(5)
                        activation=nn.ReLU6())  #(6)

x = torch.randn(1, 16, 112, 112)  #(7)
out = bottleneck(x)
If you run the code above, the following output should appear on your screen.
# Codeblock 8 Output
original       : torch.Size([1, 16, 112, 112])
after conv0    : torch.Size([1, 64, 112, 112])
after conv1    : torch.Size([1, 64, 56, 56])
after semodule : torch.Size([1, 64, 56, 56])
after conv2    : torch.Size([1, 24, 56, 56])
This output confirms that our implementation is correct in terms of tensor shape: the spatial dimension halves from 112×112 to 56×56, while the number of channels correctly expands from 16 to 64 and then reduces from 64 to 24. Speaking specifically about the SE module, we can see in the output above that the tensor is still passed through this component even though we set the se parameter to False. In fact, if you print out the detailed structure of this bottleneck like I do in Codeblock 9, you will see that semodule is just an identity layer, which effectively makes this structure behave as if we were passing the output of conv1 directly to conv2.
# Codeblock 9
bottleneck
# Codeblock 9 Output
Bottleneck(
  (conv0): ConvBlock(
    (conv): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (conv1): ConvBlock(
    (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (semodule): Identity()
  (conv2): ConvBlock(
    (conv): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): Identity()
  )
)
The bottleneck above behaves differently if we instantiate it with the se parameter set to True. In Codeblock 10 below, I create the bottleneck block from the fifth row of the MobileNetV3-Large architecture. In this case, if you print out the detailed structure, you will see that semodule consists of all the layers in the SEModule class we created earlier instead of just being an identity layer like before.
# Codeblock 10
bottleneck = Bottleneck(in_channels=24,
                        out_channels=40,
                        kernel_size=5,
                        exp_size=72,
                        stride=2,
                        padding=2,
                        se=True,
                        activation=nn.ReLU6())
bottleneck
# Codeblock 10 Output
Bottleneck(
  (conv0): ConvBlock(
    (conv): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (conv1): ConvBlock(
    (conv): Conv2d(72, 72, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=72, bias=False)
    (bn): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): ReLU6()
  )
  (semodule): SEModule(
    (global_pooling): AdaptiveAvgPool2d(output_size=(1, 1))
    (fc0): Linear(in_features=72, out_features=18, bias=False)
    (relu6): ReLU6()
    (fc1): Linear(in_features=18, out_features=72, bias=False)
    (hardsigmoid): Hardsigmoid()
  )
  (conv2): ConvBlock(
    (conv): Conv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (activation): Identity()
  )
)
The Full MobileNetV3
Now that all the components have been created, what we need to do next is construct the main class of the MobileNetV3 model. But before doing so, I want to initialize a list that stores the input arguments used to instantiate the bottleneck blocks, as shown in Codeblock 11 below. Keep in mind that these arguments are written according to the MobileNetV3-Large version. You'll need to adjust the values in the BOTTLENECKS list if you want to create the small version instead.
# Codeblock 11
HS = nn.Hardswish()
RE = nn.ReLU6()

BOTTLENECKS = [[16, 16, 3, 16, False, RE, 1, 1],
               [16, 24, 3, 64, False, RE, 2, 1],
               [24, 24, 3, 72, False, RE, 1, 1],
               [24, 40, 5, 72, True, RE, 2, 2],
               [40, 40, 5, 120, True, RE, 1, 2],
               [40, 40, 5, 120, True, RE, 1, 2],
               [40, 80, 3, 240, False, HS, 2, 1],
               [80, 80, 3, 200, False, HS, 1, 1],
               [80, 80, 3, 184, False, HS, 1, 1],
               [80, 80, 3, 184, False, HS, 1, 1],
               [80, 112, 3, 480, True, HS, 1, 1],
               [112, 112, 3, 672, True, HS, 1, 1],
               [112, 160, 5, 672, True, HS, 2, 2],
               [160, 160, 5, 960, True, HS, 1, 2],
               [160, 160, 5, 960, True, HS, 1, 2]]
The arguments listed above are structured in the following order (from left to right): in channels, out channels, kernel size, expansion size, SE, activation, stride, and padding. Keep in mind that padding is not explicitly stated in the original table, but I include it here because it's required as an input when instantiating the bottleneck blocks.
Now let's actually create the MobileNetV3 class. See the code implementation in Codeblocks 12a and 12b below.
# Codeblock 12a
class MobileNetV3(nn.Module):
    def __init__(self):
        super().__init__()

        self.first_conv = ConvBlock(in_channels=3,  #(1)
                                    out_channels=int(WIDTH_MULTIPLIER*16),
                                    kernel_size=3,
                                    stride=2,
                                    padding=1,
                                    activation=nn.Hardswish())

        self.blocks = nn.ModuleList([])  #(2)
        for config in BOTTLENECKS:       #(3)
            in_channels, out_channels, kernel_size, exp_size, se, activation, stride, padding = config
            self.blocks.append(Bottleneck(in_channels=int(WIDTH_MULTIPLIER*in_channels),
                                          out_channels=int(WIDTH_MULTIPLIER*out_channels),
                                          kernel_size=kernel_size,
                                          exp_size=int(WIDTH_MULTIPLIER*exp_size),
                                          stride=stride,
                                          padding=padding,
                                          se=se,
                                          activation=activation))

        self.second_conv = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*160),  #(4)
                                     out_channels=int(WIDTH_MULTIPLIER*960),
                                     kernel_size=1,
                                     stride=1,
                                     padding=0,
                                     activation=nn.Hardswish())

        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(1,1))  #(5)

        self.third_conv = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*960),  #(6)
                                    out_channels=int(WIDTH_MULTIPLIER*1280),
                                    kernel_size=1,
                                    stride=1,
                                    padding=0,
                                    batchnorm=False,
                                    activation=nn.Hardswish())

        self.dropout = nn.Dropout(p=0.8)  #(7)

        self.output = ConvBlock(in_channels=int(WIDTH_MULTIPLIER*1280),  #(8)
                                out_channels=int(NUM_CLASSES),           #(9)
                                kernel_size=1,
                                stride=1,
                                padding=0,
                                batchnorm=False,
                                activation=nn.Identity())
Notice in Figure 1 that we initially start from a standard convolution layer. In the codeblock above, I refer to this layer as first_conv (#(1)). It's worth noting that the input arguments for this layer are not included in the BOTTLENECKS list, hence we need to define them manually. Remember to multiply the channel counts at each step by WIDTH_MULTIPLIER since we want the model size to be adjustable through that variable. Next, we initialize a placeholder named blocks for storing all the bottleneck blocks (#(2)). With a simple loop at line #(3), we iterate through all the items in the BOTTLENECKS list to actually instantiate the bottleneck blocks and append them one by one to blocks. In fact, this loop constructs the majority of the layers in the network, since it covers nearly all the components listed in the table.
Once the sequence of bottleneck blocks is done, we continue with the next convolution layer, which I refer to as second_conv (#(4)). Again, since the configuration parameters for this layer are not stored in the BOTTLENECKS list, we need to hard-code them manually. The output of this layer is then passed through a global average pooling layer (#(5)), which drops the spatial dimension to 1×1. Afterwards, we connect this layer to two consecutive pointwise convolutions (#(6) and #(8)) with a dropout layer in between (#(7)).
Speaking more specifically about the two convolutions, it's important to know that applying a 1×1 convolution to a tensor with a 1×1 spatial dimension is essentially equivalent to applying an FC layer to a flattened tensor, where the number of channels corresponds to the number of neurons. This is the reason I set the output channel count of the last layer equal to the number of classes in the dataset (#(9)). The batchnorm parameter of both the third_conv and output layers is set to False, as suggested in the architecture.
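If you want to see this equivalence concretely, here is a small standalone check of my own (not part of the model code), where the weights of a 1×1 convolution are copied into a linear layer:

import torch
import torch.nn as nn

conv1x1 = nn.Conv2d(1280, 1000, kernel_size=1)
fc = nn.Linear(1280, 1000)

# reuse the convolution weights in the linear layer
with torch.no_grad():
    fc.weight.copy_(conv1x1.weight.view(1000, 1280))
    fc.bias.copy_(conv1x1.bias)

x = torch.randn(1, 1280, 1, 1)
print(torch.allclose(conv1x1(x).flatten(1), fc(x.flatten(1)), atol=1e-5))  # True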
Meanwhile, the activation function of third_conv is set to nn.Hardswish(), whereas the output layer uses nn.Identity(), which is equivalent to not applying any activation function at all. This is mainly done because during training, softmax is already included in the loss function (nn.CrossEntropyLoss()). Later, in the inference phase, we need to replace nn.Identity() with nn.Softmax() in the output layer so that the model directly returns the probability score of each class.
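As a rough sketch of what that swap could look like (my own illustration, using the MobileNetV3 class defined in this article):

model = MobileNetV3()
model.eval()

# during training the head stays linear (nn.Identity); for inference we can
# replace it so the model outputs class probabilities directly
model.output.activation = nn.Softmax(dim=1)  # dim=1 is the channel/class dimension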
Subsequent, let’s check out the ahead() technique beneath, which I received’t clarify any additional since I believe it’s fairly simple to know.
# Codeblock 12b
    def forward(self, x):
        print(f'original\t\t: {x.size()}')

        x = self.first_conv(x)
        print(f'after first_conv\t: {x.size()}')

        for i, block in enumerate(self.blocks):
            x = block(x)
            print(f"after bottleneck #{i}\t: {x.shape}")

        x = self.second_conv(x)
        print(f'after second_conv\t: {x.size()}')

        x = self.avgpool(x)
        print(f'after avgpool\t\t: {x.size()}')

        x = self.third_conv(x)
        print(f'after third_conv\t: {x.size()}')

        x = self.dropout(x)
        print(f'after dropout\t\t: {x.size()}')

        x = self.output(x)
        print(f'after output\t\t: {x.size()}')

        x = torch.flatten(x, start_dim=1)
        print(f'after flatten\t\t: {x.size()}')

        return x
The code in Codeblock 13 demonstrates how we initialize a MobileNetV3 instance and pass a dummy tensor through it. Remember that here we use the default input resolution, so we can basically think of the tensor as a batch containing a single RGB image of size 224×224.
# Codeblock 13
mobilenetv3 = MobileNetV3()
x = torch.randn(1, 3, INPUT_RESOLUTION, INPUT_RESOLUTION)
out = mobilenetv3(x)
And below is what the resulting output looks like, in which the tensor dimension after each block matches exactly with the MobileNetV3-Large architecture in Figure 1.
# Codeblock 13 Output
original             : torch.Size([1, 3, 224, 224])
after first_conv     : torch.Size([1, 16, 112, 112])
after bottleneck #0  : torch.Size([1, 16, 112, 112])
after bottleneck #1  : torch.Size([1, 24, 56, 56])
after bottleneck #2  : torch.Size([1, 24, 56, 56])
after bottleneck #3  : torch.Size([1, 40, 28, 28])
after bottleneck #4  : torch.Size([1, 40, 28, 28])
after bottleneck #5  : torch.Size([1, 40, 28, 28])
after bottleneck #6  : torch.Size([1, 80, 14, 14])
after bottleneck #7  : torch.Size([1, 80, 14, 14])
after bottleneck #8  : torch.Size([1, 80, 14, 14])
after bottleneck #9  : torch.Size([1, 80, 14, 14])
after bottleneck #10 : torch.Size([1, 112, 14, 14])
after bottleneck #11 : torch.Size([1, 112, 14, 14])
after bottleneck #12 : torch.Size([1, 160, 7, 7])
after bottleneck #13 : torch.Size([1, 160, 7, 7])
after bottleneck #14 : torch.Size([1, 160, 7, 7])
after second_conv    : torch.Size([1, 960, 7, 7])
after avgpool        : torch.Size([1, 960, 1, 1])
after third_conv     : torch.Size([1, 1280, 1, 1])
after dropout        : torch.Size([1, 1280, 1, 1])
after output         : torch.Size([1, 1000, 1, 1])
after flatten        : torch.Size([1, 1000])
To make sure that our implementation is correct, we can also print out the number of parameters contained in the model using the following code.
# Codeblock 14
total_params = sum(p.numel() for p in mobilenetv3.parameters())
total_params
# Codeblock 14 Output
5476416
Right here you’ll be able to see that this mannequin comprises round 5.5 million parameters, by which that is roughly the identical because the one disclosed within the authentic paper (see Determine 10). Moreover, the parameter depend given within the PyTorch documentation can also be much like this quantity as you’ll be able to see in Determine 12 beneath. Based mostly on these information, I imagine I can affirm that our MobileNetV3-Giant implementation is appropriate.

Ending
Nicely, that’s just about the whole lot in regards to the MobileNetV3 structure. Right here I encourage you to truly practice this mannequin from scratch on any datasets you need. Not solely that, I additionally need you to mess around with the parameter configurations of the bottleneck blocks to see whether or not we will nonetheless enhance the efficiency of MobileNetV3 even additional. By the way in which, the code used on this article can also be accessible in my GitHub repo, which you will discover within the hyperlink at reference quantity [9].
Thanks for studying. Be at liberty to succeed in me by means of LinkedIn [10] when you spot any mistake in my clarification or within the code. See ya in my subsequent article!
References
[1] Muhammad Ardi. MobileNetV1 Paper Walkthrough: The Tiny Giant. AI Advances. https://medium.com/ai-advances/mobilenetv1-paper-walkthrough-the-tiny-giant-987196f40cd5 [Accessed October 24, 2025].
[2] Muhammad Ardi. MobileNetV2 Paper Walkthrough: The Smarter Tiny Giant. Towards Data Science. https://towardsdatascience.com/mobilenetv2-paper-walkthrough-the-smarter-tiny-giant/ [Accessed October 24, 2025].
[3] Andrew Howard et al. Searching for MobileNetV3. arXiv. https://arxiv.org/abs/1905.02244 [Accessed May 1, 2025].
[4] Muhammad Ardi. SENet Paper Walkthrough: The Channel-Wise Attention. AI Advances. https://medium.com/ai-advances/senet-paper-walkthrough-the-channel-wise-attention-8ac72b9cc252 [Accessed October 24, 2025].
[5] Image originally created by the author.
[6] Mark Sandler et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv. https://arxiv.org/abs/1801.04381 [Accessed May 12, 2025].
[7] Jie Hu et al. Squeeze-and-Excitation Networks. arXiv. https://arxiv.org/abs/1709.01507 [Accessed May 12, 2025].
[8] mobilenet_v3_large. PyTorch. https://docs.pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v3_large.html#torchvision.models.mobilenet_v3_large [Accessed May 12, 2025].
[9] MuhammadArdiPutra. The Tiny Giant Getting Even Smarter — MobileNetV3. GitHub. https://github.com/MuhammadArdiPutra/medium_articles/blob/main/The%20Tiny%20Giant%20Getting%20Even%20Smarter%20-%20MobileNetV3.ipynb [Accessed May 12, 2025].
[10] Muhammad Ardi Putra. LinkedIn. https://www.linkedin.com/in/muhammad-ardi-putra-879528152/ [Accessed May 12, 2025].


