Implementation of MobileNetV2 as done in torchvision. See: https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv2.py#L66.
#include <mobilenet_v2.h>
Public Member Functions

MobileNetV2 (int num_classes=1000, float width_mult=1.0f, int round_nearest=8, float dropout=0.2)
    Construct a new MobileNetV2 object.

torch::Tensor forward (torch::Tensor x)
    Performs the forward pass.

void initialize_weights ()
    Initialize conv/bn/linear similar to torchvision defaults.

void load_torchvision_weights (std::string pt)
    Loads a .pt weight file containing a dict with key/parameter pairs.

int getNinputChannelsOfClassifier () const
    Gets the number of input channels of the classifier.

void replaceClassifier (torch::nn::Sequential &newClassifier)
    Replaces the classifier with a new one.

void setFeaturesLearning (bool doLearn)
    Enables/disables learning in the feature layers.

torch::nn::Sequential getClassifier () const
    Gets the Classifier object.

Static Public Member Functions

static torch::Tensor preprocess (cv::Mat img, bool resizeOnly=false)
    Preprocessing of an OpenCV image for inference or learning.
◆ MobileNetV2()

MobileNetV2::MobileNetV2 (int num_classes = 1000, float width_mult = 1.0f, int round_nearest = 8, float dropout = 0.2)

inline
Construct a new MobileNetV2 object.
If you want to load the weight files from torchvision into this class, use the default values for the parameters.
- Parameters
-
| num_classes | Number of classes. |
| width_mult | Width multiplier - adjusts number of channels in each layer by this amount. |
| round_nearest | Round the number of channels in each layer to be a multiple of this number. |
| dropout | Dropout probability for the dropout layer in the classifier. |
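For illustration, a minimal construction sketch (hypothetical usage; assumes the class is instantiated directly as a torch::nn::Module subclass):

```cpp
#include <mobilenet_v2.h>

int main() {
    // Default parameters match torchvision, so pretrained torchvision
    // weights can later be loaded into this instance.
    MobileNetV2 net;

    // A slimmer variant for a 100-class problem; torchvision weights
    // will NOT fit this configuration because the channel counts differ.
    MobileNetV2 slim(/*num_classes=*/100, /*width_mult=*/0.5f);
}
```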
◆ forward()

torch::Tensor MobileNetV2::forward (torch::Tensor x)

inline
Performs the forward pass.
- Parameters
-
| x | The batch of input images. |
- Returns
- torch::Tensor The category scores for the different labels.
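A minimal inference sketch (hypothetical; assumes MobileNetV2 derives from torch::nn::Module and forward() is called directly):

```cpp
#include <mobilenet_v2.h>
#include <torch/torch.h>
#include <iostream>

int main() {
    MobileNetV2 net;              // torchvision-compatible defaults, 1000 classes
    net.eval();                   // inference mode (affects dropout and batch norm)
    torch::NoGradGuard no_grad;   // no gradient tracking needed for inference

    auto x = torch::rand({1, 3, 224, 224});   // a dummy batch of one RGB image
    torch::Tensor scores = net.forward(x);    // category scores, shape [1, 1000]
    std::cout << "predicted class: " << scores.argmax(1).item<int64_t>() << '\n';
}
```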
◆ getClassifier()

torch::nn::Sequential MobileNetV2::getClassifier ( ) const

inline
Gets the Classifier object.
Gets a shared pointer to the classifier, for example, to attach an optimiser for transfer learning.
- Returns
- torch::nn::Sequential
◆ getNinputChannelsOfClassifier()

int MobileNetV2::getNinputChannelsOfClassifier ( ) const

inline
Gets the number of input channels of the classifier.
This will make it easy to replace the classifier with anything the user wants by creating their own torch::nn::Sequential() for the classifier.
- Returns
- int The number of input channels of the classifier submodule "classifier".
◆ load_torchvision_weights()

void MobileNetV2::load_torchvision_weights (std::string pt)

inline
Loads a .pt weight file containing a dict with key/parameter pairs.
See https://github.com/pytorch/pytorch/issues/36577. The difference between PyTorch and libtorch is that PyTorch just has named parameters, whereas libtorch has both named parameters and named buffers. This method makes sure that the key/parameter pairs are loaded into both the named parameters and the named buffers.
- Parameters
-
| pt | filename of the .pt weight file. |
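Usage might look like the following sketch (the filename is a placeholder; the file must have been exported from Python as a dict of key/parameter pairs beforehand):

```cpp
#include <mobilenet_v2.h>

int main() {
    // Default constructor parameters so the tensor shapes match torchvision.
    MobileNetV2 net;

    // "mobilenet_v2.pt" is a hypothetical name for a weight file exported
    // from Python; both named parameters and named buffers get filled.
    net.load_torchvision_weights("mobilenet_v2.pt");
}
```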
◆ preprocess()

static torch::Tensor MobileNetV2::preprocess (cv::Mat img, bool resizeOnly = false)

inline static
Preprocessing of an OpenCV image for inference or learning.
The images are resized to 256x256, followed by a central crop of 224x224. Finally, the values are rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
- Parameters
-
| img | 8-bit BGR OpenCV image with an aspect ratio of 1:1. |
| resizeOnly | If true, the image is only resized to 224x224 and not cropped. Default: false. |
- Returns
- torch::Tensor The image as a tensor ready to be used for inference and learning.
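Putting preprocess() and forward() together, a classification sketch might look as follows (the image and weight filenames are hypothetical; whether preprocess() already adds the batch dimension is not stated here, so the unsqueeze(0) call is an assumption):

```cpp
#include <mobilenet_v2.h>
#include <opencv2/imgcodecs.hpp>
#include <torch/torch.h>
#include <iostream>

int main() {
    MobileNetV2 net;                                  // torchvision-compatible defaults
    net.load_torchvision_weights("mobilenet_v2.pt");  // hypothetical weight file
    net.eval();
    torch::NoGradGuard no_grad;

    cv::Mat img = cv::imread("cat.jpg");              // 8-bit BGR, ideally 1:1 aspect ratio
    torch::Tensor t = MobileNetV2::preprocess(img);   // resized, cropped, normalized

    // Assumption: preprocess() returns a single [3, 224, 224] tensor,
    // so a batch dimension is added before the forward pass.
    torch::Tensor scores = net.forward(t.unsqueeze(0));
    std::cout << "top class: " << scores.argmax(1).item<int64_t>() << '\n';
}
```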
◆ replaceClassifier()

void MobileNetV2::replaceClassifier (torch::nn::Sequential &newClassifier)

inline
Replaces the classifier with a new one.
For transfer learning the default classifier is replaced with a new one.
- Parameters
-
| newClassifier | The new classifier. |
◆ setFeaturesLearning()

void MobileNetV2::setFeaturesLearning (bool doLearn)

inline
Enables/disables learning in the feature layers.
For transfer learning one needs to disable learning in the features submodule.
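The transfer-learning workflow spanning getNinputChannelsOfClassifier(), replaceClassifier(), setFeaturesLearning() and getClassifier() might be combined as in this sketch (the class count, dropout value, learning rate and weight filename are arbitrary assumptions):

```cpp
#include <mobilenet_v2.h>
#include <torch/torch.h>

int main() {
    MobileNetV2 net;
    net.load_torchvision_weights("mobilenet_v2.pt");  // hypothetical weight file

    // Build a new head for a 10-class problem, reusing the feature width.
    int in_ch = net.getNinputChannelsOfClassifier();
    torch::nn::Sequential head(
        torch::nn::Dropout(0.2),
        torch::nn::Linear(in_ch, 10));
    net.replaceClassifier(head);

    // Freeze the pretrained feature extractor...
    net.setFeaturesLearning(false);

    // ...and optimise only the new classifier's parameters.
    torch::optim::Adam opt(net.getClassifier()->parameters(),
                           torch::optim::AdamOptions(1e-3));
}
```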
◆ classifierModuleName

constexpr char MobileNetV2::classifierModuleName[] = "classifier"

static constexpr
Name of the classifier submodule.
This appears as part of the key in named_parameters and named_buffers.
◆ featuresModuleName

constexpr char MobileNetV2::featuresModuleName[] = "features"

static constexpr
Name of the features submodule.
This appears as part of the key in named_parameters and named_buffers.
The documentation for this class was generated from the following file: mobilenet_v2.h