This is a re-implementation of MobileNet as described in [1], adapted to meet the hardware constraints of the Vitis AI framework for inference on Xilinx FPGAs.
Pretrained weights are available from the model database as follows:
Source file: /models/mobilenet_vitis.py
models.mobilenet_vitis.mobilenet_vitis(
input_tensor=None,
include_top=True,
weight_path=None,
return_tensor=False,
classes=1000,
classifier_activation="softmax",
alpha=1.0,
depth_multiplier=1
)
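A minimal usage sketch (not part of the repository): it assumes the module path models.mobilenet_vitis shown above, the default 224x224 RGB input resolution from the MobileNet paper, and a locally downloaded weights file; the file name below is a placeholder.

import tensorflow as tf
from models.mobilenet_vitis import mobilenet_vitis

# Build the full classifier with the default width multiplier.
# "weights/mobilenet_vitis.h5" is a hypothetical path; point it to the
# weights file obtained from the model database (or pass None to skip loading).
model = mobilenet_vitis(
    include_top=True,
    weight_path="weights/mobilenet_vitis.h5",
    classes=1000,
    classifier_activation="softmax",
    alpha=1.0,
    depth_multiplier=1,
)

# Dummy forward pass: one 224x224 RGB image (assumed default input resolution).
logits = model(tf.zeros((1, 224, 224, 3)))
print(logits.shape)  # expected: (1, 1000)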
Arguments:

- input_tensor: optional input tensor to use as the image input of the model.
- include_top: whether to include the fully connected classification layer at the top of the network.
- weight_path: path to a file of pretrained weights (if no path is given, weights will not be loaded).
- return_tensor: whether to return the network's output tensor instead of a tf.keras.Model.
- classes: number of classes to classify images into, e.g. the 1000 ImageNet [2] classes; only used if include_top=True.
- classifier_activation: activation function of the classification layer; only used if include_top=True.
- alpha: width multiplier of the network.
  - If alpha < 1.0, proportionally decreases the number of filters in each layer.
  - If alpha > 1.0, proportionally increases the number of filters in each layer.
  - If alpha = 1, the default number of filters from the paper is used at each layer.
- depth_multiplier: depth multiplier for the depthwise convolutions.

Returns:

The CNN architecture as a tf.keras.Model if return_tensor=False, otherwise as a tf.keras.layers output tensor.
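A hedged sketch of the return_tensor path, under the assumption that passing an input_tensor together with return_tensor=True and include_top=False yields the backbone's 4-D feature map as a Keras tensor that can be wired into a larger model:

import tensorflow as tf
from models.mobilenet_vitis import mobilenet_vitis

# Thinner backbone: alpha=0.5 roughly halves the number of filters per layer.
inputs = tf.keras.Input(shape=(224, 224, 3))
features = mobilenet_vitis(
    input_tensor=inputs,
    include_top=False,   # drop the classification head
    return_tensor=True,  # get the output tensor instead of a tf.keras.Model
    alpha=0.5,
    depth_multiplier=1,
)

# Attach a custom head (assumes features is a 4-D feature map).
x = tf.keras.layers.GlobalAveragePooling2D()(features)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.summary()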
[1] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” 2017.
[2] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015, doi: 10.1007/s11263-015-0816-y.