I do not use dnn-modern, only the opencv dnn module.
Nothing (I compile opencv with Intel TBB). I import the model and forward it as the example shows (you can study my source codes). I believe the opencv dnn module does aggressive optimization on the cpu side; after all, it is a project backed by Intel.
If you want to save yourself some pain, you can use torch or caffe to train your 8-layer network. I defined SqueezeNet 1.1 in torch and imported it with opencv dnn without any pain (almost).
I do not recommend tensorflow if you plan to export the trained model to opencv dnn, because
- opencv dnn does not have good support for tensorflow yet; there are many surprises
- tensorflow is an overly complicated library; better to stay away from it unless you cannot find a better candidate or your boss asks you to use it
Although my experience with lua is close to zero, I find torch much easier to learn and use compared with tensorflow; it is a well-designed library, plus you can import the trained model with opencv dnn with zero pain (almost). Right now opencv dnn does not support nngraph models from torch; I guess they are busy fixing the issues related to tensorflow (as Jeremy said, tensorflow is too complicated).
The following are the things you need to take care of when saving the torch model:
-- clear the state first, otherwise the saved model will be huge and the
-- trained batchnorm statistics may not be used correctly
net:clearState()
-- convert the model from gpu to cpu, otherwise the saved model cannot be
-- loaded by opencv dnn
net = net:float()
torch.save('squeeze_homo', net)
After that, just load it with opencv dnn:
dnn::Net net = dnn::readNetFromTorch(model);
Very simple, isn't it? If you need yolo, maybe this post (#9705, porting yolo v2 to opencv) can help you (if I am guessing your needs correctly).
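For reference, here is a minimal sketch of how the loaded network can be used for inference. The model file name, input image, input size and preprocessing values are assumptions; replace them with whatever your network was trained with.

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;

int main()
{
    // load the model saved by torch.save (binary format by default)
    dnn::Net net = dnn::readNetFromTorch("squeeze_homo");

    // preprocess the image into a 4D blob (NCHW); adjust the scale factor,
    // size and mean to match the preprocessing used during training
    Mat img = imread("input.jpg");
    Mat blob = dnn::blobFromImage(img, 1.0 / 255.0, Size(224, 224),
                                  Scalar(), true, false);

    net.setInput(blob);
    Mat prob = net.forward(); // output of the last layer

    return 0;
}

blobFromImage, setInput and forward are basically all you need for a plain feed-forward classifier.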
ps: You could also change your network architecture from a vgg-like design to something squeezeNet- or mobileNet-like.