Using fast.ai course to create neural networks suitable for embedded applications

Hi,

I’m an undergraduate EEE student doing a research internship this summer. I’m hoping to build an embedded traffic sign detection system that uses neural networks to recognise already-segmented traffic signs. At this stage, I am looking to implement it on the PYNQ platform (based on the Xilinx Zynq SoC).

I’ve completed the first three lessons so far and I’ve managed to get some great results already using the fast.ai library. It seems, however, that the ResNet architecture will be too resource-demanding for an FPGA implementation. I was wondering whether, after completing this course, I will be able to construct and train more suitable architectures. Or, in these circumstances, should I go with Andrew Ng’s specialization on Coursera? At this point, I don’t know much about neural networks.

I’m kinda pressed for time and need to choose wisely, so I would appreciate any advice.

Thanks,
Marek.


With any practical goal in mind, fastai is the right choice. I am not aware of any other route that can take you from knowing very little about NNs to doing really, really neat things as quickly as fastai can.


I totally concur about the quality of results with minimal effort (and code).

But I sometimes wonder if in the long run it could be, in itself, a hindrance to learning.

Many things can be learned by digging into the code rather than treating it as black magic, but a lot of it doesn’t even have a docstring, and the library is written in an efficient but sometimes hard-to-read style (e.g. the generous use of broadcasting and unpacking).
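To make the point concrete, here is a toy illustration (not actual fastai code) of the broadcasting-and-unpacking style: a one-line per-channel normalisation that is compact and fast, but opaque until you know NumPy/PyTorch broadcasting rules. The function and variable names are my own invention.

```python
import numpy as np

def normalize(batch, stats):
    """Per-channel normalisation of an NCHW batch, written in the terse style."""
    mean, std = stats  # tuple unpacking: stats is (mean_per_channel, std_per_channel)
    # [None, :, None, None] reshapes (C,) to (1, C, 1, 1) so it broadcasts
    # across batch, height and width dimensions in a single expression.
    return (batch - mean[None, :, None, None]) / std[None, :, None, None]

batch = np.ones((2, 3, 4, 4))                       # 2 images, 3 channels, 4x4
stats = (np.full(3, 0.5), np.full(3, 0.25))          # per-channel mean and std
out = normalize(batch, stats)                        # every value: (1 - 0.5) / 0.25 = 2.0
```

One line does the work of two nested loops, which is exactly why such code is efficient yet hard to read without a comment or docstring.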
Given that the authors already have a great deal on their plates, I think the community should put more effort into documenting the library.

It should be a coordinated effort, though. Submitting a pull request every time someone adds a docstring would be unwise.

Ng’s courses are OK, but they take a bottom-up approach, and you have to decide whether that suits you.

That said, I think that by the time you become an expert in DL, PyTorch models will be deployable on such hardware. In the meantime, there are other solutions (ONNX, etc.).