Really appreciate the notebooks you organized and videos you are creating.
I've been wondering how to go about running fastai2 on video data, or on data made of multiple 2D slices with variable length. Meaning x is a set of 2D slices composing a 3D volume, and between two distinct x's the number of 2D slices may vary (i.e. one video may have more frames than another since it's a longer shot).
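Not from the course itself, but one common way to batch variable-length clips is to pad every clip to the longest one in the batch and carry the true lengths along so the model can mask the padding. A minimal sketch in plain Python (the name `pad_clips`, the `pad_value` default, and the toy batch are all made up for illustration; in practice the frames would be tensors):

```python
def pad_clips(clips, pad_value=0.0):
    """Pad every clip in the batch to the length of the longest clip.

    A "frame" here is a flat list of pixel values and a "clip" is a list
    of frames; frames are assumed to share one shape within a batch.
    """
    max_len = max(len(clip) for clip in clips)
    frame_size = len(clips[0][0])
    blank = [pad_value] * frame_size        # an all-padding frame
    padded, lengths = [], []
    for clip in clips:
        lengths.append(len(clip))           # true length, for masking later
        padded.append(clip + [blank] * (max_len - len(clip)))
    return padded, lengths

# Two "videos": one with 2 frames, one with 3, each frame 4 pixels
batch = [[[1, 2, 3, 4], [5, 6, 7, 8]],
         [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]]
padded, lengths = pad_clips(batch)
```

After padding, both clips have 3 frames, and `lengths == [2, 3]` records where the real data ends in each one.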
Just a little plug for Dokku as a free, on-premise option for Heroku buildpacks. I'm using it on a workstation and some rented servers and it's a great PaaS…
After giving it some thought, I rearranged when pose detection will show up. I believe this will work better, as the technique uses both of the topics discussed the previous week (object detection and keypoints).
@muellerzr Is it possible for you to make a notebook that introduces the higher-, middle-, and lower-level fastai APIs for any problem of your choice? The fastai API docs aren't straightforward enough for me; I'm often confused about what to use and when to use it (DataBunch, DataBlock, etc.). A clear intro to all the APIs would benefit beginners a lot.
PS: It doesn't necessarily have to be a video; just a notebook would do, or even a simple flowchart.
That's the goal of the notebooks. Each will have a very different example and the best way to go about it. From what I've found, though, 99% of the problems you'll try to solve can be handled with the mid-level API, which is what we'll use the most. MNIST shows the lowest level, but I think that's the only example I'll wind up using (as literally any other problem can use the mid-level DataBlock); possibly tomorrow's lecture may as well, with k-fold.
@muellerzr I'm working through the first video, and thanks for the GitHub help. As an FYI, I used to use the JS snippet, but I've switched to a Chrome plugin called "Download All Images", which I've had good luck with. You have to do a little cleanup since it downloads icons and the like, but it's nice if you want the images locally.
I have an idea for today’s lesson, let me know if it would be of interest
Today's lesson is a bit briefer (in terms of walking through the code, not so much running the code with the KFold), so I was debating going through some of the super-low-level API (like what a PILImage is, etc.). We will eventually get into it briefly next week, and in more detail in 2-3 weeks, but I'd like to know if you'd rather do that now.
Hey @mrfabulous1! I know I answered this in the video but:
Colab notebooks have their own naming structure, which differs from the file name, so sometimes they may not match when you open one in Colab. So long as it's the right link, you're set! (I'll try to work on fixing that when I can.)
When we dig into the source code and look at the files, where do we get the TEST_IMAGE from?
I know it's possible to download another image and change the name; the reason I wanted to know is to see whether the examples depend on the size of the TEST_IMAGE (i.e. whether particular tests are being performed that would break if I bring in a new image).