@muellerzr Thanks for the information. I had missed this helpful tutorial…
By the way, you might already know this, but in the colab notebook (https://walkwithfastai.com/Binary_Segmentation), `dls = binary.dataloaders(path/'images_data_crop', bs=8)` gives me a `NameError: name 'p2c' is not defined`.
In the cell above, I guess `vals = n_codes(lbl_names)` should be `p2c = n_codes(lbl_names)`.
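For anyone hitting the same `NameError`: that cell's job is just to build a pixel-value → class-code mapping that later cells reference under the name `p2c`. Here's a minimal standalone sketch of the idea (the notebook's real `n_codes` gathers these values by scanning the label images; this illustrative version takes the values directly):

```python
def n_codes(mask_values):
    """Map each distinct mask pixel value to a contiguous integer code.

    Illustrative stand-in for the notebook's `n_codes`, which collects
    these pixel values from the mask files on disk.
    """
    return {v: i for i, v in enumerate(sorted(set(mask_values)))}

# e.g. a binary mask whose pixels are 0 (background) and 255 (foreground)
p2c = n_codes([0, 0, 255, 255, 0])
print(p2c)  # {0: 0, 255: 1}
```

Whatever builds that dict, the key point is that it must be bound to the name `p2c`, since the `get_msk` function defined further down looks it up by that name.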
@muellerzr I need to use `CropPad` on the batch. Is that possible? I'm passing albumentations transforms as the item tfms, so I can't pass `CropPad` there, unless there is some other method.
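In case it helps to reason about the transform itself: `CropPad` essentially center-crops an image that is larger than the target size and pads one that is smaller. Below is a pure-Python sketch of that crop-or-pad logic on a 2D list-of-lists "image"; the function name, zero-padding policy, and square target are all illustrative assumptions, not fastai's actual implementation (which operates on tensors and also adjusts points/bounding boxes):

```python
def crop_pad(img, size, pad_value=0):
    """Center-crop or pad a 2D list-of-lists image to (size, size)."""
    def fit_width(row):
        n = len(row)
        if n >= size:                          # crop: keep the center
            start = (n - size) // 2
            return list(row[start:start + size])
        left = (size - n) // 2                 # pad: split the slack evenly
        right = size - n - left
        return [pad_value] * left + list(row) + [pad_value] * right

    rows = [fit_width(r) for r in img]
    n = len(rows)
    if n >= size:                              # crop rows from the center
        start = (n - size) // 2
        return rows[start:start + size]
    top = (size - n) // 2                      # pad with blank rows
    bottom = size - n - top
    blank = [pad_value] * size
    return ([blank[:] for _ in range(top)] + rows
            + [blank[:] for _ in range(bottom)])

crop_pad([[1]], 3)  # -> [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

One possible route in fastai (untested assumption on my part) is to keep both at the item stage: `item_tfms` accepts a list, so wrapping the albumentations pipeline in an `ItemTransform` and passing it alongside `CropPad` in the same list may work, rather than moving `CropPad` to the batch.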
We have finished the video lessons from the 2020 course; only the 4 lessons on tabular data are left. Remember, if you want to join: on Tuesday we watch the lesson, and on Thursday we run it and try to see if we get it. So you can watch the video on your own time and join us on Thursday.
You should definitely take a look at IceVision. IceVision is an object-detection framework that connects to different libraries/frameworks such as fastai, PyTorch Lightning, and PyTorch. So you can use all of the fastai goodness and get cracking with a wide range of object-detection models and algorithms.
This area is moving so quickly these days, it is really helpful to engage with some folks who are following it energetically and adding new capabilities as they emerge.
Like FastAI, IceVision is also open source and has a great community.
If so, it is resolved with the latest version of fastai. However, fair warning… I get a different error when I run 07_Super_Resolution.ipynb with the latest fastai. Still digging into that; it may be that I've messed something up in my installs or in modifying the notebook.
EDIT: My error came from an issue working with single-channel data, which I fixed. It wasn’t a bug in the latest fastai version. @studboii I’d suggest trying to rerun your notebook with the latest fastai!
Great notebooks! I was looking at the object detection notebook and I noticed something odd. The final output for the RetinaNet has anchors of sizes `[[32, 32], [64, 64], [8, 8], [4, 4], [2, 2]]`. But looking at the RetinaNet paper, shouldn't the second anchor grid be of size `[16, 16]` instead? If you change the merges to `idx,hook in zip(sfs_idxs[:2], self.sfs[:2])])` it gives the desired sizes. Is my understanding correct, or am I missing something about how the model merges the feature maps? (link)
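For reference, a quick sanity check of the grid sizes you'd expect: in the RetinaNet paper, each FPN level halves the resolution of the previous one (strides 8, 16, 32, 64, 128 for P3–P7), so a 256-pixel square input should give grids of 32, 16, 8, 4, and 2, with no 64 anywhere. A small sketch (the function name and default values are my own, not from the notebook):

```python
def fpn_grid_sizes(input_size=256, strides=(8, 16, 32, 64, 128)):
    """Anchor-grid (feature-map) sizes for FPN levels P3-P7.

    Assumes a square input and the stride schedule from the
    RetinaNet paper; both defaults are illustrative assumptions.
    """
    return [[input_size // s] * 2 for s in strides]

print(fpn_grid_sizes(256))
# [[32, 32], [16, 16], [8, 8], [4, 4], [2, 2]]
```

So under those assumptions, a `[64, 64]` grid in the second slot does look like one merge too many (or too few) relative to the paper's schedule.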
If you are running "06_Object_Detection" in June 2021, the code is broken thanks to some tensor subclassing introduced at some point by fastai. To fix the code, just add a simple callback that casts the tensors back to the plain `Tensor` class.
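In practice the callback would typically do the cast in `before_batch` using `torch.Tensor.as_subclass(Tensor)` on each input/target (that method exists in PyTorch; exactly where the loss chokes in this notebook is an assumption on my part). Since the fix itself is fastai-specific, here is a dependency-free sketch of what "casting back to the base class" means, with stand-in classes so it runs standalone:

```python
class Tensor:
    """Stand-in for torch.Tensor, so this sketch runs without torch."""
    def __init__(self, data):
        self.data = data

class TensorImage(Tensor):
    """Stand-in for one of fastai's tensor subclasses."""

def cast_to_base(t, base=Tensor):
    """Mimic Tensor.as_subclass: same underlying data, different class."""
    t.__class__ = base
    return t

x = TensorImage([1.0, 2.0])
x = cast_to_base(x)
assert type(x) is Tensor      # subclass dispatch behavior is gone
assert x.data == [1.0, 2.0]   # the underlying data is untouched
```

The real callback just applies that cast to `self.learn.xb` and `self.learn.yb` in `before_batch`, so the model and loss see plain tensors instead of fastai's subclasses.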