I’m also interested in having this feature as long as it’s technically feasible… in my case I’m working with images that have a 2:1 aspect ratio.
FWIW, I know some people in the recently completed Carvana Kaggle competition seemed to be using rectangular images, since that dataset contained images that were all ratio 1.5:1.
No new object is created. But on each epoch, the data loader applies transforms with random parameters (like zoom, shear, shift, etc.) to each image, producing a slightly modified version of the input image so that the network doesn’t overfit to the original images.
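To illustrate the idea in plain Python (no PyTorch here; the class and transform names are made up for this sketch): the dataset keeps each original image once, and a random transform is applied every time an item is fetched, so each epoch sees a slightly different version without any new objects being stored.

```python
import random

def random_shift(image, max_shift=2):
    # Hypothetical toy transform: rotate the pixel list by a random offset.
    k = random.randint(-max_shift, max_shift)
    return image[k:] + image[:k]

class AugmentedDataset:
    def __init__(self, images, transform):
        self.images = images        # originals are kept unchanged
        self.transform = transform  # applied lazily, once per access

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        # A fresh random variant is produced on every access,
        # so each epoch trains on slightly different inputs.
        return self.transform(self.images[i])

ds = AugmentedDataset([[1, 2, 3, 4]], random_shift)
sample_a = ds[0]  # possibly shifted one way
sample_b = ds[0]  # possibly shifted another way; the original is never mutated
```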
It’s easy enough to do it in the transforms - the issue is that the model itself needs to have a consistent input size. I guess it doesn’t really have to be square - just consistent. But in practice generally we have a mix of landscape and portrait orientation, which means square is the best compromise.
If you have a dataset that’s consistently of a particular orientation, then perhaps it does indeed make sense to use a rectangular input - in which case feel free to provide a PR which allows that (i.e. sz everywhere it’s used would assume square if it’s an int, or rectangle if a tuple).
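A sketch of the int-or-tuple convention suggested above: `sz` is treated as a square if it’s an int, or as (height, width) if it’s a tuple. The helper name `resolve_sz` is hypothetical, not part of fastai.

```python
def resolve_sz(sz):
    # Square input when a single int is given.
    if isinstance(sz, int):
        return (sz, sz)
    # Rectangular input when a (height, width) tuple is given.
    h, w = sz
    return (h, w)

resolve_sz(224)         # → (224, 224)
resolve_sz((128, 256))  # → (128, 256)
```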
But should we be creating new images? i.e. keep the original and create a new transformed version of it as well. I thought that’s data augmentation.
I guess if we have a sufficiently large dataset, doing an in-place transformation might be okay. But if we start with fewer data, it might be better to add. What’s the guidance here?
Our architecture ~= predefined architecture (no change) + second-to-last layer (calculating activations for this layer for the provided data?) + last layer (output layer)
We specify 3 learning rates for different layer groups, not for individual layers. Different layer groups need different amounts of fine-tuning and hence different learning rates. Before unfreezing, we were only training the last layer and we only needed to supply one learning rate. After unfreezing, if we supply only one learning rate, the fastai library will use the same learning rate for all the layer groups, and this may not be ideal.
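A sketch of the behaviour described above (names are illustrative, not the actual fastai internals): a single learning rate gets broadcast to every layer group, while a list supplies one rate per group, with earlier (more general) groups typically getting smaller rates.

```python
def expand_lrs(lr, n_groups):
    # One number → same lr for every layer group (may not be ideal
    # after unfreezing, as noted above).
    if isinstance(lr, (int, float)):
        return [lr] * n_groups
    # A list → one lr per layer group.
    assert len(lr) == n_groups
    return list(lr)

expand_lrs(0.01, 3)                # → [0.01, 0.01, 0.01]
expand_lrs([1e-4, 1e-3, 1e-2], 3)  # earlier groups get smaller rates
```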
So, ‘Adam’ when used without unfreezing will adapt the learning rate over time for the last layer?
How does Adam’s adaptive nature help just for the last layer?
Trying to run the “Dogs v Cats super-charged!” notebook, I’m getting a weights-not-found exception. Where can I get these weights? Is there any prerequisite to running this notebook?
FileNotFoundError: [Errno 2] No such file or directory: ‘…/fastai/courses/dl1/fastai/weights/resnext_50_32x4d.pth’
Adam optimization contains a momentum parameter that controls how quickly the optimizer reaches the global/local minimum.
The momentum parameter is in the range 0 to 1. Values close to 1 represent ‘high’ momentum.
The higher the momentum, the more likely the optimizer will overshoot the minimum.
Question: Can you simulate SGDR by manipulating the momentum parameter in Adam optimization (at specific times in the iteration cycle), without varying the actual learning rate?
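For comparison, SGDR varies the learning rate itself using cosine annealing with warm restarts. A minimal sketch of that schedule (function and parameter names are illustrative):

```python
import math

def sgdr_lr(step, cycle_len, lr_max=0.1, lr_min=0.001):
    # Position within the current cycle, in [0, 1); the lr jumps back
    # up to lr_max at the start of each cycle (the "warm restart").
    t = (step % cycle_len) / cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))
```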
I was among a few unfortunate people who do not have free $500 AWS access. Therefore I am trying to test everything on my local machine. When I try to run lesson1-sgd.ipynb, I get the following error. I know this error occurs because the model was trained on a GPU and we are trying to run it on a CPU… but I do not know how to solve this. Any ideas?
AssertionError: Torch not compiled with CUDA enabled
I am getting:
FileNotFoundError: [Errno 2] No such file or directory: 'wgts/resnext_50_32x4d.pth'
Am I missing something?
I did git pull before I started.
You probably have installed PyTorch with CUDA. If you pip uninstall torch and then reinstall using the non-CUDA version, it should not give that error. To install the non-CUDA version, see the Getting Started section at http://pytorch.org/
But please note that you may not be able to run things to completion on your local (non-GPU) machine unless you wait a very long time. The best course of action might be to use Crestle or an AMI. If you need financial help, please post in “Request or share AWS credits here” and someone might be able to set up an AWS box using their credits and provide you access for limited hours per week.