True. I was considering the entire dataset more broadly.
Why is that a problem? Wouldn't you store the model as the compression algorithm, not as part of the outputted images?
How are we producing the hi-res images for benchmarking…?
gc.collect()? Now that's a good nugget! I've been restarting kernels forever!!
I guess your problem would be compute afterwards: you can't require a GPU, and a super-heavy piece of software, every time you want to decompress an image.
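To make the trade-off concrete, here's a toy back-of-the-envelope sketch (the function and all numbers are illustrative, not from the thread): shipping the model once only pays off after enough images are compressed with it.

```python
def breakeven_images(model_mb, raw_mb, compressed_mb):
    """How many images before shipping the model pays for itself?"""
    saving_per_image = raw_mb - compressed_mb
    return model_mb / saving_per_image

# e.g. a 100 MB model, 5 MB raw images compressed down to 1 MB each:
print(breakeven_images(100, 5, 1))  # 25.0
```

Past the break-even point the per-image saving dominates, but the decompression-compute objection above still stands.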
And an even better version of gc.collect, one that does it for you automatically, is: https://github.com/stas00/ipyexperiments/
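For anyone curious why gc.collect() helps in a long-running notebook: Python frees most objects by reference counting, but reference cycles (common when large objects point at each other) wait for the cyclic garbage collector. A minimal pure-Python illustration (in practice you'd follow it with framework-specific cache clearing, e.g. PyTorch's torch.cuda.empty_cache()):

```python
import gc

class Big:
    def __init__(self):
        self.payload = bytearray(10**6)  # pretend this is a big tensor
        self.ref = None

# Build a reference cycle: a <-> b. Dropping our names does NOT free the
# payloads immediately, because each object still references the other.
a, b = Big(), Big()
a.ref, b.ref = b, a
del a, b

# The cyclic collector reclaims them; it returns how many objects it found.
collected = gc.collect()
print(collected > 0)  # True
```

Restarting the kernel works because it throws everything away; gc.collect() just does the cycle-breaking part without losing your session.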
FAQ, resources, and official course updates ✅
Fast.ai v3 2019 Course Notes (Chinese version)
Lesson 7 Official Resources ✅
I thought it compared pairs of images: an original hi-res with its corresponding prediction. It seems the discriminator is a regular two-class image classifier with no knowledge of image pairs. Why don't we compare them pairwise?
Because it’s a harder task when you decouple them.
A GAN helps us generate new images from existing ones, and we can do the same with image augmentation. I wanted to know how the two are different?
Are you asking how the GAN (which includes the Generator & Discriminator) is better than just using the Generator? The Generator can end up learning to create images that differ in significant ways from what we want (in this case, our high-res target), but the GAN addresses this.
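For intuition, the usual fix is to train the generator on a pixel loss plus a critic term, so the output must both match the target and look plausible to the discriminator. A toy sketch (the function name and weighting are my own illustration, not the lesson's code):

```python
import numpy as np

def generator_loss(pred, target, critic_score, lam=0.05):
    """pred/target: image arrays; critic_score in (0, 1]: the critic's
    probability that `pred` is a real image."""
    pixel = np.mean((pred - target) ** 2)   # stay close to the target
    adv = -np.log(critic_score + 1e-8)      # look real to the critic
    return pixel + lam * adv

img = np.ones((4, 4))
# For the same pixels, fooling the critic less costs the generator more:
print(generator_loss(img, img, 0.5) > generator_loss(img, img, 0.99))  # True
```

Image augmentation, by contrast, only perturbs real images; it has no learned notion of "looks real", so it can't push a generator's outputs toward plausibility the way this second term does.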
OK… that's great… so if we need a significant change from the original image, we should go with a GAN.
Can the generator and critic be combined in a single network so they are trained concurrently?
If we compared the blinded hi-res cat to the original hi-res cat, maybe the generator would see a difference and try to fix it? Maybe the problem is that it looks at each image by itself to judge its hi-res-ness?
I would say that a basic understanding of the GAN training process can be gained from this PyTorch tutorial.
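To complement that tutorial, here is a minimal, self-contained sketch of the alternating training the question above asks about: generator and critic live in one training loop and take turns updating. Everything is shrunk to 1-D toy models (a linear generator, a logistic-regression critic) purely for illustration; real GANs use neural networks for both.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(3, 1). Generator: g(z) = a*z + b, z ~ N(0, 1).
# Critic: logistic regression d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # critic parameters
lr = 0.02

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -30, 30)))

for step in range(3000):
    real = rng.normal(3.0, 1.0, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b

    # --- critic step: maximize log d(real) + log(1 - d(fake)) ---
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # --- generator step: maximize log d(fake) (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b
    p_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

# b drifts from 0 toward the real mean of 3 as the critic forces realism.
print(f"generator mean after training: {b:.2f}")
```

So "combined in a single network" is roughly what a GAN already is at training time: one loop, two sets of parameters, trained concurrently by alternating steps.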
If U-Nets are well-suited to cases where the output resolution matches the input resolution, are they likely to be the right architecture when you want to train a model to identify a specific location in an image rather than a specific object?
Or would it be wiser to train a regressor to predict the location as coordinates in the image?
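One common middle ground between those two options (my sketch, not from the course) is to keep a U-Net-style dense output and read the coordinates off a predicted heatmap, rather than regressing x/y directly; the dense map preserves the U-Net's spatial fidelity while still yielding a location:

```python
import numpy as np

def heatmap_to_coords(heatmap):
    """Return the (row, col) of the hottest pixel in a 2-D heatmap."""
    idx = np.argmax(heatmap)
    return tuple(int(i) for i in np.unravel_index(idx, heatmap.shape))

# Toy heatmap: the model is most confident at row 2, col 3.
hm = np.zeros((5, 5))
hm[2, 3] = 1.0
print(heatmap_to_coords(hm))  # (2, 3)
```

A direct coordinate regressor is cheaper, but the heatmap approach degrades more gracefully when there are multiple candidate locations or the answer is ambiguous.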
Would GANs work for text classification? I.e., use the critic to classify text as “spam” vs “not spam”, and the generator to generate increasingly sophisticated spam text?
I am thinking of classifying emails, for example, where spammers can create multiple variations of words like "viagra", "v1agra", "v iag ra", …
Can a GAN help generate natural-looking text (with the text content conditioned on some input), the way GANs generate natural-looking images? If so, how would you implement it?
Can the generator-critic pair produce images that are of the same type as, but genuinely different from, the hi-res master samples? For example, a new dog in a different position or with different color/fur/proportions, but one that still reads as a dog?
Is it possible to use similar ideas to U-Net and GANs for NLP? For example if I want to tag the verbs and nouns in a sentence, or create a really good Shakespeare generator?
Considering that a U-Net retains the fidelity of the input all the way to the output, I would think it would provide higher accuracy even for a classifier. For example, a U-Net may do a better job of classifying very similar-looking cat or dog breeds. Is that not the case?
Also, I wonder whether U-Nets could help avoid bias creep as well. For example, if huskies are being identified only because of the background snow in an image, then perhaps training a U-Net on a more diverse dataset would teach the model to look not at the background but at the features of a husky?