Learning to See in the Dark (Resnet)

Hey guys,

I’m trying to build ResNet code equivalent to this code:

Basically, using long-exposure images as ground truth (GT), we can train the model to reconstruct dark images (short-exposure images) as if they were shot in long-exposure mode.

I’ve got two Qs:

  1. Is there any existing code, based on the fastai library, to train the same model?
  2. Let’s say I have the dataset: hundreds of pairs of short- vs. long-exposure images. How do I label them? After all, it isn’t a classification task, right?

Oh, one more question: how do I use the path function here? Do I just use the file path I have?

One way to look for implementations is to use paperswithcode:

This is more like a segmentation task, where the input and output are both images, so you will have to set it up in a similar way. You can use the DataBlock API with blocks=(ImageBlock, ImageBlock).

BTW, I don’t think what you want is the ResNet classification model that cnn_learner gives you. You can try using a U-Net, which fastai has an implementation of (unet_learner). The paper apparently combines a context-aggregation network with a U-Net, so you could also implement the model in PyTorch and use it with fastai. There are a few existing PyTorch implementations you can take inspiration from.

Hey thanks!

I’ve seen those repos already, but it’s a bit hard for me to follow their logic, either because I’m new to Python or because their training code seems a bit complicated. I need something as simple and functional as fastai code.

According to the articles, some researchers managed to improve the results by using a ResNet over a U-Net… That’s why I wanted to try that.

I’ll check out the DataBlock API you mentioned… and try to apply it to my needs.


Maybe there’s something about using residual blocks, which are the building blocks of the ResNet model, but the classic ResNet is a classification model and does not support outputting full images. Hope that clarifies things…
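To illustrate the distinction: a residual block keeps the spatial resolution of its input, so you can stack residual blocks into an image-to-image model, even though the stock ResNet classifier collapses everything down to a class vector. A minimal PyTorch sketch (the class name and channel count are my own):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """A residual block: output = x + F(x), same channels and spatial size."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # skip connection preserves full resolution

x = torch.randn(1, 64, 32, 32)
y = ResBlock(64)(x)  # same shape as x: resolution is never thrown away
```

Because shape is preserved, a chain of such blocks (plus input/output convs) can map a short-exposure image directly to a long-exposure one, which is the sense in which "ResNet-style" image restoration models differ from the classification ResNet.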