Dev Projects General Discussion

(Jeremy Howard (Admin)) #1

This thread is exclusively for asking questions about how dev projects work and how you could get involved.

If you have questions about a proposed project listed at Dev Projects Index, discuss that project in the thread dedicated to that project and linked to from the index.

If you would like to propose a new project, create a new thread under https://forums.fast.ai/c/fastai-users/dev-projects and discuss it there.

This thread is not for discussing problems in your own code or in fastai code.

Thank you.

1 Like

Dev Projects Index
(Ilia) #2

Could you please clarify: can any forum user propose to implement something or propose an idea, or is this limited to fastai team members and experienced contributors?

0 Likes

(Stas Bekman) #3

Any user. I will update the index post.

1 Like

(Henri Palacci) #4

Hi - I know Jeremy mentioned he would go deeper into how to use the tabular interface for time series data in a future lecture, so maybe this is a moot point but:

To load TS data into the fastai workflow, this is what I’m doing/am seeing others do:

  1. create custom TensorDataset (from numpy arrays - I imagine there would be a way to load from csv)
  2. create pytorch DataLoader from these
  3. load those in a DataBunch
  4. create a custom model (I don’t think the NLP RNNs would work out of the box?)
  5. instantiate a Learner

If there’s a plan/a desire to make some of these operations more fastai-friendly, I’d love to take a shot at it. I have a little time in the coming weeks and have a little OS contribution experience.
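The windowing behind step 1 can be sketched in plain Python (the function name and shapes are my own, purely for illustration): slide a fixed-size window over the series to build the input/target arrays that a TensorDataset would then wrap.

```python
# Illustrative sketch (not fastai API): turn a univariate series into
# (window, target) pairs, the kind of arrays step 1 would wrap in a
# TensorDataset before building DataLoaders.
def make_windows(series, window_size):
    """Each input is `window_size` consecutive values; the target is
    the value that immediately follows the window."""
    inputs, targets = [], []
    for i in range(len(series) - window_size):
        inputs.append(series[i:i + window_size])
        targets.append(series[i + window_size])
    return inputs, targets

xs, ys = make_windows([1, 2, 3, 4, 5], window_size=3)
# xs == [[1, 2, 3], [2, 3, 4]], ys == [4, 5]
```

The same idea extends to multivariate series and to reading the raw values from a csv first.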

0 Likes

(Jeremy Howard (Admin)) #5

You can use DataBunch.create to skip step 2. I’m not teaching RNNs for time series, so it would be interesting to see what you come up with.

1 Like

#6

I’m finishing a tutorial on how to create a custom ItemBase/ItemList, which will be helpful if you want methods like show_batch/show_results to work properly.

2 Likes

(Ilia) #7

I wonder how difficult it would be to bring Reinforcement Learning support into the library. Here is a small discussion. I think that supervised methods should work well with modern RL algorithms such as DQN with replay buffers, so learning optimal policies could probably benefit from features already implemented in the library.
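The replay buffer mentioned above is the piece that makes DQN look supervised: it stores transitions and serves shuffled minibatches a Learner-style loop could fit on. A minimal pure-Python sketch (class and method names are mine, not fastai or any RL library's API):

```python
import random
from collections import deque

# Illustrative sketch: the replay buffer DQN-style algorithms use to
# turn an agent's experience stream into i.i.d.-ish minibatches.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions fall off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Uniformly sample a minibatch of stored transitions."""
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(5):
    buf.push(state=t, action=0, reward=1.0, next_state=t + 1, done=False)
batch = buf.sample(3)  # 3 random transitions for one training step
```

Each sampled minibatch could then feed a standard forward/backward pass, which is where the library's existing training machinery might plug in.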

1 Like

(Henri Palacci) #8

Ok, will give it a shot and report back. Will you be teaching CNN for TS?

0 Likes

#9

I’m doing GANs without any issue. You just need to write your own callback for the training.
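To illustrate the callback idea in a library-agnostic way (all class and hook names below are my own, not fastai's actual API), here is the shape of the pattern: the training loop fires hooks, and a GAN-style callback decides per batch which of the two models gets the update.

```python
# Library-agnostic sketch of the callback pattern: a trainer fires
# hooks each batch, and a GAN-style callback alternates which model
# (generator vs. discriminator) is being trained.
class Callback:
    def on_batch_begin(self, batch_idx): pass
    def on_batch_end(self, batch_idx): pass

class AlternateGANTrainer(Callback):
    """Train the discriminator on even batches, the generator on odd ones."""
    def __init__(self):
        self.current = None

    def on_batch_begin(self, batch_idx):
        self.current = "discriminator" if batch_idx % 2 == 0 else "generator"

class Trainer:
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def fit(self, n_batches):
        schedule = []
        for i in range(n_batches):
            for cb in self.callbacks:
                cb.on_batch_begin(i)
            schedule.append(self.callbacks[0].current)  # stand-in for a real step
            for cb in self.callbacks:
                cb.on_batch_end(i)
        return schedule

sched = Trainer([AlternateGANTrainer()]).fit(4)
# sched == ["discriminator", "generator", "discriminator", "generator"]
```

In a real setup the "stand-in" line would be a forward/backward pass on whichever model the callback selected.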

1 Like

(Ilia) #10

Yeah, that sounds interesting. Could you please share a link to your implementation? (If it is published, of course.) I have some ideas in mind, but I believe your example would be a good reference for me and @Lankinen

0 Likes

#11

It’s in the dev notebooks folder: CycleGAN is here, and the basic GAN is now inside the library (in vision.models.gan and callbacks.gan)

1 Like

(Sam) #12

Just so that someone from the Wasserstein clan does not get heartburn :grinning:
The name Wasserstein is spelled with an s,
not
Wassertein (s missing)

0 Likes

(Stas Bekman) #14

@bfarzin, this thread is about dev projects. I updated the first post to clarify the setup.

So please post your question elsewhere in the forums - either in a dedicated thread if you can’t find something similar or perhaps in Developer chat.

1 Like

(Sanyam Bhutani) #15

Edit: Apologies, I wasn’t sure if I had to discuss it here before creating a thread.
I’ll wait for your response before creating a thread in the dev projects category.

@stas I’m trying to experiment with this idea to improve language model performance as well as the accuracy of a sentiment classifier.

The approach is inspired by this paper: “EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks”

@rishi_mrb and I are trying to take some ideas and experiment on the IMDB dataset to see if we can improve the training time or accuracies.

Here is a kaggle kernel demoing the code for augmenting text.

An image to show the TL;DR augmentation approach:

Another idea I want to try is back-translation: translating text to another language and back to English, using the variation as augmented text.
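Two of the EDA operations can be sketched in a few lines of plain Python (random swap and random deletion; the paper's synonym replacement and random insertion need a thesaurus such as WordNet, so they are omitted here, and the function names are mine):

```python
import random

# Toy sketch of two EDA-style text augmentations on a tokenized sentence.
def random_swap(words, n, rng):
    """Swap two randomly chosen positions, n times."""
    words = words[:]
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p, rng):
    """Drop each word with probability p, but never delete every word."""
    kept = [w for w in words if rng.random() > p]
    return kept or [rng.choice(words)]

rng = random.Random(0)  # seeded for reproducibility
sent = "this movie was surprisingly good".split()
swapped = random_swap(sent, n=2, rng=rng)
thinned = random_deletion(sent, p=0.3, rng=rng)
```

Each augmented variant keeps the original label, so a handful of such copies per review could enlarge the IMDB training set cheaply.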

Please share your thoughts, or let me know if it sounds like I’m shooting myself in the foot with foolish ideas.

If these look good, I’ll be happy to co-ordinate this project.

1 Like

(Stas Bekman) #16

Yes, we need to make this clearer. I just wasn’t sure how to phrase it so that nobody gets misled into believing that if they propose a project and implement it, it’ll get integrated into fastai. Please help me refine that section of the main post in Dev Projects Index.

My thinking is this: don’t wait for anybody’s approval; go ahead and propose the project, link to it, discuss, coordinate and implement it. Then it will either get integrated into fastai, or it can live in some other GitHub repository, accessible to those who want it. We can make a dedicated index of projects created “under” fastai which, for one reason or another, couldn’t be made part of the fastai core.

If, however, you’d only want to invest your energy if your contribution is sure to be accepted into fastai, you would need to secure such an agreement with Sylvain and Jeremy. But my feeling is that even then there would be no guarantee that your code gets integrated in the end.

The key is to communicate very clearly that there is no obligation here. At the same time anybody is welcome to use this space to brainstorm and create great things and find collaborators to do so.

How does that sound?

I’ll wait for your response before creating a thread in the dev projects category.

That said, if you feel you are in agreement, go ahead and do it.

1 Like

(Sanyam Bhutani) #17
I've added this line to the index

You can also propose new experiments that you are excited to work on, and coordinate or volunteer for them. However, if these haven’t been approved by the fastai team, they might not be integrated into the library later.

Thank you for the detailed explanations.

I’m pretty excited about my idea, I’ll create a thread, share my ideas and intuition and see if it works.
I’m totally happy to dedicate efforts knowing that this plan may not work at all and I might fall flat on my face. But I’m excited about the learning experience and knowing why it didn’t work.

Thanks!

1 Like

(Stas Bekman) #18

Thank you for the edit, @init_27

I’m pretty excited about my idea, I’ll create a thread, share my ideas and intuition and see if it works.
I’m totally happy to dedicate efforts knowing that this plan may not work at all and I might fall flat on my face. But I’m excited about the learning experience and knowing why it didn’t work.

I’m excited for you being excited, @init_27!

1 Like