New Category for the Fast.ai library?

I’m starting to dive into the library a little and I have some questions. There are a few threads going where different changes are being talked about, but I thought it might be nice to put all of the discussion under one roof as the discourse is only going to grow.

Right now there doesn’t seem to be a specific spot to talk in depth about the library or ask questions, and a forum category seems like the natural place to me.


Bumping this for @jeremy just in case it was missed. I think it would be helpful for contributors/users to have a dedicated forum location.

I did miss this - thx for the bump.

I think #part1-v2 is the best place for this for now. I’d be delighted to have deep discussions about the lib there - if it gets to be a popular topic we can give it a separate category; but I avoid that when I can since otherwise it can fragment the discussion.

Okay, that makes sense. I’m hoping to contribute to the library at some point. I’m working on some modifications related to recommendation that I’m hoping to publish after which I’d want to add it, assuming you found it worthwhile to include.

Just curious who else is working on the library and what the best way is to keep in touch with people about it. I tried posting a few questions in the #part1-v2 forum, but answers were sporadic and a couple went unanswered.

I’d really like to contribute to the library and I have a pull request I’m preparing, but it would be good to know who I can reach out to so I’m not always bugging you with an “@jeremy” because I know how busy you are.

Perhaps you can ping me on unanswered questions? That way we can keep the discussion going. If you jot down links to the questions you’re referring to, I can take a look. Quite possibly there are very few people working on this at the moment! And since it’s a big library, those people may be looking at totally different bits.


@Even

You can ping me, too. I’m not an expert at the library but I’d be down to figure things out with you. I made two pull requests today:

https://github.com/fastai/fastai/pull/112
https://github.com/fastai/fastai/pull/113

(TIL you can put a space in front of a Discourse link to disable the link preview box.)

@jeremy


Thanks @jeremy, I just didn’t want to always be pinging you on topics. The main question I had posted on was:

In terms of functionality I have a pull request to submit that handles the loss function correctly and generalizes it to handle padding indexes. I think what I’ve done is correct, but I was hoping others could weigh in on whether my use of lambda functions was appropriate in this case.
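For anyone curious what a padding-aware loss wrapped in a lambda might look like, here’s a minimal sketch using PyTorch’s built-in `ignore_index` support. The `PAD_IDX` value and the lambda wiring are illustrative assumptions on my part, not the actual code in the PR:

```python
import torch
import torch.nn.functional as F

PAD_IDX = 1  # hypothetical padding token index

# A lambda bakes the padding index into a loss function that the
# training loop can call with just (predictions, targets).
pad_aware_loss = lambda preds, targs: F.cross_entropy(
    preds, targs, ignore_index=PAD_IDX
)

# Toy check: positions labeled PAD_IDX contribute nothing to the loss,
# and the mean is taken only over the non-padded positions.
preds = torch.randn(4, 5)           # 4 tokens, 5-class vocabulary
targs = torch.tensor([0, 1, 1, 2])  # positions 1 and 2 are padding
loss = pad_aware_loss(preds, targs)
```

With `reduction='mean'` (the default), `ignore_index` both masks the padded targets and excludes them from the denominator, so padded positions don’t dilute the average.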

I’ve got another PR that I need to check in that solves a big memory issue I ran into as well. I’ll hopefully get to that tonight after I get my little guy to bed.

I’m working on a new architecture for recommendation using the fast.ai library so I can take advantage of a number of the optimizations and existing building blocks. I’m planning on submitting to RecSys 2018 if I can get it finished and written up in time. The functionality and details end up getting complex enough that I’m hoping to keep in touch with others who are digging into the library, for mutual support. I think what’s been created so far is super powerful and worth building on, but I guess the community isn’t quite as big as I thought it was. That isn’t an issue, but it means we’ll be more reliant on you to answer questions until we get up to speed.

Thanks @Matthew, I’ll definitely keep in touch if I’ve got questions. I’m mostly modding the NLP side of the library right now. Have you looked at it in detail?

I’ve been focusing on the computer vision side. I haven’t looked at the NLP side yet.

Note that fastai.nlp will be deprecated soon in favor of fastai.text.

Good to know. I’m guessing there’s a lot of overlap. I’ll see if I can’t make my changes in both.

@jeremy it looks like the fastai library is currently designed to be used only with GPUs. Installing on my MacBook produced errors when trying to install CUDA. I commented that out and tried to install the CPU version of PyTorch, but got other errors about the MKL library. What are the current system requirements?
If I can help make a CPU-compatible version, I’m down. Better yet, make it transparent between CPU and GPU; not having to burn GPU time when you’re just fiddling with a problem is really useful.
If nothing else, I think a note in the README explaining the current system requirements would be great!
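For what it’s worth, the usual device-agnostic pattern in plain PyTorch looks like the sketch below. This is a generic illustration of CPU/GPU transparency, not how fastai actually selects its device:

```python
import torch

# Pick CUDA when a GPU is present, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Moving the model and creating tensors on `device` is the only
# device-specific step; the rest of the code is identical either way.
model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(3, 10, device=device)
out = model(x)
```

If the library threaded a single `device` choice like this through its model and data code, the same notebook would run on a laptop CPU or a GPU box unchanged.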

Thanks! - Blake

I was wondering whether creating a new category for the fastai library is still under consideration. I’d like one where users of and contributors to the library can have discussions separate from those pertaining to the courses.
