I’m starting to dive into the library a little and I have some questions. There are a few threads going where different changes are being discussed, but I thought it might be nice to put all of the discussion under one roof, since the conversation is only going to grow.
Right now there doesn’t seem to be a specific spot to discuss the library in depth or ask questions, and a forum category seems like the natural place to me.
I think #part1-v2 is the best place for this for now. I’d be delighted to have deep discussions about the lib there - if it gets to be a popular topic we can give it a separate category; but I avoid that when I can since otherwise it can fragment the discussion.
Okay, that makes sense. I’m hoping to contribute to the library at some point. I’m working on some modifications related to recommendation that I’m hoping to publish, after which I’d want to add them to the library, assuming you find them worthwhile to include.
Just curious who else is working on the library and what the best way is to keep in touch with them. I tried posting a few questions in the part1-v2 forum, but the answers were sporadic and a couple went unanswered.
I’d really like to contribute to the library and I have a pull request I’m preparing, but it would be good to know who I can reach out to so I’m not always bugging you with an “@jeremy” because I know how busy you are.
Perhaps you can ping me on unanswered questions? That way we can keep the discussion going. If you jot down links to the questions you’re referring to, I can take a look. Quite possibly there are very few people working on this at the moment! And since it’s a big library, those people may be looking at totally different bits.
Thanks @jeremy, I just didn’t want to always be pinging you on topics. The main question I had posted on was:
In terms of functionality, I have a pull request to submit that handles the loss function correctly and generalizes it to the case of padding indexes. I think what I’ve done is correct, but I was hoping others could weigh in on whether my use of lambda functions is appropriate here.
I’ve also got another PR to check in that solves a big memory issue I ran into. I’ll hopefully get to it tonight after I get my little guy to bed.
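For context, here’s roughly the kind of thing I mean. This is just an illustrative sketch, not the actual PR code; `pad_idx` is an assumed name for the padding token id:

```python
import torch.nn.functional as F

# Illustrative only -- not the code in the PR. pad_idx is the (assumed) id
# of the padding token, whose positions should not contribute to the loss.
pad_idx = 1

# A lambda that fixes ignore_index so padded positions are skipped:
crit = lambda preds, targs: F.cross_entropy(preds, targs, ignore_index=pad_idx)
```

Something like `functools.partial(F.cross_entropy, ignore_index=pad_idx)` would do the same thing and might read more clearly, which is part of what I wanted feedback on.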
I’m working on a new architecture for recommendation using the fast.ai library so I can take advantage of a number of the optimizations and existing building blocks. I’m planning to submit to RecSys 2018 if I can get it finished and written up in time. The functionality and details get complex enough that I’m hoping to keep in touch with others who are digging into the library, for mutual support. I think what’s been created so far is super powerful and worth building on, but I guess the community isn’t quite as big as I thought it was. That isn’t an issue, but it means we’ll be more reliant on you to answer questions until we get up to speed.
Thanks @Matthew, I’ll definitely keep in touch if I’ve got questions. I’m mostly modding the NLP side of the library right now. Have you looked at it in detail?
@jeremy it looks like the fastai library is only designed to be used with GPUs right now. Installing on my MacBook, I got errors trying to install CUDA. I commented that out and tried to install the CPU version of PyTorch, but got other errors about the MKL library. What are the current system requirements?
If I can help make a CPU-compatible version, I’m down. Or better yet, have it be transparent between CPU and GPU, along the lines of the sketch below. Not having to always use GPU time when you’re just fiddling with a problem is really useful.
If nothing else, I think having something in the README explaining the current system requirements would be great!
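To make the CPU/GPU transparency idea concrete, here’s a minimal plain-PyTorch sketch; the model and batch are just placeholders, and nothing here is fastai-specific, it’s just the pattern I’d hope the library could follow:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 2).to(device)   # placeholder model
x = torch.randn(4, 10).to(device)     # placeholder batch
out = model(x)                        # the same code runs on either device
```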
I was wondering whether the idea of creating a new category for the fastai library is still under consideration. I’d like a place where users of and contributors to the library can have discussions separate from those about the courses.