My first blog

Creating this thread for discussion on possible topics for blogs and peer review.

I’m thinking of getting started with an idea that I had shared as a kernel. It didn’t get much traction on Kaggle, but I believe it’s worth sharing. Below is the link to the kernel. Any feedback will be appreciated.
https://www.kaggle.com/shikhar1/train-test-similarity

You should definitely check out this post by Tyler. It’ll inspire you.

You can also go through some of the blogs that were discussed in the deep learning forum:

7 Likes

Getting a 404 on the first link.

sorry…fixed now :slight_smile:

BTW there’s also a nice post from @kcturgutlu over on #part1-v2 : Thread for Blogs (Just created one for ResNet)

Thanks so much for sharing! It may be just me, but I found the math at the start made it hard to get in to, and then I didn’t really follow how you went from there to the rest of the post. Having said that, I’m not a strong mathematician, so I may not be the best person to comment on this kernel.

I have found however that outside of a university environment, you see a lot of code, and nearly no math, so possibly there’s a lot of people like me. If that’s the case, you may find a bigger audience if you spend more time explaining the context of what you’re doing, and where you use math to explain things, take us through the math more gradually and explain the relevance and meaning of each step.

1 Like

Thanks for the feedback. I also feel that the math isn’t intuitive and makes the kernel difficult to follow. I’ll keep your suggestions in mind when converting the kernel into a post.

1 Like

First blog (although it’s not public yet). I tried to write a post about the random forest interpretation techniques we all studied in class.
It might be trivial stuff for those of us who have now learnt all this in the ML class, but I thought it might be interesting for people out there.

I have attempted to explain the logic behind each technique using spreadsheet examples (not sure if that helped or not).

Any feedback will be appreciated.

1 Like

I like it @groverpr! A couple of very minor things:

  • ‘fast.ai’ should be lowercase
  • Where you say that in PDP we select a few “random” rows to change gives the wrong idea, I think. It may be better to either remove the mention of random sampling from this section (it’s optional anyway), or else make it clear that you randomly sample once at the start of the algorithm, and the averages are then taken over that fixed sample (see the sketch after this list)
  • Perhaps show a waterfall chart of tree interpreter contributions, and link to the most excellent waterfall package github repo?
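Roughly what I mean by that second point, in scikit-learn-style code (just a sketch for illustration - the helper name and defaults are made up, and I’m assuming `X` is a pandas DataFrame and `model` has a `predict` method):

```python
import numpy as np

def pdp_one_feature(model, X, feature, grid_values, sample_size=500, seed=42):
    # Draw the random sample ONCE, up front; every grid value is then
    # evaluated on this same fixed sample of rows.
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
    sample = X.iloc[idx].copy()

    averages = []
    for value in grid_values:
        sample[feature] = value                         # overwrite the feature for every sampled row
        averages.append(model.predict(sample).mean())   # average prediction over the fixed sample
    return averages
```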

Thank you for your valuable feedback.

Edited and published here –

@cpcsiszar @yyun2 - Used and referenced the waterfall package :v:

3 Likes

Great! Can you please tell me your twitter handle so I can give you credit when I share?

It’s @groverpr4. I also posted my first ever tweet :slight_smile: Thanks for the motivation, @jeremy.

Sweet!

Here is my attempt to convert my kernel into a blog post. It talks about how similar or dissimilar our test and train data are, and how we can detect that. Please provide your feedback.
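For anyone who hasn’t read it yet, the core idea is roughly this (just a sketch, not the exact code from the kernel - it assumes `train_df` and `test_df` are pandas DataFrames with the same numeric feature columns):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Label each row by its origin: 0 = train set, 1 = test set
combined = pd.concat([train_df, test_df], axis=0, ignore_index=True)
is_test = np.r_[np.zeros(len(train_df)), np.ones(len(test_df))]

# Try to predict which set each row came from.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
auc = cross_val_score(clf, combined, is_test, cv=5, scoring='roc_auc').mean()

# AUC ~ 0.5 -> train and test look alike
# AUC ~ 1.0 -> the two sets are easy to tell apart (covariate shift)
print(f"train/test separability AUC: {auc:.3f}")
```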

2 Likes

Here is my first (yet unpublished) blog post, about a useful package I created for hyper-parameter optimization.

Parfit – quick and powerful hyper-parameter optimization with visualizations

direct link to article

I hope you all enjoy the read and the package! Any and all feedback is appreciated. I will make the post live over the weekend.
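If you want the gist before clicking through: the idea is to fit one model per parameter combination and score each one against a fixed validation set. Here’s a rough sketch of that concept in plain scikit-learn (not parfit’s actual API - `X_train`, `y_train`, `X_val`, `y_val` are assumed to already exist):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ParameterGrid

grid = ParameterGrid({'min_samples_leaf': [1, 5, 10, 25],
                      'max_features': ['sqrt', 0.5, 1.0]})

results = []
for params in grid:
    model = RandomForestClassifier(n_estimators=100, n_jobs=-1, **params)
    model.fit(X_train, y_train)                                       # fit on the training set
    score = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])    # score on a fixed validation set
    results.append((score, params))

best_score, best_params = max(results, key=lambda t: t[0])
print(best_score, best_params)
```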

3 Likes

@jeremy any suggestions before I publish it?

@shik1470 it’s looking good! The thing that most stands out to me (as you can imagine!) is a lack of credit for your sources. I rather hope that at least some inspiration came from when we covered this in class - but you haven’t cited that at all. It is in everyone’s interest to cite and link as widely as you reasonably can, since every person you credit then has an incentive to share and promote your work.

Your descriptions are very clear, but I think it would be even more powerful if you took an example all the way through - show how this technique actually works in practice to improve some outcome. E.g. what does it actually show on the claims dataset you refer to?

Finally (and this is a matter of opinion, so feel free to ignore of course) I think memes detract from the credibility of a piece of writing, for at least some audiences (in particular, older audiences).

@jeremy Thanks for the suggestions. I admit it was a mistake on my part not to give due credit, and I should have. I’ll surely do that in my next draft. I’ll also check whether the sample_weight method works on the insurance dataset; it’s something I’m not sure of at this point. The methodology for checking similarity is what I wanted to highlight in the post, and the sample_weight idea was something additional. But yes, it still feels incomplete without a use case for sample weights.
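To spell out what I mean by the sample_weight idea (just a rough sketch - `combined`, `is_test`, `train_df` and `y_train` are as in the similarity check above, and `final_model` is a placeholder for any estimator whose `fit` accepts `sample_weight`):

```python
from sklearn.ensemble import RandomForestClassifier

# Train a classifier to tell train rows (0) from test rows (1), then use
# each training row's predicted P(test) as its weight when fitting the real model.
adv = RandomForestClassifier(n_estimators=100, n_jobs=-1)
adv.fit(combined, is_test)
p_test = adv.predict_proba(combined[:len(train_df)])[:, 1]

weights = p_test / p_test.mean()                 # up-weight rows that look more like the test set
final_model.fit(train_df, y_train, sample_weight=weights)
```

Using out-of-fold predictions for `p_test` would be a bit more careful, but this shows the idea.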

Great - I really look forward to seeing how it goes. I guess the insurance dataset will only be a good example if the test set there isn’t randomly chosen, and has some behavior that’s outside what’s in the training set. Do you know if that’s the case? If not, you’ll need to pick a different dataset to show off your method :slight_smile:

I checked for similarity between test and train, and they appear to be very similar… but the method of using a sample weight for each row is still applicable, as there are rows which are more similar to the test data. I’ll try it and see if it improves the score… if not, I’ll try it on another dataset… the bulldozers dataset could also be an option, as it had some covariate shift.

If they’re quite similar, then I doubt this will improve things - if you find otherwise, I’d be very interested to hear about it!