My first blog

My first blog post (although it’s not public yet). I tried to write about the random forest interpretation techniques that we all studied in class.
It might be trivial for those of us who have now learnt all this in the ML class, but I thought it might be interesting for people out there.

I have attempted to explain the logic behind things using spreadsheet examples (not sure whether that helped or not).

Any feedback will be appreciated.

1 Like

I like it @groverpr! A couple of very minor things:

  • ‘fast.ai’ should be lowercase
  • Where you say that in PDP we select a few “random” rows to change, it gives the wrong idea, I think. It might be better either to remove the mention of random sampling from this section (it’s optional anyway), or to make it clear that you randomly sample once at the start of the algorithm, and the averages are then taken over that one random sample (see the sketch after this list)
  • Perhaps show a waterfall chart of tree interpreter contributions, and link to the most excellent waterfall package github repo?
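
To make the “sample once” point concrete, here is a minimal sketch of the PDP logic (the function name, sample size, and variable names are just illustrative, not from your post):

```python
import numpy as np

def partial_dependence(model, X, feature, grid, n_sample=500):
    # Sample ONCE at the start; every average below uses this same sample.
    sample = X.sample(n=min(n_sample, len(X)), random_state=42)
    averages = []
    for value in grid:
        modified = sample.copy()
        modified[feature] = value  # set the feature to this grid value for every row
        averages.append(model.predict(modified).mean())
    return np.array(averages)
```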

Thank you for your valuable feedback.

Edited and published here –

@cpcsiszar @yyun2 - Used and referenced the waterfall package :v:
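
In case it helps anyone else, the core of the tree-interpreter waterfall looks roughly like this (a sketch, not the exact code from the post; `rf`, `row`, and `feature_names` stand in for a fitted forest, a single-row input, and the training column names):

```python
import numpy as np
import matplotlib.pyplot as plt
from treeinterpreter import treeinterpreter as ti

# rf: a fitted RandomForestRegressor; row: a single-row 2-D input;
# feature_names: the training column names (all assumptions here).
prediction, bias, contributions = ti.predict(rf, row)

values = np.concatenate([bias, contributions[0]])        # start at the bias, then add each feature
labels = ['bias'] + list(feature_names)
bottoms = np.concatenate([[0], np.cumsum(values)[:-1]])  # each bar starts where the previous ended

plt.bar(range(len(values)), values, bottom=bottoms)
plt.xticks(range(len(values)), labels, rotation=90)
plt.ylabel('contribution to prediction')
plt.tight_layout()
plt.show()
```

The waterfall package linked above wraps this plotting up more nicely; the manual version is just to show where the numbers come from.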

3 Likes

Great! Can you please tell me your Twitter handle so I can give you credit when I share?

It’s @groverpr4. I also posted my first ever tweet :slight_smile: Thanks for the motivation, @jeremy.

Sweet!

Here is my attempt to convert my kernel into a blog post. It talks about how similar or dissimilar our test and train data are, and how we can detect that. Please provide your feedback.
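
The core check, in case it saves anyone a click, is roughly this (a sketch; the function name and parameter choices are mine):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def train_test_similarity(train: pd.DataFrame, test: pd.DataFrame) -> float:
    # Label rows by origin and see how well a classifier can tell them apart.
    X = pd.concat([train, test], ignore_index=True)
    y = np.r_[np.zeros(len(train)), np.ones(len(test))]
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    # AUC near 0.5 => train and test look alike; near 1.0 => covariate shift.
    return cross_val_score(clf, X, y, cv=5, scoring='roc_auc').mean()
```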

2 Likes

Here is my first (as yet unpublished) blog post, about a useful package I created for hyper-parameter optimization.

Parfit – quick and powerful hyper-parameter optimization with visualizations

direct link to article

I hope you all enjoy the read and the package! Any and all feedback is appreciated. I will make the post live over the weekend.
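
If you just want the gist before reading: the idea is to score every point of a parameter grid against a single validation set instead of cross-validating. A minimal version in plain sklearn (this shows the concept, not parfit’s actual API) looks like:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ParameterGrid

def best_fit(param_grid, X_train, y_train, X_val, y_val):
    # Fit one model per grid point and score it on the held-out validation set.
    scores = []
    for params in ParameterGrid(param_grid):
        model = RandomForestClassifier(**params).fit(X_train, y_train)
        scores.append((roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), params))
    return max(scores, key=lambda s: s[0])  # (best score, best params)
```

Parfit additionally runs these fits in parallel and plots the scores over the grid, which is where the speed and the visualizations come from.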

3 Likes

@jeremy, any suggestions before I publish it?

@shik1470 it’s looking good! The thing that most stands out to me (as you can imagine!) is a lack of credit for your sources. I rather hope that at least some inspiration came from when we covered this in class - but you haven’t cited that at all. It is in everyone’s interest to cite and link as widely as you reasonably can, since every person you credit then has an incentive to share and promote your work.

Your descriptions are very clear, but I think it would be even more powerful if you took an example all the way through - show how this technique actually works in practice to improve some outcome. E.g. what does it actually show on the claims dataset you refer to?

Finally (and this is a matter of opinion, so feel free to ignore of course) I think memes detract from the credibility of a piece of writing, for at least some audiences (in particular, older audiences).

@jeremy Thanks for the suggestions. I admit it was a mistake on my part not to give due credit, and I should have. I’ll surely do that in my next draft. I’ll also check whether the sample_weight method works on the insurance dataset; that’s something I’m not sure of at this point. The methodology for checking similarity is what I wanted to highlight in the post, and the sample_weight idea was something additional. But yeah, it still feels incomplete without a use case for sample_weight.

Great - I really look forward to seeing how it goes. I guess the insurance dataset will only be a good example if the test set there isn’t randomly chosen, and has some behavior that’s outside what’s in the training set. Do you know if that’s the case? If not, you’ll need to pick a different dataset to show off your method :slight_smile:

I checked for similarity between test and train, and they appear to be very similar. But the method of using a sample weight for each row is still applicable, as there are rows which are more similar to the test data. I’ll try it and see if it improves the score; if not, I’ll try it on another dataset. The bulldozers dataset could also be an option, as it had some covariate shift.
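
Concretely, what I mean by a sample weight per row is something like this sketch (`clf` is assumed to be the fitted train-vs-test classifier from the similarity check, and `model` is whatever estimator is being trained):

```python
# p_test: probability that each training row "looks like" test data,
# taken from the train-vs-test classifier (clf) fitted earlier.
p_test = clf.predict_proba(X_train)[:, 1]

# Up-weight training rows that resemble the test set, then pass the
# weights to any estimator whose fit() accepts sample_weight.
weights = p_test / p_test.mean()
model.fit(X_train, y_train, sample_weight=weights)
```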

If they’re quite similar, then I doubt this will improve things - if you find otherwise, I’d be very interested to hear about it!

Sure, I’ll let you know.

Just wanted to share my first blog posts, simply to lower the bar / expectations. I wrote them before the ML course, but they do illustrate some of Jeremy’s ideas for first blogs.

It’s simply about how to get an authentication key for Microsoft’s Azure Services.
The Azure menu was not intuitive, and finding what I needed was so irritating that I decided no one should have to go through that alone. Part 2 was written because of the same frustration with the available documentation and sample code, and part 3 (which has a hat tip to @parrt) documents a process.

Neither brilliantly written nor technically state-of-the-art, but hey, they were my first blog posts :slight_smile:
(PS: I was inspired to write them after a talk with @rachel and @jeremy)

2 Likes

Hi everyone,

Jason and I have written a blog post on ‘How to make SGD Classifier perform as well as Logistic Regression using parfit’. In it, we explain what an SGD classifier is and why we might want to use one instead of logistic regression (it is much faster on large datasets). Please review it in your free time and let me know if there are any changes to be made. Thank you for your valuable time :slight_smile:
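
For anyone who wants the one-line version before reading: an SGDClassifier with log loss fits the same model as logistic regression, just via stochastic gradient descent. Something like this sketch (the hyper-parameter values are illustrative):

```python
from sklearn.linear_model import LogisticRegression, SGDClassifier

# Logistic regression solved with a full-batch solver (slow on very large data).
lr = LogisticRegression()

# The same model fit by stochastic gradient descent, a few samples at a time.
# loss='log' selects logistic regression (newer sklearn versions call it 'log_loss').
sgd = SGDClassifier(loss='log', alpha=1e-4, max_iter=1000, tol=1e-3)
```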

Here is the first draft:

1 Like

Alright, I figured I’d better join the blog world. Here is my first blog post. I decided to focus on a topic that was completely new to me, so it might need some touching up. Let me know what you think. I’d love to improve it.
Thanks!

6 Likes

Hello. Here is my attempt at writing one. Please let me know if anyone has any feedback.

3 Likes

Hey guys! Check out the blog I wrote on Medium:

Any suggestions are welcome!