I would like to be able to use the fastai library in the anaconda environment, but on my local machine, not using an instance on Paperspace.
I attempted to use the resource that you gave me, but to no avail. This could be because I wasn’t using the right command prompt. I have a Windows 10, 64-bit computer.
I had to use Git Bash to get a copy of the repo and ran python setup.py install, and I thought that did the trick inside the anaconda environment.
"No module named ..." usually means the package is not installed on your computer. When you write from fastai.imports import * you are basically telling Python to go grab the fastai library from your machine and load it up because you are about to use it. But if the package isn’t installed, you get that error. Python has multiple ways of installing packages, such as pip and conda, so you might want to look into those. Running pip install bcolz in your terminal should install the package.
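As a quick sanity check before blaming the import itself, you can ask Python whether a package is even visible on the current environment's path. A minimal sketch (the helper name is mine, not part of fastai or the stdlib):

```python
import importlib.util

def is_installed(module_name):
    """Return True if the module can be found on the current Python path."""
    return importlib.util.find_spec(module_name) is not None

# A stdlib module is always present; a made-up name is not.
print(is_installed("json"))             # True
print(is_installed("not_a_real_pkg"))   # False
```

If this returns False for bcolz or fastai inside your activated environment, the package simply isn't installed there, which matches the error you're seeing.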
Ahh, my bad, I pointed you to a GitHub link that only works out of the box if you’re using a Linux machine.
And just a week ago I installed fastai on my girlfriend’s laptop, which also runs Windows 10, so she could follow the ML videos and practice locally, and it worked. Don’t give up yet, lots of people have done what you want.
I followed this link: How to set up Windows 10 for fast.ai
I got stuck on the first option but succeeded with the second.
However, the thread owner said it’s already obsolete, so you should also check out this link: Howto: installation on Windows
Just follow the link above; you need to use the Anaconda Prompt to execute everything.
bcolz is one of the packages installed along the way, so please follow those two links until you succeed, using the Anaconda Prompt.
And let us know if you still encounter a problem like "no module named bcolz" or similar.
If you follow the steps correctly, that error should not appear at all.
I made the mistake of not running as an admin the first time through. The first time I did this, the conda env update did work and didn’t get stuck, but as an admin it took a little longer, so be prepared to wait a couple of minutes and don’t panic.
I have been trying all kinds of combinations, running as an admin and as a non-admin, to see if that was the issue. This is what I did, and I still got the same result: no module named ‘fastai’. Should my path have Anaconda3?
At this point I can get to the Jupyter notebook and the library works, but the bash commands aren’t working.
I believe this is because the Jupyter notebook is running PowerShell, and this could be why it isn’t working, so I may need to use PowerShell commands in the terminal to get the bulldozer data. I will work on finding the Bash-to-PowerShell equivalents and see if that is the case. I would of course love to use the Bash commands if that is possible, but I don’t know how to go about doing that.
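One portable workaround, in case it helps: skip the shell entirely and do the mkdir/unzip/ls steps in Python, which behaves the same under PowerShell and Bash. A sketch using only the standard library (the directory and file names here are made up, not the actual Kaggle paths):

```python
# Portable stand-ins for common Bash commands, using Python's stdlib;
# these run identically on Windows and Linux, inside or outside Jupyter.
import tempfile
import zipfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# mkdir -p data/bulldozers
data_dir = tmp / "data" / "bulldozers"
data_dir.mkdir(parents=True, exist_ok=True)

# Create a tiny archive as a stand-in for the downloaded dataset,
# then extract it (the equivalent of unzip).
archive = tmp / "bulldozers.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("Train.csv", "SalesID,SalePrice\n1,66000\n")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(data_dir)

# ls data/bulldozers
print([p.name for p in data_dir.iterdir()])  # ['Train.csv']
```

These lines also work directly in a notebook cell, so you don't depend on which shell Jupyter happens to spawn for ! commands.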
AWESOME!! Got it working. It’s very slow compared to Paperspace, but this was very much worth doing and was successful.
How can we get negative values in our predictions of the sale prices even though the RF hasn’t been trained on any? (As far as I have read about RFs, they can only predict values similar to those they have been trained on.)
Sorry, I have no clue what it is that you linked to. I remember seeing this graph in the ML lectures but don’t recall what it was used for.
The way random forests work is that you present them training examples and a target variable - they will not work otherwise (they use the target variable during training, for the splitting of branches).
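To illustrate why a regression forest's predictions can't wander outside its training targets, here is a toy sklearn sketch (synthetic data, made-up numbers): each prediction is an average of leaf means, so it stays within [min(y), max(y)], and can only go negative if some training targets were negative.

```python
# A random forest regressor averages training targets in its leaves,
# so its predictions are bounded by the range of those targets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 50 + 5 * X.ravel()  # all targets positive, roughly 50..100

rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

# Even for inputs far outside the training range, predictions are
# clamped to the range of the training targets -- never negative here.
preds = rf.predict([[-100.0], [5.0], [1000.0]])
print(preds.min() >= y.min(), preds.max() <= y.max())  # True True
```

So if you see negative sale-price predictions, it's worth checking whether the target column itself contained negative (or log-transformed) values during training.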
Sorry, I do not recall the discussion of partial dependence plots from the ML course. It’s something I have not yet managed to study to the extent that I would like.
It’s like you replace, say, a particular year value in all the rows (make it the same for every row in the dataset) and then see what sale price the RF predicts.
Doing this for different years gives us the partial dependence.
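The procedure described above can be sketched by hand with sklearn (toy synthetic data; the partial_dependence helper is my own illustration, not the sklearn function of the same name):

```python
# Manual partial dependence: force one feature to a fixed value in every
# row, predict, and average -- then repeat for a grid of values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
year = rng.randint(2000, 2012, size=300)
size = rng.uniform(50, 150, size=300)
X = np.column_stack([year, size]).astype(float)
y = 1000 * (year - 2000) + 20 * size  # toy sale price

rf = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)

def partial_dependence(model, X, feature_idx, values):
    """Average prediction when column `feature_idx` is forced to each value."""
    out = []
    for v in values:
        Xv = X.copy()
        Xv[:, feature_idx] = v   # same value in all rows, as described above
        out.append(model.predict(Xv).mean())
    return out

pd_curve = partial_dependence(rf, X, 0, [2001, 2005, 2010])
print(pd_curve)  # average predicted price rises with the forced year
```

Plotting pd_curve against the grid of years gives the partial dependence plot discussed in the lecture.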
So I was watching lessons 5 and 6, and I saw Jeremy recommend splitting the data into training and validation sets manually when the data has temporal ordering. TIL there’s a cross-validator in sklearn called TimeSeriesSplit.
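A minimal sketch of how TimeSeriesSplit behaves (synthetic indices, just to show the fold shapes): each validation fold comes strictly after its training fold, so there is no temporal leakage.

```python
# TimeSeriesSplit yields expanding training windows followed by a
# later validation window, preserving temporal order.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(10).reshape(-1, 1)  # 10 samples, already in time order
tscv = TimeSeriesSplit(n_splits=3)

for train_idx, valid_idx in tscv.split(X):
    # every validation index comes after every training index
    print(train_idx, valid_idx)
```

Note it gives expanding training windows by default, which differs slightly from the single fixed train/valid cut Jeremy uses, but the no-future-data-in-training principle is the same.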
Can you replicate the ggplot() idea that Jeremy applied to his data on your dataset?
That may help me interpret your PDP plot. My first guess is that you’ll find an error in that plot. Let’s see.
I had done that also.
There was a sharp dip in the value when the year was around 2009-2011 (though it was not negative at all; well, that depends on the random splitting then). That makes sense, as this dataset is basically the Singapore housing re-sale data - the market did crash, so the fall in prices was justified…