The kernel appears to have died. It will restart automatically

Hey everybody, I’m currently doing lesson 1 and I get this error:
The kernel appears to have died. It will restart automatically

Every time I run this code:
df, y, nas = proc_df(df_raw, 'SalePrice')

Can’t find what’s wrong :confused:

Can you link to a public notebook showing the error? This sounds like a problem with the underlying system and doesn’t have anything to do with the code, like it’s running out of memory, HD space, or something.
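If you want to rule that out quickly, something like this prints the free RAM and disk space from inside a notebook cell (a rough sketch, standard library only; the sysconf values are Linux-specific, which should be fine on a Paperspace machine):

import os, shutil

# Free physical RAM via sysconf (Linux-only) and free disk space on /
page_size = os.sysconf('SC_PAGE_SIZE')
free_ram_gb = os.sysconf('SC_AVPHYS_PAGES') * page_size / 1024**3
disk_free_gb = shutil.disk_usage('/').free / 1024**3
print(f'free RAM: {free_ram_gb:.1f} GB, free disk: {disk_free_gb:.1f} GB')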

How do I link to a public notebook? Sorry, I’m not too familiar with Jupyter Notebooks…

How do you access your notebook? What service is it running on? Is it your own computer?

It’s running on Paperspace.

This is the first time I’ve had this issue, and I think it has something to do with picking the custom Paperspace fast.ai solution.
Before, I’d set all the parameters manually (and I never had this problem). So when you said the issue “sounds like a problem with the underlying system” that’s the first thing that came to mind.
Thoughts? Or am I completely off track?

@richardreeze Hi, I am also facing a similar issue, and I am also using Paperspace. Kindly reply if you found a solution. Thanks.

As you are paying Paperspace to provide the kernel, this is within their wheelhouse for support, assuming they offer any.

I face the same problem. The terminal says:
[E 03:41:36.015 NotebookApp] KernelRestarter: restart callback <bound method ZMQChannelsHandler.on_kernel_restarted of ZMQChannelsHandler(63a69f7e-79ea-4b34-bbb9-aa159461fe55)> failed
Traceback (most recent call last):
  File "/home/paperspace/anaconda3/envs/fastai/lib/python3.6/site-packages/jupyter_client/restarter.py", line 86, in _fire_callbacks
    callback()
  File "/home/paperspace/anaconda3/envs/fastai/lib/python3.6/site-packages/notebook/services/kernels/handlers.py", line 473, in on_kernel_restarted
    self._send_status_message('restarting')
  File "/home/paperspace/anaconda3/envs/fastai/lib/python3.6/site-packages/notebook/services/kernels/handlers.py", line 469, in _send_status_message
    self.write_message(json.dumps(msg, default=date_default))
  File "/home/paperspace/anaconda3/envs/fastai/lib/python3.6/site-packages/tornado/websocket.py", line 249, in write_message
    raise WebSocketClosedError()
tornado.websocket.WebSocketClosedError

I too am having this issue and haven’t yet figured out a solution. I had signed up for Paperspace for the Deep Learning course; now, starting the Machine Learning course, I’m stuck here on Lesson 1. Wondering if I need to upgrade?

Ok, so I ran the code without putting the df into feather and so far it’s working. So I skipped the following lines:
os.makedirs('tmp', exist_ok=True)
df_raw.to_feather('tmp/bulldozers-raw')

and
df_raw = pd.read_feather('tmp/bulldozers-raw')

I am sure there are implications for doing it this way, which others who know more can weigh in on.
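For reference, instead of the feather cache I just re-read the raw CSV at the start of each session; roughly like this, assuming the path and parse settings used earlier in lesson1-rf.ipynb (adjust if yours differ):

import pandas as pd

# Re-read the raw CSV each session instead of the feather cache
# (slower to load, but skips the tmp/ feather write entirely)
df_raw = pd.read_csv('data/bulldozers/Train.csv',
                     low_memory=False, parse_dates=['saledate'])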

Hi, I had the same situation as @sammy500:

  • paperspace setup, worked on DL1 first, working through lesson1-rf.ipynb of the fastai ml course
  • kernel kept dying at df, y, nas = proc_df(df_raw, 'SalePrice')

As @eof pointed out, this was not a code issue, so I created a new conda environment and a new ipykernel and got it to work. These were the steps:

Environment and kernel

Python 3.7 doesn’t work yet with sklearn-pandas, so we need Python 3.6:

conda create --name p36-fastai-ml python=3.6
source activate p36-fastai-ml

Prepare IPykernel for jupyter

conda install ipykernel
python -m ipykernel install --user --name p36-fastai-ml --display-name "Python 3.6 (fastai-ml)"

Restart Jupyter to see the new kernel, then click Kernel (top menu) and select it.
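To double-check that the notebook is really running on the new kernel, you can print the interpreter path and version from a cell:

import sys
print(sys.executable)  # should point inside the p36-fastai-ml environment
print(sys.version)     # should report 3.6.x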

Install required packages

To install all conda packages from a requirements.txt at once, you can use a shell script; see here: https://gist.github.com/luiscape/19d2d73a8c7b59411a2fb73a697f5ed4

or you can do it one by one:

Conda

conda install matplotlib
conda install pillow
conda install bcolz
conda install scipy
conda install -c menpo opencv
conda install seaborn
conda install python-graphviz
conda install -c anaconda graphviz
conda install -c conda-forge pandas-summary
conda install ipywidgets
conda install tqdm
conda install pyarrow -c conda-forge

I used pip for the following packages as conda doesn’t have them at this time:

pip install sklearn_pandas
pip install isoweek
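As a quick sanity check, you can try importing everything from a notebook cell; note that some import names differ from the package names above:

# Import names that differ from the package names: pillow imports as PIL,
# opencv as cv2, python-graphviz as graphviz, pandas-summary as pandas_summary
import matplotlib, PIL, bcolz, scipy, cv2, seaborn, graphviz
import pandas_summary, ipywidgets, tqdm, pyarrow
import sklearn_pandas, isoweek
print('all imports OK')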

You will also need the graphviz binaries on your system in order to get the output displayed properly:

sudo apt install graphviz
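You can verify that the dot binary is found with a minimal render test from Python (this writes a throwaway graphviz_test.png and raises ExecutableNotFound if the binary is missing):

from graphviz import Digraph

# Render a trivial graph; this fails with ExecutableNotFound
# if the system 'dot' binary is not installed
g = Digraph()
g.edge('tree', 'leaf')
g.render('graphviz_test', format='png', cleanup=True)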

You may need to restart your notebook again so that the kernel picks up the packages from the environment (I’m not sure about this, though).
After that, I could run all the code in the notebook lesson1-rf.ipynb. I already had the data.

Hope this helps


Hi Michael, thanks for this!
I came to Fast.ai with the idea of doing the DL course. However, I started watching the videos of the ML course and now I’m very excited about it. So I’m looking at the possibility of practicing the ML lessons on Paperspace. I’ve seen two threads about problems on Paperspace, so my question is: have you been able to continue with the following lessons without problems using just this configuration? Or have you had to set up a new configuration for each subsequent lesson?
Sorry for my English… and thanks in advance!

Hi @dRiv ,
I have not tried running code of the newer courses in the environment for the ML course.

I’d suggest setting up your own environment for each of the classes, though, just to be on the safe side. I’m not sure if the ML code is still being maintained (probably not), and for newer classes you may want to use newer versions of libraries that no longer work with the ML class.

You can also use the ML environment above as a starting point for a new environment by making a clone, and then update it with whatever you need:

conda create --name myclone --clone myenv

You can find more information about managing environments here: https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html

Hope this helps,

Michael

I have had this error multiple times before, and most likely it is because you are running out of memory, which means you need to upgrade your instance. Try switching to a more powerful instance; it works for me every time.
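If upgrading isn’t an option, the old fastai proc_df also accepts a subset parameter (if I remember the signature correctly), so you can process a random sample first to confirm that memory is the culprit; the sample size below is arbitrary:

# Process only a random sample of rows, which is much lighter on RAM
# (subset is a parameter of the old fastai proc_df; 30000 is arbitrary)
df_trn, y_trn, nas = proc_df(df_raw, 'SalePrice', subset=30000)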