I highly recommend changing to verbose=2, which updates every epoch rather than every batch. There can be issues with Keras where it floods stdout with updates to the point where the notebook freezes. The model is still training in the background, but you often won't even get the update that it's finished.
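Something like this is what I mean (just a toy sketch, the model and data here are placeholders rather than the lesson's actual code):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data purely to demonstrate the verbose setting
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000,))

model = Sequential([Dense(32, activation='relu', input_shape=(20,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# verbose=2 prints one summary line per epoch instead of redrawing a
# per-batch progress bar, so stdout doesn't get flooded in the notebook.
model.fit(X, y, epochs=5, batch_size=64, verbose=2)
```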
If you do want per-batch updates on your model in the notebook, there is a package called tqdm that overrides the default progress bar. I'd recommend checking out this issue: https://github.com/fchollet/keras/issues/4880
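If you go that route, the usage (assuming the keras-tqdm wrapper that came out of that issue; treat the exact import and callback name as my assumption) is roughly:

```python
from keras_tqdm import TQDMNotebookCallback

# Reusing model/X/y from the snippet above. verbose=0 turns off Keras's own
# bar so it doesn't fight with the tqdm notebook widget.
model.fit(X, y, epochs=5, batch_size=64,
          verbose=0,
          callbacks=[TQDMNotebookCallback()])
```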
I'd rather say this looks like running out of memory, either GPU or CPU RAM. If you run out of CPU RAM you typically won't notice unless you were monitoring with top and watching free memory go down. At least for me this is the most frequent reason for dying Python kernels. Usually the rest of the system stays fine, which is good. I sometimes have situations where one model is running fine on the GPU while another notebook runs out of CPU RAM; the latter dies, the former stays active. That's rather benign.
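One way to catch the CPU-RAM case before the kernel dies is to log free memory from inside training; here's a rough sketch using psutil (not something from the course notebooks), with nvidia-smi kept open for the GPU side:

```python
import psutil
from keras.callbacks import Callback

class FreeMemLogger(Callback):
    """Print available system RAM at the end of every epoch."""
    def on_epoch_end(self, epoch, logs=None):
        avail_gb = psutil.virtual_memory().available / 1024 ** 3
        print('epoch %d: %.1f GB RAM free' % (epoch, avail_gb))

# e.g. model.fit(X, y, epochs=5, verbose=2, callbacks=[FreeMemLogger()])
```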
@benediktschifferer
I have my own workstation, and I encountered the same problem when training the lesson 1 VGG16 model on my local machine: the Jupyter notebook would completely take down my GPU. The issue was caused by the Jupyter notebook progress bar. I applied @simoneva's solution and it worked for me. I think the reason you cannot view the notebook is that it is crashing your GPU.
What is the impact of lowering the batch size on the rate at which the learner converges to a good fit? If we decrease the batch size, should we increase the number of cycles and epochs?
What about the trade-off in the sz that we pass into get_data?