So far I've been looking for a solution to this problem with Dask (http://dask.pydata.org).
On my Paperspace machine the merging itself is no problem, but once everything is merged I can't write the result to disk without hitting a MemoryError.
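For context, here is a minimal sketch of what I'm attempting (file names and the join key are placeholders; the real merge chains several of the Home Credit tables):

```python
import dask.dataframe as dd

# Placeholder inputs -- the actual pipeline reads several tables.
app = dd.read_csv('application_train.csv')
bureau = dd.read_csv('bureau.csv')

# Lazy merge; nothing is computed yet.
merged = app.merge(bureau, on='SK_ID_CURR', how='left')

# Writing out the result is where the MemoryError hits.
# Materialising via compute() needs the whole frame in RAM:
merged.compute().to_csv('merged.csv', index=False)

# Dask's own writers stream one partition at a time and should
# be gentler on memory:
# merged.to_csv('merged-*.csv', index=False)
# merged.to_parquet('merged_parquet/')
```

As far as I understand, a merge on a plain column triggers a shuffle when the result is actually computed, so even the partition-wise writers can run out of memory on a small machine.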
So far I haven't found a solution on the net. I guess if I want to use the Home Credit Default Risk dataset to explore the "Rossmann fast.ai" approach, I have to either switch to a more powerful AWS instance (as suggested in "Most effective ways to merge 'big data' on a single machine"), use another, hopefully smaller, dataset like https://www.kaggle.com/c/favorita-grocery-sales-forecasting/data, or learn SQL.
Any suggestions?
Is the Paperspace machine really my bottleneck?