There is an older thread about this issue:
Maybe using dask instead of pandas would already solve your problem?
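Something along these lines (just a minimal sketch; the file pattern and the column names are made up):

```python
import dask.dataframe as dd

# dask reads lazily and processes the data in partitions,
# so the full dataset never has to fit in memory at once.
df = dd.read_csv("data-*.csv")  # hypothetical file pattern

# Operations only build a task graph; nothing is loaded yet.
result = df.groupby("key")["value"].mean()

# compute() triggers the actual out-of-core execution.
print(result.compute())
```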
Or chunking with pandas: http://pandas-docs.github.io/pandas-docs-travis/io.html#iterating-through-files-chunk-by-chunk?
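For example (again only a sketch; file name, chunk size and the aggregation are hypothetical, adapt them to your data):

```python
import pandas as pd

totals = {}
# With chunksize, read_csv returns an iterator of DataFrames,
# so only one chunk is held in memory at a time.
for chunk in pd.read_csv("big.csv", chunksize=100_000):
    # Aggregate each chunk, then combine the partial results.
    partial = chunk.groupby("key")["value"].sum()
    for key, val in partial.items():
        totals[key] = totals.get(key, 0) + val

print(totals)
```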
Once I had the problem that, after merging all the data, I got an out-of-memory error while writing to a compressed file format, because the compression itself needed additional memory. In that case the feather format solved the problem (I guess it writes the in-memory data directly to the file without doing much extra work).
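Roughly like this (a sketch; the path is made up, and note that to_feather needs pyarrow installed and, at least in older pandas versions, a default RangeIndex):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b"], "value": [1, 2]})

# Feather writes the in-memory columns out more or less directly,
# without the extra buffers that compressed writers
# (e.g. to_csv(..., compression=...)) may allocate.
df.to_feather("merged.feather")  # hypothetical path

# Reading it back is equally direct.
df2 = pd.read_feather("merged.feather")
```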
Hope something helps!
Best regards
Michael