'Deep Visualisation Toolkit' ImportError problem

Hi, I’ve been trying to install the ‘Deep Visualisation Toolkit’ (http://yosinski.com/deepvis),
as mentioned in lesson 3,
on my Ubuntu box. I’m nearly there, but not quite…

I’m dual booting my PC into Ubuntu 16.04 LTS.
I downloaded the fast.ai GitHub materials,
and successfully ran ‘install-gpu.sh’.
I can run the notebooks, train on the GPU, talk to Kaggle, etc, and all seems to work.

For the Deep Vis. Toolkit:
I tried to install the modified Caffe based on the descriptions in the toolkit’s documentation and at
http://caffe.berkeleyvision.org/install_apt.html
I seem to have successfully managed to build Caffe,
as it builds and passes all self-tests (‘make test’, then ‘make runtest’).

If I try to run the visualisation toolkit, I get ImportErrors.
$ ./run_toolbox.py
ImportError: /home/steve/anaconda2/lib/python2.7/site-packages/…/…/libstdc++.so.6: version `GLIBCXX_3.4.21’ not found
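(For anyone hitting the same error: you can check which GLIBCXX versions a given libstdc++.so.6 actually carries. A quick sketch below — the path is a placeholder, not my real one, and `strings libstdc++.so.6 | grep GLIBCXX` from the shell does the same job.)

```python
import re

def glibcxx_versions(libpath):
    """Scan a shared library's bytes for the GLIBCXX version strings it carries."""
    with open(libpath, "rb") as f:
        data = f.read()
    found = {m.decode() for m in re.findall(rb"GLIBCXX_\d+(?:\.\d+)*", data)}
    return sorted(found)

# Hypothetical path -- point this at the copy of libstdc++.so.6 named in the
# traceback (or the system one) and look for GLIBCXX_3.4.21 in the output:
# print(glibcxx_versions("/path/to/libstdc++.so.6"))
```

If GLIBCXX_3.4.21 is missing from Anaconda’s copy but present in the system one, the library being picked up at import time is too old.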

Some Googling suggested that the fix was
$ conda install libgcc
This gave:
The following packages will be UPDATED:
anaconda: 4.4.0-np112py27_0 --> custom-py27_0
libgcc: 4.8.5-2 --> 5.2.0-0

Which allowed it to get a little further:
$ ./run_toolbox.py
ImportError: /home/steve/git/python/caffe/…/…/build/lib/libcaffe.so.1.0.0-rc3: undefined symbol: _ZN2cv8imencodeERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_11_InputArrayERSt6vectorIhSaIhEERKSB_IiSaIiEE
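(Demangling that symbol shows what Caffe is actually failing to find. A small ctypes sketch — this assumes Linux/GCC, and `c++filt` on the command line does the same thing:)

```python
import ctypes
import ctypes.util

def demangle(symbol):
    """Demangle a C++ symbol via libstdc++'s __cxa_demangle (assumes Linux/GCC)."""
    lib = ctypes.CDLL(ctypes.util.find_library("stdc++") or "libstdc++.so.6")
    cxa_demangle = getattr(lib, "__cxa_demangle")
    cxa_demangle.restype = ctypes.c_void_p  # malloc'd char*; freed below
    status = ctypes.c_int()
    ptr = cxa_demangle(symbol.encode(), None, None, ctypes.byref(status))
    if status.value != 0 or not ptr:
        return None
    name = ctypes.cast(ptr, ctypes.c_char_p).value.decode()
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
    libc.free(ctypes.c_void_p(ptr))
    return name

mangled = ("_ZN2cv8imencodeERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"
           "RKNS_11_InputArrayERSt6vectorIhSaIhEERKSB_IiSaIiEE")
print(demangle(mangled))
```

It demangles to a `cv::imencode` overload taking `std::__cxx11` strings — the `__cxx11` part means Caffe was built expecting GCC 5’s new C++ ABI, while the OpenCV it finds at runtime apparently only exports the old-ABI version.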

More Googling suggested:
$ conda update anaconda
Which said
The following packages will be UPDATED:
anaconda: custom-py27_0 --> 4.4.0-np112py27_0
The following packages will be DOWNGRADED:
libgcc: 5.2.0-0 --> 4.8.5-2

This then got me back to the original GLIBCXX_3.4.21 error.

So, I’m guessing that the compiler/libraries I’m using to build Caffe aren’t the same as those used by Anaconda/Python, and that this is why I’m getting link trouble?

I’m not sure what to do next. I’m not familiar enough with Linux/Python/Anaconda to know where to look, and I don’t want to risk breaking my working Anaconda/Jupyter fast.ai setup.

Anyone have any advice or insights?

Thanks - Steve

(P.S. Loving the course!)

There are some Dockerfiles that you can use to experiment with the system; see if that is useful. A quick Google search found this:

Ok, thanks for the suggestion - that’s worth a try.

Hi,

I’m also interested in testing this visualisation tool.

Did you manage to make it work on your local install? (We have the same setup.)

Hi,
Sorry - no, I didn’t. Then I got distracted by other stuff…

I managed to get the tool working using Docker.
I encountered two problems, but fixed them; details here:

As I stated above, the tool works nicely with their pre-computed models and images, but it’s not at all clear how we can load our own model into the tool. They seem to use the kind of structure below to define their model and weights, but I don’t know whether we can produce this info from the models we create with the fastai lesson 1 code. If anyone knows, please let me know here.

name: "CaffeNet"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227
force_backward: true
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1"
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm1"
  type: LRN
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layers {
  name: "relu2"
  type: RELU
  bottom: "conv2"
  top: "conv2"
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm2"
  type: LRN
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layers {
  name: "relu3"
  type: RELU
  bottom: "conv3"
  top: "conv3"
}
layers {
  name: "conv4"
  type: CONVOLUTION
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layers {
  name: "relu4"
  type: RELU
  bottom: "conv4"
  top: "conv4"
}
layers {
  name: "conv5"
  type: CONVOLUTION
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layers {
  name: "relu5"
  type: RELU
  bottom: "conv5"
  top: "conv5"
}
layers {
  name: "pool5"
  type: POOLING
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layers {
  name: "relu6"
  type: RELU
  bottom: "fc6"
  top: "fc6"
}
layers {
  name: "drop6"
  type: DROPOUT
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc7"
  type: INNER_PRODUCT
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layers {
  name: "relu7"
  type: RELU
  bottom: "fc7"
  top: "fc7"
}
layers {
  name: "drop7"
  type: DROPOUT
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc8"
  type: INNER_PRODUCT
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 1000
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}
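As a rough illustration of what a converted model would have to reproduce, you can pull the name/type pairs out of those old-style `layers` blocks. This is just a regex sketch over a trimmed-down sample, not a real protobuf parse:

```python
import re

# Trimmed-down sample of the old-style Caffe prototxt format shown above.
SAMPLE = '''
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
}
layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1"
}
'''

def layer_summary(prototxt_text):
    """List (name, type) pairs from old-style Caffe 'layers' blocks."""
    return re.findall(r'name:\s*"([^"]+)"\s+type:\s*(\w+)', prototxt_text)

print(layer_summary(SAMPLE))
```

For a proper parse you’d use Caffe’s own protobuf definitions, but this is enough to see the layer names and types the toolbox’s settings refer to.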