Transfer Learning from VGG - image/object classification to sentiment or style

I am trying to determine to what extent transfer learning/fine-tuning can be used in this scenario:
taking the VGG16 model and fine-tuning/adapting it to a sentiment or even style task.

Intuitively, the weights/layers of VGG16 seek to separate out images of different types as we progress through the network; for example, cats and dogs become more distinct at each successive layer.
However, for style or sentiment, objects of different classes (e.g. humans and scenery) could share the same sentiment/style, and objects of the same class could have different sentiments/styles,
so as we progress through the network, we should progressively learn sentiment/style features instead.

Could transfer learning/fine-tuning be used here, at least for the low-level features (perhaps after the 2nd or 3rd convolutional layer)? Or would it make more sense to train from scratch, assuming the VGG16 architecture could still work?
The dataset involved is quite small (30k images; 20k for training, over 5 categories).

I found this paper, which uses a model (not VGG16, but the conclusion should still apply) pretrained on ImageNet to predict positive/negative sentiment. They concluded that “despite being pre-trained for a completely different task, the AlexNet-styled architecture clearly outperforms the custom architecture.” So I think it would definitely be better to start out with a pretrained VGG16 model, especially since the dataset is so small.

Thanks Matthew - this is very useful.
I will be attempting Jeremy’s approach from lesson 3: beginning with fine-tuning just the final softmax layer, then progressively unfreezing earlier layers, starting with the FC layers and then the top conv layers, to determine the best configuration.