Unsupervised learning

Hi All,

I am attempting to do the following:
Take all the images from MNIST.
Put all of them in one folder (so there are no labels).
Run k-means (or another unsupervised learning algorithm) on them.
It should spit out 10 “buckets”.

Does it make sense?

I am following this: http://stamfordresearch.com/k-means-clustering-in-python/

I swapped the iris data set for MNIST but am having issues getting it to run. I can figure those out, but I want to understand whether this even makes sense.
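In case it helps, the clustering step itself is only a few lines. Here is a rough numpy-only sketch of the idea (random arrays stand in for the flattened MNIST images, and the k-means loop is hand-rolled rather than scikit-learn's):

```python
import numpy as np

def kmeans(X, k=10, iters=20, seed=0):
    """Plain k-means: assign each row of X to one of k clusters."""
    rng = np.random.RandomState(seed)
    # start the centers at k randomly chosen samples
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # squared distance from every sample to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# random stand-in for flattened 28x28 MNIST images
X = np.random.RandomState(1).rand(200, 784).astype("float32")
labels = kmeans(X, k=10)
print(labels.shape)   # one bucket id (0-9) per image
```

On real MNIST pixels this runs fine mechanically; whether the 10 buckets line up with the 10 digits is a separate question.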

2 Likes

Yeah that makes sense. But you need to create some features first - probably just raw pixels don’t have enough structure. The easiest way to do this is to use an autoencoder: https://blog.keras.io/building-autoencoders-in-keras.html . Then take one of the layers from the middle and cluster using those features.
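To make the "cluster on a middle layer" idea concrete without any Keras machinery, here is a toy sketch: a purely linear autoencoder (64 -> 8 -> 64) trained with plain gradient descent on random stand-in data. Its 8-dimensional middle layer is the kind of feature vector you would then feed to k-means:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 64).astype("float32")        # stand-in for flattened images

# encoder and decoder weights: 64 -> 8 -> 64
W_enc = rng.randn(64, 8).astype("float32") * 0.1
W_dec = rng.randn(8, 64).astype("float32") * 0.1
lr = 0.01

losses = []
for _ in range(300):
    H = X @ W_enc                  # the "middle layer" features
    X_hat = H @ W_dec              # reconstruction
    err = X_hat - X
    losses.append((err ** 2).mean())
    # gradient steps on mean-squared reconstruction error
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

features = X @ W_enc               # cluster these instead of raw pixels
print(features.shape, losses[0], losses[-1])
```

A real convolutional autoencoder (like the one in the Keras blog post) does the same thing, just with a much better encoder.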

3 Likes

I am trying this autoencoder out. At the last step, where I call fit, I get this error.

autoencoder.fit(x_train, x_train,
                nb_epoch=100,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

Exception: Error when checking model target: expected convolution2d_10 to have shape (None, 1, 8, 8) but got array with shape (60000, 1, 28, 28)

So I printed my model. The first layer is:

input_3 (InputLayer) (None, 1, 28, 28) 0

So it looks like it's expecting None in the first dimension.
What is the right way to fix this, given that the input into the model should be variable? Where did I set the input to None?

So the issue was caused by this line:

x = Convolution2D(16, 3, 3, activation='relu')(x)

I changed it to:

x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(x)

Without border_mode='same', that layer shrank the feature maps slightly, so the decoder's output didn't add back up to 28×28.
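For anyone hitting the same error: the None in (None, 1, 28, 28) is just the variable batch size, so that part is fine. The real issue is padding: in Keras 1.x the default border_mode is 'valid', so each stride-1 3x3 convolution shrinks the feature maps by 2 pixels, and the decoder can no longer rebuild a 28x28 output. A quick size check (plain arithmetic, no Keras needed):

```python
def conv_out(size, kernel, border_mode="valid"):
    # output width/height of a stride-1 Convolution2D layer
    return size if border_mode == "same" else size - kernel + 1

s_valid = s_same = 28
for _ in range(3):                 # three stacked 3x3 convolutions
    s_valid = conv_out(s_valid, 3, "valid")
    s_same = conv_out(s_same, 3, "same")
print(s_valid, s_same)             # 22 28
```

With border_mode='same' every layer preserves the 28x28 spatial size, which is why adding it fixes the target-shape mismatch.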

I am able to fit now and with 50 epochs the loss is going down.

Successful!

2 Likes

Now I want to run the same model on a different data set of face images:
http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
The data set has 200K images.

I am thinking of splitting them randomly into our directory structure, with 40K in train, 20K in test, 1K in sample/train, and 50 in sample/test.

  • I will then resize them to be 176 by 176.
    What is the best way to load all these files into memory, like I did with mnist.load_data()? Probably something like this:

image = Image.open(os.path.join(root, dirname, file))
print("Creating numpy representation of image %s" % file)
resized = image.resize((176, 176), Image.NEAREST)
resized.load()
data = np.asarray(resized, dtype="uint8")
print(data.shape)
master_dataset.append(data)

1 Like

Yay @garima.agarwal congrats! Love this independent project idea :slight_smile:

You can use load_data from my utils.py to suck them all in (put them in a subdirectory called ‘unknown’ so that flow_from_directory can find them). You can modify load_data (or create a 2nd version of it called load_data_176) to make target_size (176,176).

(The approach you wrote looks fine too - but using the keras tooling can help avoid bugs, and also make it easier to add data augmentation later)

@jeremy I used a data set of faces to extend my unsupervised learning experiment: 40 people and 450 images total.

I moved about 30 images to the test set and trained on the rest. I was getting about 50% loss in the beginning, so I changed the optimizer to sgd and the loss to mse, and after running about 80 epochs the loss came down to about 1%.

input_img = Input(shape=(3, 174, 174))


x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
encoded = MaxPooling2D((2, 2), border_mode='same')(x)

# at this point the representation is (8, 22, 22) i.e. 3872-dimensional

x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(8, 2, 2, activation='relu', border_mode='same')(x)
x = UpSampling2D((2, 2))(x)
x = Convolution2D(16, 2, 2, activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Convolution2D(3, 3, 3, activation='sigmoid', border_mode='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='sgd', loss='mean_squared_error')
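One thing worth double-checking in the architecture above: with border_mode='same', MaxPooling2D rounds odd sizes up, so for a 174x174 input the bottleneck is not the (8, 4, 4) shape from the 28x28 MNIST version of this network. Plain arithmetic to verify (no Keras needed):

```python
import math

def pool_out(size, pool=2):
    # MaxPooling2D with border_mode='same' rounds up when size is odd
    return math.ceil(size / pool)

s = 174
for _ in range(3):                  # three 2x2 poolings in the encoder
    s = pool_out(s)
print(s)                            # 174 -> 87 -> 44 -> 22
```

So `encoded` here is (8, 22, 22), i.e. 3872-dimensional, and the decoder's mix of 'same' and 'valid' layers maps 22 back up to 174.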

I couldn't draw the images for some reason. They show up black, and I am not sure why.

decoded_imgs = autoencoder.predict(tst_data)

n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original (subplot indices start at 1)
    ax = plt.subplot(2, n, i + 1)
    newdata = np.reshape(tst_data[i], (len(tst_data[i]), 174, 174))
    newdata = np.rollaxis(newdata, 0, 3)   # channels-first -> channels-last

    plt.imshow(newdata)
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    decdata = np.reshape(decoded_imgs[i], (len(decoded_imgs[i]), 174, 174))
    decdata = np.rollaxis(decdata, 0, 3)
    plt.imshow(decdata)
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

I am not sure if I am making some major mistakes… the data set seems too small to give such accurate results…
What do you think?

Sounds exciting - I hope those results are indeed correct! Could you please put the whole code in a gist? That’ll make it more clear how you’ve created the train/test split, which will be important in answering your question about the validity of the result.

Added the gist here.

The image set I used was from this link.

Look forward to your thoughts and comments.

I have a feeling that this result is too good to be true and what I did is too simple to be right :slight_smile:

It looks pretty reasonable to me… :slight_smile: Since you're using 'mv', I can't see how your test set could be giving an incorrect result.

I think your images aren't printing because you need to multiply your arrays by 255, since you divided by 255 earlier. You may also need to cast them to np.uint8.

Pretty cool! Also an interesting way of adding layers… I have been following Jeremy's way of doing it with a list, but you seem to be doing it a bit differently. What is that data structure? It looked like a tuple to me at first, but this is new to me. Help me wrap my head around it!

Here you go!:

https://keras.io/getting-started/functional-api-guide/
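To spell out what the functional API guide says: it isn't a tuple. `Convolution2D(16, 3, 3)` constructs a layer object, and the second pair of parentheses calls that object on a tensor. Keras layers are just callables, which you can mimic in plain Python:

```python
class ToyLayer:
    # toy stand-in for a Keras layer: the constructor configures it,
    # calling the instance applies it to an input
    def __init__(self, add):
        self.add = add

    def __call__(self, x):
        return x + self.add

x = 0
x = ToyLayer(16)(x)    # configure a layer, then immediately apply it to x,
x = ToyLayer(8)(x)     # just like x = Convolution2D(16, 3, 3)(x)
print(x)               # -> 24
```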

2 Likes

that did not work :frowning:

trn_data = load_array('faces/results/train_data.bc')
tst_data = load_array('faces/results/test_data.bc')
trn_data = trn_data.astype('float32') * 255.
tst_data = tst_data.astype('float32') * 255.

newdata = np.rollaxis(newdata, 0, 3)
newdata = np.uint8(newdata)

Any other suggestions?

I mean, just multiply by 255 when you plot it - don’t actually change your definition of trn_data.

@garima.agarwal: I am curious to know what the images look like after multiplying by 255. Please keep us posted! :slight_smile:

I will take a stab at it too, if I get a chance today.

1 Like

I tried it… for some reason I can't print the images. I am new to Python. :frowning:

Maybe we can help you. Have you tried creating the images, multiplying the data by 255 at the time you plot? Do you need any help with this? If you can show us where you’re up to, we can hopefully help you out. Putting your notebook up as a gist is often a good way to get help.

I did try the multiplication but I still see the black images.

ax = plt.subplot(2, n+1, i)
newdata = np.reshape(tst_data[i], (len(tst_data[i]), 174, 174))
newdata = np.rollaxis(newdata, 0, 3)
newdata = np.uint8(newdata)
newdata=newdata*255
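A likely culprit in the snippet above is the order of operations: the array is cast to np.uint8 while its values are still in [0, 1], which truncates everything to 0, so multiplying by 255 afterwards still gives black images. Scaling before the cast behaves differently; a minimal demonstration with a random stand-in array:

```python
import numpy as np

img = np.random.RandomState(0).rand(4, 4).astype("float32")   # values in [0, 1]

wrong = np.uint8(img) * 255    # cast first: every value truncates to 0
right = np.uint8(img * 255)    # scale first, then cast

print(wrong.max(), right.max())   # wrong.max() is 0; right.max() is not
```

So in the plotting loop the line order would need to be `newdata = newdata * 255` followed by `newdata = np.uint8(newdata)`.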