Training with an Entirely New Dataset

Hello All!

I am trying to get good accuracy on a classification problem with an entirely new kind of dataset. The dataset contains images with a tearing artifact (click here to know more) in one of the folders.


The above is an example of ‘Tearing artifact’. The artifact occurs when the video feed to the device is not in sync with the display’s refresh rate.
I am trying to create a classifier which detects whether an image contains tearing or not, so there are two classes: one class contains ‘tearing’ images, the other contains ‘non-tearing’ images. I have followed the best practices taught in Lessons 1 and 2 (ResNet50 and fine-tuning), but I am not able to push the error rate below 19%, and there are quite a lot of false positives and false negatives.
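
For reference, here is roughly what I did, following the lesson notebooks (a minimal fastai v1 sketch; the folder names and epoch counts below are placeholders, not my exact setup):

from fastai.vision import *
from fastai.metrics import error_rate

# Assumed layout: Tearing_Data/Train and Tearing_Data/Test, each with
# 'Tearing' and 'Non_Tearing' subfolders (names are placeholders).
data = ImageDataBunch.from_folder('Tearing_Data', train='Train', valid='Test',
                                  size=224, bs=32).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.fit_one_cycle(4)                              # train the new head first
learn.unfreeze()                                    # then fine-tune all layers
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-3))    # discriminative learning rates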
Does anybody have any suggestions for this? It will be of great help for me!


Hi, this feature will be strongest in the early layers, so use a really shallow CNN. Maybe only the first two layers of the ResNet.
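
For example, something along these lines in Keras (a rough, untested sketch; the cut point is a placeholder, so inspect base.summary() and pick an early activation yourself):

from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense

# Keep only the early layers of a pretrained ResNet50, where the
# low-level edge features live.
base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
early = base.layers[20].output   # placeholder cut point inside the first residual stage

x = GlobalAveragePooling2D()(early)
out = Dense(2, activation='softmax')(x)
model = Model(inputs=base.input, outputs=out)

# Freeze the pretrained layers and train only the new head.
for layer in base.layers:
    layer.trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])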


Hi Bjorn Berglund, hope you are having a great day!

Can you describe or show me what you used, or how you know that

this feature will be strongest in the early layers

Is there a command, method, or tool available?

I remember in one of Jeremy’s videos he showed a tool or something that visualizes the images at the various layers.

Many Thanks mrfabulous1 :smiley::smiley:

In my previous work we wrote a one-level edge-detection CNN (not ML), and it detects this kind of tearing without any problem.

This application should get close to 100%, but I don’t know fast.ai well enough yet. If you figure out how to connect it correctly, please post the code.
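
Roughly what I mean by a non-ML edge-detection check (a NumPy/SciPy sketch; the filter choice and threshold here are only illustrative, not what we actually used):

import numpy as np
from scipy.ndimage import sobel

def looks_torn(gray, threshold=40.0):
    """Flag frames where one row has a suspiciously strong horizontal edge.

    A tear is a hard horizontal seam, so a single row of the vertical
    gradient lights up across almost the whole width. The threshold is an
    example value and would need tuning per device.
    """
    grad_y = sobel(gray.astype(float), axis=0)   # vertical gradient = horizontal edges
    row_strength = np.abs(grad_y).mean(axis=1)   # average edge response per row
    return row_strength.max() > threshold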

Hey, hi!
Sorry for the very late reply.
Here is my code following your recommendation, but I’m getting only 60% accuracy.
Is there something fishy in the code? Can you point it out?

from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, MaxPooling2D
import os
from keras.preprocessing.image import ImageDataGenerator

os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Data pipelines: rescale pixels to [0, 1]; only horizontal flips as augmentation.
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0,
    zoom_range=0,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    directory="Tearing_Data/Train",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=32,
    class_mode="categorical",
    shuffle=True,
    seed=42
)

test_generator = test_datagen.flow_from_directory(
    directory="Tearing_Data/Test",
    target_size=(224, 224),
    color_mode="rgb",
    batch_size=32,
    class_mode="categorical",
    shuffle=True,
    seed=42
)

# Shallow CNN: two conv layers, then a softmax head over the two classes.
model = Sequential()
model.add(Conv2D(64, kernel_size=7, activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None))
model.add(Conv2D(32, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(2, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=50,
    validation_data=test_generator,
    validation_steps=10)

Hey check below.

Hi, I would preprocess the image with the following convolutional kernel, then run the analytics:

 1  1  1
-1 -1 -1
 0  0  0
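
For example, wired into the existing Keras generators as a preprocessing_function (a sketch; applying the kernel per RGB channel and the float32 dtype are my assumptions):

import numpy as np
from scipy.ndimage import convolve
from keras.preprocessing.image import ImageDataGenerator

# Horizontal edge kernel from above.
KERNEL = np.array([[ 1,  1,  1],
                   [-1, -1, -1],
                   [ 0,  0,  0]], dtype=np.float32)

def edge_preprocess(img):
    """Replace each channel of an HxWx3 image with its edge response.

    ImageDataGenerator calls this with a rank-3 NumPy array and expects an
    array of the same shape back, so the rest of the pipeline is unchanged.
    """
    out = np.empty(img.shape, dtype=np.float32)
    for c in range(img.shape[-1]):
        out[..., c] = convolve(img[..., c].astype(np.float32), KERNEL)
    return out

train_datagen = ImageDataGenerator(rescale=1./255,
                                   preprocessing_function=edge_preprocess)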
