I am trying to implement image classification for fish fillets as part of a computerized quality-control system.
The system has no trouble differentiating between tuna and salmon,
but it struggles to differentiate between grades of the same fish and their freshness, for example:
red salmon, reddish-orange salmon, and orange salmon
These are, for all intents and purposes, practically identical in shape and features; they differ only in color.
Can anyone suggest what I could do to improve this?
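One idea I have been toying with is adding an explicit color feature alongside the CNN, since hue seems to be the only axis the grades differ on. A toy illustration of what I mean (pure NumPy/`colorsys`, synthetic patches standing in for real fillet crops, no actual model):

```python
import colorsys
import numpy as np

def hue_histogram(img: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized histogram of per-pixel hue.
    img: float RGB in [0, 1], shape (H, W, 3)."""
    flat = img.reshape(-1, 3)
    hues = np.array([colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in flat])
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Two toy patches: "red" vs "orange" salmon -- same shape, only color differs.
red = np.zeros((8, 8, 3)); red[..., 0] = 0.9; red[..., 1] = 0.2
orange = np.zeros((8, 8, 3)); orange[..., 0] = 0.9; orange[..., 1] = 0.5

# The dominant hue bin separates the two grades cleanly.
print(np.argmax(hue_histogram(red)), np.argmax(hue_histogram(orange)))  # 1 2
```

Would concatenating something like this histogram to the network's features (or even just using it for the grade decision) be a sensible direction?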
Should I apply transformations (augmentation) to the training data, or should I instead transform the images at inference time? What sort of transformation would help with this problem?
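To make the question concrete, here is the distinction I am worried about, as a toy NumPy sketch (synthetic patch, hypothetical helper names): geometric augmentations leave the color distribution untouched, while color jitter moves samples across exactly the boundaries I am trying to learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_augment(img: np.ndarray) -> np.ndarray:
    """Flip/rotate only -- permutes pixels but leaves the color
    distribution untouched, so grade-defining hues survive."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    return np.rot90(img, rng.integers(0, 4))

def color_jitter(img: np.ndarray, strength: float = 0.2) -> np.ndarray:
    """Per-channel gain jitter (a crude stand-in for hue/saturation
    jitter).  My worry is that this kind of transform pushes a
    'red' sample into 'reddish-orange' territory."""
    gains = 1.0 + rng.uniform(-strength, strength, size=3)
    return np.clip(img * gains, 0.0, 1.0)

# Toy "red salmon" patch: strong R, weak G, no B.
patch = np.zeros((8, 8, 3)); patch[..., 0] = 0.9; patch[..., 1] = 0.3

geo = geometric_augment(patch)
# Sorted per-channel pixel values are identical => colors preserved.
print(np.allclose(np.sort(geo.reshape(-1, 3), axis=0),
                  np.sort(patch.reshape(-1, 3), axis=0)))  # True
```

Is that intuition right, i.e. should I stick to geometric/lighting-robust augmentation and avoid hue jitter for this task?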
On a side note: so far I train on still images, but at inference time I classify frames grabbed from an OpenCV webcam feed.
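While writing this up I realized one possible culprit: OpenCV's `VideoCapture` returns frames in BGR channel order, while my training images were loaded as RGB. If I never convert, every inference frame has red and blue swapped, which would be catastrophic for a purely color-based distinction. A minimal check of what I mean (NumPy only; the slicing below is equivalent to `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)`, with the model and capture loop omitted):

```python
import numpy as np

# Toy stand-in for a webcam frame as OpenCV returns it: BGR channel order.
# A "red salmon" pixel stored as BGR is (low B, mid G, high R).
bgr_frame = np.zeros((4, 4, 3), dtype=np.uint8)
bgr_frame[..., 2] = 230   # R channel sits at index 2 in BGR
bgr_frame[..., 1] = 80    # G
bgr_frame[..., 0] = 40    # B

# Equivalent of cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB):
rgb_frame = bgr_frame[..., ::-1]

# Without the conversion, a classifier trained on RGB sees a *blue* fillet.
print(bgr_frame[0, 0])   # [ 40  80 230] -- read as RGB, this is blue-ish
print(rgb_frame[0, 0])   # [230  80  40] -- the correct red
```

Could this kind of train/inference preprocessing mismatch (channel order, but also webcam white balance and lighting vs. my training photos) explain the grade confusion on its own?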