Analyzing frames from OpenCV cv2.VideoCapture()

Hi,

I’m working on an embedded system (a Jetson Nano). I have connected a RaspiCam, and I grab frames using GStreamer and cv2.VideoCapture().
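The capture is set up roughly like this (the pipeline caps are from memory, so treat the width/height/framerate as placeholders for your sensor mode):

import cv2

# nvarguscamerasrc is the CSI camera source on the Jetson Nano
gst_pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)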

For now I have been saving each frame to disk with cv2.imwrite(), then reloading it with open_image() to pass into learn.predict().
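Roughly, the current approach looks like this ('frame.png' is just a placeholder path):

ret_val, frame = cap.read()        # BGR frame from the camera
cv2.imwrite('frame.png', frame)    # write the frame to disk
img = open_image('frame.png')      # fastai reads it back, scaled to [0, 1]
pred = learn.predict(img)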

I want to avoid saving the file because it seems very inefficient. I have adapted a method from here, but the predictions are not the same:

import cv2
import numpy as np
from PIL import Image as PImage
from fastai.vision import *

ret_val, frame = cap.read()                       # BGR frame from cv2.VideoCapture
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # convert BGR -> RGB
pil_im = PImage.fromarray(255 - frame)            # note: 255 - frame inverts the pixel values
x = pil2tensor(pil_im, np.float32)                # HWC image -> CHW float tensor
preds_num = learn.predict(Image(x))[2].numpy()    # index 2 is the raw probability tensor

If I run:

Image(x).show()

then I see the same image as the one saved with cv2.imwrite(), but the prediction is very different. And if I run:

Image(x).save('frame.png')
frame = open_image('frame.png')

I get a third prediction.

Does anyone know how to do this without saving the image first?

Got it working now:

def grab_image():
    ret_val, img = cap.read()                        # grab a BGR frame from OpenCV
    img_col = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # convert BGR -> RGB
    pil_img = PImage.fromarray(img_col.astype('uint8'), 'RGB')
    pil_img = pil2tensor(pil_img, np.float32)        # HWC image -> CHW float tensor
    inf_img = Image(pil_img.div_(255))               # scale to [0, 1] and wrap for fastai
    return inf_img, img
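In case it's useful, here is roughly how I call it in the capture loop (assuming learn and cap are already set up):

while True:
    inf_img, raw_frame = grab_image()
    pred_class, pred_idx, probs = learn.predict(inf_img)
    print(pred_class, probs[pred_idx].item())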

@bth
What does this line do?
inf_img = Image(pil_img.div_(255))

So, in my understanding, this line converts the numpy array we get from cv2 into a PIL image:
pil_img = PImage.fromarray(img_col.astype('uint8'), 'RGB')
Then we turn the image into a PyTorch tensor, like so:
pil_img = pil2tensor(pil_img,np.float32)
But for the step after that, the div_(255) and the Image(...) wrapping, I couldn’t find any documentation or explanation.
It works though, so thanks for that! :slight_smile:

Sadly I’ve had to take a break in my development, so it is hard to remember.

But, as I recall, I took it from the open_image source code.
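From memory, the relevant part of fastai v1’s open_image looks something like this (simplified, check the actual source to be sure):

def open_image(fn, div=True, convert_mode='RGB', cls=Image):
    x = PIL.Image.open(fn).convert(convert_mode)  # load from disk as RGB
    x = pil2tensor(x, np.float32)                 # HWC uint8 -> CHW float tensor
    if div: x.div_(255)                           # scale pixel values to [0, 1]
    return cls(x)                                 # wrap in fastai's Image class

So the div_(255) just reproduces the scaling that open_image would have done when reading from disk.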

Glad that it worked out for you!
