I’m working on an embedded system (Jetson Nano). I have connected a RaspiCam, and I grab frames using GStreamer and cv2.VideoCapture.
For now I have been saving images using cv2.imwrite(), then loading them with open_image() to pass them into my learn.predict().
I want to avoid saving the file because it seems very inefficient. I have adapted a method from here, but the predictions are not the same.
```python
from PIL import Image as PImage
from fastai.vision import *

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR; the model expects RGB
pil_im = PImage.fromarray(frame)
pil_im = 255 - pil_im  # inverts the pixel values (and only works on a numpy array, not a PIL Image)
x = pil2tensor(pil_im, np.float32)
preds_num = learn.predict(Image(x))[2].numpy()  # predict() returns (category, index, probabilities)
```
If I run:
then I see the same image as the one saved with cv2.imwrite(), but the prediction is very different. And if I run:
```python
Image(x).save()
frame = open_image()
```
I get a third prediction.
Does anyone know how to do this without saving the image first?
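For context on why the predictions differ, here is a minimal numpy sketch (fastai itself not required) comparing the two preprocessing paths. It assumes fastai v1 behaviour, where open_image() scales pixel values to [0, 1], while the snippet above feeds raw 0–255 floats and additionally inverts them, so the model sees very different tensors:

```python
import numpy as np

# Simulate a tiny 8-bit RGB frame as produced by cv2: shape (H, W, 3), dtype uint8.
frame = np.array([[[10, 20, 30]]], dtype=np.uint8)

# What fastai v1's open_image() effectively does: float32 scaled to [0, 1].
x_open = frame.astype(np.float32) / 255.0

# What the in-memory snippet above does: inversion, no /255 scaling.
x_snippet = (255 - frame).astype(np.float32)

print(x_open[0, 0])     # values in [0, 1]
print(x_snippet[0, 0])  # values in [0, 255], and inverted
```

If this is the cause, dropping the inversion and dividing the tensor by 255 should make the in-memory path match the saved-file path.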