How to do prediction on a whole slide image

Hey everyone, I have trained an object detection model in fastai on the MIDOG 2021 dataset, and it is working great. I have a minor issue, though, when running inference on two completely new images: they are Aperio images in .scn format, with annotations created in Aperio ImageScope and stored in XML format.
The thing is, I trained my model using the fastai input pipeline from the tutorial notebook provided by the MIDOG challenge team on their website: Google Colab
The notebook is based on the object detection library for fastai by Christian Marzahl:
GitHub - ChristianMarzahl/ObjectDetection: Some experiments with object detection in PyTorch
The input pipeline works as follows: it takes a bunch of .tiff images and their annotations in MS COCO .json format, and then creates a fastai ImageDataBunch object.
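For reference, the COCO-style .json file that such a pipeline consumes looks roughly like this. This is a minimal sketch following the standard MS COCO detection schema; the exact category names/ids the MIDOG notebook expects are an assumption, so check the notebook itself:

```python
# Minimal sketch of an MS COCO detection file (standard schema assumed;
# the category name/id here is hypothetical - match it to the MIDOG notebook).
coco = {
    "images": [
        {"id": 1, "file_name": "slide_001.tiff", "width": 7215, "height": 5412},
    ],
    "annotations": [
        # COCO convention: bbox is [x_min, y_min, width, height]
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [100.0, 200.0, 50.0, 60.0]},
    ],
    "categories": [
        {"id": 1, "name": "mitotic figure"},
    ],
}

print(sorted(coco.keys()))  # ['annotations', 'categories', 'images']
```

Writing the new data into a file of this shape (via `json.dump`) would let it flow through the same pipeline the training data used.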
Now, how should I use my new data, which consists of a single .scn image and its corresponding .xml file, with annotations stored as ellipses (top-left and bottom-right corner coordinates of the bounding box), for inference and testing of my trained model?
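Not an authoritative answer, but here is a minimal sketch of the two conversion steps involved, assuming the usual ImageScope XML layout (`Region` elements containing `Vertex` elements with `X`/`Y` attributes, where an ellipse region stores the two opposite corners of its bounding box). Reading the actual .scn pixels would typically go through OpenSlide (`openslide.OpenSlide(path).read_region(...)`), which is not shown here; the tile-grid helper just computes the coordinates you would feed it:

```python
import xml.etree.ElementTree as ET

def imagescope_boxes(xml_text):
    """Convert ImageScope regions to [x_min, y_min, x_max, y_max] boxes.
    Assumes each Region carries Vertex elements with X/Y attributes."""
    root = ET.fromstring(xml_text)
    boxes = []
    for region in root.iter("Region"):
        xs = [float(v.get("X")) for v in region.iter("Vertex")]
        ys = [float(v.get("Y")) for v in region.iter("Vertex")]
        boxes.append([min(xs), min(ys), max(xs), max(ys)])
    return boxes

def tile_grid(width, height, tile=512, overlap=64):
    """Top-left corners of overlapping tiles covering the slide.
    Each (x, y) would be passed to e.g. OpenSlide's read_region."""
    stride = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    # make sure the right and bottom edges are covered
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# Hypothetical ImageScope ellipse annotation (Type="2"):
sample = """<Annotations><Annotation><Regions>
<Region Id="1" Type="2"><Vertices>
<Vertex X="100" Y="200"/><Vertex X="150" Y="260"/>
</Vertices></Region></Regions></Annotation></Annotations>"""

print(imagescope_boxes(sample))  # [[100.0, 200.0, 150.0, 260.0]]
```

The boxes can then be dumped into the same COCO .json layout the training pipeline uses, and the tiles run through `learn.predict` one by one, mapping each tile's detections back to slide coordinates by adding the tile offset.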
Any resource, code snippet, notebook, etc. that could help with this would be really appreciated.
Thanking you all in advance,