Lesson 2 - Official Topic

We’re a bit early to be looking at this - we’ll do it later in the course.


I think it helps to separate modelling and training.

A model, whether it is good or bad, trained or not, will use its parameters to make predictions given an input.

Training a model (in order to make it better) is done by acting on its parameters. Training has its own set of parameters, and these are called hyper-parameters.

At the end of the lesson Jeremy shows how to export a model and do inference with an image file. How can I do inference on a PIL image or np.array? I wish to make predictions on live video feed frames without saving the test images to individual files.


predict will run on whatever it can make an image out of, not just an input image file. If we look at PILImage.create:

def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
        "Open an `Image` from path `fn`"
        if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
        if isinstance(fn,Tensor): fn = fn.numpy()
        if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
        if isinstance(fn,bytes): fn = io.BytesIO(fn)
        return cls(load_image(fn, **merge(cls._open_args, kwargs)))

It’ll accept bytes, a numpy array, a Pillow image, or a regular tensor
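So a frame grabbed from a live video feed can go straight to the model without touching disk. A minimal sketch of the conversion step (the frame shape here is a placeholder; in a real fastai session you'd pass the array to `learn.predict` via `PILImage.create`, with `learn` being your exported learner):

```python
import numpy as np
from PIL import Image

# A stand-in video frame: height x width x RGB, uint8
# (e.g. an OpenCV frame after converting BGR -> RGB).
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# PILImage.create accepts the ndarray directly, so in fastai this would be:
#   pred, idx, probs = learn.predict(PILImage.create(frame))
# The equivalent conversion it performs internally for an ndarray:
img = Image.fromarray(frame)
```

Note that Pillow reports size as (width, height), so `img.size` here is `(640, 480)`.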


Awesome, thanks!

Does anybody have more information on seeme.ai ? It looks very interesting, but it also looks like they’re not active yet (the only option is to join a wait list). Is there an account for people in this class?

@giacomov Please check this thread linked in Top Wiki:

I followed the fastai2 deployment quick guide and I was able to create my account from the notebook (instead of doing it via the website).

Please check the Seeme.ai thread for discussions/questions.


Ah, I see. I did see that notebook, but I thought you needed to have a username and password already. This is great!


I signed up on Azure for Bing image search and provided my credit card. I see no place to find my Azure_search_key. Did anybody have luck with this?

Hi @giacomov, I’m the founder of SeeMe.ai and a long time fan of Fast.ai.

Indeed, on our website you can only register for the waitlist for now, but we’re making an exception for members of the fast.ai forums. We created quick guides for both fastai v1 and v2, which take you through all the steps of training (all credits to fast.ai), deploying, using and sharing your models.

To be clear, the platform is in development at the moment, but we would love for you to test it and provide feedback. You can add models, make predictions and share with friends or colleagues.

From the quick guides :slight_smile:

I would love to talk about what you are trying to accomplish, and of course, if you have questions or need support, you can just reach out to me here.

All the best!
Jan


You can create an account from the sdk :slight_smile:

I’ll update the documentation to make it more clear!

Thank you!

Dear Mat,
I replaced that code and it still gives me AccessDenied. I changed the key to mine. Any idea @matdmiller? Best regards

I would say counting the quantity of objects is neither of those. It is an object detection task, like YOLO or SSD.

Go here: https://azure.microsoft.com/en-us/try/cognitive-services/my-apis/
(and make sure you’re signed into your account.)
You’ll see your resource listed, and directly below you’ll see your keys as “key #1: djk283ndi82”
(obviously I’m not taking a picture of the actual key)


yes thanks @sut, this is what I did…I don’t have that many endpoints btw…

@jeremy, Thanks for another excellent class!
I am curious how you think about the following aspect of the Czech Republic’s stabilizing case numbers. Given the 1–14 day incubation period, it seems highly unlikely that CR’s case count could have stopped its exponential rise until roughly a week after the mask mandate, rather than leveling off immediately.

Additionally, how do you reconcile the argument that pro-social mask-wearing greatly reduces the R0 value with the fact that China has also historically led the mask trend, and their R0 values span the gamut?

Do you see the /images endpoint listed in your available endpoints as per Will’s post? If not I suspect you may not have correctly signed up for image search.


I just submitted Spanish subtitles for the masks video. This is my first time doing this sort of contribution so please, let me know if you observe anything odd about them.

Each small grey square is a visual representation of the pattern in the input image that caused a strong activation for that particular neuron. The oversimplified answer to how this was generated is that they took the activation from each of the images on the right and ran it backwards through the neural net, with all other activations within that layer zeroed out. The large squares represent the top 9 images from the validation set that generated the largest activation of that neuron. They said that the choice of which neurons to display was random.
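The "top 9 images" part of the procedure above can be sketched in plain numpy: given one neuron's activation per validation image, sort and take the nine strongest. The random activations here are stand-in data, not the paper's actual values:

```python
import numpy as np

# Stand-in data: one scalar activation per validation image for a single neuron.
rng = np.random.default_rng(0)
activations = rng.random(100)

# Indices of the 9 images that activate this neuron most strongly,
# ordered from strongest to weakest.
top9 = np.argsort(activations)[::-1][:9]
```

Those nine indices select the images shown in the large squares; the grey squares come from the separate step of projecting each activation back to input space.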
