You may need to delete that file as it's corrupted or something like that. If you're using the Kaggle site, I think you can open up the console at the bottom of the page and use the rm command to remove that file. I don't think it's possible to delete files via the Kaggle UI in the right-hand tab under "Data", so the console is all I can personally think of at the moment.
The console is this icon at the bottom of the Kaggle web page.
And the rm command in your case might be (you may need to alter the path, I'm not 100% sure):
I tried… but then there was an issue with another file. Maybe something with Kaggle? I can't even download properly. I even had to manipulate the URLs manually because (if I remember correctly) the ? in the URLs didn't work.
I've now installed Jupyter Notebook locally. Hope this will work.
Unfortunately I’m no longer familiar with the topic. When I wrote my post, it was best to convert the PyTorch model to ONNX and then to Apple’s CoreML format. However, it now looks like CoreML has the tooling to do a direct conversion from PyTorch. There should be sample code templates to use on Apple’s website for the app itself.
I'm not quite sure what you're asking in relation to "where exactly is the model…", but FWIW programmatically you can get access to it via learn.model (from Lesson 3: Practical Deep Learning for Coders 2022 - YouTube). But if you mean where it is physically, then I think ResNets are built into the fastai library, which is why they don't need to be imported or installed separately to use (unlike LeViTs, for example, which need timm). So there isn't a physical *.pth or *.pkl for ResNets AFAIK.
I haven't used it but maybe Learner.load_model() is what you need? Or if it's a fastai vision learner you've already trained and exported to a pkl file, you can use load_learner().
Is your blog post still relevant in April 2023? It looks like Gradio removed their awesome REST API.
I recently deployed my first machine learning app, built with fastai and Gradio, to Hugging Face Spaces (very exciting! Many thanks for the course content!). You can find the app live at Training - a Hugging Face Space by ivanho92. However, I have trouble consuming the API that should have been automatically generated when this app was built.
I have tried several different requests through Postman, but I keep getting a “404 Not Found” error. I’ve followed the instructions in the documentation and have double-checked the endpoint URL, but I still can’t get it to work.
Here's an example of a Postman request I have tried:
POST https://ivanho92-training.hf.space/predict
{
"data": [
"https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"
]
}
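One likely cause of the 404: Gradio 3.x Spaces serve predictions from /run/predict (or /run/&lt;api_name&gt; if the event has an api_name set), not from /predict. I haven't tested against this particular Space, but a sketch of the corrected request in Python would be:

```python
import json
from urllib import request

# Assumed route for a Gradio 3.x Space — check the "Use via API" link at the
# bottom of the Space page for the exact path.
url = "https://ivanho92-training.hf.space/run/predict"
payload = json.dumps({
    "data": [
        "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"
    ]
}).encode()
req = request.Request(url, data=payload,
                      headers={"Content-Type": "application/json"})
# Uncomment to actually call the Space:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["data"])
```

The same change should work in Postman: keep the JSON body as-is and just point the POST at /run/predict.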
Thanks, but not quite what I was looking for. That’s the notebook that trains the model (and creates a .pkl file), but I’m looking for the notebook he uses that loads the .pkl file and uses Gradio to create an interface to generate predictions using the previously trained model.
I am following along with Lesson 2 of the Book and I want to use fastai to train my first model, which will classify an ECG beat (the data is explained below) into one of 4 classes.
I have seen in chapter 1 of the Book that at the time it was made, it was not well understood how to use transfer learning on time-series. The following questions arose:
Is there now a pretrained fastai model that can be used for this kind of classification?
What are the challenges of using transfer learning on time-series?
Is there a known way to represent this kind of data (ECG) other than time series?
How would you suggest going about this problem?
My dataset is a .csv file containing 87,000 rows. Each row represents an ECG beat sampled at 125 Hz and contains 187 samples.
Dataset can be found on Kaggle at the following link:
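Before reaching for a pretrained model, it's worth sanity-checking the data layout. A minimal sketch, assuming the common Kaggle ECG-heartbeat format where each CSV row holds the 187 samples followed by the class label in the last column (an assumption — verify against the actual file):

```python
import csv, io

# Fake one-row CSV standing in for the real file (the layout is an assumption).
row = ",".join(["0.0"] * 187) + ",3.0\n"
values = next(csv.reader(io.StringIO(row)))

samples = [float(v) for v in values[:-1]]  # one beat: 187 points
label = int(float(values[-1]))             # class id
duration_s = len(samples) / 125            # sampled at 125 Hz -> ~1.5 s per beat
print(len(samples), label, duration_s)     # 187 3 1.496
```

From here each beat is just a 1-D array, and one common workaround for the transfer-learning question above is to render each beat as an image (a line plot or spectrogram) so an ImageNet-pretrained CNN can be fine-tuned on it.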
Is it still possible to create an HTML file that uses the API from Hugging Face Spaces as shown in the lesson? Or do you have to pay for endpoints now?
Hi, I am getting this error:
AttributeError: ‘Sequential’ object has no attribute ‘fine_tune’
import torch
from fastai.vision.all import *
# Check if GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
# Load data and move to GPU
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
path, fnames=get_image_files(path/'images'), pat=r'^(.*)_\d+.jpg$', item_tfms=Resize(224),
batch_tfms=aug_transforms(), bs=64
)
dls = dls.to(device)
# Define model and move to GPU
learn = cnn_learner(dls, resnet34, metrics=error_rate).to(device)
# Train model on GPU
learn.fine_tune(3)
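For anyone hitting the same trace: fastai's Learner forwards unknown attribute lookups to its model (it subclasses GetAttr with _default='model'), and nn.Module.to() returns the module itself, so cnn_learner(...).to(device) hands back the plain Sequential — which indeed has no fine_tune. A stand-in sketch of the same trap without fastai (the class names here are mine, not fastai's):

```python
class FakeModel:
    """Stand-in for nn.Sequential: .to() returns the model itself."""
    def to(self, device):
        return self

class FakeLearner:
    """Stand-in for fastai's Learner, which forwards unknown attribute
    lookups to self.model (GetAttr with _default='model')."""
    def __init__(self):
        self.model = FakeModel()

    def __getattr__(self, name):
        return getattr(self.model, name)  # the delegation that bites here

    def fine_tune(self, epochs):
        return epochs

learn = FakeLearner().to("cuda")    # delegation returns the model, not the learner!
print(hasattr(learn, "fine_tune"))  # False -> AttributeError when it's called
```

The fix is simply to drop both .to(device) calls: fastai already places the DataLoaders and model on the default CUDA device when one is available, so learn = cnn_learner(dls, resnet34, metrics=error_rate) followed by learn.fine_tune(3) is enough.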