@stoops_ML I have replied in the thread you created! Hope we can sort out your issue
Thank you @hutnic so much
Has anyone worked through a way of setting up self-hosted orchestration for distributed ML-based jobs?
Re: making web apps with models from lecture homeworks
I’d like to make a collection of web apps from each lesson’s homework. I think it would be fun to share with friends and family, and would help get a feel for what it’s like to do things outside the “sandbox”. Unfortunately I’m a horribly slow beginner and it looks like I need a baseline level of knowledge:
- A framework like Flask or Django
- Template Engines
Am I missing something that will allow me to skip learning 3 new languages and 2 new software tools? For a total beginner like me, that’s a pretty intimidating list of skill-sets.
Of course, I’m not afraid of learning new things, but I am afraid of getting bogged down and distracted from the coursework.
Hi wcneill hope you are having a wonderful day!
Having been on this forum for over a year now, I have yet to see any solution that has not required learning new skills and software libraries.
Many of the automated frameworks that claim to make things easy often have learning curves as steep as some programming languages, are less flexible, and are seriously difficult to fix when they break.
If I were in your position, I would try and build the teddy bear classifier, many total beginners have built this application.
See the links below for guidance
https://course.fast.ai/deployment_render.html - instructions for deploying on Render using Docker. Once done, this can be used locally or on any server platform in the universe that supports Docker.
Share your work here - highlights - Wonderful ideas and applications that people have created.
Deployment Platform: Render ✅ - Solutions to many of the problems you may encounter while trying to deploy your app.
You do not have to learn the whole framework or language just the bits you need to make your program work.
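To give a sense of how small "the bits you need" can be: here is a minimal sketch of a web endpoint that serves a prediction, using only Python's standard library. The predict() function is a made-up placeholder for a real model; a framework like Flask does this same pattern more ergonomically, but the core idea fits in a few lines.

```python
# Minimal sketch of a prediction endpoint with only the standard library.
# predict() is a placeholder standing in for a real model call.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(text):
    # Stand-in for your trained model.
    return {"input": text, "label": "teddy" if "teddy" in text else "other"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat everything after '/' as the input to classify.
        result = predict(self.path.lstrip('/'))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(('localhost', 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
reply = urllib.request.urlopen(f'http://localhost:{port}/teddy').read()
print(reply)
server.shutdown()
```

Swapping the stub for a loaded learner and the stdlib server for Flask is mostly mechanical; the request-in, JSON-out shape stays the same.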
If you do find a flexible framework or strategy that requires no learning of new skills etc. please share it on this forum we’d all love it.
ps. It took me about 5 weeks to deploy my first app once I started it, but now I can build a simple app in hours or a day.
Hope this helps
@mrfabulous1 Thank you so much for the thoughtful response.
I have to admit to having a small panic attack when I saw the steep learning curve for writing an app (Jeremy said try to do it “this week even if you’ve never written a web app before” in one of the recorded lectures).
It took me 3 years to finish a 2 year master’s program because I’m not the brightest student. In the year since then, I’ve been working 8-12 hours a day both applying for jobs and trying to learn the skill-sets to get a job (because a master’s in math turned out to be useless). I think all of the frustration came to a head when I wrote that post.
Let’s just say I really appreciate your response. Sometimes it feels like no one is out there on these forums. I’m gonna get to work looking at the resources you provided
Hi wcneill, hope you're having a wonderful evening!
You may think you are slow but I may be slower. I would really like you to get your first app going. If you look at this thread Deployment Platform: Render ✅ you will see I have helped many people get their first app going.
Building the app may be a challenge, but if you run into a problem, at least try to fix it yourself first, then post a question with the error message and a short description of what you tried, and I and others will try to help.
Building the app lets you learn every step required to deploy an app.
If you are confident and you remember the steps, I am sure if you went for an interview, you would be in a better position than if you hadn’t built one.
Also, even if you didn’t get the job as “chief systems designer of machine learning systems”, you would be in a better position to get a junior support role and get your foot in the door.
I am looking forward to your first app!
Trying to understand what datablock objects look like after each step of splitting/labelling/transforming etc.
Working with the Amazon Planet dataset on Kaggle:
Shouldn’t src1 look like this:
Seems like src1, src2 and src3 all return the same object; src1 shouldn’t know about the labels, or the fact that it’s a MultiCategoryList, yet.
Is it updating src1 in place to potentially save memory in case there’s millions of images and someone codes it like me?
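One pattern that would produce exactly this behavior is a fluent API whose methods mutate the object and return self. Here's a toy illustration of that pattern (NOT fastai's actual implementation, just a sketch of the idea):

```python
# Toy illustration (not fastai's code) of a fluent API whose steps mutate
# the object and return `self`. If a library works this way, src1, src2
# and src3 all end up naming the same, fully-built object.
class DataSource:
    def __init__(self, items):
        self.items = items
        self.splits = None
        self.labels = None

    def split_by_pct(self, pct):
        cut = int(len(self.items) * (1 - pct))
        self.splits = (self.items[:cut], self.items[cut:])
        return self  # returning self makes the next call chain on the SAME object

    def label_from_fn(self, fn):
        self.labels = [fn(i) for i in self.items]
        return self

src1 = DataSource(list(range(10)))
src2 = src1.split_by_pct(0.2)
src3 = src2.label_from_fn(lambda i: i % 2)

print(src1 is src2 is src3)        # True: one object, progressively mutated
print(src1.labels is not None)     # True: src1 "knows" about labels too
```

If the library you're using follows this pattern, assigning each intermediate step to its own variable doesn't snapshot anything; all the names alias one object.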
I just had to write something, being new and all. I figured this would be the place to start. Hopefully I keep at it and don't drop learning this halfway.
The TextDataBunch creates the tokens automatically, and it removes most of the words from my dataset by setting them to ‘unknown’. Since my dataset is small, I don’t want anything to be ‘unknown’: every word is important to me, and most words appear only once.
How can I use TextDataBunch but do simple tokenization and not map words to ‘unknown’?
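The 'unknown' mapping usually comes from a frequency cutoff when the vocabulary is built: words seen fewer than min_freq times get replaced by an unknown token (fastai's is spelled xxunk). A pure-Python sketch of that mechanism, to show why lowering the cutoff to 1 keeps every word (the parameter names min_freq and max_vocab are what I'd look for in the TextDataBunch/Vocab constructors for your fastai version; check the docs):

```python
# Pure-Python sketch of a frequency-cutoff vocabulary: words below
# min_freq are mapped to an 'xxunk' token, mimicking the behavior
# described above. min_freq=1 keeps every word.
from collections import Counter

def build_vocab(tokens, min_freq=2, unk='xxunk'):
    counts = Counter(tokens)
    vocab = [unk] + sorted(t for t, c in counts.items() if c >= min_freq)
    known = set(vocab)
    return [t if t in known else unk for t in tokens]

tokens = ['good', 'movie', 'good', 'rare']
print(build_vocab(tokens, min_freq=2))  # ['good', 'xxunk', 'good', 'xxunk']
print(build_vocab(tokens, min_freq=1))  # every word kept
```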
Hey! In the lesson 2 video, Jeremy said that while predicting results we should create a new data loader with one class, but in the lesson 2 notebook the prediction is done directly (and it works). Which method is the most appropriate one? Is this because load_learner() incorporates the metadata, whereas learn.load() does not?
Hey, I am using Google Colab. I was trying to apply lesson one’s code to another dataset, the Stanford Cars dataset. When I try to execute this cell,
I get a file-not-found error. I tried other datasets as well, but the problem is the same.
It’s because the Stanford Cars dataset has a different folder structure.
Run path.ls() and you’ll see that it is:
[PosixPath('/root/.fastai/data/stanford-cars/cars_test'), PosixPath('/root/.fastai/data/stanford-cars/cars_train'), PosixPath('/root/.fastai/data/stanford-cars/cars_annos.mat')]
I am playing around with Caltech Birds Dataset.
At first I tried the copy-paste approach. It wasn’t good enough, so I made bigger classes based on general bird titles and used stratification (so every class is present in both the train and validation datasets in a given proportion); maybe ImageDataBunch does this itself, I don’t know. I used my own script for it.
Now I want to explore bounding_boxes.txt annotations for better classification. I cropped images with another script
It works perfectly in my local tests.
But when I try to look at my dataset with show_batch() I get this:
I assume there are three possible causes:
- I do the cropping incorrectly
- The ImageDataBunch constructor does some additional crop on my images
- The show_batch() method doesn’t show full images, only some region (and in the dataset all images are correct)
Any ideas what is going on and how can I check it?
ImageDataBunch creation code
local_data = ImageDataBunch.from_folder(Path('./images'), ds_tfms=get_transforms(), valid='validation', size=299, bs=32).normalize(imagenet_stats)
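If the default transforms are doing a crop-to-square (my guess for what you're seeing with size=299), this toy function shows which region of the original survives when a non-square image is resized to a square target. The names and logic here are my own illustration, not fastai internals:

```python
# Illustration (not fastai code) of why a square `size=` target can cut
# off parts of a non-square image: to reach 299x299 without distortion,
# the short side is scaled to 299 and the excess of the long side is
# cropped away, centered.
def center_crop_box(w, h, size):
    # scale so the shorter side equals `size`
    scale = size / min(w, h)
    new_w, new_h = round(w * scale), round(h * scale)
    # then crop the excess of the longer side, centered
    left = (new_w - size) // 2
    top = (new_h - size) // 2
    return left, top, left + size, top + size

# A 600x400 image loses roughly a quarter of its width this way,
# which would explain show_batch() displaying partial birds.
print(center_crop_box(600, 400, 299))
```

If that matches what you see, trying a squish-style resize instead of a crop (fastai has a resize-method setting for this; check the transforms docs for your version) should show the full image.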
Hi all! A bunch of us are intending to go through the course together on a discord server, one lesson per week starting next week, with a weekly voice chat to follow up on progress etc. If anyone wants to join, here is a link: https://discord.gg/3UqtmN
Hi I have been interested in running the notebooks and fastai locally. I followed the guide to get it to work on Windows but it never boots up. I have the Nvidia GTX1070 with Max-Q Design. But, since I have seen success with everyone using linux as opposed to windows, I have been considering doing a partition for linux. Or perhaps have it boot on an external drive. I was wondering if I could get an educated suggestion from you. Thank you!
Thank you. Can you help me with this also?
I have downloaded a dataset and it has three folders: Train, Test and Validate. I gave it the path, and when I try to create an ImageDataBunch it shows some errors.
For those who cannot download the URLs with that code, maybe this will work:
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
urls = Array.from(document.querySelectorAll('.rg_i')).map(el=> el.hasAttribute('data-src')?el.getAttribute('data-src'):el.getAttribute('data-iurl'));
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
I think that you get this error because of an incorrect path value:
- You should escape spaces in your path with a backslash
- As far as I know, the from_folder constructor expects a path with subfolders inside it
path = '/Home/nitin/Documents/corona\ dataset/'
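One aside, in case the path is being written inside Python rather than in a shell command: pathlib handles spaces with no escaping at all. A quick sketch (the directory here is just an example):

```python
# If the path lives in Python code (not a shell command), spaces need no
# escaping -- a plain string works. The directory is just an example.
from pathlib import Path

path = Path('/Home/nitin/Documents/corona dataset')
print(path.name)  # 'corona dataset'
```

The backslash-escaping advice applies when the path is typed into a terminal.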
How can I check what features the intermediate layers are learning in a particular model? And how can we display them in the notebook by highlighting those features in the image?
Any help is appreciated. Thanks.
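One common approach (assuming PyTorch; the tiny model below is made up for illustration) is to register a forward hook on the layer you care about, capture its activations during a forward pass, and then plot each channel as an image with matplotlib:

```python
# Hypothetical sketch: capturing intermediate activations with a forward
# hook. The model and layer choice are made up for illustration; attach
# the hook to whichever layer of your own model you want to inspect.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
)

activations = {}

def save_activation(name):
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# Register the hook on the first conv layer.
model[0].register_forward_hook(save_activation('conv1'))

x = torch.randn(1, 3, 32, 32)
_ = model(x)
print(activations['conv1'].shape)  # torch.Size([1, 8, 32, 32])
```

Each of the 8 channels in activations['conv1'] is one feature map you can display with plt.imshow; overlaying an upsampled map on the input image gives a rough "highlight" of where the feature fires.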