#4: Deep Learning Meetup in San Francisco, Feb 12

Hello everyone, there will be sandwiches this time.

https://www.meetup.com/bayareadeeplearning/events/247637630/

Description:

Members bring their deep learning projects and topics to discuss, and give and receive help. We organize into small groups so each person can stay active. We recommend bringing a laptop.

Notes:

Our goal is to host the meetup every Monday.

Don from Onepanel.io will provide sandwiches and GPU access to attendees.

Feel free to arrive and leave whenever it's convenient.

If you don't know what you want to discuss or work on, we recommend reading a Kaggle winner interview and asking others to help you investigate a technique they used to win: http://blog.kaggle.com/category/winners-interviews/

People new to deep learning are welcome and appreciated.

"Deep Learning" refers to the computer programming technique.

Logistics:

If you arrive after the doors lock, knock on the front door. If that doesn't work, tell us on Slack or message Matt K. on meetup.com.

Slack invite:

(Slack link)

This Slack workspace is public, so you'll need to use private channels and direct messages for sensitive information.

USF Data Institute is a 7-minute walk from the Embarcadero BART station: https://goo.gl/maps/KABiafswzPU2

Stats:

  • # of meetups: 4
  • # of meetups in a row: 4

Location:
USF Data Institute
101 Howard St · San Francisco, CA
Room 155

Datetime:
Monday, February 12, 2018
6:30 PM to 8:30 PM

At the meetup yesterday, I watched a programmer (@jmcarpenter2) fix two lines of my notebook - it was an invaluable experience. I got to hear how he read and pronounced the abbreviations.

After training on regular cat and tiger cub images, I was able to plot my 'test' dataset with the predictions. There were no Siamese cats in my train and valid datasets. I am going to make the test set more challenging to see what fails.

[Screenshot: test set of cats and tiger cubs]

This was also invaluable for starting to develop my ideas for my deep learning project: Music Style Transfer. @sandip and @Matthew gave me a bunch of good ideas for the project formulation and pre-processing. I'm looking forward to working on this project and coming back to give and receive help on deep learning projects. Thanks for organizing, @Matthew!

@jmcarpenter2 , @Matthew

I am trying to generate a blurred image dataset.
I want to take my test folder images, blur them and save in a new folder, say test1.
I am using the Gaussian filter from PIL - it has the radius attribute that lets me control how much blurring I want.
I can open an image from a source directory, do the filtering on one image and it saves in the working directory.
I cannot yet batch process an entire folder or save the processed images in a specific folder.
I think the main problem is that when I am reading the filenames recursively using glob.glob, the file name is actually the entire path plus the filename.
Any pointers?
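For reference, here is roughly where I am (the file and folder names below are just examples):

from glob import glob
from PIL import Image, ImageFilter

# blurring a single image works
img = Image.open("test/cat_01.jpg")
img = img.filter(ImageFilter.GaussianBlur(radius=5))
img.save("cat_01_blurred.jpg")  # ends up in the working directory

# but every name glob returns includes the folder as part of the string
print(glob("test/*.jpg"))  # e.g. ['test/cat_01.jpg', 'test/cat_02.jpg']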

@sandip

Leaving multiprocessing aside for now, I'd just iterate through the glob, modifying the images one by one and saving them.

This is what comes to mind:

import os
from glob import glob
from PIL import Image, ImageFilter

out_dir = "test1"
os.makedirs(out_dir, exist_ok=True)  # make sure the output folder exists
fpaths = glob("in_dir/*.jpg")
for fpath in fpaths:
    img = Image.open(fpath)
    img = img.filter(ImageFilter.GaussianBlur(radius=5))  # your blur; adjust radius as needed
    fname = os.path.basename(fpath)  # just the filename, without the folder
    img.save(os.path.join(out_dir, fname))

It's just a sketch, so feel free to ask more questions.


Thanks!

Blurred images up to a radius of 5 (example below) were predicted correctly.
I need to clean up my workflow.

[Example images: testfile-6, testfile-6b]


Matt - this is how I converted your sketch:

[Screenshot: image processing code]