Share your work here ✅

Hi Johnpal
Nice work!
mrfabulous1 :smiley::smiley:

1 Like

The plan is to identify calls, then to classify them, and finally (hopefully) to use NLP models to search for underlying structure.

You may also be interested in this new 8TB dataset of whale calls I stumbled on the other day http://www.soest.hawaii.edu/ore/dclde/dataset/

DCLDE being the 2020 Workshop on Detection, Classification, Localization and Density Estimation of Marine Mammals using Passive Acoustics

1 Like

Hello Jeremy, in fact it wasn’t. I used the tabular notebook from the class; I added a few lines, mostly to preprocess the dataset, and used the one cycle policy.
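Roughly, the extra lines amounted to something like this (a sketch from memory rather than the exact notebook; df, cat_names, cont_names and dep_var are placeholders for my dataset):

from fastai.tabular import *

# placeholders: df, cat_names, cont_names and dep_var come from the dataset
procs = [FillMissing, Categorify, Normalize]
data = (TabularList.from_df(df, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_rand_pct(0.2)
        .label_from_df(cols=dep_var)
        .databunch())

learn = tabular_learner(data, layers=[200, 100], metrics=accuracy)
learn.fit_one_cycle(5, 1e-2)  # the one cycle policy from the lesson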

I was going to share the notebook in a github repository, but I had to leave home.

I will share it on Monday.

2 Likes

Really?!

Awesome! I have seen some amazing things happening with the fastai tabular framework. A winning solution for one of the Kaggle competitions used k-folds; I haven’t been able to completely go through that framework yet, but it’s on my list.

I have worked with the tabular model and used the embeddings to get better results with a random forest, but I look forward to seeing your approach. I still think there is so much value in tabular data, even though NLP is what fascinates me the most.
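For context, the embedding-to-random-forest step I mentioned was roughly the following (a rough sketch with placeholder names, not a snippet I have re-run):

import numpy as np
import torch
from sklearn.ensemble import RandomForestClassifier

# swap each categorical code for its learned embedding vector, then hand the
# widened feature matrix to a random forest
# (learn is the trained fastai tabular_learner; the arrays are placeholders)
def embed_features(learn, cat_codes, cont_vals):
    parts = []
    for i, emb in enumerate(learn.model.embeds):
        idx = torch.as_tensor(cat_codes[:, i], dtype=torch.long)
        parts.append(emb.weight[idx].detach().cpu().numpy())
    parts.append(cont_vals)
    return np.concatenate(parts, axis=1)

X_train = embed_features(learn, train_cat_codes, train_cont_vals)
rf = RandomForestClassifier(n_estimators=100)
rf.fit(X_train, y_train)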

Best regards. I’ll be back Monday night to see if you were able to upload it. Thanks a million!

1 Like

Interesting, thank you, I will check it out!

Happy to have something worthwhile to share here!

Check out Learning to Feel, which uses fastai to identify and extract emotions and mood from music / audio – built on top of Streamlit.

Any feedback is more than welcome :blush:

GitHub here.

5 Likes

Hi zache, hope all is well!

I ran your app; I can see it took a lot of work.
Well done!

Cheers mrfabulous1 :smiley::smiley:

1 Like

A bit :slight_smile: Thank you!

Hi everyone,

This has probably been done many times before, but … I’ve written a web app to create and run inference on MNIST-style images.

Any feedback would be welcome (o:

Hi everyone,

I’m currently working through a first pass of both Part I & II and wanted to share my first project - mostly to say thank you to fast.ai for such a great course.

The use case is classification of New Zealand birds, but the main aim was to get it running end-to-end to understand what that involved. [mostly this :thinking:]

Cheers
Kristin

fastai-project
https://birdid-ui.azurewebsites.net/

7 Likes

Hi kristinwil
Well done!

mrfabulous1 :smiley::smiley:

1 Like

This is awesome. Your website is also very nice. Quick question: how did you prepare your Twitter data?

There is an API for Twitter, but for training I used this Kaggle dataset: https://www.kaggle.com/kazanova/sentiment140
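Loading it is straightforward; roughly like this, assuming the usual sentiment140 CSV layout (no header row, latin-1 encoding, and the file name as it comes from Kaggle):

import pandas as pd

# assumed sentiment140 layout: no header row, latin-1 encoding
cols = ['target', 'id', 'date', 'flag', 'user', 'text']
df = pd.read_csv('training.1600000.processed.noemoticon.csv',
                 encoding='latin-1', header=None, names=cols)

# keep only the tweet text and the sentiment label (0 = negative, 4 = positive)
df = df[['text', 'target']]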

1 Like

Thanks. I’ll check it out

Hello everyone, I am new to the community. I went through lesson 1 recently and, as part of the exercise, trained a model for classifying flowers and got 70% accuracy on the test dataset. Can anyone tell me how good/bad that is? :slight_smile:

The code resides here on GitHub.

Also, I had a problem adding the test dataset after the ImageDataBunch object was created. I couldn’t add it via the data.add_test method, so I had to do the following:

import pandas as pd
from fastai.vision import ImageList

# read the space-separated test manifest and name its columns
test_labels = pd.read_csv(path/'test.txt', delimiter=' ', header=None)
test_labels.columns = ['image', 'label']
test = ImageList.from_df(test_labels, path)

# prediction, one image at a time (NOT THE BEST WAY)
preds = [learn.predict(test[x])[1].tolist() for x in range(len(test))]
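For batched inference, I believe the export / load_learner route from the fastai v1 docs should look roughly like this, though I haven’t verified it on my data:

from fastai.basic_train import load_learner
from fastai.basic_data import DatasetType

# export the trained Learner, then reload it with the test ImageList attached
learn.export()
learn = load_learner(path, test=test)

# batched inference over the whole test set
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
pred_classes = preds.argmax(dim=1)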

Please let me know the correct way, if you know it. :slight_smile:

Cheers

2 Likes

This project was inspired by @astronomy88’s project, which used audio waveforms to differentiate between the voices of Ben Affleck, Joe Rogan, and Elon Musk.

I made a model that can detect whether a voice sample is real or impersonated. I used Donald Trump as a reference, and thus built a model that can tell whether a speech sample is spoken by Donald Trump himself or by an impersonator. I got an accuracy of 92.5% on a dataset containing 200+ frequency-domain audio spectrum graphs of hand-curated audio snippets.

Following are some details if anyone is interested.
I found some shortcomings in @astronomy88’s dataset. What he had used as data were time-domain power density graphs of 5-second audio snippets taken at regular intervals. The problem with this dataset is that it doesn’t account for background noise, other people speaking, silence (the absence of any speech in certain sections), different intonations, etc.
What I did instead was go through the entire audio files and carefully select only those snippets where only the subject spoke, with no or minimal background noise and voices, and where the speech was clear and loud enough.
Thankfully I could amplify and normalize the audio files in Audacity itself, so all audio snippets were roughly of the same amplitude.

Next, I plotted the frequency-domain audio spectra for all such snippets (there’s an option to plot audio spectra in Audacity), then cropped out the plot and saved the image. The reason for choosing frequency-domain analysis over time-domain analysis is that a person’s frequency spectrum is more characteristic of their natural voice than their enunciation.

Also, I noticed that different videos had different EQs; some were high-cut at one frequency, while others were high-cut at yet other frequencies. So, just for the sake of consistency, I only cropped out the frequency analysis from 0 Hz to 9 kHz.
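For anyone who would rather script this step than use Audacity’s plot window, a rough equivalent would be something like this (a sketch only; the file name is a placeholder, and I actually exported the plots from Audacity):

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

# 'snippet.wav' stands in for one hand-curated, normalized snippet
rate, samples = wavfile.read('snippet.wav')
if samples.ndim > 1:
    samples = samples[:, 0]          # keep one channel if the file is stereo

# magnitude spectrum of the snippet
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / rate)

# keep only 0 Hz - 9 kHz for consistency across differently EQ'd sources
mask = freqs <= 9000
plt.plot(freqs[mask], 20 * np.log10(spectrum[mask] + 1e-12))   # dB scale
plt.axis('off')
plt.savefig('spectrum.png', bbox_inches='tight')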

That’s it!
I trained the model and got 92.5% accuracy.

5 Likes

I love this! Thanks for sharing! I learned a lot too! :heart_eyes:

1 Like

Nice, man. I use Prophet a bit, so I’ll take a look.

  1. You’re welcome to take a look at https://github.com/raybellwaves/xskillscore/blob/master/setup.py and then https://sites.google.com/view/raybellwaves/cheat-sheets/git#h.p_wqGIjI8L-4OK (a minimal setup.py sketch follows this list).

  2. I use https://github.com/raybellwaves/xskillscore/blob/master/ci/requirements-py36.yml and Travis CI, e.g. https://github.com/raybellwaves/xskillscore/blob/master/.travis.yml and https://travis-ci.org/
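For reference, a minimal setup.py along those lines looks roughly like this (the name, version and dependencies are placeholders, not the actual xskillscore metadata):

from setuptools import setup, find_packages

setup(
    name='yourpackage',                       # placeholder package name
    version='0.1.0',
    packages=find_packages(),
    install_requires=['numpy', 'xarray'],     # hypothetical dependencies
)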

Hey guys,

I’m Winston and I created The Bipartisan Press. We label our articles with their political bias, and recently we came up with the idea of using AI to automate this process and make it more systematic.

I’ve worked with fastai in the past, so we turned to fastai to do this, also for its simplicity. Using a few tutorials we found on Medium and adapting them a little, we were able to fine-tune models like BERT, RoBERTa, and ALBERT with fastai and achieve pretty impressive results.

We documented those here: https://www.thebipartisanpress.com/politics/calculating-political-bias-and-fighting-partisanship-with-ai/

To sum it up, we used a dataset from Ad Fontes Media, trained many different variations and models, and attained the lowest mean absolute error (MAE) of 6.03 using RoBERTa.
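For anyone unfamiliar with the metric, MAE is just the average absolute difference between the predicted bias score and the reference label; a toy illustration (numbers made up for the example, not from our data):

import numpy as np

preds  = np.array([12.0, -5.0,  3.0])   # hypothetical predicted bias scores
labels = np.array([ 6.0, -1.0,  4.0])   # hypothetical reference labels
mae = np.abs(preds - labels).mean()     # (6 + 4 + 1) / 3 ≈ 3.67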

Thank you for all the various resources on the forum, and to Jeremy Howard for his very in-depth tutorials and courses.

Let us know what you think and how we can improve!

7 Likes

I know I’ve posted here on the same topic before, but I’ve converted my work using nbdev and published my first PyPI package (pip install profetorch). The docs can be found here. Still a long way to go, but it’s an exciting start.

1 Like