About the Deep Learning category

Use this category to discuss anything to do with deep learning that’s not related to a fast.ai course (each of those has its own category) - including stuff that’s not related to fast.ai at all!

Topics could include new papers, projects, applications, recent news, or anything else you’re interested in!


This is really good news. Sometimes I have a question but it is not directly related to anything that was taught in the courses. Now there is a good place to ask.


This is a good place to search for unfamiliar deep learning topics.

Thanks.

:yum:

May I know what the difference is between Deep Learning and Machine Learning?
Are they both related to Data Science or Artificial Intelligence?

Understanding how artificial intelligence works may seem overwhelming, but it largely comes down to two concepts: machine learning and deep learning. These two terms are often used interchangeably as if they mean the same thing, but they do not. Neither term is new, but the way they are used to describe intelligent machines has kept changing.

Difference Between Machine Learning and Deep Learning

It is important for organizations to clearly understand the difference between machine learning and deep learning. By definition, machine learning is an approach in which algorithms parse data, learn from it, and then apply what they have learned to make informed decisions. A simple example is Netflix, which uses an algorithm to learn about your preferences and present you with choices you may like to watch.

In the case of machine learning, the algorithm typically needs to be told how to make an accurate prediction by being given hand-crafted features or additional information, whereas in the case of deep learning the algorithm learns such representations through its own processing of the data. It is similar to how a human being might identify something, think about it, and then draw a conclusion.
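To make the contrast concrete, here is a small hypothetical sketch (the feature counts, image sizes, and layer sizes are made up for illustration): a classical machine learning model works on features someone has already engineered, while a deep learning model takes the raw input and learns its own feature hierarchy.

import torch
import torch.nn as nn

# "Machine learning" setting: the modeller supplies hand-crafted features
# (e.g. colour histograms, edge counts) and the model only learns weights for them.
handcrafted_features = torch.randn(32, 10)      # 32 samples, 10 engineered features (made up)
linear_model = nn.Linear(10, 2)                 # logistic-regression-style classifier
ml_scores = linear_model(handcrafted_features)

# "Deep learning" setting: the network sees the raw input (fake 32x32 RGB images here)
# and learns its own features in the convolutional layers.
raw_images = torch.randn(32, 3, 32, 32)
deep_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)
dl_scores = deep_model(raw_images)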

Hope you understand the concept.

Thank you.
vinod kumar kasipuri.

What are the tools used to make a segmentation dataset? I want to create the masks from the raw images, and then feed those to a network. Came across U-net to go on about the segmentation problem. But how do I make the custom dataset from the raw images?

The task is to detect different types of red blood cells (different classes) in a microscopic image, give a count of each cell type present, and detect diseases if any. Basically it is a segmentation and classification problem.

Initially, the image is to be segmented and then patches are taken, which are later fed to a network for classification. I am not confident in hand-crafting the features myself (for segmentation).
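As a rough illustration of what feeding images plus masks to a network can look like once masks exist (however they were drawn), here is a minimal PyTorch Dataset sketch. The directory layout, one-mask-per-image naming, and per-pixel class ids are assumptions for illustration, not from the post.

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class CellSegmentationDataset(Dataset):
    # Assumed layout: images/<name>.png and masks/<name>.png, where each mask pixel
    # holds the class id of the cell type at that location (0 = background).
    def __init__(self, image_dir, mask_dir):
        self.image_dir = image_dir
        self.mask_dir = mask_dir
        self.names = sorted(os.listdir(image_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert('RGB')
        mask = Image.open(os.path.join(self.mask_dir, name))
        image = torch.from_numpy(np.array(image)).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy(np.array(mask)).long()    # target for a U-Net style model
        return image, mask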

@jeremy Also looking for your advice or suggestions if any.

Thanks!

Where can I ask about an error I am getting while going through the deep learning course?

Hello,

I am a programmer at a large school district, and I want to use the longitudinal data I have to predict who is at risk of dropping out and which remediation efforts work best for those students.
I am watching the videos and I see the benefit of layered machine learning and want to apply this to my work. Where can I go to find AI models used with education data and not get information about teaching AI topics in education? By the way, I have a tendency to learn how to swim by jumping into the deep end. Thanks in advance for any help.

Hi,
Noob here.

Here’s the dataset that I’m working on: https://www.kaggle.com/arpitjain007/game-of-deep-learning-ship-datasets

and I’m using fastai. I’ve successfully built the model, but I have no idea how to test it with the ‘test.csv’ file.

Here’s my code

from fastai import *
from fastai.vision import *

# Load the training labels and build an ImageDataBunch from the image folder
path = Path('../input/train')
path.ls()
df = pd.read_csv(path/'train.csv')
data = ImageDataBunch.from_df('../input/train/images', df, ds_tfms=get_transforms(),
                              size=224, bs=64).normalize(imagenet_stats)

# Train a ResNet-50 classifier for 5 epochs
learn = cnn_learner(data, models.resnet50, metrics=accuracy, model_dir='/kaggle/working/models')
learn.fit_one_cycle(5)

# Test-set metadata (image file names, no labels)
df_test = pd.read_csv('../input/test_ApKoW4T.csv')

I don’t know how to use the Test Dataframe to predict.
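One possible approach (just a sketch, not a definitive answer): in fastai v1 you can attach the test images to the existing DataBunch and call get_preds on the test set. The image folder and filename column below are assumptions carried over from the training code; adjust them to wherever the test images actually live.

# Sketch only: attach the test images to the DataBunch, then predict on them.
# '../input/train/images' and cols=0 (the filename column) are assumptions.
test_items = ImageList.from_df(df_test, '../input/train/images', cols=0)
data.add_test(test_items)

preds, _ = learn.get_preds(ds_type=DatasetType.Test)          # one probability vector per test image
predicted_classes = [data.classes[i] for i in preds.argmax(dim=1).tolist()]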

Did you figure out how to post a question?

We use Discourse (a free, open source discussion platform) for these forums, and Discourse relies on a system of trust levels. New users can only create a topic after they first spend 10 minutes (total) reading at least 3 different posts on the forum. We have these limits in place to encourage you to get acquainted a bit with the discussions and some of the existing content before you start posting, and to discourage spammers. After spending 10 minutes reading at least 3 different posts, you should be able to create your own topic.

Maybe you are a new user?

New Kaggle competition

Ciphertext Challenge III: Wherefore Art Thou, Simple Ciphers?

"In this new decryption competition’s dataset, we’ve gone… to a time before computers… Shakespeare’s plays are encrypted, and we time travelers must un-encrypt them so people can do innovative stage productions with intricate makeup, costumes…

As in previous ciphertext challenges, simple classic ciphers have been used to encrypt this dataset, along with a slightly less simple surprise that expands our definition of “classic” into the modern age. The mission is the same: to correctly match each piece of ciphertext with its corresponding piece of plaintext."

Hello, I’m currently working on the X-ray dataset to predict the pathologies.
Is there a way to find masks using Mask R-CNN on the ImageDataBunch created using the data block API?
Also, can we use a U-Net architecture without annotations or masks, directly on the X-ray images?
Thanks.

Why does my validation loss plot fluctuate so much? Its loss ranges from 0.4 to 1.2.

Also, why does it start low and then go high? Isn’t that weird?

My code:

import tensorflow as tf
import scipy.io as spio
import random as rn
import numpy as np
import os
from tensorflow.keras import backend as K   # use the tf.keras backend, since the model below is built with tf.keras

# Load the data saved as MATLAB .mat files
mat = spio.loadmat('32_32/X_train123.mat', squeeze_me=True)
mat1 = spio.loadmat('32_32/Y_train123.mat', squeeze_me=True)
mat2 = spio.loadmat('32_32/X_test123.mat', squeeze_me=True)
mat3 = spio.loadmat('32_32/Y_test123.mat', squeeze_me=True)
x_train = mat['x_test']        # the key is 'x_test' because of a typo when the file was saved; it holds the training data
y_train = mat1['y_train']
x_test = mat2['x_test']
y_test = mat3['y_test']

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)

# Fix the random seeds and run single-threaded for reproducibility (TF 1.x API)
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(37)
rn.seed(1254)
tf.set_random_seed(89)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)

# Fully connected binary classifier on 3001-dimensional input vectors
model = tf.keras.models.Sequential()
#model.add(tf.keras.layers.Flatten())                                 # do not delete
model.add(tf.keras.layers.Dense(256, input_dim=3001, activation=tf.nn.tanh))
model.add(tf.keras.layers.Dense(256, activation=tf.nn.tanh))
#model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.tanh))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.tanh))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    validation_data=(x_test, y_test),
                    epochs=300)

# Final evaluation on the held-out set
val_loss, val_acc = model.evaluate(x_test, y_test)

In this plot the number of epochs is 500, and as you can see, it seems that something is repeated frequently.
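As a rough diagnostic (not part of the original post), you can plot the stored history together with a moving average of the validation loss to see whether the swings are noise around a flat trend or a genuinely repeating pattern; the window size below is an arbitrary choice.

import numpy as np
import matplotlib.pyplot as plt

val_loss = np.array(history.history['val_loss'])
train_loss = np.array(history.history['loss'])

window = 10                                                   # arbitrary smoothing window
smoothed = np.convolve(val_loss, np.ones(window) / window, mode='valid')

plt.plot(train_loss, label='train loss')
plt.plot(val_loss, alpha=0.4, label='val loss (raw)')
plt.plot(np.arange(window - 1, len(val_loss)), smoothed, label='val loss (moving avg)')
plt.xlabel('epoch')
plt.legend()
plt.show()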


Have people checked out Groq?

Would love to know what @Jeremy and @Rachel think

Is it going to be a massive breakthrough in terms of deep learning applications?

Thanks

Yes, machine learning is a part of data science, so there are plenty of relations between them. Machine learning algorithms train on data delivered by data science to become smarter and more informed when giving back business predictions. The main difference lies in the fact that data science covers the whole spectrum of data processing; it is not limited to the algorithmic or statistical aspects. Read this article to learn the difference between data science, machine learning, and artificial intelligence: https://www.cleveroad.com/blog/data-science-vs-machine-learning-vs-ai

It’s very useful!! Thanks!

Hi, I am working on a cats vs dogs classifier, but for some reason my loss is not decreasing and it is stuck at 69%. I am using:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3)
        self.max_pool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(16, 32, 3)
        self.conv3 = nn.Conv2d(32, 64, 3)
        self.conv4 = nn.Conv2d(64, 32, 3)
        self.conv5 = nn.Conv2d(32, 16, 3)
        self.batch_norm2 = nn.BatchNorm2d(16)
        self.fc1 = nn.Linear(784, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.max_pool(x)
        x = F.relu(self.conv2(x))
        x = self.max_pool(x)
        x = F.relu(self.conv3(x))
        x = self.max_pool(x)
        x = F.relu(self.conv4(x))
        x = self.max_pool(x)
        x = F.relu(self.conv5(x))
        x = self.max_pool(x)
        x = x.view(-1, 16*7*7)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        return F.softmax(x)

for epoch in range(EPOCHS):
    for j, sample in enumerate(dataset_loader['train']):
        running_loss = 0.0
        optimizer.zero_grad()  # zero the parameter gradients
        output = net(sample['image'].to(device))

        loss = criterion(output, sample['label'].to(device))

        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        print(epoch, ':', running_loss)

I am using this dataset:


Here is the whole code:
https://drive.google.com/open?id=1o_1YT859FrR2cH7si8kyNmYSDva_xyy-

Hi, maybe you can try adding some dropout between your Linear layers.
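In case it helps, here is a minimal, self-contained sketch of that suggestion, with layer sizes mirroring the model above; the dropout probability of 0.5 is just an example value to tune.

import torch
import torch.nn as nn

# Dropout inserted between the fully connected layers (sizes taken from the post above).
classifier_head = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Dropout(p=0.5),                  # example probability, tune as needed
    nn.Linear(128, 64), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),                   # final layer left linear here
)

features = torch.randn(8, 784)          # stand-in for the flattened conv output
logits = classifier_head(features)      # dropout is active while the module is in training mode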

Just want to share a great read on Geoffrey Hinton, Yann LeCun, and Yoshua Bengio’s talks at AAAI 2020: “AAAI 2020 | A Turning Point for Deep Learning? Hinton, LeCun, and Bengio Might Have Different Approaches”.