I marked the question as OT (off-topic) because it is not exactly part of the course.

After watching and practicing chapters 1-3 with good results, I decided to go back to one of my previous 'from scratch' learning models so I could put to use the concepts I have grasped so far thanks to this course (dropout, learning rates, optimizers, etc.).

So I made this model (in Keras):

```
# imports needed to run this standalone
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

dataset = np.loadtxt("data/pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
# create model
model = Sequential()
model.add(Dense(64, input_dim=8, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
rms = RMSprop(learning_rate=0.003)  # `lr` is a deprecated alias in recent Keras
# Compile model
model.compile(loss='binary_crossentropy', optimizer=rms, metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=700, batch_size=100)
#model.save_weights("data/diabetes.h5")
# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
```

The data is well known as a beginners' exercise. The weird things I run into while trying to tune it are these:

a. I need 700 epochs to hit 90+ accuracy, but at 710 epochs it seems to overfit a lot and my accuracy drops badly.
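Rather than hand-tuning the exact epoch count, a common fix is to stop training automatically when a monitored metric stops improving. Keras provides an `EarlyStopping` callback for this; the core logic is roughly the following sketch, where the hypothetical `val_losses` list stands in for per-epoch validation losses:

```python
def early_stop_epoch(val_losses, patience=5):
    """Return the epoch index at which training would stop:
    the first epoch where the best loss seen so far has failed
    to improve for `patience` consecutive epochs."""
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            best_epoch = epoch
        elif epoch - best_epoch >= patience:
            return epoch  # stop here; keep weights from best_epoch
    return len(val_losses) - 1  # patience never exhausted

# Example: loss improves until epoch 3, then stalls
losses = [0.9, 0.7, 0.6, 0.5, 0.55, 0.56, 0.57, 0.58, 0.59, 0.60]
print(early_stop_epoch(losses, patience=5))  # stops at epoch 8
```

In Keras itself this would look something like `model.fit(X, Y, epochs=700, validation_split=0.2, callbacks=[EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)])`, so the run ends near the best epoch instead of a fixed 700.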

b. The results are not consistent. With the same data I run it once and get 93.36 accuracy with 0.1664 loss; five seconds later I run it again and get 90.49, a third time 94.40, and so on (never the same, and always a big difference between runs).
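Most of that run-to-run variance comes from random weight initialization and data shuffling, which use a different seed each run. Fixing the seeds makes runs repeatable; here is a NumPy-only sketch of the effect (for Keras itself you would additionally call `tf.random.set_seed(...)` before building the model, since the weights are initialized by TensorFlow's RNG):

```python
import numpy as np

def init_weights(seed, shape=(8, 64)):
    # Same seed -> identical initial weight matrix, hence the same
    # training trajectory, all else being equal.
    rng = np.random.default_rng(seed)
    return rng.normal(size=shape)

w1 = init_weights(42)
w2 = init_weights(42)
w3 = init_weights(7)
print(np.array_equal(w1, w2))  # identical seeds match
print(np.array_equal(w1, w3))  # different seeds differ
```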

Dropout does not work here, I suppose because I have little data; if I add a dropout layer the entire model falls apart badly, no matter the rate.

I tried 7000 epochs with a batch size of 1000 and got 99.87% on the first run, 97.40 on the second, and 99.87 again with 0.0103 loss on the third.

I just don’t understand why the same model with the same parameters and the same data gives me such different results.
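One thing worth noting: the code above fits and evaluates on the *same* data, so the 99%+ numbers mostly measure memorization, not generalization. Holding out a test set gives a more honest accuracy and makes overfitting visible. A minimal NumPy sketch of the split (the random `X`/`Y` stand in for the Pima data loaded above):

```python
import numpy as np

def train_test_split(X, Y, test_frac=0.2, seed=0):
    """Shuffle row indices once, then carve off a held-out test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], Y[train_idx], X[test_idx], Y[test_idx]

# Stand-in for the Pima data: 768 rows, 8 features
X = np.random.rand(768, 8)
Y = (np.random.rand(768) > 0.65).astype(int)
X_tr, Y_tr, X_te, Y_te = train_test_split(X, Y)
print(X_tr.shape, X_te.shape)  # (615, 8) (153, 8)
```

With the split in place you would train with `model.fit(X_tr, Y_tr, validation_data=(X_te, Y_te), ...)` and report `model.evaluate(X_te, Y_te)`; the gap between train and test accuracy is the overfitting you are seeing.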