Sometimes it helps to look at another metric in addition to loss and accuracy. One popular and useful metric for binary classification is the AUC (Area Under the ROC Curve). It is especially valuable when you have an imbalanced dataset, where accuracy can be very misleading: if 90% of your samples belong to one class and 10% to the other, a model that just guesses the majority class every time gets 90% accuracy, yet it's a useless classifier. So it's important to look at the balance between true positives and false positives, which is exactly what AUC captures.
It's pretty easy to use this metric; see the code below:
from sklearn import metrics

# y holds the true binary labels; pred should hold predicted probabilities
# or scores for the positive class (not hard 0/1 labels) for a meaningful AUC
roc_auc = metrics.roc_auc_score(y, pred)
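To make the imbalance point concrete, here is a minimal sketch. The 90/10 split and the "always predict the majority class" model are made up purely for illustration:

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical imbalanced dataset: 90 negatives, 10 positives
y_true = np.array([0] * 90 + [1] * 10)

majority_pred = np.zeros(100)       # always predict the majority class
constant_score = np.full(100, 0.5)  # the corresponding uninformative score

print(accuracy_score(y_true, majority_pred))   # 0.9 -- looks good, isn't
print(roc_auc_score(y_true, constant_score))   # 0.5 -- no better than chance

The 90% accuracy looks respectable, but the 0.5 AUC exposes that the model carries no information at all about the positive class.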