InceptionV3 pre-training

I am using the InceptionV3 model for transfer learning (pre-trained on ImageNet), and I want to know why every prediction comes out as a hard one-hot vector like this: [0., 1.], [0., 1.]…

This is the training code:

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Flatten, Dense, BatchNormalization, Activation, Dropout
from keras.optimizers import SGD

def Creat_InvepV3(md_name, path):
    img_width, img_height = 150, 150
    # InceptionV3 base without its classification head, on 150x150 RGB input
    base_model = InceptionV3(weights="imagenet", include_top=False,
                             input_shape=(img_width, img_height, 3))
    x = base_model.output
    x = Flatten()(x)
    x = Dense(64)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(2)(x)
    x = BatchNormalization()(x)
    x = Activation('softmax')(x)
    model = Model(inputs=base_model.input, outputs=x)
    model.compile(optimizer=SGD(lr=1e-4, momentum=0.4, decay=1e-6),
                  loss='categorical_crossentropy', metrics=['accuracy'])
    # The original line was garbled here; presumably the model is saved like this:
    model.save(path + md_name + '/model.h5')
    return model
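For context, predictions like [0., 1.] simply mean the 2-way softmax has saturated toward one class. A minimal numpy sketch of the softmax that the final Dense(2) + Activation('softmax') layers compute, showing how diverging logits collapse the output to a hard one-hot vector:

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability, then normalise
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Balanced logits give a soft prediction...
print(np.round(softmax(np.array([0.1, 0.3])), 3))   # [0.45 0.55]

# ...but once one logit dominates, the output collapses to a hard [0. 1.]
print(np.round(softmax(np.array([-8.0, 8.0])), 3))  # [0. 1.]
```

So [0., 1.] outputs are not necessarily a bug in the label format itself; they mean the network is extremely confident, which can happen if it has overfit or if every training example ends up with the same label.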

Could the reason be that my training data is labeled incorrectly? I labeled it the same way as for VGG16, and VGG16 does not produce results like this.