Why L2/L1 loss for regression?

L1 and L2 losses have some well-studied properties for linear models, but for deep learning models there is no convexity guarantee anyway. So is there any good reason we never use an L1.5 loss, or L3, L4 losses?
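
To make the question concrete, here is a minimal sketch of the kind of generic Lp loss I have in mind, written in PyTorch purely for illustration (the function name `lp_loss` and the toy tensors are my own, not from any library):

```python
import torch

def lp_loss(pred, target, p=1.5):
    """Generic Lp regression loss: mean of |pred - target|^p.
    p=1 recovers L1 (MAE), p=2 recovers L2 (MSE), p=1.5 sits in between."""
    return torch.abs(pred - target).pow(p).mean()

# Toy example: the loss is differentiable and trains like any other criterion.
pred = torch.tensor([1.0, 2.0, 5.0], requires_grad=True)
target = torch.tensor([1.5, 2.0, 0.0])
loss = lp_loss(pred, target, p=1.5)
loss.backward()
print(loss.item(), pred.grad)
```

Nothing stops this from being dropped into a training loop in place of `nn.MSELoss` or `nn.L1Loss`, so the question is really about why values of p other than 1 and 2 are almost never used in practice.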