When does classification become regression?

Is there a point at which we should switch from classification to regression based on the number of classes? Conversely, if I can adequately bucket regression outputs into classes, should I just use classification instead?

An example I am working on:
I have an image that I want to grade on a scale from 1 to 100 (preferably as integers). If I had an adequate dataset in which each grade is represented by enough samples, should I treat this as a classification problem or a regression problem? What about scales of 1-1000, or 1-1000000, or 0-1 in increments of 0.00001?

Is there much of a difference between regression and classification beyond the loss function and the design of the last one or two layers?
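To make the question concrete, here is a minimal NumPy sketch of the two head designs for the 1-100 grading example (the feature dimensions, weight scales, and sample grades are made up for illustration): a classification head emits one logit per grade, while a regression head emits a single scalar.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 32))  # hypothetical penultimate-layer activations

# Classification head: one logit per grade (100 classes), softmax + cross-entropy.
W_cls = rng.normal(size=(32, 100)) * 0.1
logits = features @ W_cls                        # shape (4, 100)
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
targets = np.array([9, 49, 74, 99])              # grades as 0-based class indices
ce_loss = -np.log(probs[np.arange(4), targets]).mean()

# Regression head: a single output, MSE against the grade itself.
w_reg = rng.normal(size=(32, 1)) * 0.1
pred = (features @ w_reg).ravel()                # shape (4,)
mse_loss = ((pred - (targets + 1)) ** 2).mean()  # grade = index + 1

print(logits.shape, pred.shape)
```

Everything upstream of these heads can be identical; the output shape and loss are what differ.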

Using a regression loss like MSE for classification will give you some positive results, just not as good as a proper classification loss. Why? Because losses like sigmoid cross-entropy are resistant to outliers. Since the output is squashed between fixed values, even an example far out on either side of the curve can only contribute a bounded error, so a single outlier cannot dominate the gradient and drag the update in the wrong direction for everything else. With MSE on raw outputs, the gradient grows without bound as the error grows.
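You can see the bounded-gradient property directly. For sigmoid cross-entropy the gradient with respect to the logit is sigmoid(z) - y, which lies in [-1, 1]; for MSE on the raw output it is 2(z - y), which grows with the error (the logit values below are arbitrary examples):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])  # raw model outputs (logits)
y = 1.0                                      # true label

# Cross-entropy on a sigmoid output: dL/dz = sigmoid(z) - y, always in [-1, 1].
grad_ce = sigmoid(z) - y

# MSE on the raw output: dL/dz = 2 * (z - y), unbounded in the error.
grad_mse = 2.0 * (z - y)

print(grad_ce)   # every entry bounded by 1 in magnitude
print(grad_mse)  # the outlier at z = -10 contributes a gradient of -22
```

The outlier at z = -10 contributes a gradient 22 times larger than the worst case under cross-entropy, which is why one bad example can swamp an MSE update.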

Also, my comment on the second part of your question: regression assigns a value, where a higher value means a stronger presence of certain features. If your classes are named in some arbitrary order and the ordering carries no meaning, then assigning each class a numeric value and asking your model to make an ordered prediction that distinguishes them could be difficult, maybe impossible, or might require a tremendous number of nonlinearities. You could run some experiments and let us know. It's pretty simple: just change the loss function!
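A toy experiment along these lines (entirely synthetic data, made up for illustration): fit a linear regression when the labels are in a meaningful order versus the same labels under a random permutation. The ordered version fits well; the shuffled one should not.

```python
import numpy as np

rng = np.random.default_rng(42)
n_classes, n_per = 10, 50
labels = np.repeat(np.arange(n_classes), n_per)
# A feature whose magnitude tracks the class's "amount" of some attribute.
x = labels + rng.normal(scale=0.5, size=labels.size)

def regression_r2(y):
    # Fit a least-squares line x -> y and return the R^2 of the fit.
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y.astype(float), rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

ordered = regression_r2(labels)                               # meaningful order
shuffled = regression_r2(rng.permutation(n_classes)[labels])  # same classes, random order

print(f"R^2 ordered:  {ordered:.3f}")
print(f"R^2 shuffled: {shuffled:.3f}")
```

The same data and the same model family, but once the class-to-number mapping stops reflecting the underlying feature, a single linear fit can no longer explain the targets.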