Don't take it seriously.
This sort of thing happens all the time; see, e.g., Exhibit A and Exhibit B.
It might be an anti-cheating measure, or a poops-n-giggles mechanism. Data cleansing is a huge part of being an analyst / hacker, so identifying why one's algo assigns a huge loss to certain outlier samples is something we should all be comfortable doing. If you have a huge dataset, manually validating it is not an option, so you'd have to use an automated process like the one described above. In the sea lions competition and the cervical cancer competition, you could do other processing such as color histograms, or comparing image sizes and depths.
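Here's a minimal sketch of what such an automated screen might look like, not the exact pipeline described above: it summarizes each image by its dimensions and per-channel color means, then flags anything that sits far from the rest of the dataset. The "train" directory, the JPEG glob, the feature set, and the z-score cutoff are all my own illustrative assumptions; an equally simple alternative is to rank samples by their individual loss and eyeball the top of the list.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def image_features(path):
    """Cheap per-image summary: width, height, and per-channel means."""
    with Image.open(path) as img:
        w, h = img.size
        arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    r, g, b = arr.reshape(-1, 3).mean(axis=0)
    return [w, h, r, g, b]

paths = sorted(Path("train").glob("*.jpg"))   # hypothetical dataset layout
feats = np.array([image_features(p) for p in paths])

# Standardize each feature across the dataset and flag images that are
# extreme in any single one (odd size, weird color cast, etc.).
z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
outliers = np.abs(z).max(axis=1) > 4.0        # arbitrary threshold

for p, flag in zip(paths, outliers):
    if flag:
        print(f"Inspect manually: {p}")
```

The point isn't the specific features; it's that a few cheap per-sample statistics shrink "manually validate everything" down to "manually inspect a shortlist".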
This is one of the few areas that separate the plug-n-play competitors from the top LB scorers: how much sweat and blood they put into fine-tuning their solutions.