I set up the Dreambooth training notebook on my desktop, and the StableDiffusionSafetyChecker seems to have a weird definition of NSFW. I switched the sample prompt for the toy cat sample dataset from “a photo of sks toy riding a bicycle” to “a photo of sks toy flying a plane,” and it flagged one of the outputs as NSFW.
I’m not sure what it’s seeing, but I disabled the safety checker by making the following changes to this file. The image above was generated after disabling it.
innom-dt/mambaforge/envs/fastai-2022p2/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
        # safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
        # image, has_nsfw_concept = self.safety_checker(
        #     images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
        # )
        return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=False)
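Editing a file inside site-packages works, but the change is lost whenever diffusers is upgraded or the environment is rebuilt. A less invasive alternative is to monkey-patch the loaded pipeline object at runtime. This is a minimal sketch, not the library's official API: it assumes the pipeline calls `self.safety_checker(images=..., clip_input=...)` and expects back the images plus a per-image list of NSFW flags, matching the call site commented out above. The `disable_safety_checker` helper name is my own.

```python
def disable_safety_checker(pipe):
    """Replace a pipeline's safety checker with a pass-through.

    Assumes the pipeline invokes self.safety_checker(images=..., clip_input=...)
    and uses the returned (images, has_nsfw_concept) pair, as in the
    commented-out call above. Nothing is ever flagged.
    """
    def dummy_checker(images, clip_input):
        # Return the images untouched and report no NSFW content.
        return images, [False] * len(images)

    pipe.safety_checker = dummy_checker
    return pipe
```

With this in place, the site-packages file can stay untouched; the patch lives in your own notebook and disappears when the pipeline object does. Note that newer diffusers versions also accept `safety_checker=None` directly in `from_pretrained`, which may be simpler still.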