People still play chess even though chess programs are now far superior, and I guess people will continue to create art and patronise human art for a long time to come, even if AI-assisted art becomes superior. For now, I think the user still needs some artistic sense to create quality work with these tools.
Many artists are already struggling to make a living through their art, so I can see why they might react against this threat to their livelihood. Some artists who already use digital processes are embracing it as an amazing new tool, others are curious, and others would like to see it banned altogether… but I don’t think it’s possible to put the genie back into the bottle.
Other ethical concerns include the potential to deceive people by publishing fake media, harassment with offensive or disturbing imagery based on the victim’s likeness, illegal imagery, and other uses that might be bad PR for the company that built the model. A person can assault someone with a hammer rather than using it for carpentry, but this doesn’t mean we should ban hammers or make them soft and bouncy. If someone commits an offence such as harassment with these tools, that person is responsible for what they did, and they might be penalised legally or socially. I do place some blame on guns, which are designed to kill, but we shouldn’t blame a hammer or a paintbrush.
There’s also the issue of bias and lack of diversity. I suppose when asked to draw a “CEO” the model is likely to produce mostly old white men, and in general it seems to draw more white people. But it’s not difficult to ask the model to draw people of different genders, races, and ages, or to automate that as OpenAI has done with DALL·E 2. As I understand it, OpenAI implemented a hack that appends words to the prompt, giving more diverse results with regard to gender, race, and age when these were not already specified, and it’s easy to do something similar ourselves if we want to. Training or fine-tuning with a less biased or affirmatively adjusted dataset would be another possibility.