People still play chess even though chess programs are vastly superior, and I guess people will continue to create art and patronise human art for a long time to come, even if AI-assisted art becomes superior. For now, I think the user still needs some artistic sense to create quality work with these tools.
Many artists are struggling to make a living through their art, so I can see why they might react against this threat to their livelihood. Some artists who already use digital processes are embracing it as an amazing new tool, others are curious, and others would like to see it banned altogether… but I don't think it's possible to put the genie back into the bottle.
Other ethical concerns include the potential to deceive people by publishing fake media, harassment with offensive or disturbing imagery based on the victim's likeness, illegal imagery, and other uses that might be bad PR for the company that built the model. A person can assault someone with a hammer rather than using it for carpentry, but this doesn't mean we should ban hammers or make them soft and bouncy. If someone commits an offence such as harassment with these tools, that person is responsible for what they did, and they might be penalised legally or socially. I do blame guns for killing people, but we shouldn't blame a hammer or a paintbrush.
There's also the issue of bias and lack of diversity. When asked to draw a "CEO", the model is likely to produce mostly old white men, and in general it seems to draw more white people. But it's not difficult to ask the model to draw people of different genders, races, and ages, or to automate that as OpenAI has done with DALL-E 2. As I understand it, OpenAI has implemented a workaround that appends words to prompts to give more diverse results with regard to gender, race, and age when these were not already specified, and it's easy to do something similar ourselves if we want to. Training or fine-tuning with a less biased or affirmatively adjusted dataset would be another possibility.
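As a rough illustration of the prompt-appending approach, here is a minimal sketch (my own hypothetical code, not OpenAI's actual implementation): if a prompt doesn't already mention gender, race, or age, append a randomly chosen attribute before sending it to the model.

```python
import random

# Hypothetical attribute pool; a real system would use a more careful
# and more complete taxonomy than this toy list.
ATTRIBUTES = ["woman", "man", "Black", "Asian", "Hispanic", "white",
              "young", "elderly"]

# Words that suggest the prompt already specifies gender, race, or age.
SPECIFIERS = {a.lower() for a in ATTRIBUTES} | {
    "female", "male", "girl", "boy", "old",
}

def augment_prompt(prompt: str) -> str:
    """Append a random diversity attribute unless one is already present."""
    words = set(prompt.lower().split())
    if words & SPECIFIERS:
        return prompt  # already specified; leave the prompt alone
    return f"{prompt}, {random.choice(ATTRIBUTES)}"
```

Averaged over many generations, this yields a more diverse spread of outputs without overriding prompts where the user has already made a choice. A naive word match like this will of course miss phrasings such as "grandmother" or "teenager"; it's only meant to show the shape of the idea.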
Another ethical issue is around free software. DALL-E 2 and Imagen are both more powerful than SD1.4, but neither is open source. Imagen isn't even available to use, and DALL-E 2 has odious terms of use: OpenAI claims ownership of all generated images, it can retroactively cancel your licence to use generated images, and there are many other restrictions. I suppose their position is different because they are running SaaS, but this is the opposite of "open".
Stability AI is doing much better, but their model licence is still restrictive, and the model does not qualify as free software. What's the point of saying "You agree not to use the Model or Derivatives of the Model in any way that violates any applicable national, federal, state, local or international law or regulation"? It's already illegal to break the law! Harassment, slander, fraud, and hate speech are already illegal or subject to law. If we need more rules around the use of AI, content, and human behaviour in general, these laws should be established in the appropriate context by due process, under advice from experts in AI and ethics. They should not be appended as terms of use on a software licence.
The Apache webserver licence doesn't say "this webserver can't be used by the military, or petrochemical companies, or hedge funds, or for GM research" or whatever else its developers might think is unethical. It doesn't say that it can't be used to host hate speech. If it did say any of those things, it wouldn't be free software, and hardly anyone would use it. It's not the role of a software developer, a model developer, or a hammer manufacturer to attempt to impose their own ethics on their users or customers. Even if the ethical guidelines seem to be good, it smells paternalistic to attempt to enforce them on others. As a free software enthusiast, I think it's unethical to add miscellaneous terms of use like this to a software licence.