
Just to be clear, the child was just an example of someone who could theoretically experience 'cruel' treatment from the current version of stable diffusion. I'm absolutely not recommending people let their children use the model unsupervised. It doesn't have to be a parenting problem, though.

The same could be said (for example) of a random mother trying to get inspiration for a 'My Little Pony' birthday cake for her child, and being presented with the 'other' kind of image unintentionally, without her consent. I think she would be justifiably upset in that situation.

If we were to imagine someone attempting to put stable diffusion into some future consumer product, I think they would have to be concerned about these kinds of scenarios. Which is why researchers are trying to figure out how to accomplish this kind of filtering in the first place.
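
For reference, the public release already ships a crude version of this: the diffusers pipeline runs a post-hoc safety checker over every output and blanks anything it flags. A minimal sketch (the model id and prompt here are just examples):

    import torch
    from diffusers import StableDiffusionPipeline

    # The bundled safety checker screens each generated image after
    # the fact and replaces anything it flags with a blank image.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    result = pipe("a my little pony birthday cake")
    image = result.images[0]
    if result.nsfw_content_detected[0]:
        print("output was flagged and blanked by the safety checker")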

FWIW, I don't think a model could be made that actively prevents people from fine-tuning it on their own NSFW training data. The only difference in the future will be that the public models won't produce that output 'for free', with no modifications needed. You'll have to train your own model, or wait for someone else to train one.
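
And retraining isn't exotic. A rough sketch of the standard fine-tuning loop, assuming the diffusers-format v1.4 weights; the learning rate is a placeholder and `dataloader` is a hypothetical loader yielding (pixel tensors in [-1, 1], caption strings) from your own tagged images:

    import torch
    import torch.nn.functional as F
    from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
    from transformers import CLIPTextModel, CLIPTokenizer

    repo = "CompVis/stable-diffusion-v1-4"  # placeholder base model
    vae = AutoencoderKL.from_pretrained(repo, subfolder="vae")
    unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
    text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
    tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
    noise_sched = DDPMScheduler.from_pretrained(repo, subfolder="scheduler")

    optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

    # `dataloader` is hypothetical: batches built from your own images.
    for pixels, captions in dataloader:
        # Encode images to latents, add noise at a random timestep.
        latents = vae.encode(pixels).latent_dist.sample() * 0.18215
        noise = torch.randn_like(latents)
        t = torch.randint(0, noise_sched.config.num_train_timesteps,
                          (latents.shape[0],), device=latents.device)
        noisy = noise_sched.add_noise(latents, noise, t)
        # Condition on the captions and train the UNet to predict the noise.
        ids = tokenizer(captions, padding="max_length", truncation=True,
                        max_length=tokenizer.model_max_length,
                        return_tensors="pt").input_ids
        cond = text_encoder(ids)[0]
        pred = unet(noisy, t, encoder_hidden_states=cond).sample
        loss = F.mse_loss(pred, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()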



Interestingly, better results might be achieved by exposing the model to a large corpus of appropriately tagged NSFW data, so that those tags can be excluded at prompt time. I imagine img2img could also make an image SFW, or vice versa. I'd be curious to know what kind of alterations it would make.
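
A rough sketch of both ideas against the diffusers API (the tag list, file name, and strength value are made up, and `negative_prompt` assumes a reasonably recent diffusers version):

    from PIL import Image
    from diffusers import (StableDiffusionImg2ImgPipeline,
                           StableDiffusionPipeline)

    repo = "CompVis/stable-diffusion-v1-4"

    # Text-to-image: if the model learned the tags, a negative prompt
    # can steer sampling away from them at inference time.
    pipe = StableDiffusionPipeline.from_pretrained(repo).to("cuda")
    cake = pipe(
        "my little pony birthday cake, colorful frosting",
        negative_prompt="nsfw, nudity, gore",  # made-up tag list
    ).images[0]

    # img2img: re-render an existing image, pushing it toward SFW.
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(repo).to("cuda")
    init = Image.open("questionable.png").convert("RGB").resize((512, 512))
    sfw = img2img(
        "safe for work, fully clothed",
        image=init,
        strength=0.6,  # how much of the original to repaint
        negative_prompt="nsfw",
    ).images[0]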



