To the extent that the people making the models feel unburdened by the data being explicitly watermarked "don't use me", you are correct.
Seems like an awful risk to deliberately strip such markings. It's a kind of DRM, and breaking DRM is illegal in many countries.
But it's not intended as a watermark; it's an attempt at disruption. And with some models it simply doesn't work.
For instance, I've seen somebody experiment with Glaze (the image AI version of this). Glaze at high levels produces visible artifacts (see middle image: https://pbs.twimg.com/media/FrbJ9ZTacAAWQQn.jpg:large ).
It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture: the character simply ends up wearing a funny patterned shirt. Meanwhile, the intended result is to fool the model into generating something other than the intended character.
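If you want to sanity-check this kind of thing yourself, a crude but quick test is to see how far the glazed image actually moves in a vision encoder's feature space compared to the original. A minimal sketch using CLIP as a stand-in encoder (Glaze targets the feature extractors of specific generative models, so this is only an approximation; the filenames are placeholders):

    # Rough check: does the Glaze perturbation move the image in a
    # vision model's feature space at all? CLIP is a stand-in here.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed(path):
        # Encode one image into CLIP's image-embedding space (L2-normalized).
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            features = model.get_image_features(**inputs)
        return features / features.norm(dim=-1, keepdim=True)

    # "original.png" and "glazed.png" are placeholder filenames.
    original = embed("original.png")
    glazed = embed("glazed.png")

    # Cosine similarity near 1.0 means this encoder barely notices the cloak.
    sim = torch.nn.functional.cosine_similarity(original, glazed).item()
    print("cosine similarity:", sim)

If the similarity stays very close to 1.0 for the encoder a given model actually uses, it's not surprising that the output comes out mostly clean.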
> It seems some models ignore it and produce mostly clean images on the output (looking like the last image), while others just interpret it as a texture
This sounds like you’re talking about img2img generation based on a glazed image instead of training, which isn’t the intended purpose.
No, I'm not talking about img2img. There are people training LoRAs on these. There have been multiple experiments, and so far I've seen no evidence of it working as intended.
Here's an example I found: https://www.reddit.com/r/aiwars/comments/1h1x4e2/a_very_deta...
There you can see an example of a training run picking up the Glaze artifacts and just using them as a funky texture. That's not really what Glaze is intended to do. Glaze is supposed to interfere with training, not be interpreted as "this artist draws skin with a weird pattern on it".
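For reference, the experiments people run are basically standard LoRA fine-tuning pointed at a folder of glazed images. Here's a bare-bones sketch of that shape using diffusers + peft; it's an approximation of the usual training scripts, not the exact setup from the linked thread, and the model id, filenames, prompt, and hyperparameters are all placeholders:

    # Minimal sketch: fine-tune a LoRA on a few (glazed) images and see what it learns.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionPipeline, DDPMScheduler
    from peft import LoraConfig
    from torchvision import transforms

    model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"  # placeholder
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)
    noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

    # Freeze the base model, then inject trainable LoRA weights into the UNet attention layers.
    pipe.unet.requires_grad_(False)
    pipe.vae.requires_grad_(False)
    pipe.text_encoder.requires_grad_(False)
    pipe.unet.add_adapter(LoraConfig(r=8, lora_alpha=8,
                                     target_modules=["to_q", "to_k", "to_v", "to_out.0"]))
    params = [p for p in pipe.unet.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(params, lr=1e-4)

    preprocess = transforms.Compose([
        transforms.Resize(512),
        transforms.CenterCrop(512),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])

    # Caption and glazed training images (placeholders).
    prompt = "an illustration by artistname"
    image_paths = ["glazed_01.png", "glazed_02.png"]

    text_inputs = pipe.tokenizer(prompt, padding="max_length",
                                 max_length=pipe.tokenizer.model_max_length,
                                 truncation=True, return_tensors="pt")
    with torch.no_grad():
        text_embeddings = pipe.text_encoder(text_inputs.input_ids)[0]

    for step in range(100):
        pixels = preprocess(Image.open(image_paths[step % len(image_paths)]).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            latents = pipe.vae.encode(pixels).latent_dist.sample() * pipe.vae.config.scaling_factor

        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

        # Standard denoising objective: predict the noise that was added.
        noise_pred = pipe.unet(noisy_latents, timesteps, encoder_hidden_states=text_embeddings).sample
        loss = torch.nn.functional.mse_loss(noise_pred, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

If Glaze worked as advertised, a LoRA trained this way would drift toward a different style; what the linked thread shows instead is the adapter faithfully reproducing the cloaking artifacts as if they were part of the artist's style.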