hatthew 7 days ago

Let's say you have a photo of a starry night sky and a photo of a slightly brighter sky with no visible stars. If you do "fully accurate on average" dithering, the two dithered outputs would be identical. But in that context, the difference between "sky with dots" and "sky without dots" matters more than the difference between "dark sky" and "very slightly less dark sky", so I would say a dithering algorithm that discards the very slight error in shade in favor of better accuracy in texture is objectively better.
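
To make that concrete, here's a rough sketch (plain NumPy, toy 64×64 images with brightness values I picked arbitrarily, and plain Floyd–Steinberg standing in for an "accurate on average" algorithm): a dark sky with a few faint stars and a star-free sky with the same mean brightness dither to outputs with essentially the same density of white dots, and the dots don't reliably land on the star pixels.

```python
import numpy as np

def floyd_steinberg(img):
    """1-bit Floyd-Steinberg error diffusion of a float image in [0, 1]."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Push the quantization error onto unvisited neighbors.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

rng = np.random.default_rng(0)

# "Starry" sky: dark background with 20 faint stars.
starry = np.full((64, 64), 10 / 255)
stars = rng.choice(starry.size, size=20, replace=False)
starry.flat[stars] = 40 / 255

# Uniform sky with the same mean brightness, no stars.
uniform = np.full((64, 64), starry.mean())

d_starry = floyd_steinberg(starry)
d_uniform = floyd_steinberg(uniform)

# Average tone is preserved for both images...
print("starry  mean in/out:", round(float(starry.mean()), 4), round(float(d_starry.mean()), 4))
print("uniform mean in/out:", round(float(uniform.mean()), 4), round(float(d_uniform.mean()), 4))

# ...but few of the white dots actually land on the star pixels.
print("white dots on stars:", int(d_starry.flat[stars].sum()), "of", len(stars))
```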

On that Wikipedia page, compare Floyd–Steinberg vs Gradient-based. In my opinion, gradient-based better preserves detail in high-contrast areas (e.g. the eyelid), whereas FS better preserves detail in low-contrast areas (e.g. the jawline between the neck and the cheek).

crazygringo 7 days ago

You're talking about artistic tradeoffs. That's fine.

I'm asking: how do you measure this quantitatively in the first place, so you can even define the tradeoffs?

You say that, in your opinion, different algorithms preserve detail better in different areas. My question is: how do we define that numerically, so it's not a matter of opinion? If it depends on contrast levels, you can then test with images at different contrast levels.

It doesn't seem unreasonable that we should be able to define metrics for these things.
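
For instance, one possible starting point (just a sketch, not an established standard; the Gaussian blur is a crude stand-in for how the eye averages fine dot patterns at viewing distance, and the sigma and the local-standard-deviation term are arbitrary choices I'm making for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_error(original, dithered, sigma=1.5):
    """Blur both images, then MSE: how well is average shade reproduced?"""
    a = gaussian_filter(original.astype(np.float64), sigma)
    b = gaussian_filter(dithered.astype(np.float64), sigma)
    return float(np.mean((a - b) ** 2))

def texture_error(original, dithered, sigma=1.5):
    """Compare local standard deviation maps: how well is local
    contrast/detail reproduced, independently of average shade?"""
    def local_std(img):
        img = img.astype(np.float64)
        mean = gaussian_filter(img, sigma)
        sq_mean = gaussian_filter(img ** 2, sigma)
        return np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    return float(np.mean(np.abs(local_std(original) - local_std(dithered))))
```

Run those two numbers over a test set of images at different contrast levels and you could say, quantitatively, how much shade accuracy each algorithm gives up for texture accuracy, rather than eyeballing the eyelid and the jawline.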