pitched 4 days ago

To consistently generate the same image, we’d all have to agree on a standard model, which I can’t see happening any time soon. They feel more like fonts than code libraries.

K0balt 3 days ago

I mean, yeah, but here we’re talking about a knowledge-based compression standard, so I would assume that a specific model would be chosen.

The interesting thing here is that the model wouldn’t have to be the one that produces the end result, just -an- end result deterministically produced from the specified seed.

That end result could then act as the input to the user’s custom model, which would add the user-specific adjustments; presumably the input image would be a strong enough influence to keep the end product equivalent in meaning, if not in style.
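
As a rough sketch of those two stages, assuming the Hugging Face diffusers library, with placeholder model names, seed, and prompt (none of this is a proposed standard):

    # Stage 1: the agreed-upon reference model regenerates a base image
    # deterministically from the transmitted (prompt, seed) pair.
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    prompt = "a lighthouse at dusk, oil painting"   # placeholder prompt

    ref = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5").to(device)
    gen = torch.Generator(device).manual_seed(42)   # the transmitted seed
    base = ref(prompt, generator=gen, num_inference_steps=30).images[0]

    # Stage 2: the receiver's personal model restyles the base image via
    # img2img; a low strength keeps the content and changes only the style.
    user = StableDiffusionImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1").to(device)
    styled = user(prompt=prompt, image=base, strength=0.4).images[0]

Bit-exact reproduction in stage 1 would also require pinning the library version, scheduler, and weights, plus deterministic GPU kernels; floating-point nondeterminism across hardware is the hard part of "deterministically produced."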

Effectively, this could act as lossless compression for data that a model can reproduce exactly from a given prompt and seed, and as lossy compression for everything else.
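
A toy container format for that might store the tiny generation recipe plus an optional residual: keep the residual and it’s lossless, drop it and it’s lossy. (The field names and the regenerate callback below are made up for illustration; regenerate would be stage 1 from the sketch above.)

    import json, zlib
    import numpy as np

    def compress(original, prompt, seed, regenerate):
        # The "recipe" is tiny: model id, prompt, and seed.
        header = json.dumps({"model": "reference-v1",
                             "prompt": prompt, "seed": seed}).encode()
        approx = regenerate(prompt, seed)             # decoder-side image
        residual = original.astype(np.int16) - approx.astype(np.int16)
        packed = zlib.compress(residual.tobytes())    # small if approx is close
        return len(header).to_bytes(4, "big") + header + packed

    def decompress(blob, regenerate, shape):
        n = int.from_bytes(blob[:4], "big")
        meta = json.loads(blob[4:4 + n])
        approx = regenerate(meta["prompt"], meta["seed"]).astype(np.int16)
        residual = np.frombuffer(zlib.decompress(blob[4 + n:]),
                                 dtype=np.int16).reshape(shape)
        return (approx + residual).astype(np.uint8)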

It’s a pretty weird idea, but it might make sense if thermodynamic computing or similar tech fulfills its potential to run huge models cheaply and quickly, using several orders of magnitude less power (and physical space) than is currently required.

But that will require NAND-scale, room-temperature thermodynamic wells or die-scale micro-cryogenic coolers. Both are a bit of a stretch, but they’re engineering problems rather than out of bounds of known physics.

The real question is whether thermodynamic wells can scale, and especially whether we can get them working at room temperature.