JoshTriplett 3 days ago

It's also possible to enforce the use of conditional writes: https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-s3...

My biggest wishlist item for S3 is the ability to enforce that an object's name matches the hash of its contents. (With a modern hash considered secure, not MD5 or SHA-1, though it isn't supported for those either.) That would make it much easier to build content-addressable storage.

josnyder 2 days ago

While it can't be done server-side, this can be done straightforwardly in a signer service, and the signer doesn't need to interact with the payloads being uploaded. In other words, a tiny signer can act as a control plane for massive quantities of uploaded data.

The client sends the request headers (including the x-amz-content-sha256 header) to the signer, and the signer responds with a valid S3 PUT request (minus body). The client takes the signer's response, appends its chosen request payload, and uploads it to S3. With such a system, you can implement a signer in a lambda function, and the lambda function enforces the content-addressed invariant.
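A minimal sketch of such a signer, assuming botocore and credentials that allow s3:PutObject on the bucket; the bucket and region names and the function name are placeholders of mine:

    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest
    from botocore.session import Session

    BUCKET = "my-cas-bucket"   # hypothetical bucket
    REGION = "us-east-1"

    def sign_put(claimed_sha256_hex: str) -> dict:
        # Content addressing: the object key is the claimed SHA-256. Because
        # SigV4 covers x-amz-content-sha256, S3 should reject any body whose
        # hash differs from the value signed here.
        url = f"https://{BUCKET}.s3.{REGION}.amazonaws.com/{claimed_sha256_hex}"
        req = AWSRequest(method="PUT", url=url,
                         headers={"x-amz-content-sha256": claimed_sha256_hex})
        SigV4Auth(Session().get_credentials(), "s3", REGION).add_auth(req)
        return {"method": "PUT", "url": url, "headers": dict(req.headers.items())}

The client then issues the PUT itself, sending exactly these headers plus its chosen payload.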

Unfortunately it doesn't work natively with multipart: while SigV4+S3 enables you to enforce the SHA256 of each individual part, you can't enforce the SHA256 of the entire object. If you really want, you can invent your own tree hashing format atop SHA256, and enforce content-addressability on that.

I have a blog post [1] that goes into more depth on signers in general.

[1] https://josnyder.com/blog/2024/patterns_in_s3_data_access.ht...

JoshTriplett 2 days ago

That's incredibly interesting, thank you! That's a really creative approach, and it looks like it might work for me.

UltraSane 3 days ago

S3 has supported SHA-256 as a checksum algo since 2022. You can calculate the hash locally and then specify that hash in the PutObject call. S3 will calculate the hash and compare it with the hash in the PutObject call and reject the Put if they differ. The hash and algo are then stored in the object's metadata. You simply also use the SHA-256 hash as the key for the object.
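A minimal sketch of that flow with boto3, using the hex digest as the object key and the base64 digest for server-side verification (bucket name and function name are placeholders):

    import base64
    import hashlib
    import boto3

    s3 = boto3.client("s3")

    def put_content_addressed(bucket: str, data: bytes) -> str:
        digest = hashlib.sha256(data).digest()
        key = digest.hex()                      # object name == content hash
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=data,
            ChecksumSHA256=base64.b64encode(digest).decode(),  # S3 verifies this
        )
        return key

Note this only verifies the hash the caller supplies; it doesn't stop a caller from choosing an unrelated key, which is the gap the signer approach above is meant to close.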

https://aws.amazon.com/blogs/aws/new-additional-checksum-alg...

thayne 3 days ago

Unfortunately, for a multi-part upload it isn't a hash of the total object; it is a hash of the hashes of each part, which is a lot less useful, especially if you don't know how the file was partitioned during upload.

And even if it were for the whole file, it isn't used for the ETag, so it can't be used for conditional PUTs.

I had a use case where this looked really promising, then I ran into the multipart upload limitations, and ended up using my own custom metadata for the sha256sum.
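A hedged sketch of that custom-metadata approach; the metadata field name and function names are my own choice, and for large files you would hash while streaming rather than holding the bytes in memory:

    import hashlib
    import boto3

    s3 = boto3.client("s3")

    def put_with_sha256_metadata(bucket: str, key: str, data: bytes) -> None:
        # Store the full-object digest as user metadata (x-amz-meta-sha256),
        # since the multipart checksum/ETag isn't a hash of the whole object.
        s3.put_object(Bucket=bucket, Key=key, Body=data,
                      Metadata={"sha256": hashlib.sha256(data).hexdigest()})

    def matches_sha256_metadata(bucket: str, key: str, data: bytes) -> bool:
        head = s3.head_object(Bucket=bucket, Key=key)
        return head["Metadata"].get("sha256") == hashlib.sha256(data).hexdigest()

S3 doesn't check user metadata against the body, so this records the hash but doesn't enforce it.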

infogulch 2 days ago

If parts are aligned on a 1024-byte boundary and you know each part's start offset, it should be possible to use the internals of a BLAKE3 tree to get the final hash of all the parts together even as they're uploaded separately. https://github.com/C2SP/C2SP/blob/main/BLAKE3.md#13-tree-has...

Edit: This is actually already implemented in the Bao project, which exploits the structure of the BLAKE3 Merkle tree to offer cool features like streaming verification and verifying slices of a file, as I described above: https://github.com/oconnor663/bao#verifying-slices

UltraSane 2 days ago

That is very neat! I love clever uses of data structures like this.

vdm 2 days ago

Ways to control etag/Additional Checksums without configuring clients:

CopyObject writes a single part object and can read from a multipart object, as long as the parts total less than the 5 gibibyte limit for a single part.

For future writes, the s3:ObjectCreated:CompleteMultipartUpload event can trigger a CopyObject, or else a re-chunking into policy-sized parts. Boto's copy() with multipart_chunksize configured is the most convenient implementation; other SDKs lack an equivalent.
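A sketch of that boto3 variant, with placeholder bucket/key names; note an in-place copy that falls below the multipart threshold may also need a metadata or storage-class change to be accepted:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Re-copy the object with a fixed part size so its ETag/checksum layout
    # follows a known policy (here: 64 MiB parts).
    policy = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                            multipart_chunksize=64 * 1024 * 1024)
    s3.copy(
        CopySource={"Bucket": "my-bucket", "Key": "big/object.bin"},
        Bucket="my-bucket",
        Key="big/object.bin",
        Config=policy,
    )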

For past writes, existing multipart objects can be selected from inventory filtering ETag column length greater than 32 characters. Dividing object size by part size might hint if part size is policy.

vdm 2 days ago

> Dividing object size by part size

Correction: and also part quantity (parsed from etag) for comparison

vdm 2 days ago

Don't the SDKs take care of computing the multi-part checksum during upload?

> To create a trailing checksum when using an AWS SDK, populate the ChecksumAlgorithm parameter with your preferred algorithm. The SDK uses that algorithm to calculate the checksum for your object (or object parts) and automatically appends it to the end of your upload request. This behavior saves you time because Amazon S3 performs both the verification and upload of your data in a single pass.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/checki...
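For example, with boto3 (bucket and key are placeholders) the SDK computes the digest itself:

    import boto3

    s3 = boto3.client("s3")

    # Pass only the algorithm name; the SDK calculates the SHA-256 and S3
    # stores the resulting checksum with the object.
    with open("report.csv", "rb") as body:
        resp = s3.put_object(Bucket="my-bucket", Key="report.csv",
                             Body=body, ChecksumAlgorithm="SHA256")
    print(resp["ChecksumSHA256"])   # base64-encoded digest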

tedk-42 2 days ago

It does, and has a good default. An issue I've come across, though: if you have the file locally and want to check it against the ETag value, you have to compute the multipart ETag locally first and then compare it to the S3-stored object's.
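A sketch of that local computation, assuming you know the part size the uploader used (8 MiB is boto3's default) and the object wasn't uploaded with SSE-C/SSE-KMS:

    import hashlib

    def multipart_etag(path: str, part_size: int = 8 * 1024 * 1024) -> str:
        # Multipart ETag = MD5 of the concatenated per-part MD5 digests,
        # plus "-<part count>"; single-part objects use the plain MD5.
        md5s = []
        with open(path, "rb") as f:
            while chunk := f.read(part_size):
                md5s.append(hashlib.md5(chunk).digest())
        if len(md5s) == 1:
            return md5s[0].hex()
        return hashlib.md5(b"".join(md5s)).hexdigest() + f"-{len(md5s)}"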

vdm 2 days ago

https://github.com/peak/s3hash

It would be nice if this got updated for Additional Checksums.

texthompson 3 days ago

That's interesting. Would you want it to be something like a bucket setting, along the lines of "any time an object is uploaded, don't let the write complete unless S3 verifies, with a pre-defined hash function (like SHA-256), that the object's name matches the object's contents"?

BikiniPrince 3 days ago

You can already put with a sha256 hash. If it fails it just returns an error.

jiggawatts 3 days ago

That will probably never happen because of the fundamental nature of blob storage.

Individual objects are split into multiple blocks, each of which can be stored independently on different underlying servers. Each server can see its own block, but not any other block.

Calculating a hash like SHA256 would require a sequential scan through all blocks. This could be done with a minimum of network traffic if instead of streaming the bytes to a central server to hash, the hash state is forwarded from block server to block server in sequence. Still though, it would be a very slow serial operation that could be fairly chatty too if there are many tiny blocks.

What could work would be to use a Merkle tree hash construction where some of the subdivision boundaries match the block sizes.

texthompson 3 days ago

Why would you PUT an object, then download it again to a central server in the first place? If a service is accepting an upload of the bytes, it is already doing a pass over all the bytes anyway. It doesn't seem like a ton of overhead to calculate SHA-256 in 4096-byte chunks as the upload progresses. I suspect that sort of calculation would happen anyway.

willglynn 3 days ago

You're right, and in fact S3 does this with the `ETag:` header… in the simple case.

S3 also supports more complicated cases where the entire object may not be visible to any single component while it is being written, and in those cases, `ETag:` works differently.

> * Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.

> * Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.

> * If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption. If an object is larger than 16 MB, the AWS Management Console will upload or copy that object as a Multipart Upload, and therefore the ETag will not be an MD5 digest.

https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.h...

danielheath 3 days ago

S3 supports multipart uploads which don’t necessarily send all the parts to the same server.

texthompson 3 days ago

Why does it matter where the bytes are stored at rest? Isn't everything you need for SHA-256 just the results of the SHA-256 algorithm on every 4096-byte block? I think you could just calculate that as the data is streamed in.

jiggawatts 3 days ago

The data is not necessarily "streamed" in! That's a significant design feature to allow parallel uploads of a single object using many parts ("blocks"). See: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMu...

Dylan16807 2 days ago

> Isn't everything you need for SHA-256 just the results of the SHA-256 algorithm on every 4096-byte block?

No, you need the hash of the previous block before you can start processing the next block.
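A quick illustration of why, using hashlib: feeding the parts through one hash in order reproduces the whole-object digest, but independently computed part hashes can't be combined afterwards.

    import hashlib

    a, b = b"part-1", b"part-2"

    # Streaming in order works because the hash state chains forward...
    h = hashlib.sha256()
    h.update(a)
    h.update(b)
    assert h.hexdigest() == hashlib.sha256(a + b).hexdigest()

    # ...but a hash of the part hashes is a different value entirely.
    combined = hashlib.sha256(
        hashlib.sha256(a).digest() + hashlib.sha256(b).digest()
    ).hexdigest()
    assert combined != hashlib.sha256(a + b).hexdigest()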

losteric 3 days ago

Why does the architecture of the blob storage matter? The hash can be calculated as the data streams in on the first write, before it gets dispersed into multiple physically stored blocks.

willglynn 3 days ago

It is common to use multipart uploads for large objects, since this both increases throughput and decreases latency. Individual part uploads can happen in parallel and complete in any sequence. There's no architectural requirement that an entire object pass through a single system on either S3's side or on the client's side.

Salgat 3 days ago

Isn't that the point of the metadata? Calculate the hash ahead of time and store it in the metadata as part of the atomic commit for the blob (at least for S3).

cmeacham98 3 days ago

Is there any reason you can't enforce that restriction on your side? Or are you saying you want S3 to automatically set the name for you based on the hash?

JoshTriplett 3 days ago

> Is there any reason you can't enforce that restriction on your side?

I'd like to set IAM permissions for a role, so that the role can add objects to the content-addressable store, but only if their name matches the hash of their content.

> Or are you saying you want S3 to automatically set the name for you based on the hash?

I'm happy to name the files myself, if I can get S3 to enforce that. But sure, if it were easier, I'd be thrilled to have S3 name the files by hash, and/or support retrieving files by hash.

mdavidn 3 days ago

I think you can presign PutObject calls that validate a particular SHA-256 checksum. An API endpoint, e.g. in a Lambda, can effectively enforce this rule. It unfortunately won’t work on multipart uploads except on individual parts.

UltraSane 3 days ago

The hash of multipart uploads is simply the hash of all the part hashes. I've been able to replicate it.
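A sketch of that composite value, assuming the upload used ChecksumAlgorithm=SHA256 and that you know the exact part boundaries (the function name is mine):

    import base64
    import hashlib

    def composite_sha256(parts: list[bytes]) -> str:
        # S3's multipart "checksum of checksums": SHA-256 over the concatenated
        # raw per-part digests, base64-encoded, with "-<part count>" appended.
        part_digests = [hashlib.sha256(p).digest() for p in parts]
        top = hashlib.sha256(b"".join(part_digests)).digest()
        return base64.b64encode(top).decode() + f"-{len(part_digests)}"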

thayne 3 days ago

But in order to do that you need to already know the contents of the file.

I suppose you could have some API to request a signed url for a certain hash, but that starts getting complicated, especially if you need support for multi-part uploads, which you probably do.

JoshTriplett 2 days ago

Unfortunately, last I checked, the list of headers you're allowed to enforce for pre-signing does not include the hash.

anotheraccount9 3 days ago

Could you use a metadata field on the object to save the hash in, and run a compare against it?