Unfortunately, for a multipart upload it isn't a hash of the whole object; it's a hash of the hashes of each part, which is a lot less useful, especially if you don't know how the file was partitioned during upload.
And even if it were a hash of the whole file, it isn't used as the ETag, so it can't be used for conditional PUTs.
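For reference, the composite Additional Checksum S3 stores for a multipart upload is the checksum of the concatenated per-part checksums, so reproducing it locally requires knowing the original part size. A rough sketch for the SHA-256 case (the 8 MiB part size is an assumption; I believe the value is reported base64-encoded with a "-<part count>" suffix):

    import base64
    import hashlib

    def composite_sha256(path, part_size=8 * 1024 * 1024):
        # Hash each part, then hash the concatenated raw digests
        # ("checksum of checksums") and append the part count.
        part_digests = []
        with open(path, "rb") as f:
            while True:
                part = f.read(part_size)
                if not part:
                    break
                part_digests.append(hashlib.sha256(part).digest())
        combined = hashlib.sha256(b"".join(part_digests)).digest()
        return f"{base64.b64encode(combined).decode()}-{len(part_digests)}"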
I had a use case where this looked really promising, but then I ran into the multipart upload limitations and ended up using my own custom metadata for the sha256sum.
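For what it's worth, the custom-metadata route is just a user-defined key set at upload time and read back from a HeadObject. A minimal boto3 sketch, assuming my made-up metadata key name sha256sum and a file small enough for a single PUT:

    import hashlib
    import boto3

    s3 = boto3.client("s3")

    def put_with_sha256(bucket, key, path):
        # Hash the local file, then store the digest as user-defined metadata
        # (it comes back as the x-amz-meta-sha256sum header).
        with open(path, "rb") as f:
            data = f.read()
        digest = hashlib.sha256(data).hexdigest()
        s3.put_object(Bucket=bucket, Key=key, Body=data,
                      Metadata={"sha256sum": digest})
        return digest

    def stored_sha256(bucket, key):
        return s3.head_object(Bucket=bucket, Key=key)["Metadata"]["sha256sum"]

For files big enough to need multipart, the managed upload_file() accepts the same metadata via ExtraArgs={"Metadata": {...}}.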
If parts are aligned on a 1024-byte boundary and you know each part's start offset, it should be possible to use the internals of a BLAKE3 tree to get the final hash of all the parts together even as they're uploaded separately. https://github.com/C2SP/C2SP/blob/main/BLAKE3.md#13-tree-has...
Edit: This is actually already implemented in the Bao project, which exploits the structure of the BLAKE3 Merkle tree to offer cool features like streaming verification and verifying slices of a file, as I described above: https://github.com/oconnor663/bao#verifying-slices
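Merging the per-part hashes needs the chaining-value internals that Bao builds on, but the alignment precondition mentioned above is plain arithmetic; a trivial sketch of that check, assuming you know every part's length:

    CHUNK = 1024  # BLAKE3 chunk size in bytes

    def parts_cover_whole_chunks(part_sizes):
        # Every part except the last must be a multiple of 1024 bytes so that
        # each part maps onto complete leaves of the BLAKE3 tree.
        return all(size % CHUNK == 0 for size in part_sizes[:-1])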
Ways to control the ETag/Additional Checksums without configuring clients:
CopyObject writes a single-part object and can read from a multipart object, as long as the parts total less than the 5 GiB limit for a single part.
For future writes, the s3:ObjectCreated:CompleteMultipartUpload event can trigger a CopyObject, or else a defragmenting copy into policy-sized parts. Boto3's copy() with multipart_chunksize configured is the most convenient implementation (see the sketch below); other SDKs lack an equivalent.
For past writes, existing multipart objects can be selected from S3 Inventory by filtering for ETag values longer than 32 characters. Dividing object size by part size might hint whether the part size matches policy.
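A minimal sketch of the Boto copy() approach (bucket/key and the 8 MiB policy part size are placeholders): the managed copy rewrites the object with the configured chunk size, which makes the resulting ETag predictable.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Example policy part size: 8 MiB. Objects at or above the threshold are
    # rewritten as multipart with this chunk size; smaller ones as one part.
    POLICY_PART_SIZE = 8 * 1024 * 1024
    config = TransferConfig(multipart_threshold=POLICY_PART_SIZE,
                            multipart_chunksize=POLICY_PART_SIZE)

    def rewrite_with_policy_parts(bucket, key):
        # Carrying the existing metadata forward with REPLACE is what makes
        # a copy of an object onto itself legal.
        metadata = s3.head_object(Bucket=bucket, Key=key).get("Metadata", {})
        s3.copy(CopySource={"Bucket": bucket, "Key": key},
                Bucket=bucket, Key=key, Config=config,
                ExtraArgs={"Metadata": metadata,
                           "MetadataDirective": "REPLACE"})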
> Dividing object size by part size
Correction: and also the part quantity (parsed from the ETag) for comparison.
Don't the SDKs take care of computing the multi-part checksum during upload?
> To create a trailing checksum when using an AWS SDK, populate the ChecksumAlgorithm parameter with your preferred algorithm. The SDK uses that algorithm to calculate the checksum for your object (or object parts) and automatically appends it to the end of your upload request. This behavior saves you time because Amazon S3 performs both the verification and upload of your data in a single pass. https://docs.aws.amazon.com/AmazonS3/latest/userguide/checki...
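With boto3 that's a single parameter on the request, and the stored checksum can be read back later without downloading the object (bucket/key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # The SDK computes the SHA-256 while streaming the body and sends it as a
    # trailing checksum; S3 verifies it on receipt and stores it.
    with open("backup.tar", "rb") as body:
        s3.put_object(Bucket="my-bucket", Key="backup.tar", Body=body,
                      ChecksumAlgorithm="SHA256")

    # Retrieve the stored value later (base64 of the SHA-256 digest).
    attrs = s3.get_object_attributes(Bucket="my-bucket", Key="backup.tar",
                                     ObjectAttributes=["Checksum"])
    print(attrs["Checksum"]["ChecksumSHA256"])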
It does, and it has a good default. An issue I've come across, though: when you have the file locally and want to check it against the ETag, you have to compute the ETag locally first and then compare it to the value on the S3-stored object.
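If it helps, the multipart ETag is the same checksum-of-checksums construction, just with MD5 and hex encoding, so it's reproducible locally as long as you know the uploader's part size (8 MiB is the boto3 default) and the object isn't SSE-KMS/SSE-C encrypted (those ETags aren't MD5-based). A rough sketch:

    import hashlib

    def local_etag(path, part_size=8 * 1024 * 1024):
        # MD5 each part, then MD5 the concatenated raw digests and append
        # "-<part count>". A plain single PUT just gets the file's MD5.
        part_md5s = []
        with open(path, "rb") as f:
            while True:
                part = f.read(part_size)
                if not part:
                    break
                part_md5s.append(hashlib.md5(part).digest())
        if len(part_md5s) == 1:
            return part_md5s[0].hex()
        return hashlib.md5(b"".join(part_md5s)).hexdigest() + f"-{len(part_md5s)}"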