PHP Telegram bot: extract URL in UTF-16 code units

I have a problem with the Telegram Bot API. I'm trying to extract a URL from a message. The MessageEntity documentation says that offset and length are specified in UTF-16 code units. I've tried many ways to get the substring from the text (mb_convert_encoding, iconv, json_encode, etc.), but I did not get the correct link. It works for plain text without emoji, but not with them.

$output = json_decode(file_get_contents('php://input'), TRUE);
$message = $output['message']['text'];
$entities = $output['message']['entities'];

function getURLs($message, $entities) {
    $URLs = [];
    // Convert the message to UTF-16 so that offset/length (UTF-16 code units)
    // can be applied as byte offsets (1 code unit = 2 bytes).
    //$message_encode = iconv('utf-8', 'utf-16le', $message); //or utf-16
    $message_encode = mb_convert_encoding($message, "UTF-16", "UTF-8"); //or utf-16le
    foreach ($entities as $entity) {
        if (!empty($entity['url'])) {
            // text_link entities carry the URL directly
            $URLs[] = $entity['url'];
        }
        if ($entity['type'] == 'url') {
            // Slice the UTF-16 string by bytes, then convert back to UTF-8.
            $URL16 = substr($message_encode, $entity['offset'] * 2, $entity['length'] * 2);
            //$URLs[] = iconv('utf-16le', 'utf-8', $URL16);
            $URLs[] = mb_convert_encoding($URL16, "UTF-8", "UTF-16");
        }
    }
    return $URLs;
}

$URLs = getURLs($message, $entities);
Either iconv or mb_convert_encoding works, with either UTF-16LE or UTF-16: convert the message to UTF-16, take the substring using byte offsets (offset*2 and length*2), then convert back to UTF-8.
See also PHP - length of string containing emojis/special chars
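To see why the byte arithmetic works, here is a small illustration of the same idea in Python (the message and entity values are made up, not real API output): an emoji occupies two UTF-16 code units, so slicing the UTF-16LE bytes at offset*2 and length*2 lands on the right substring.

import json

# Hypothetical message and entity, as Telegram would report them:
# the emoji counts as 2 UTF-16 code units, so the URL starts at offset 3.
message = "\U0001F600 https://example.com"
entity = {"type": "url", "offset": 3, "length": 19}

utf16 = message.encode("utf-16-le")
url_bytes = utf16[entity["offset"] * 2 : (entity["offset"] + entity["length"]) * 2]
print(url_bytes.decode("utf-16-le"))  # https://example.com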


Printing to pdf from Google Apps Script HtmlOutput

For years, I have been using Google Cloud Print to print labels in our laboratories on campus (to standardize) using a Google Apps Script custom HtmlService form.
Now that GCP is becoming deprecated, I am searching for a solution. I have found a few options but am struggling to get the file to convert to a pdf as would be needed with these other vendors.
Currently, when you submit a text/html blob to the GCP servers in GAS, the backend converts the blob to application/pdf (as evidenced by looking at the job details in the GCP panel on Chrome under 'content type').
That said, because these other cloud print services require pdf printing, I have tried for some time to have GAS change the file to pdf format before sending to GCP, and I always get a strange result. Below, I'll show some of the strategies that I have used and include pictures of one of our simple labels generated with the different functions.
The following is the base code for the ticket and payload that has worked for years with GCP
//BUILD PRINT JOB FOR NARROW TAPES
var ticket = {
  version: "1.0",
  print: {
    color: {
      type: "STANDARD_COLOR",
      vendor_id: "Color"
    },
    duplex: {
      type: "NO_DUPLEX"
    },
    copies: {copies: parseFloat(quantity)},
    media_size: {
      width_microns: 27940,
      height_microns: 40960
    },
    page_orientation: {
      type: "LANDSCAPE"
    },
    margins: {
      top_microns: 0,
      bottom_microns: 0,
      left_microns: 0,
      right_microns: 0
    },
    page_range: {
      interval: [{start: 1, end: 1}]
    },
  }
};
var payload = {
  "printerid"  : QL710,
  "title"      : "Blank Template Label",
  "content"    : HtmlService.createHtmlOutput(html).getBlob(),
  "contentType": 'text/html',
  "ticket"     : JSON.stringify(ticket)
};
This generates the expected printout.
When trying to convert to pdf using the following code:
var blob = HtmlService.createTemplate(html).evaluate().getContent();
var newBlob = Utilities.newBlob(html, "text/html", "text.html");
var pdf = newBlob.getAs("application/pdf").setName('tempfile');
var file = DriveApp.getFolderById("FOLDER ID").createFile(pdf);
var payload = {
  "printerid"  : QL710,
  "title"      : "Blank Template Label",
  "content"    : pdf, //HtmlService.createHtmlOutput(html).getBlob(),
  "contentType": 'text/html',
  "ticket"     : JSON.stringify(ticket)
};
an unexpected result occurs:
This comes out the same way for direct coding in the 'content' field with and without .getBlob():
"content" : HtmlService.createHtmlOutput(html).getAs('application/pdf'),
Note the createFile line in the code above, used to test the pdf. That file is created as expected, though of course with the wrong dimensions for label printing (I'm not sure how to convert to pdf with the appropriate margins and page size); see below.
I have now tried to adopt Yuri's ideas; however, the conversion from html to document loses formatting.
var blob = HtmlService.createHtmlOutput(html).getBlob();
var docID = Drive.Files.insert({title: 'temp-label'}, blob, {convert: true}).id
var file = DocumentApp.openById(docID);
file.getBody().setMarginBottom(0).setMarginLeft(0).setMarginRight(0).setMarginTop(0).setPageHeight(79.2).setPageWidth(172.8);
This produces a document that looks like this (picture also showing the expected output in my hand).
Does anyone have insights into:
How to format the converted pdf to contain appropriate height, width
and margins.
How to convert to pdf in a way that would print correctly.
Here is minimal code to give a better sense of the context: https://script.google.com/d/1yP3Jyr_r_FIlt6_aGj_zIf7HnVGEOPBKI0MpjEGHRFAWztGzcWKCJrD0/edit?usp=sharing
I've made the template (80 x 40 mm -- sorry, I don't know your size):
https://docs.google.com/document/d/1vA93FxGXcWLIEZBuQwec0n23cWGddyLoey-h0WR9weY/edit?usp=sharing
And there is the script:
function myFunction() {
  // input data
  var matName = '<b>testing this to <u>see</u></b> if it <i>actually</i> works <i>e.coli</i>'
  var disposeWeek = 'end of semester'
  var prepper = 'John Ruppert';
  var className = 'Cell and <b>Molecular</b> Biology <u>Fall 2020</u> a few exercises a few exercises a few exercises a few exercises';
  var hazards = 'Lots of hazards';

  // make a temporary Doc from the template
  var copyFile = DriveApp.getFileById('1vA93FxGXcWLIEZBuQwec0n23cWGddyLoey-h0WR9weY').makeCopy();
  var doc = DocumentApp.openById(copyFile.getId());
  var body = doc.getBody();

  // replace placeholders with data
  body.replaceText('{matName}', matName);
  body.replaceText('{disposeWeek}', disposeWeek);
  body.replaceText('{prepper}', prepper);
  body.replaceText('{className}', className);
  body.replaceText('{hazards}', hazards);

  // make Italics, Bold and Underline
  handle_tags(['<i>', '</i>'], body);
  handle_tags(['<b>', '</b>'], body);
  handle_tags(['<u>', '</u>'], body);

  // save the temporary Doc
  doc.saveAndClose();

  // make a PDF
  var docblob = doc.getBlob().setName('Label.pdf');
  DriveApp.createFile(docblob);

  // delete the temporary Doc
  copyFile.setTrashed(true);
}
// this function applies formatting to text inside the tags
function handle_tags(tags, body) {
  var start_tag = tags[0].toLowerCase();
  var end_tag = tags[1].toLowerCase();
  var found = body.findText(start_tag);
  while (found) {
    var elem = found.getElement();
    var start = found.getEndOffsetInclusive();
    var end = body.findText(end_tag, found).getStartOffset() - 1;
    switch (start_tag) {
      case '<b>': elem.setBold(start, end, true); break;
      case '<i>': elem.setItalic(start, end, true); break;
      case '<u>': elem.setUnderline(start, end, true); break;
    }
    found = body.findText(start_tag, found);
  }
  body.replaceText(start_tag, ''); // remove tags
  body.replaceText(end_tag, '');
}
The script just changes the {placeholders} with the data and saves the result as a PDF file (Label.pdf). The PDF looks like this:
One thing I'm not sure is possible: changing the size of the texts dynamically to fit them into the cells, the way it's done in your 'autosize.html'. Roughly, you can take the length of the text in a cell and, if it is longer than some threshold, make the font size a bit smaller. You could probably use the jQuery textfill function from 'autosize.html' to find an optimal size and apply it in the document.
I'm not sure I understood you correctly. Do you need to make a PDF and save it on Google Drive? You can do that in Google Docs.
For example:
Make a new document with your table and text. Something like this
Add this script to your doc:
function myFunction() {
  var copyFile = DriveApp.getFileById(ID).makeCopy();
  var newFile = DriveApp.createFile(copyFile.getAs('application/pdf'));
  newFile.setName('label');
  copyFile.setTrashed(true);
}
Every time you run this script it makes the file 'label.pdf' on your Google Drive.
The size of this pdf will be the same as the page size of your Doc. You can make any page size with the Page Sizer add-on: https://webapps.stackexchange.com/questions/129617/how-to-change-the-size-of-paper-in-google-docs-to-custom-size
If you need to change the text in your label before generating the pdf and/or change the name of the generated file, you can do that via a script as well.
Here is a variant of the script that changes the font size in one of the cells if the label doesn't fit on one page.
function main() {
  // input texts
  var text = {};
  text.matName = '<b>testing this to <u>see</u></b> if it <i>actually</i> works <i>e.coli</i>';
  text.disposeWeek = 'end of semester';
  text.prepper = 'John Ruppert';
  text.className = 'Cell and <b>Molecular</b> Biology <u>Fall 2020</u> a few exercises a few exercises a few exercises a few exercises';
  text.hazards = 'Lots of hazards';

  // initial max font size for the 'matName'
  var size = 10;
  var doc_blob = set_text(text, size);

  // if we got more than 1 page, reduce the font size and repeat
  while ((size > 4) && (getNumPages(doc_blob) > 1)) {
    size = size - 0.5;
    doc_blob = set_text(text, size);
  }

  // save pdf
  DriveApp.createFile(doc_blob);
}

// this function takes the texts and a size and puts the texts into the fields
function set_text(text, size) {
  // make a copy
  var copyFile = DriveApp.getFileById('1vA93FxGXcWLIEZBuQwec0n23cWGddyLoey-h0WR9weY').makeCopy();
  var doc = DocumentApp.openById(copyFile.getId());
  var body = doc.getBody();

  // replace placeholders with data
  body.replaceText('{matName}', text.matName);
  body.replaceText('{disposeWeek}', text.disposeWeek);
  body.replaceText('{prepper}', text.prepper);
  body.replaceText('{className}', text.className);
  body.replaceText('{hazards}', text.hazards);

  // set font size for 'matName'
  body.findText(text.matName).getElement().asText().setFontSize(size);

  // make Italics, Bold and Underline
  handle_tags(['<i>', '</i>'], body);
  handle_tags(['<b>', '</b>'], body);
  handle_tags(['<u>', '</u>'], body);

  // save the doc
  doc.saveAndClose();

  // delete the copy
  copyFile.setTrashed(true);

  // return the blob
  return doc.getBlob().setName('Label.pdf');
}

// this function formats the text between html tags
function handle_tags(tags, body) {
  var start_tag = tags[0].toLowerCase();
  var end_tag = tags[1].toLowerCase();
  var found = body.findText(start_tag);
  while (found) {
    var elem = found.getElement();
    var start = found.getEndOffsetInclusive();
    var end = body.findText(end_tag, found).getStartOffset() - 1;
    switch (start_tag) {
      case '<b>': elem.setBold(start, end, true); break;
      case '<i>': elem.setItalic(start, end, true); break;
      case '<u>': elem.setUnderline(start, end, true); break;
    }
    found = body.findText(start_tag, found);
  }
  body.replaceText(start_tag, ''); // remove tags
  body.replaceText(end_tag, '');
}

// this function takes the saved doc and returns the number of its pages
function getNumPages(doc) {
  var blob = doc.getAs('application/pdf');
  var data = blob.getDataAsString();
  var pages = parseInt(data.match(/ \/N (\d+) /)[1], 10);
  Logger.log("pages = " + pages);
  return pages;
}
It looks rather awful and hopeless. It turned out that Google Docs has no page number counter. You need to convert your document into a PDF and to count pages of the PDF file. Gross!
Next problem: even if you manage somehow to count the pages, you have no clue which of the cells overflowed. This script takes just one cell, changes its font size, counts pages, changes the font size again, etc. But it doesn't guarantee success, because there can be another cell with long text inside. You could reduce the font size of all the texts, but that doesn't look like a great idea either.

Dart splits long stdout data into two ProcessResult events

When listening to a long string output from a shell process, I receive the data in two chunks. How can I get the entire text?
Here is the code in question:
int i = 0;
Process.start('perl', ['print_text.pl']).then((Process p) {
  p.stdout.transform(UTF8.decoder).listen((data) => print("${i++} ${data}"));
  p.stdin.writeln('print');
});
The result from running this code is:
0 text.....
1 text.....
I've reported this issue as a bug here. You can run the sample app attached to the report to see the issue.
Try using UTF8.decodeStream().
import 'dart:io';
import 'dart:convert' show UTF8;
import 'dart:io';
import 'dart:convert' show UTF8;

main() {
  Process.start('ls', ['-la'])
      .then((p) => UTF8.decodeStream(p.stdout))
      .then((s) => print('Output:\n$s'));
}
I solved this problem by doing the following:
In the shell script, tag the start and the end of the shell's output with a random number.
In Dart, concatenate all the chunks together and use the tags to check if you got the complete result.
This solution does not require exiting the process to obtain the process result.
This is a sample code:
String text = '';
process.stdout.transform(UTF8.decoder).listen((String chunk) {
  text = text + chunk;
  // The script prints the same random number before and after its output.
  // When both tags are present and match, the full result has arrived.
  var startTag = new RegExp(r'^(\d+)').firstMatch(text);
  var endTag = new RegExp(r'(\d+)$').firstMatch(text);
  if (startTag != null && endTag != null && startTag.group(1) == endTag.group(1)) {
    text = text.replaceFirst(new RegExp(r'^(\d+)'), '').replaceFirst(new RegExp(r'(\d+)$'), '');
    // use the process result, then clear text for subsequent process results
    text = '';
  }
});
It's not clear what the actual question is but I assume this is what you are looking for:
import 'dart:io';
import 'dart:convert' show UTF8;
void main() {
  int i = 0;
  String text = '';
  Process.start('perl', ['analyze.pl']).then((p) {
    p.stdout.transform(UTF8.decoder).listen((data) => text += data);
    p.exitCode.then((_) => print(text));
  });
}

Get pdf-attachments from Gmail as text

I searched around the web & Stack Overflow but didn't find a solution. What I'm trying to do is the following: I get certain attachments via mail that I would like to have as (plain) text for further processing. My script looks like this:
function MyFunction() {
  var threads = GmailApp.search('label:templabel');
  var messages = GmailApp.getMessagesForThreads(threads);
  for (i = 0; i < messages.length; ++i) {
    j = messages[i].length;
    var messageBody = messages[i][0].getBody();
    var messageSubject = messages[i][0].getSubject();
    var attach = messages[i][0].getAttachments();
    var attachcontent = attach.getContentAsString();
    GmailApp.sendEmail("mail", messageSubject, "", {htmlBody: attachcontent});
  }
}
Unfortunately this doesn't work. Does anybody here have an idea how I can do this? Is it even possible?
Thank you very much in advance.
Best, Phil
Edit: Updated for DriveApp, as DocsList is deprecated.
I suggest breaking this down into two problems. The first is how to get a pdf attachment from an email, the second is how to convert that pdf to text.
As you've found out, getContentAsString() does not magically change a pdf attachment to plain text or html. We need to do something a little more complicated.
First, we'll get the attachment as a Blob, a utility class used by several Services to exchange data.
var blob = attachments[0].getAs(MimeType.PDF);
So with the second problem separated out, and maintaining the assumption that we're interested in only the first attachment of the first message of each thread labeled templabel, here is how myFunction() looks:
/**
 * Get messages labeled 'templabel', and send myself the text contents of
 * pdf attachments in new emails.
 */
function myFunction() {
  var threads = GmailApp.search('label:templabel');
  var threadsMessages = GmailApp.getMessagesForThreads(threads);
  for (var thread = 0; thread < threadsMessages.length; ++thread) {
    var message = threadsMessages[thread][0];
    var messageBody = message.getBody();
    var messageSubject = message.getSubject();
    var attachments = message.getAttachments();
    var blob = attachments[0].getAs(MimeType.PDF);
    var filetext = pdfToText(blob, {keepTextfile: false});
    GmailApp.sendEmail(Session.getActiveUser().getEmail(), messageSubject, filetext);
  }
}
We're relying on a helper function, pdfToText(), to convert our pdf blob into text, which we'll then send to ourselves as a plain text email. This helper function has a variety of options; by setting keepTextfile: false, we've elected to just have it return the text content of the PDF file to us, and leave no residual files in our Drive.
pdfToText()
This utility is available as a gist. Several examples are provided there.
A previous answer indicated that it was possible to use the Drive API's insert method to perform OCR, but it didn't provide code details. With the introduction of Advanced Google Services, the Drive API is easily accessible from Google Apps Script. You do need to switch on and enable the Drive API from the editor, under Resources > Advanced Google Services.
pdfToText() uses the Drive service to generate a Google Doc from the content of the PDF file. Unfortunately, this contains the "pictures" of each page in the document - not much we can do about that. It then uses the regular DocumentService to extract the document body as plain text.
/**
 * See gist: https://gist.github.com/mogsdad/e6795e438615d252584f
 *
 * Convert pdf file (blob) to a text file on Drive, using built-in OCR.
 * By default, the text file will be placed in the root folder, with the same
 * name as source pdf (but extension 'txt'). Options:
 *   keepPdf (boolean, default false) Keep a copy of the original PDF file.
 *   keepGdoc (boolean, default false) Keep a copy of the OCR Google Doc file.
 *   keepTextfile (boolean, default true) Keep a copy of the text file.
 *   path (string, default blank) Folder path to store file(s) in.
 *   ocrLanguage (ISO 639-1 code) Default 'en'.
 *   textResult (boolean, default false) If true and keepTextfile true, return
 *                                       string of text content. If keepTextfile
 *                                       is false, text content is returned without
 *                                       regard to this option. Otherwise, return
 *                                       id of textfile.
 *
 * @param {blob} pdfFile Blob containing pdf file
 * @param {object} options (Optional) Object specifying handling details
 *
 * @returns {string} id of text file (default) or text content
 */
function pdfToText(pdfFile, options) {
  // Ensure Advanced Drive Service is enabled
  try {
    Drive.Files.list();
  }
  catch (e) {
    throw new Error("To use pdfToText(), first enable 'Drive API' in Resources > Advanced Google Services.");
  }

  // Set default options
  options = options || {};
  options.keepTextfile = options.hasOwnProperty("keepTextfile") ? options.keepTextfile : true;

  // Prepare resource object for file creation
  var parents = [];
  if (options.path) {
    parents.push(getDriveFolderFromPath(options.path));
  }
  var pdfName = pdfFile.getName();
  var resource = {
    title: pdfName,
    mimeType: pdfFile.getContentType(),
    parents: parents
  };

  // Save PDF to Drive, if requested
  if (options.keepPdf) {
    var file = Drive.Files.insert(resource, pdfFile);
  }

  // Save PDF as GDOC
  resource.title = pdfName.replace(/pdf$/, 'gdoc');
  var insertOpts = {
    ocr: true,
    ocrLanguage: options.ocrLanguage || 'en'
  }
  var gdocFile = Drive.Files.insert(resource, pdfFile, insertOpts);

  // Get text from GDOC
  var gdocDoc = DocumentApp.openById(gdocFile.id);
  var text = gdocDoc.getBody().getText();

  // We're done using the Gdoc. Unless requested to keepGdoc, delete it.
  if (!options.keepGdoc) {
    Drive.Files.remove(gdocFile.id);
  }

  // Save text file, if requested
  if (options.keepTextfile) {
    resource.title = pdfName.replace(/pdf$/, 'txt');
    resource.mimeType = MimeType.PLAIN_TEXT;
    var textBlob = Utilities.newBlob(text, MimeType.PLAIN_TEXT, resource.title);
    var textFile = Drive.Files.insert(resource, textBlob);
  }

  // Return result of conversion
  if (!options.keepTextfile || options.textResult) {
    return text;
  }
  else {
    return textFile.id
  }
}
The conversion to DriveApp is helped with this utility from Bruce McPherson:
// From: http://ramblings.mcpher.com/Home/excelquirks/gooscript/driveapppathfolder
function getDriveFolderFromPath(path) {
  return (path || "/").split("/").reduce(function(prev, current) {
    if (prev && current) {
      var fldrs = prev.getFoldersByName(current);
      return fldrs.hasNext() ? fldrs.next() : null;
    }
    else {
      return current ? null : prev;
    }
  }, DriveApp.getRootFolder());
}

Amazon S3 multipart upload ETag generation method [duplicate]

Files uploaded to Amazon S3 that are smaller than 5GB have an ETag that is simply the MD5 hash of the file, which makes it easy to check if your local files are the same as what you put on S3.
But if your file is larger than 5GB, then Amazon computes the ETag differently.
For example, I did a multipart upload of a 5,970,150,664 byte file in 380 parts. Now S3 shows it to have an ETag of 6bcf86bed8807b8e78f0fc6e0a53079d-380. My local file has an md5 hash of 702242d3703818ddefe6bf7da2bed757. I think the number after the dash is the number of parts in the multipart upload.
I also suspect that the new ETag (before the dash) is still an MD5 hash, but with some meta data included along the way from the multipart upload somehow.
Does anyone know how to compute the ETag using the same algorithm as Amazon S3?
Say you uploaded a 14MB file to a bucket without server-side encryption, and your part size is 5MB. Calculate 3 MD5 checksums corresponding to each part, i.e. the checksum of the first 5MB, the second 5MB, and the last 4MB. Then take the checksum of their concatenation. MD5 checksums are often printed as hex representations of binary data, so make sure you take the MD5 of the decoded binary concatenation, not of the ASCII or UTF-8 encoded concatenation. When that's done, add a hyphen and the number of parts to get the ETag.
Here are the commands to do it on Mac OS X from the console:
$ dd bs=1m count=5 skip=0 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019611 secs (267345449 bytes/sec)
$ dd bs=1m count=5 skip=5 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019182 secs (273323380 bytes/sec)
$ dd bs=1m count=5 skip=10 if=someFile | md5 >>checksums.txt
2+1 records in
2+1 records out
2599812 bytes transferred in 0.011112 secs (233964895 bytes/sec)
At this point all the checksums are in checksums.txt. To concatenate them and decode the hex and get the MD5 checksum of the lot, just use
$ xxd -r -p checksums.txt | md5
And now append "-3" to get the ETag, since there were 3 parts.
Notes
If you uploaded with aws-cli via aws s3 cp, then you most likely have an 8MB chunksize. According to the docs, that is the default.
If the bucket has server-side encryption (SSE) turned on, the ETag won't be the MD5 checksum (see the API documentation). But if you're just trying to verify that an uploaded part matches what you sent, you can use the Content-MD5 header and S3 will compare it for you.
md5 on macOS just writes out the checksum, but md5sum on Linux/brew also outputs the filename. You'll need to strip that, though I'm sure there's some option to output only the checksums. You don't need to worry about whitespace because xxd will ignore it.
Code Links
A Gist I wrote with a working script for macOS.
The project at s3md5.
Based on answers here, I wrote a Python implementation which correctly calculates both multi-part and single-part file ETags.
import hashlib

def calculate_s3_etag(file_path, chunk_size=8 * 1024 * 1024):
    md5s = []

    with open(file_path, 'rb') as fp:
        while True:
            data = fp.read(chunk_size)
            if not data:
                break
            md5s.append(hashlib.md5(data))

    if len(md5s) < 1:
        return '"{}"'.format(hashlib.md5().hexdigest())

    if len(md5s) == 1:
        return '"{}"'.format(md5s[0].hexdigest())

    digests = b''.join(m.digest() for m in md5s)
    digests_md5 = hashlib.md5(digests)
    return '"{}-{}"'.format(digests_md5.hexdigest(), len(md5s))
The default chunk_size is 8 MB used by the official aws cli tool, and it does multipart upload for 2+ chunks. It should work under both Python 2 and 3.
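For example, a hypothetical usage of the function above (the file names and printed values are made up):

# Compare a local file against the ETag S3 reports; note the return value includes the quotes.
local_etag = calculate_s3_etag('backup.tar.gz')   # placeholder path
print(local_etag)                                 # a ~100 MB file gives something like "<md5-of-md5s>-13" at 8 MB per chunk
print(calculate_s3_etag('notes.txt'))             # a file smaller than one chunk gives a plain quoted MD5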
bash implementation
python implementation
The algorithm literally is (copied from the readme in the python implementation):
md5 the chunks
glob the md5 strings together
convert the glob to binary
md5 the binary of the globbed chunk md5s
append "-Number_of_chunks" to the end of the md5 string of the binary
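A minimal Python sketch of those steps, starting from the per-chunk MD5 hex strings (the function name is my own, not from either implementation):

import binascii
import hashlib

def etag_from_part_md5s(part_md5_hex_strings):
    # 1-2. md5 the chunks elsewhere, then glob the hex strings together
    glob = ''.join(part_md5_hex_strings)
    # 3. convert the hex glob to binary
    raw = binascii.unhexlify(glob)
    # 4. md5 the binary concatenation of the chunk md5s
    combined = hashlib.md5(raw).hexdigest()
    # 5. append "-Number_of_chunks"
    return '{}-{}'.format(combined, len(part_md5_hex_strings))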
Here's yet another piece in this crazy AWS challenge puzzle.
FWIW, this answer assumes you already have figured out how to calculate the "MD5 of MD5 parts" and can rebuild your AWS Multi-part ETag from all the other answers already provided here.
What this answer addresses is the annoyance of having to "guess" or otherwise "divine" the original upload part size.
We use several different tools for uploading to S3 and they all seem to have different upload part sizes, so "guessing" really wasn't an option. Also, we have a lot of files that were historically uploaded when part sizes seemed to be different. Also, the old trick of using an internal server copy to force the creation of an MD5-type ETag also no longer works as AWS has changed their internal server copies to also use multi-part (just with a fairly large part size).
So...
How can you figure out the object's part size?
Well, if you first make a head_object request and detect that the ETag is a multi-part type ETag (it includes a '-<partcount>' at the end), then you can make another head_object request, but with an additional part_number attribute of 1 (the first part). This follow-on head_object request will return you the content_length of the first part. Voila... Now you know the part size that was used, and you can use that size to re-create your local ETag, which should match the original S3 ETag created when the object was uploaded.
Additionally, if you wanted to be exact (perhaps some multi-part uploads used variable part sizes), then you could continue to call head_object requests with each part_number specified and calculate each part's MD5 from the returned part content_lengths.
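A minimal boto3 sketch of that approach (the bucket and key are placeholders, not the answerer's actual code):

import boto3

s3 = boto3.client('s3')
bucket, key = 'my-bucket', 'path/to/object'   # placeholders

head = s3.head_object(Bucket=bucket, Key=key)
etag = head['ETag'].strip('"')

if '-' in etag:
    # Multipart-style ETag: ask for part 1 to learn the part size that was used.
    part1 = s3.head_object(Bucket=bucket, Key=key, PartNumber=1)
    part_size = part1['ContentLength']
    num_parts = int(etag.split('-')[1])
    print('part size: %d bytes, parts: %d' % (part_size, num_parts))
else:
    print('single-part upload, ETag is the plain MD5: %s' % etag)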
Hope that helps...
Not sure if it can help:
We're currently doing an ugly (but so far useful) hack to fix those wrong ETags in multipart-uploaded files, which consists of applying a change to the file in the bucket; that triggers an md5 recalculation from Amazon that changes the ETag so that it matches the actual md5 signature.
In our case:
File: bucket/Foo.mpg.gpg
ETag obtained: "3f92dffef0a11d175e60fb8b958b4e6e-2"
Do something with the file (rename it, add a meta-data like a fake header, among others)
Etag obtained: "c1d903ca1bb6dc68778ef21e74cc15b0"
We don't know the algorithm, but since we can "fix" the ETag we don't need to worry about it either.
Same algorithm, java version:
(BaseEncoding, Hasher, Hashing, etc. come from the Guava library.)
/**
 * Generate checksum for object came from multipart upload</p>
 * </p>
 * AWS S3 spec: Entity tag that identifies the newly created object's data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more nonhexadecimal characters and/or will consist of less than 32 or more than 32 hexadecimal digits.</p>
 * Algorithm follows AWS S3 implementation: https://github.com/Teachnova/s3md5</p>
 */
private static String calculateChecksumForMultipartUpload(List<String> md5s) {
    StringBuilder stringBuilder = new StringBuilder();
    for (String md5 : md5s) {
        stringBuilder.append(md5);
    }
    String hex = stringBuilder.toString();
    byte raw[] = BaseEncoding.base16().decode(hex.toUpperCase());
    Hasher hasher = Hashing.md5().newHasher();
    hasher.putBytes(raw);
    String digest = hasher.hash().toString();
    return digest + "-" + md5s.size();
}
According to the AWS documentation the ETag isn't an MD5 hash for a multi-part upload nor for an encrypted object: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.
In an answer above, someone asked if there was a way to get the md5 for files larger than 5G.
One answer I can give for getting the MD5 value (for files larger than 5G) is to either add it manually to the metadata, or use a program for your uploads that adds the information.
For example, I used s3cmd to upload a file, and it added the following metadata.
$ aws s3api head-object --bucket xxxxxxx --key noarch/epel-release-6-8.noarch.rpm
{
    "AcceptRanges": "bytes",
    "ContentType": "binary/octet-stream",
    "LastModified": "Sat, 19 Sep 2015 03:27:25 GMT",
    "ContentLength": 14540,
    "ETag": "\"2cd0ae668a585a14e07c2ea4f264d79b\"",
    "Metadata": {
        "s3cmd-attrs": "uid:502/gname:staff/uname:xxxxxx/gid:20/mode:33188/mtime:1352129496/atime:1441758431/md5:2cd0ae668a585a14e07c2ea4f264d79b/ctime:1441385182"
    }
}
It isn't a direct solution using the ETag, but it is a way to populate the metadata you want (MD5) in a way you can access it. It will still fail if someone uploads the file without metadata.
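As a sketch of the same idea with boto3 (the path, bucket, and the choice of 'md5' as the metadata key are my own, not a standard):

import hashlib

import boto3

def md5_hex(path, block_size=8 * 1024 * 1024):
    # Stream the file so this works for arbitrarily large files.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(block_size), b''):
            h.update(block)
    return h.hexdigest()

path, bucket, key = 'big.bin', 'my-bucket', 'big.bin'   # placeholders

s3 = boto3.client('s3')
# upload_file performs a multipart upload automatically for large files;
# the MD5 is stored as user metadata so it can be read back via head-object later.
s3.upload_file(path, bucket, key, ExtraArgs={'Metadata': {'md5': md5_hex(path)}})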
Here is the algorithm in ruby...
require 'digest'

# PART_SIZE should match the chosen part size of the multipart upload
# Set here as 10MB
PART_SIZE = 1024 * 1024 * 10

class File
  def each_part(part_size = PART_SIZE)
    yield read(part_size) until eof?
  end
end

file = File.new('<path_to_file>')

hashes = []
file.each_part do |part|
  hashes << Digest::MD5.hexdigest(part)
end

multipart_hash = Digest::MD5.hexdigest([hashes.join].pack('H*'))
multipart_etag = "#{multipart_hash}-#{hashes.count}"
Thanks to Shortest Hex2Bin in Ruby and Multipart Uploads to S3 ...
node.js implementation -
const fs = require('fs');
const crypto = require('crypto');

const chunk = 1024 * 1024 * 5; // 5MB

const md5 = data => crypto.createHash('md5').update(data).digest('hex');

const getEtagOfFile = (filePath) => {
  const stream = fs.readFileSync(filePath);
  if (stream.length <= chunk) {
    return md5(stream);
  }
  const md5Chunks = [];
  const chunksNumber = Math.ceil(stream.length / chunk);
  for (let i = 0; i < chunksNumber; i++) {
    const chunkStream = stream.slice(i * chunk, (i + 1) * chunk);
    md5Chunks.push(md5(chunkStream));
  }
  return `${md5(Buffer.from(md5Chunks.join(''), 'hex'))}-${chunksNumber}`;
};
And here is a PHP version of calculating the ETag:
function calculate_aws_etag($filename, $chunksize) {
    /*
    DESCRIPTION:
    - calculate Amazon AWS ETag used on the S3 service
    INPUT:
    - $filename : path to file to check
    - $chunksize : chunk size in Megabytes
    OUTPUT:
    - ETag (string)
    */
    $chunkbytes = $chunksize*1024*1024;
    if (filesize($filename) < $chunkbytes) {
        return md5_file($filename);
    } else {
        $md5s = array();
        $handle = fopen($filename, 'rb');
        if ($handle === false) {
            return false;
        }
        while (!feof($handle)) {
            $buffer = fread($handle, $chunkbytes);
            $md5s[] = md5($buffer);
            unset($buffer);
        }
        fclose($handle);

        $concat = '';
        foreach ($md5s as $indx => $md5) {
            $concat .= hex2bin($md5);
        }
        return md5($concat) .'-'. count($md5s);
    }
}

$etag = calculate_aws_etag('path/to/myfile.ext', 8);
And here is an enhanced version that can verify against an expected ETag - and even guess the chunksize if you don't know it!
function calculate_etag($filename, $chunksize, $expected = false) {
    /*
    DESCRIPTION:
    - calculate Amazon AWS ETag used on the S3 service
    INPUT:
    - $filename : path to file to check
    - $chunksize : chunk size in Megabytes
    - $expected : verify calculated etag against this specified etag and return true or false instead
                  - if you make chunksize negative (eg. -8 instead of 8) the function will guess the chunksize by checking all possible sizes given the number of parts mentioned in $expected
    OUTPUT:
    - ETag (string)
    - or boolean true|false if $expected is set
    */
    if ($chunksize < 0) {
        $do_guess = true;
        $chunksize = 0 - $chunksize;
    } else {
        $do_guess = false;
    }

    $chunkbytes = $chunksize*1024*1024;
    $filesize = filesize($filename);
    if ($filesize < $chunkbytes && (!$expected || !preg_match("/^\\w{32}-\\w+$/", $expected))) {
        $return = md5_file($filename);
        if ($expected) {
            $expected = strtolower($expected);
            return ($expected === $return ? true : false);
        } else {
            return $return;
        }
    } else {
        $md5s = array();
        $handle = fopen($filename, 'rb');
        if ($handle === false) {
            return false;
        }
        while (!feof($handle)) {
            $buffer = fread($handle, $chunkbytes);
            $md5s[] = md5($buffer);
            unset($buffer);
        }
        fclose($handle);

        $concat = '';
        foreach ($md5s as $indx => $md5) {
            $concat .= hex2bin($md5);
        }
        $return = md5($concat) .'-'. count($md5s);
        if ($expected) {
            $expected = strtolower($expected);
            $matches = ($expected === $return ? true : false);
            if ($matches || $do_guess == false || strlen($expected) == 32) {
                return $matches;
            } else {
                // Guess the chunk size
                preg_match("/-(\\d+)$/", $expected, $match);
                $parts = $match[1];
                $min_chunk = ceil($filesize / $parts / 1024 / 1024);
                $max_chunk = floor($filesize / ($parts - 1) / 1024 / 1024);
                $found_match = false;
                for ($i = $min_chunk; $i <= $max_chunk; $i++) {
                    if (calculate_aws_etag($filename, $i) === $expected) {
                        $found_match = true;
                        break;
                    }
                }
                return $found_match;
            }
        } else {
            return $return;
        }
    }
}
The short answer is that you take the 128bit binary md5 digest of each part, concatenate them into a document, and hash that document. The algorithm presented in this answer is accurate.
Note: the multipart ETAG form with the hyphen will change to the form without the hyphen if you "touch" the blob (even without modifying the content). That is, if you copy, or do an in-place copy of your completed multipart-uploaded object (aka PUT-COPY), S3 will recompute the ETAG with the simple version of the algorithm. i.e. the destination object will have an etag without the hyphen.
You've probably considered this already, but if your files are less than 5GB, and you already know their MD5s, and upload parallelization provides little to no benefit (e.g. you are streaming the upload from a slow network, or uploading from a slow disk), then you may also consider using a simple PUT instead of a multipart PUT, and pass your known Content-MD5 in your request headers -- amazon will fail the upload if they don't match. Keep in mind that you get charged for each UploadPart.
Furthermore, in some clients, passing a known MD5 for the input of a PUT operation will save the client from recomputing the MD5 during the transfer. In boto3 (python), you would use the ContentMD5 parameter of the client.put_object() method, for instance. If you omit the parameter, and you already knew the MD5, then the client would be wasting cycles computing it again before the transfer.
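A minimal boto3 sketch of that simple PUT with a known MD5 (the path, bucket, and key are placeholders); note that ContentMD5 must be the base64-encoded binary digest, not the hex string:

import base64
import hashlib

import boto3

path, bucket, key = 'file.bin', 'my-bucket', 'file.bin'   # placeholders

with open(path, 'rb') as f:
    body = f.read()

# ContentMD5 is the base64 of the raw 16-byte digest (not the hex representation).
content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode('ascii')

s3 = boto3.client('s3')
# S3 verifies the body against Content-MD5 and rejects the PUT on a mismatch.
s3.put_object(Bucket=bucket, Key=key, Body=body, ContentMD5=content_md5)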
Working algorithm implemented in Node.js (TypeScript).
/**
 * Generate an S3 ETAG for multipart uploads in Node.js
 * An implementation of this algorithm: https://stackoverflow.com/a/19896823/492325
 * Author: Richard Willis <willis.rh@gmail.com>
 */
import fs from 'node:fs';
import crypto, { BinaryLike } from 'node:crypto';

const defaultPartSizeInBytes = 5 * 1024 * 1024; // 5MB

function md5(contents: string | BinaryLike): string {
  return crypto.createHash('md5').update(contents).digest('hex');
}

export function getS3Etag(
  filePath: string,
  partSizeInBytes = defaultPartSizeInBytes
): string {
  const { size: fileSizeInBytes } = fs.statSync(filePath);
  let parts = Math.floor(fileSizeInBytes / partSizeInBytes);
  if (fileSizeInBytes % partSizeInBytes > 0) {
    parts += 1;
  }
  const fileDescriptor = fs.openSync(filePath, 'r');
  let totalMd5 = '';

  for (let part = 0; part < parts; part++) {
    const skipBytes = partSizeInBytes * part;
    const totalBytesLeft = fileSizeInBytes - skipBytes;
    const bytesToRead = Math.min(totalBytesLeft, partSizeInBytes);
    const buffer = Buffer.alloc(bytesToRead);
    fs.readSync(fileDescriptor, buffer, 0, bytesToRead, skipBytes);
    totalMd5 += md5(buffer);
  }

  const combinedHash = md5(Buffer.from(totalMd5, 'hex'));
  const etag = `${combinedHash}-${parts}`;

  return etag;
}
I've published this to npm
npm install s3-etag
import { generateETag } from 's3-etag';
const etag = generateETag(absoluteFilePath, partSizeInBytes);
View project here: https://github.com/badsyntax/s3-etag
A version in Rust:
use crypto::digest::Digest;
use crypto::md5::Md5;
use std::fs::File;
use std::io::prelude::*;
use std::io::Result;
use std::iter::repeat;

fn calculate_etag_from_read(f: &mut dyn Read, chunk_size: usize) -> Result<String> {
    let mut md5 = Md5::new();
    let mut concat_md5 = Md5::new();
    let mut input_buffer = vec![0u8; chunk_size];
    let mut chunk_count = 0;
    let mut current_md5: Vec<u8> = repeat(0).take((md5.output_bits() + 7) / 8).collect();

    let md5_result = loop {
        let amount_read = f.read(&mut input_buffer)?;
        if amount_read > 0 {
            md5.reset();
            md5.input(&input_buffer[0..amount_read]);
            chunk_count += 1;
            md5.result(&mut current_md5);
            concat_md5.input(&current_md5);
        } else {
            if chunk_count > 1 {
                break format!("{}-{}", concat_md5.result_str(), chunk_count);
            } else {
                break md5.result_str();
            }
        }
    };
    Ok(md5_result)
}

fn calculate_etag(file: &String, chunk_size: usize) -> Result<String> {
    let mut f = File::open(file)?;
    calculate_etag_from_read(&mut f, chunk_size)
}
See a repo with a simple implementation: https://github.com/bn3t/calculate-etag/tree/master
Regarding chunk size, I noticed that it seems to depend on the number of parts.
The maximum number of parts is 10,000, as AWS documents.
So, starting from a default of 8MB and knowing the file size, the chunk size and number of parts can be calculated as follows:
import math
import os

chunk_size = 8 * 1024 * 1024
flsz = os.path.getsize(fl)  # fl is the path to the file
while flsz / chunk_size > 10000:
    chunk_size *= 2
parts = math.ceil(flsz / chunk_size)
The number of parts has to be rounded up.
Extending Timothy Gonzalez's answer:
Identical files will have different ETags when uploaded with multipart upload.
It's easy to test with WinSCP, because it uses multipart upload.
When I upload multiple identical copies of the same file to S3 via WinSCP, each has a different ETag. When I download them and calculate the md5, they are still identical.
So from what I tested, different ETags don't mean the files are different.
I see no alternative way to obtain any hash for S3 files without downloading them first.
This is true for multipart uploads. For non-multipart uploads it should still be possible to calculate the ETag locally.
I have a solution for iOS and macOS without using external helpers like dd and xxd. I have just found it, so I report it as it is, planning to improve it at a later stage. For the moment, it relies on both Objective-C and Swift code. First of all, create this helper class in Objective-C:
AWS3MD5Hash.h
#import <Foundation/Foundation.h>
NS_ASSUME_NONNULL_BEGIN
@interface AWS3MD5Hash : NSObject
- (NSData *)dataFromFile:(FILE *)theFile startingOnByte:(UInt64)startByte length:(UInt64)length filePath:(NSString *)path singlePartSize:(NSUInteger)partSizeInMb;
- (NSData *)dataFromBigData:(NSData *)theData startingOnByte:(UInt64)startByte length:(UInt64)length;
- (NSData *)dataFromHexString:(NSString *)sourceString;
@end
NS_ASSUME_NONNULL_END
AWS3MD5Hash.m
#import "AWS3MD5Hash.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define SIZE 256
@implementation AWS3MD5Hash
- (NSData *)dataFromFile:(FILE *)theFile startingOnByte:(UInt64)startByte length:(UInt64)length filePath:(NSString *)path singlePartSize:(NSUInteger)partSizeInMb {
char *buffer = malloc(length);
NSURL *fileURL = [NSURL fileURLWithPath:path];
NSNumber *fileSizeValue = nil;
NSError *fileSizeError = nil;
[fileURL getResourceValue:&fileSizeValue
forKey:NSURLFileSizeKey
error:&fileSizeError];
NSInteger __unused result = fseek(theFile,startByte,SEEK_SET);
if (result != 0) {
free(buffer);
return nil;
}
NSInteger result2 = fread(buffer, length, 1, theFile);
NSUInteger difference = fileSizeValue.integerValue - startByte;
NSData *toReturn;
if (result2 == 0) {
toReturn = [NSData dataWithBytes:buffer length:difference];
} else {
toReturn = [NSData dataWithBytes:buffer length:result2 * length];
}
free(buffer);
return toReturn;
}
- (NSData *)dataFromBigData:(NSData *)theData startingOnByte: (UInt64)startByte length:(UInt64)length {
NSUInteger fileSizeValue = theData.length;
NSData *subData;
if (startByte + length > fileSizeValue) {
subData = [theData subdataWithRange:NSMakeRange(startByte, fileSizeValue - startByte)];
} else {
subData = [theData subdataWithRange:NSMakeRange(startByte, length)];
}
return subData;
}
- (NSData *)dataFromHexString:(NSString *)string {
string = [string lowercaseString];
NSMutableData *data= [NSMutableData new];
unsigned char whole_byte;
char byte_chars[3] = {'\0','\0','\0'};
NSInteger i = 0;
NSInteger length = string.length;
while (i < length-1) {
char c = [string characterAtIndex:i++];
if (c < '0' || (c > '9' && c < 'a') || c > 'f')
continue;
byte_chars[0] = c;
byte_chars[1] = [string characterAtIndex:i++];
whole_byte = strtol(byte_chars, NULL, 16);
[data appendBytes:&whole_byte length:1];
}
return data;
}
@end
Now create a plain swift file:
AWS Extensions.swift
import UIKit
import CommonCrypto
extension URL {
func calculateAWSS3MD5Hash(_ numberOfParts: UInt64) -> String? {
do {
var fileSize: UInt64!
var calculatedPartSize: UInt64!
let attr:NSDictionary? = try FileManager.default.attributesOfItem(atPath: self.path) as NSDictionary
if let _attr = attr {
fileSize = _attr.fileSize();
if numberOfParts != 0 {
let partSize = Double(fileSize / numberOfParts)
var partSizeInMegabytes = Double(partSize / (1024.0 * 1024.0))
partSizeInMegabytes = ceil(partSizeInMegabytes)
calculatedPartSize = UInt64(partSizeInMegabytes)
if calculatedPartSize % 2 != 0 {
calculatedPartSize += 1
}
if numberOfParts == 2 || numberOfParts == 3 { // Very important when there are 2 or 3 parts, in the majority of times
// the calculatedPartSize is already 8. In the remaining cases we force it.
calculatedPartSize = 8
}
if mainLogToggling {
print("The calculated part size is \(calculatedPartSize!) Megabytes")
}
}
}
if numberOfParts == 0 {
let string = self.memoryFriendlyMd5Hash()
return string
}
let hasher = AWS3MD5Hash.init()
let file = fopen(self.path, "r")
defer { let result = fclose(file)}
var index: UInt64 = 0
var bigString: String! = ""
var data: Data!
while autoreleasepool(invoking: {
if index == (numberOfParts-1) {
if mainLogToggling {
//print("We are at the last line.")
}
}
data = hasher.data(from: file!, startingOnByte: index * calculatedPartSize * 1024 * 1024, length: calculatedPartSize * 1024 * 1024, filePath: self.path, singlePartSize: UInt(calculatedPartSize))
bigString = bigString + MD5.get(data: data) + "\n"
index += 1
if index == numberOfParts {
return false
}
return true
}) {}
let final = MD5.get(data :hasher.data(fromHexString: bigString)) + "-\(numberOfParts)"
return final
} catch {
}
return nil
}
func memoryFriendlyMd5Hash() -> String? {
let bufferSize = 1024 * 1024
do {
// Open file for reading:
let file = try FileHandle(forReadingFrom: self)
defer {
file.closeFile()
}
// Create and initialize MD5 context:
var context = CC_MD5_CTX()
CC_MD5_Init(&context)
// Read up to `bufferSize` bytes, until EOF is reached, and update MD5 context:
while autoreleasepool(invoking: {
let data = file.readData(ofLength: bufferSize)
if data.count > 0 {
data.withUnsafeBytes {
_ = CC_MD5_Update(&context, $0, numericCast(data.count))
}
return true // Continue
} else {
return false // End of file
}
}) { }
// Compute the MD5 digest:
var digest = Data(count: Int(CC_MD5_DIGEST_LENGTH))
digest.withUnsafeMutableBytes {
_ = CC_MD5_Final($0, &context)
}
let hexDigest = digest.map { String(format: "%02hhx", $0) }.joined()
return hexDigest
} catch {
print("Cannot open file:", error.localizedDescription)
return nil
}
}
struct MD5 {
static func get(data: Data) -> String {
var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
let _ = data.withUnsafeBytes { bytes in
CC_MD5(bytes, CC_LONG(data.count), &digest)
}
var digestHex = ""
for index in 0..<Int(CC_MD5_DIGEST_LENGTH) {
digestHex += String(format: "%02x", digest[index])
}
return digestHex
}
// The following is a memory friendly version
static func get2(data: Data) -> String {
var currentIndex = 0
let bufferSize = 1024 * 1024
//var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
// Create and initialize MD5 context:
var context = CC_MD5_CTX()
CC_MD5_Init(&context)
while autoreleasepool(invoking: {
var subData: Data!
if (currentIndex + bufferSize) < data.count {
subData = data.subdata(in: Range.init(NSMakeRange(currentIndex, bufferSize))!)
currentIndex = currentIndex + bufferSize
} else {
subData = data.subdata(in: Range.init(NSMakeRange(currentIndex, data.count - currentIndex))!)
currentIndex = currentIndex + (data.count - currentIndex)
}
if subData.count > 0 {
subData.withUnsafeBytes {
_ = CC_MD5_Update(&context, $0, numericCast(subData.count))
}
return true
} else {
return false
}
}) { }
// Compute the MD5 digest:
var digest = Data(count: Int(CC_MD5_DIGEST_LENGTH))
digest.withUnsafeMutableBytes {
_ = CC_MD5_Final($0, &context)
}
var digestHex = ""
for index in 0..<Int(CC_MD5_DIGEST_LENGTH) {
digestHex += String(format: "%02x", digest[index])
}
return digestHex
}
}
Now add:
#import "AWS3MD5Hash.h"
to your Objective-C Bridging header. You should be ok with this setup.
Example usage
To test this setup, you could be calling the following method inside the object that is in charge of handling the AWS connections:
func getMd5HashForFile() {
let credentialProvider = AWSCognitoCredentialsProvider(regionType: AWSRegionType.USEast2, identityPoolId: "<INSERT_POOL_ID>")
let configuration = AWSServiceConfiguration(region: AWSRegionType.APSoutheast2, credentialsProvider: credentialProvider)
configuration?.timeoutIntervalForRequest = 3.0
configuration?.timeoutIntervalForResource = 3.0
AWSServiceManager.default().defaultServiceConfiguration = configuration
AWSS3.register(with: configuration!, forKey: "defaultKey")
let s3 = AWSS3.s3(forKey: "defaultKey")
let headObjectRequest = AWSS3HeadObjectRequest()!
headObjectRequest.bucket = "<NAME_OF_YOUR_BUCKET>"
headObjectRequest.key = self.latestMapOnServer.key
let _: AWSTask? = s3.headObject(headObjectRequest).continueOnSuccessWith { (awstask) -> Any? in
let headObjectOutput: AWSS3HeadObjectOutput? = awstask.result
var ETag = headObjectOutput?.eTag!
// Here you should parse the returned Etag and extract the number of parts to provide to the helper function. Etags end with a "-" followed by the number of parts. If you don't see this format, then pass 0 as the number of parts.
ETag = ETag!.replacingOccurrences(of: "\"", with: "")
print("headObjectOutput.ETag \(ETag!)")
let mapOnDiskUrl = self.getMapsDirectory().appendingPathComponent(self.latestMapOnDisk!)
let hash = mapOnDiskUrl.calculateAWSS3MD5Hash(<Take the number of parts from the ETag returned by the server>)
if hash == ETag {
print("They are the same.")
}
print ("\(hash!)")
return nil
}
}
If the ETag returned by the server does not have "-" at the end of the ETag, just pass 0 to calculateAWSS3MD5Hash. Please comment if you encounter any problems. I am working on a swift only solution, I will update this answer as soon as I finish. Thanks
I just saw that the AWS S3 Console 'upload' uses an unusual part (chunk) size of 17,179,870 - at least for larger files.
Using that part size gave me the correct ETag hash using the methods described earlier. Thanks to @TheStoryCoder for the php version.
Thanks to @hans for his idea to use head-object to see the actual sizes of each part.
I used the AWS S3 Console (on Nov28 2020) to upload about 50 files ranging in size from 190MB to 2.3GB and all of them had the same part size of 17,179,870.
I liked Emerson's leading answer above - especially the xxd part - but I was too lazy to use dd so I went with split, guessing at an 8M chunk size because I uploaded with aws s3 cp:
$ split -b 8M large.iso XXX
$ md5sum XXX* > checksums.txt
$ sed -i 's/ .*$//' checksums.txt
$ xxd -r -p checksums.txt | md5sum
99a090df013d375783f0f0be89288529 -
$ wc -l checksums.txt
80 checksums.txt
$
It was immediately obvious that both parts of my S3 etag matched my file's calculated etag.
UPDATE:
This has been working nicely:
$ ll large.iso
-rw-rw-r-- 1 user user 669134848 Apr 12 2021 large.iso
$
$ etag large.iso
99a090df013d375783f0f0be89288529-80
$
$ type etag
etag is a function
etag ()
{
split -b 8M --filter=md5sum $1 | cut -d' ' -f1 | pee "xxd -r -p | md5sum | cut -d' ' -f1" "wc -l" | paste -d'-' - -
}
$
All the other answers assume a standard and regular part size. But that assumption may not be true. Across the console and various SDKs there are different defaults. And the low-level API does allow a lot of variety.
Complications:
S3 multi-part uploads can have parts of any size (within a min and max for non-last parts).
Even the non-last parts can be different sizes.
When you upload they don't have to be consecutive part numbers.
If you do a multi-part upload with only 1 part, the etag is the more complicated version, not the simple MD5
etags tend to be wrapped in double-quotes. I don't know why. But that's just a thing that might trip you up.
So we need to find out how many parts there are, and how big they are.
You cannot reliably get the part count from boto3's Object.parts_count attribute. I don't know if the same is true of other SDKs.
The get_object_attributes API documentation claims that it returns a list of parts and sizes. But when I tested those fields were missing. Even for multi-part uploads that were not completed.
Even if you assume equal part sizes (except the last part), you cannot deduce part size from content length and part count. e.g. if a 90MB file has 3 parts, was that 30MBx3, or 40MB+40MB+10MB?
Let's assume that you have a local file and you want to check whether it matches the content of the object in S3.
(And assume that you've already checked whether the lengths differ, because that's a faster check.)
Here's a python3 script to do that. (I chose python just because that's what I'm familiar with.)
We use head_object to get the e-tag. With the e-tag we can deduce whether it was a single-part upload or multi-part, and how many parts.
We use head_object passing in PartNumber, calling that for each part, to get the length of each part. You could use multiprocessing to speed that up. (Noting that boto3's client should not be passed between processes.)
import boto3
from hashlib import md5

def content_matches(local_path, bucket, key) -> bool:
    client = boto3.client('s3')

    resp = client.head_object(Bucket=bucket, Key=key)
    remote_e_tag = resp['ETag']
    total_length = resp['ContentLength']

    if '-' not in remote_e_tag:
        # it was a single-part upload
        # you could read from the file in chunks to avoid loading the whole thing into memory
        # the chunks would not have to match any SDK standard. It can be whatever you want.
        # (The MD5 library will act as if you hashed in one go)
        with open(local_path, 'rb') as f:
            local_etag = f'"{md5(f.read()).hexdigest()}"'
        return local_etag == remote_e_tag
    else:
        # multi-part upload
        # to find the number of parts, get it from the e-tag
        # e.g. 123-56 has 56 parts
        num_parts = int(remote_e_tag.strip('"').split('-')[-1])
        print(f"Assuming {num_parts=} from {remote_e_tag=}")

        md5s = []

        with open(local_path, 'rb') as f:
            sz_read = 0
            for part_num in range(1, num_parts + 1):
                resp = client.head_object(Bucket=bucket, Key=key, PartNumber=part_num)
                sz_read += resp['ContentLength']
                local_data_part = f.read(resp['ContentLength'])
                assert len(local_data_part) == resp['ContentLength']  # sanity check
                md5s.append(md5(local_data_part))

        assert sz_read == total_length, "Sum of part sizes doesn't equal total file size"
        digests = b''.join(m.digest() for m in md5s)
        digests_md5 = md5(digests)
        local_etag = f'"{digests_md5.hexdigest()}-{len(md5s)}"'
        return remote_e_tag == local_etag
And a script to test it with all those edge cases:
import boto3
from pprint import pprint
from hashlib import md5
from main import content_matches

MB = 2 ** 20

bucket = 'mybucket'
key = 'test-multi-part-upload'
local_path = 'test-data'

# first upload the object
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
mpu = obj.initiate_multipart_upload()
parts = []
part_sizes = [6 * MB, 5 * MB, 5]  # deliberately non-standard and not consistent
upload_part_nums = [1, 3, 8]  # test non-consecutive part numbers for upload

with open(local_path, 'wb') as fw:
    with open('/dev/random', 'rb') as fr:
        for (part_num, part_size) in zip(upload_part_nums, part_sizes):
            part = mpu.Part(part_num)
            data = fr.read(part_size)
            print(f"Uploading part {part_num}")
            resp = part.upload(Body=data)
            parts.append({
                'ETag': resp['ETag'],
                'PartNumber': part_num
            })
            fw.write(data)

resp = mpu.complete(MultipartUpload={
    'Parts': parts
})
obj.reload()

assert content_matches(local_path, bucket, key)
"#wim Any idea how to calculate the ETag when SSE is enabled?"
In my testing, with multipart + SSE-C, the ETag is valid.
It can be calculated from the individual ETag returned for each part,
and this is easy to prove.
Let's say we have a multipart upload with SSE-C, with 10 parts.
Take the 10 ETags, put them in a file, and run "xxd -r -p checksums.txt | md5sum"; the calculated value will match the value returned from AWS.
etag parts
-------------------------------
1330e1275b556ab6702bca9438f62c15 -
ae55d3ddf52e33d45140a5be6dacb925 -
16dc956e05962b84ad9cd74a05e86797 -
64be66992a5110c4b1151a8249258a1a -
4926df0200fe24499524176d6a85e347 -
2b6655c3506481eb1fae6b2e2e7c4b8b -
a02e9dbd49039eaf4d6de1fddc5e1a30 -
afb7bc1f6e0c1f23671cb7116f3b0c63 -
dddf3a1ab192f26bb483a3e2778bab13 -
adb8b2b761640418856853f3810ac45a -
-------------------------------
etag_from_aws = c68db040f8a36c164259bcca40c36410-10
etag_calculated = c68db040f8a36c164259bcca40c36410-10
No.
To date there is no solution that matches the normal-file ETag, the multipart-file ETag, and the MD5 of a local file.

What is the algorithm to compute the Amazon-S3 Etag for a file larger than 5GB?

Files uploaded to Amazon S3 that are smaller than 5GB have an ETag that is simply the MD5 hash of the file, which makes it easy to check if your local files are the same as what you put on S3.
But if your file is larger than 5GB, then Amazon computes the ETag differently.
For example, I did a multipart upload of a 5,970,150,664 byte file in 380 parts. Now S3 shows it to have an ETag of 6bcf86bed8807b8e78f0fc6e0a53079d-380. My local file has an md5 hash of 702242d3703818ddefe6bf7da2bed757. I think the number after the dash is the number of parts in the multipart upload.
I also suspect that the new ETag (before the dash) is still an MD5 hash, but with some meta data included along the way from the multipart upload somehow.
Does anyone know how to compute the ETag using the same algorithm as Amazon S3?
Say you uploaded a 14MB file to a bucket without server-side encryption, and your part size is 5MB. Calculate 3 MD5 checksums corresponding to each part, i.e. the checksum of the first 5MB, the second 5MB, and the last 4MB. Then take the checksum of their concatenation. MD5 checksums are often printed as hex representations of binary data, so make sure you take the MD5 of the decoded binary concatenation, not of the ASCII or UTF-8 encoded concatenation. When that's done, add a hyphen and the number of parts to get the ETag.
Here are the commands to do it on Mac OS X from the console:
$ dd bs=1m count=5 skip=0 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019611 secs (267345449 bytes/sec)
$ dd bs=1m count=5 skip=5 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019182 secs (273323380 bytes/sec)
$ dd bs=1m count=5 skip=10 if=someFile | md5 >>checksums.txt
2+1 records in
2+1 records out
2599812 bytes transferred in 0.011112 secs (233964895 bytes/sec)
At this point all the checksums are in checksums.txt. To concatenate them and decode the hex and get the MD5 checksum of the lot, just use
$ xxd -r -p checksums.txt | md5
And now append "-3" to get the ETag, since there were 3 parts.
Notes
If you uploaded with aws-cli via aws s3 cp then you most likely have a 8MB chunksize. According to the docs, that is the default.
If the bucket has server-side encryption (SSE) turned on, the ETag won't be the MD5 checksum (see the API documentation). But if you're just trying to verify that an uploaded part matches what you sent, you can use the Content-MD5 header and S3 will compare it for you.
md5 on macOS just writes out the checksum, but md5sum on Linux/brew also outputs the filename. You'll need to strip that, but I'm sure there's some option to only output the checksums. You don't need to worry about whitespace cause xxd will ignore it.
Code Links
A Gist I wrote with a working script for macOS.
The project at s3md5.
Based on answers here, I wrote a Python implementation which correctly calculates both multi-part and single-part file ETags.
import hashlib

def calculate_s3_etag(file_path, chunk_size=8 * 1024 * 1024):
    md5s = []

    with open(file_path, 'rb') as fp:
        while True:
            data = fp.read(chunk_size)
            if not data:
                break
            md5s.append(hashlib.md5(data))

    if len(md5s) < 1:
        return '"{}"'.format(hashlib.md5().hexdigest())

    if len(md5s) == 1:
        return '"{}"'.format(md5s[0].hexdigest())

    digests = b''.join(m.digest() for m in md5s)
    digests_md5 = hashlib.md5(digests)
    return '"{}-{}"'.format(digests_md5.hexdigest(), len(md5s))
The default chunk_size of 8 MB matches the official aws cli tool, which uses multipart upload for 2+ chunks. It should work under both Python 2 and 3.
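For example, a hypothetical usage sketch (bucket and key names are placeholders; head_object is a standard boto3 call, and S3 returns the ETag wrapped in double quotes just like the function above does):
import boto3

local_etag = calculate_s3_etag('backup.tar.gz')  # e.g. '"<md5>-<parts>"'
remote_etag = boto3.client('s3').head_object(
    Bucket='my-bucket', Key='backup.tar.gz')['ETag']
print('match' if local_etag == remote_etag else 'mismatch')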
bash implementation
python implementation
The algorithm literally is (copied from the readme in the python implementation):
md5 the chunks
glob the md5 strings together
convert the glob to binary
md5 the binary of the globbed chunk md5s
append "-Number_of_chunks" to the end of the md5 string of the binary
Here's yet another piece in this crazy AWS challenge puzzle.
FWIW, this answer assumes you already have figured out how to calculate the "MD5 of MD5 parts" and can rebuild your AWS Multi-part ETag from all the other answers already provided here.
What this answer addresses is the annoyance of having to "guess" or otherwise "divine" the original upload part size.
We use several different tools for uploading to S3 and they all seem to have different upload part sizes, so "guessing" really wasn't an option. Also, we have a lot of files that were historically uploaded when part sizes seemed to be different. And the old trick of using an internal server copy to force the creation of an MD5-type ETag no longer works, as AWS has changed their internal server copies to also use multi-part (just with a fairly large part size).
So...
How can you figure out the object's part size?
Well, if you first make a head_object request and detect that the ETag is a multi-part type ETag (includes a '-<partcount>' at the end), then you can make another head_object request, but with an additional part_number attribute of 1 (the first part). This follow-on head_object request will then return the content_length of the first part. Voila... Now you know the part size that was used, and you can use that size to re-create your local ETag, which should match the original S3 ETag created when the object was uploaded.
Additionally, if you wanted to be exact (perhaps some multi-part uploads used variable part sizes), then you could continue to call head_object with each part_number specified and calculate each part's MD5 from the returned part's content_length.
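A minimal boto3 sketch of that head_object trick (bucket and key are placeholders, and it assumes uniform part sizes apart from the last one):
import boto3

s3 = boto3.client('s3')
head = s3.head_object(Bucket='my-bucket', Key='big-file.bin')
etag = head['ETag'].strip('"')

if '-' in etag:
    part_count = int(etag.split('-')[1])
    # Ask S3 about part 1 only; ContentLength is then the size of the first part,
    # i.e. the part size that was used for the upload (assuming uniform parts).
    part1 = s3.head_object(Bucket='my-bucket', Key='big-file.bin', PartNumber=1)
    part_size = part1['ContentLength']
    print(part_count, 'parts of about', part_size, 'bytes each')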
Hope that helps...
Not sure if it can help:
We're currently doing an ugly (but so far useful) hack to fix those wrong ETags in multipart uploaded files, which consists of applying a change to the file in the bucket; that triggers an md5 recalculation by Amazon, which changes the ETag so that it matches the actual md5 signature.
In our case:
File: bucket/Foo.mpg.gpg
ETag obtained: "3f92dffef0a11d175e60fb8b958b4e6e-2"
Do something with the file (rename it, add metadata such as a fake header, etc.)
Etag obtained: "c1d903ca1bb6dc68778ef21e74cc15b0"
We don't know the algorithm, but since we can "fix" the ETag we don't need to worry about it either.
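For reference, a hedged boto3 sketch of that hack as an in-place copy with replaced metadata (the bucket name is a placeholder; note that a single copy_object call only works for objects up to 5GB):
import boto3

s3 = boto3.client('s3')
s3.copy_object(
    Bucket='my-bucket',
    Key='Foo.mpg.gpg',
    CopySource={'Bucket': 'my-bucket', 'Key': 'Foo.mpg.gpg'},
    MetadataDirective='REPLACE',
    Metadata={'etag-fix': 'true'},  # any harmless metadata change will do
)
print(s3.head_object(Bucket='my-bucket', Key='Foo.mpg.gpg')['ETag'])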
Same algorithm, Java version:
(BaseEncoding, Hasher, Hashing, etc. come from the Guava library.)
/**
 * Generate checksum for an object that came from a multipart upload.
 * <p>
 * AWS S3 spec: Entity tag that identifies the newly created object's data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more nonhexadecimal characters and/or will consist of less than 32 or more than 32 hexadecimal digits.
 * <p>
 * Algorithm follows the AWS S3 implementation: https://github.com/Teachnova/s3md5
 */
private static String calculateChecksumForMultipartUpload(List<String> md5s) {
StringBuilder stringBuilder = new StringBuilder();
for (String md5:md5s) {
stringBuilder.append(md5);
}
String hex = stringBuilder.toString();
byte raw[] = BaseEncoding.base16().decode(hex.toUpperCase());
Hasher hasher = Hashing.md5().newHasher();
hasher.putBytes(raw);
String digest = hasher.hash().toString();
return digest + "-" + md5s.size();
}
According to the AWS documentation the ETag isn't an MD5 hash for a multi-part upload nor for an encrypted object: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.
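One practical consequence of those rules: an ETag can only possibly be a plain MD5 digest if it is exactly 32 hex characters with no '-<parts>' suffix. A quick sanity check in Python (this is a necessary condition, not a guarantee):
import re

def could_be_plain_md5(etag: str) -> bool:
    # Necessary (not sufficient) condition: exactly 32 hex chars, no "-<parts>" suffix.
    return re.fullmatch(r'[0-9a-fA-F]{32}', etag.strip('"')) is not None

print(could_be_plain_md5('"6bcf86bed8807b8e78f0fc6e0a53079d-380"'))  # False
print(could_be_plain_md5('"702242d3703818ddefe6bf7da2bed757"'))      # True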
In an above answer, someone asked if there was a way to get the md5 for files larger than 5G.
An answer that I could give for getting the MD5 value (for files larger than 5G) would be to either add it manually to the metadata, or use a program to do your uploads which will add the information.
For example, I used s3cmd to upload a file, and it added the following metadata.
$ aws s3api head-object --bucket xxxxxxx --key noarch/epel-release-6-8.noarch.rpm
{
"AcceptRanges": "bytes",
"ContentType": "binary/octet-stream",
"LastModified": "Sat, 19 Sep 2015 03:27:25 GMT",
"ContentLength": 14540,
"ETag": "\"2cd0ae668a585a14e07c2ea4f264d79b\"",
"Metadata": {
"s3cmd-attrs": "uid:502/gname:staff/uname:xxxxxx/gid:20/mode:33188/mtime:1352129496/atime:1441758431/md5:2cd0ae668a585a14e07c2ea4f264d79b/ctime:1441385182"
}
}
It isn't a direct solution using the ETag, but it is a way to populate the metadata you want (MD5) in a way you can access it. It will still fail if someone uploads the file without metadata.
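The same idea with boto3 instead of s3cmd looks roughly like this (bucket and key are placeholders; reading the whole file into memory is only reasonable for small files):
import hashlib

import boto3

with open('epel-release-6-8.noarch.rpm', 'rb') as f:
    md5_hex = hashlib.md5(f.read()).hexdigest()  # fine for small files

s3 = boto3.client('s3')
s3.upload_file(
    'epel-release-6-8.noarch.rpm', 'my-bucket',
    'noarch/epel-release-6-8.noarch.rpm',
    ExtraArgs={'Metadata': {'md5': md5_hex}},
)
print(s3.head_object(Bucket='my-bucket',
                     Key='noarch/epel-release-6-8.noarch.rpm')['Metadata'].get('md5'))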
Here is the algorithm in ruby...
require 'digest'
# PART_SIZE should match the chosen part size of the multipart upload
# Set here as 10MB
PART_SIZE = 1024*1024*10
class File
def each_part(part_size = PART_SIZE)
yield read(part_size) until eof?
end
end
file = File.new('<path_to_file>')
hashes = []
file.each_part do |part|
hashes << Digest::MD5.hexdigest(part)
end
multipart_hash = Digest::MD5.hexdigest([hashes.join].pack('H*'))
multipart_etag = "#{multipart_hash}-#{hashes.count}"
Thanks to Shortest Hex2Bin in Ruby and Multipart Uploads to S3 ...
node.js implementation -
const fs = require('fs');
const crypto = require('crypto');
const chunk = 1024 * 1024 * 5; // 5MB
const md5 = data => crypto.createHash('md5').update(data).digest('hex');
const getEtagOfFile = (filePath) => {
const stream = fs.readFileSync(filePath);
if (stream.length <= chunk) {
return md5(stream);
}
const md5Chunks = [];
const chunksNumber = Math.ceil(stream.length / chunk);
for (let i = 0; i < chunksNumber; i++) {
const chunkStream = stream.slice(i * chunk, (i + 1) * chunk);
md5Chunks.push(md5(chunkStream));
}
return `${md5(Buffer.from(md5Chunks.join(''), 'hex'))}-${chunksNumber}`;
};
And here is a PHP version of calculating the ETag:
function calculate_aws_etag($filename, $chunksize) {
/*
DESCRIPTION:
- calculate Amazon AWS ETag used on the S3 service
INPUT:
- $filename : path to file to check
- $chunksize : chunk size in Megabytes
OUTPUT:
- ETag (string)
*/
$chunkbytes = $chunksize*1024*1024;
if (filesize($filename) < $chunkbytes) {
return md5_file($filename);
} else {
$md5s = array();
$handle = fopen($filename, 'rb');
if ($handle === false) {
return false;
}
while (!feof($handle)) {
$buffer = fread($handle, $chunkbytes);
$md5s[] = md5($buffer);
unset($buffer);
}
fclose($handle);
$concat = '';
foreach ($md5s as $indx => $md5) {
$concat .= hex2bin($md5);
}
return md5($concat) .'-'. count($md5s);
}
}
$etag = calculate_aws_etag('path/to/myfile.ext', 8);
And here is an enhanced version that can verify against an expected ETag - and even guess the chunksize if you don't know it!
function calculate_etag($filename, $chunksize, $expected = false) {
/*
DESCRIPTION:
- calculate Amazon AWS ETag used on the S3 service
INPUT:
- $filename : path to file to check
- $chunksize : chunk size in Megabytes
- $expected : verify calculated etag against this specified etag and return true or false instead
- if you make chunksize negative (eg. -8 instead of 8) the function will guess the chunksize by checking all possible sizes given the number of parts mentioned in $expected
OUTPUT:
- ETag (string)
- or boolean true|false if $expected is set
*/
if ($chunksize < 0) {
$do_guess = true;
$chunksize = 0 - $chunksize;
} else {
$do_guess = false;
}
$chunkbytes = $chunksize*1024*1024;
$filesize = filesize($filename);
if ($filesize < $chunkbytes && (!$expected || !preg_match("/^\\w{32}-\\w+$/", $expected))) {
$return = md5_file($filename);
if ($expected) {
$expected = strtolower($expected);
return ($expected === $return ? true : false);
} else {
return $return;
}
} else {
$md5s = array();
$handle = fopen($filename, 'rb');
if ($handle === false) {
return false;
}
while (!feof($handle)) {
$buffer = fread($handle, $chunkbytes);
$md5s[] = md5($buffer);
unset($buffer);
}
fclose($handle);
$concat = '';
foreach ($md5s as $indx => $md5) {
$concat .= hex2bin($md5);
}
$return = md5($concat) .'-'. count($md5s);
if ($expected) {
$expected = strtolower($expected);
$matches = ($expected === $return ? true : false);
if ($matches || $do_guess == false || strlen($expected) == 32) {
return $matches;
} else {
// Guess the chunk size
preg_match("/-(\\d+)$/", $expected, $match);
$parts = $match[1];
$min_chunk = ceil($filesize / $parts /1024/1024);
$max_chunk = floor($filesize / ($parts-1) /1024/1024);
$found_match = false;
for ($i = $min_chunk; $i <= $max_chunk; $i++) {
if (calculate_aws_etag($filename, $i) === $expected) {
$found_match = true;
break;
}
}
return $found_match;
}
} else {
return $return;
}
}
}
The short answer is that you take the 128-bit binary MD5 digest of each part, concatenate them into a document, and hash that document. The algorithm presented in this answer is accurate.
Note: the multipart ETag form with the hyphen will change to the form without the hyphen if you "touch" the blob (even without modifying the content). That is, if you copy, or do an in-place copy of your completed multipart-uploaded object (aka PUT-COPY), S3 will recompute the ETag with the simple version of the algorithm, i.e. the destination object will have an ETag without the hyphen.
You've probably considered this already, but if your files are less than 5GB, and you already know their MD5s, and upload parallelization provides little to no benefit (e.g. you are streaming the upload from a slow network, or uploading from a slow disk), then you may also consider using a simple PUT instead of a multipart PUT, and pass your known Content-MD5 in your request headers -- amazon will fail the upload if they don't match. Keep in mind that you get charged for each UploadPart.
Furthermore, in some clients, passing a known MD5 for the input of a PUT operation will save the client from recomputing the MD5 during the transfer. In boto3 (python), you would use the ContentMD5 parameter of the client.put_object() method, for instance. If you omit the parameter, and you already knew the MD5, then the client would be wasting cycles computing it again before the transfer.
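For example, a sketch of such a single PUT with a precomputed MD5 (names are placeholders; note that boto3's ContentMD5 wants the base64-encoded binary digest, not the hex string):
import base64
import hashlib

import boto3

data = open('file-under-5GB.bin', 'rb').read()
content_md5 = base64.b64encode(hashlib.md5(data).digest()).decode()

boto3.client('s3').put_object(
    Bucket='my-bucket',
    Key='file-under-5GB.bin',
    Body=data,
    ContentMD5=content_md5,  # S3 rejects the upload if the body doesn't match
)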
Working algorithm implemented in Node.js (TypeScript).
/**
* Generate an S3 ETAG for multipart uploads in Node.js
* An implementation of this algorithm: https://stackoverflow.com/a/19896823/492325
 * Author: Richard Willis <willis.rh@gmail.com>
*/
import fs from 'node:fs';
import crypto, { BinaryLike } from 'node:crypto';
const defaultPartSizeInBytes = 5 * 1024 * 1024; // 5MB
function md5(contents: string | BinaryLike): string {
return crypto.createHash('md5').update(contents).digest('hex');
}
export function getS3Etag(
filePath: string,
partSizeInBytes = defaultPartSizeInBytes
): string {
const { size: fileSizeInBytes } = fs.statSync(filePath);
let parts = Math.floor(fileSizeInBytes / partSizeInBytes);
if (fileSizeInBytes % partSizeInBytes > 0) {
parts += 1;
}
const fileDescriptor = fs.openSync(filePath, 'r');
let totalMd5 = '';
for (let part = 0; part < parts; part++) {
const skipBytes = partSizeInBytes * part;
const totalBytesLeft = fileSizeInBytes - skipBytes;
const bytesToRead = Math.min(totalBytesLeft, partSizeInBytes);
const buffer = Buffer.alloc(bytesToRead);
fs.readSync(fileDescriptor, buffer, 0, bytesToRead, skipBytes);
totalMd5 += md5(buffer);
}
const combinedHash = md5(Buffer.from(totalMd5, 'hex'));
const etag = `${combinedHash}-${parts}`;
return etag;
}
I've published this to npm
npm install s3-etag
import { generateETag } from 's3-etag';
const etag = generateETag(absoluteFilePath, partSizeInBytes);
View project here: https://github.com/badsyntax/s3-etag
A version in Rust:
use crypto::digest::Digest;
use crypto::md5::Md5;
use std::fs::File;
use std::io::prelude::*;
use std::io::Result;
use std::iter::repeat;
fn calculate_etag_from_read(f: &mut dyn Read, chunk_size: usize) -> Result<String> {
let mut md5 = Md5::new();
let mut concat_md5 = Md5::new();
let mut input_buffer = vec![0u8; chunk_size];
let mut chunk_count = 0;
let mut current_md5: Vec<u8> = repeat(0).take((md5.output_bits() + 7) / 8).collect();
let md5_result = loop {
let amount_read = f.read(&mut input_buffer)?;
if amount_read > 0 {
md5.reset();
md5.input(&input_buffer[0..amount_read]);
chunk_count += 1;
md5.result(&mut current_md5);
concat_md5.input(&current_md5);
} else {
if chunk_count > 1 {
break format!("{}-{}", concat_md5.result_str(), chunk_count);
} else {
break md5.result_str();
}
}
};
Ok(md5_result)
}
fn calculate_etag(file: &String, chunk_size: usize) -> Result<String> {
let mut f = File::open(file)?;
calculate_etag_from_read(&mut f, chunk_size)
}
See a repo with a simple implementation: https://github.com/bn3t/calculate-etag/tree/master
Regarding chunk size, I noticed that it seems to depend on the number of parts.
The maximum number of parts is 10,000, per the AWS documentation.
So, starting from the default of 8MB and knowing the file size, the chunk size and number of parts can be calculated as follows:
import math
import os

chunk_size = 8 * 1024 * 1024        # start from the 8MB default
flsz = os.path.getsize(fl)          # fl is the path to the local file
while flsz / chunk_size > 10000:    # stay under the 10,000-part limit
    chunk_size *= 2
parts = math.ceil(flsz / chunk_size)
The part count has to be rounded up.
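A quick worked example of that loop, assuming a 100 GiB file: the 8MB default would give 12,800 parts, which is over the limit, so the chunk size doubles once to 16MB and the file ends up as 6,400 parts:
import math

flsz = 100 * 1024**3                # pretend file size: 100 GiB
chunk_size = 8 * 1024 * 1024
while flsz / chunk_size > 10000:
    chunk_size *= 2
print(chunk_size // (1024 * 1024), math.ceil(flsz / chunk_size))  # 16 6400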
Extending Timothy Gonzalez's answer:
Identical files will have different ETags when using multipart upload.
It's easy to test this with WinSCP, because it uses multipart upload.
When I upload multiple identical copies of the same file to S3 via WinSCP, each one gets a different ETag. When I download them and calculate the md5, they are still identical.
So, from what I tested, different ETags don't mean that the files are different.
I see no alternative way to obtain any hash for S3 files without downloading them first.
This is true for multipart uploads. For non-multipart uploads it should still be possible to calculate the ETag locally.
I have a solution for iOS and macOS without using external helpers like dd and xxd. I have just found it, so I report it as it is, planning to improve it at a later stage. For the moment, it relies on both Objective-C and Swift code. First of all, create this helper class in Objective-C:
AWS3MD5Hash.h
#import <Foundation/Foundation.h>
NS_ASSUME_NONNULL_BEGIN
@interface AWS3MD5Hash : NSObject
- (NSData *)dataFromFile:(FILE *)theFile startingOnByte:(UInt64)startByte length:(UInt64)length filePath:(NSString *)path singlePartSize:(NSUInteger)partSizeInMb;
- (NSData *)dataFromBigData:(NSData *)theData startingOnByte:(UInt64)startByte length:(UInt64)length;
- (NSData *)dataFromHexString:(NSString *)sourceString;
@end
NS_ASSUME_NONNULL_END
AWS3MD5Hash.m
#import "AWS3MD5Hash.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define SIZE 256
@implementation AWS3MD5Hash
- (NSData *)dataFromFile:(FILE *)theFile startingOnByte:(UInt64)startByte length:(UInt64)length filePath:(NSString *)path singlePartSize:(NSUInteger)partSizeInMb {
char *buffer = malloc(length);
NSURL *fileURL = [NSURL fileURLWithPath:path];
NSNumber *fileSizeValue = nil;
NSError *fileSizeError = nil;
[fileURL getResourceValue:&fileSizeValue
forKey:NSURLFileSizeKey
error:&fileSizeError];
NSInteger __unused result = fseek(theFile,startByte,SEEK_SET);
if (result != 0) {
free(buffer);
return nil;
}
NSInteger result2 = fread(buffer, length, 1, theFile);
NSUInteger difference = fileSizeValue.integerValue - startByte;
NSData *toReturn;
if (result2 == 0) {
toReturn = [NSData dataWithBytes:buffer length:difference];
} else {
toReturn = [NSData dataWithBytes:buffer length:result2 * length];
}
free(buffer);
return toReturn;
}
- (NSData *)dataFromBigData:(NSData *)theData startingOnByte: (UInt64)startByte length:(UInt64)length {
NSUInteger fileSizeValue = theData.length;
NSData *subData;
if (startByte + length > fileSizeValue) {
subData = [theData subdataWithRange:NSMakeRange(startByte, fileSizeValue - startByte)];
} else {
subData = [theData subdataWithRange:NSMakeRange(startByte, length)];
}
return subData;
}
- (NSData *)dataFromHexString:(NSString *)string {
string = [string lowercaseString];
NSMutableData *data= [NSMutableData new];
unsigned char whole_byte;
char byte_chars[3] = {'\0','\0','\0'};
NSInteger i = 0;
NSInteger length = string.length;
while (i < length-1) {
char c = [string characterAtIndex:i++];
if (c < '0' || (c > '9' && c < 'a') || c > 'f')
continue;
byte_chars[0] = c;
byte_chars[1] = [string characterAtIndex:i++];
whole_byte = strtol(byte_chars, NULL, 16);
[data appendBytes:&whole_byte length:1];
}
return data;
}
@end
Now create a plain swift file:
AWS Extensions.swift
import UIKit
import CommonCrypto
extension URL {
func calculateAWSS3MD5Hash(_ numberOfParts: UInt64) -> String? {
do {
var fileSize: UInt64!
var calculatedPartSize: UInt64!
let attr:NSDictionary? = try FileManager.default.attributesOfItem(atPath: self.path) as NSDictionary
if let _attr = attr {
fileSize = _attr.fileSize();
if numberOfParts != 0 {
let partSize = Double(fileSize / numberOfParts)
var partSizeInMegabytes = Double(partSize / (1024.0 * 1024.0))
partSizeInMegabytes = ceil(partSizeInMegabytes)
calculatedPartSize = UInt64(partSizeInMegabytes)
if calculatedPartSize % 2 != 0 {
calculatedPartSize += 1
}
if numberOfParts == 2 || numberOfParts == 3 { // Very important when there are 2 or 3 parts, in the majority of times
// the calculatedPartSize is already 8. In the remaining cases we force it.
calculatedPartSize = 8
}
if mainLogToggling {
print("The calculated part size is \(calculatedPartSize!) Megabytes")
}
}
}
if numberOfParts == 0 {
let string = self.memoryFriendlyMd5Hash()
return string
}
let hasher = AWS3MD5Hash.init()
let file = fopen(self.path, "r")
defer { let result = fclose(file)}
var index: UInt64 = 0
var bigString: String! = ""
var data: Data!
while autoreleasepool(invoking: {
if index == (numberOfParts-1) {
if mainLogToggling {
//print("Siamo all'ultima linea.")
}
}
data = hasher.data(from: file!, startingOnByte: index * calculatedPartSize * 1024 * 1024, length: calculatedPartSize * 1024 * 1024, filePath: self.path, singlePartSize: UInt(calculatedPartSize))
bigString = bigString + MD5.get(data: data) + "\n"
index += 1
if index == numberOfParts {
return false
}
return true
}) {}
let final = MD5.get(data :hasher.data(fromHexString: bigString)) + "-\(numberOfParts)"
return final
} catch {
}
return nil
}
func memoryFriendlyMd5Hash() -> String? {
let bufferSize = 1024 * 1024
do {
// Open file for reading:
let file = try FileHandle(forReadingFrom: self)
defer {
file.closeFile()
}
// Create and initialize MD5 context:
var context = CC_MD5_CTX()
CC_MD5_Init(&context)
// Read up to `bufferSize` bytes, until EOF is reached, and update MD5 context:
while autoreleasepool(invoking: {
let data = file.readData(ofLength: bufferSize)
if data.count > 0 {
data.withUnsafeBytes {
_ = CC_MD5_Update(&context, $0, numericCast(data.count))
}
return true // Continue
} else {
return false // End of file
}
}) { }
// Compute the MD5 digest:
var digest = Data(count: Int(CC_MD5_DIGEST_LENGTH))
digest.withUnsafeMutableBytes {
_ = CC_MD5_Final($0, &context)
}
let hexDigest = digest.map { String(format: "%02hhx", $0) }.joined()
return hexDigest
} catch {
print("Cannot open file:", error.localizedDescription)
return nil
}
}
} // closes "extension URL"
struct MD5 {
static func get(data: Data) -> String {
var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
let _ = data.withUnsafeBytes { bytes in
CC_MD5(bytes, CC_LONG(data.count), &digest)
}
var digestHex = ""
for index in 0..<Int(CC_MD5_DIGEST_LENGTH) {
digestHex += String(format: "%02x", digest[index])
}
return digestHex
}
// The following is a memory friendly version
static func get2(data: Data) -> String {
var currentIndex = 0
let bufferSize = 1024 * 1024
//var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
// Create and initialize MD5 context:
var context = CC_MD5_CTX()
CC_MD5_Init(&context)
while autoreleasepool(invoking: {
var subData: Data!
if (currentIndex + bufferSize) < data.count {
subData = data.subdata(in: Range.init(NSMakeRange(currentIndex, bufferSize))!)
currentIndex = currentIndex + bufferSize
} else {
subData = data.subdata(in: Range.init(NSMakeRange(currentIndex, data.count - currentIndex))!)
currentIndex = currentIndex + (data.count - currentIndex)
}
if subData.count > 0 {
subData.withUnsafeBytes {
_ = CC_MD5_Update(&context, $0, numericCast(subData.count))
}
return true
} else {
return false
}
}) { }
// Compute the MD5 digest:
var digest = Data(count: Int(CC_MD5_DIGEST_LENGTH))
digest.withUnsafeMutableBytes {
_ = CC_MD5_Final($0, &context)
}
var digestHex = ""
for index in 0..<Int(CC_MD5_DIGEST_LENGTH) {
digestHex += String(format: "%02x", digest[index])
}
return digestHex
}
}
Now add:
#import "AWS3MD5Hash.h"
to your Objective-C Bridging header. You should be ok with this setup.
Example usage
To test this setup, you could call the following method inside the object that is in charge of handling the AWS connections:
func getMd5HashForFile() {
let credentialProvider = AWSCognitoCredentialsProvider(regionType: AWSRegionType.USEast2, identityPoolId: "<INSERT_POOL_ID>")
let configuration = AWSServiceConfiguration(region: AWSRegionType.APSoutheast2, credentialsProvider: credentialProvider)
configuration?.timeoutIntervalForRequest = 3.0
configuration?.timeoutIntervalForResource = 3.0
AWSServiceManager.default().defaultServiceConfiguration = configuration
AWSS3.register(with: configuration!, forKey: "defaultKey")
let s3 = AWSS3.s3(forKey: "defaultKey")
let headObjectRequest = AWSS3HeadObjectRequest()!
headObjectRequest.bucket = "<NAME_OF_YOUR_BUCKET>"
headObjectRequest.key = self.latestMapOnServer.key
let _: AWSTask? = s3.headObject(headObjectRequest).continueOnSuccessWith { (awstask) -> Any? in
let headObjectOutput: AWSS3HeadObjectOutput? = awstask.result
var ETag = headObjectOutput?.eTag!
// Here you should parse the returned Etag and extract the number of parts to provide to the helper function. Etags end with a "-" followed by the number of parts. If you don't see this format, then pass 0 as the number of parts.
ETag = ETag!.replacingOccurrences(of: "\"", with: "")
print("headObjectOutput.ETag \(ETag!)")
let mapOnDiskUrl = self.getMapsDirectory().appendingPathComponent(self.latestMapOnDisk!)
let hash = mapOnDiskUrl.calculateAWSS3MD5Hash(<Take the number of parts from the ETag returned by the server>)
if hash == ETag {
print("They are the same.")
}
print ("\(hash!)")
return nil
}
}
If the ETag returned by the server does not contain a "-", just pass 0 to calculateAWSS3MD5Hash. Please comment if you encounter any problems. I am working on a Swift-only solution and will update this answer as soon as I finish. Thanks.
I just saw that the AWS S3 Console 'upload' uses an unusual part (chunk) size of 17,179,870 - at least for larger files.
Using that part size gave me the correct ETag hash using the methods described earlier. Thanks to @TheStoryCoder for the PHP version.
Thanks to @hans for his idea to use head-object to see the actual sizes of each part.
I used the AWS S3 Console (on Nov28 2020) to upload about 50 files ranging in size from 190MB to 2.3GB and all of them had the same part size of 17,179,870.
I liked Emerson's leading answer above - especially the xxd part - but I was too lazy to use dd so I went with split, guessing at an 8M chunk size because I uploaded with aws s3 cp:
$ split -b 8M large.iso XXX
$ md5sum XXX* > checksums.txt
$ sed -i 's/ .*$//' checksums.txt
$ xxd -r -p checksums.txt | md5sum
99a090df013d375783f0f0be89288529 -
$ wc -l checksums.txt
80 checksums.txt
$
It was immediately obvious that both parts of my S3 etag matched my file's calculated etag.
UPDATE:
This has been working nicely:
$ ll large.iso
-rw-rw-r-- 1 user user 669134848 Apr 12 2021 large.iso
$
$ etag large.iso
99a090df013d375783f0f0be89288529-80
$
$ type etag
etag is a function
etag ()
{
split -b 8M --filter=md5sum $1 | cut -d' ' -f1 | pee "xxd -r -p | md5sum | cut -d' ' -f1" "wc -l" | paste -d'-' - -
}
$
All the other answers assume a standard and regular part size. But that assumption may not be true. Across the console and various SDKs there are different defaults. And the low-level API does allow a lot of variety.
Complications:
S3 multi-part uploads can have parts of any size (within a min and max for non-last parts).
Even the non-last parts can be different sizes.
When you upload they don't have to be consecutive part numbers.
If you do a multi-part upload with only 1 part, the etag is the more complicated version, not the simple MD5
etags tend to be wrapped in double-quotes. I don't know why. But that's just a thing that might trip you up.
So we need to find out how many parts there are, and how big they are.
You cannot reliably get the part count from boto3's Object.parts_count attribute. I don't know if the same is true of other SDKs.
The get_object_attributes API documentation claims that it returns a list of parts and sizes. But when I tested it, those fields were missing. Even for multi-part uploads that were not completed.
Even if you assume equal part sizes (except the last part), you cannot deduce part size from content length and part count. e.g. if a 90MB file has 3 parts, was that 30MBx3, or 40MB+40MB+10MB?
Let's assume that you have a local file and you want to check whether it matches the content of the object in S3.
(And assume that you've already checked whether the lengths differ, because that's a faster check.)
Here's a python3 script to do that. (I chose python just because that's what I'm familiar with.)
We use head_object to get the e-tag. With the e-tag we can deduce whether it was a single-part upload or multi-part, and how many parts.
We use head_object passing in PartNumber, calling that for each part, to get the length of each part. You could use multiprocessing to speed that up. (Noting that boto3's client should not be passed between processes.)
import boto3
from hashlib import md5

def content_matches(local_path, bucket, key) -> bool:
    client = boto3.client('s3')

    resp = client.head_object(Bucket=bucket, Key=key)
    remote_e_tag = resp['ETag']
    total_length = resp['ContentLength']
    if '-' not in remote_e_tag:
        # it was a single-part upload
        # you could read from the file in chunks to avoid loading the whole thing into memory
        # the chunks would not have to match any SDK standard. It can be whatever you want.
        # (The MD5 library will act as if you hashed it in one go)
        with open(local_path, 'rb') as f:
            local_etag = f'"{md5(f.read()).hexdigest()}"'
        return local_etag == remote_e_tag
    else:
        # multi-part upload
        # to find the number of parts, get it from the e-tag
        # e.g. 123-56 has 56 parts
        num_parts = int(remote_e_tag.strip('"').split('-')[-1])
        print(f"Assuming {num_parts=} from {remote_e_tag=}")

        md5s = []

        with open(local_path, 'rb') as f:
            sz_read = 0
            for part_num in range(1, num_parts + 1):
                resp = client.head_object(Bucket=bucket, Key=key, PartNumber=part_num)
                sz_read += resp['ContentLength']
                local_data_part = f.read(resp['ContentLength'])
                assert len(local_data_part) == resp['ContentLength']  # sanity check
                md5s.append(md5(local_data_part))

        assert sz_read == total_length, "Sum of part sizes doesn't equal total file size"
        digests = b''.join(m.digest() for m in md5s)
        digests_md5 = md5(digests)
        local_etag = f'"{digests_md5.hexdigest()}-{len(md5s)}"'
        return remote_e_tag == local_etag
And a script to test it with all those edge cases:
import boto3
from pprint import pprint
from hashlib import md5
from main import content_matches

MB = 2 ** 20

bucket = 'mybucket'
key = 'test-multi-part-upload'
local_path = 'test-data'

# first upload the object
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
mpu = obj.initiate_multipart_upload()
parts = []
part_sizes = [6 * MB, 5 * MB, 5]  # deliberately non-standard and not consistent
upload_part_nums = [1, 3, 8]  # test non-consecutive part numbers for upload

with open(local_path, 'wb') as fw:
    with open('/dev/random', 'rb') as fr:
        for (part_num, part_size) in zip(upload_part_nums, part_sizes):
            part = mpu.Part(part_num)
            data = fr.read(part_size)
            print(f"Uploading part {part_num}")
            resp = part.upload(Body=data)
            parts.append({
                'ETag': resp['ETag'],
                'PartNumber': part_num
            })
            fw.write(data)

resp = mpu.complete(MultipartUpload={
    'Parts': parts
})
obj.reload()

assert content_matches(local_path, bucket, key)
"#wim Any idea how to calculate the ETag when SSE is enabled?"
in my testing, multipart+SEE-C, the Etag is valid.
can be calculated from the individual Etag returned for each part.
and this is easy to prove.
let's say we have a multipart upload with SEE-C, with 10 parts.
take the 10 Etags, put them in a file, and run "xxd -r -p checksums.txt | md5sum", the calculdated value with match the value returned from aws
etag parts
-------------------------------
1330e1275b556ab6702bca9438f62c15 -
ae55d3ddf52e33d45140a5be6dacb925 -
16dc956e05962b84ad9cd74a05e86797 -
64be66992a5110c4b1151a8249258a1a -
4926df0200fe24499524176d6a85e347 -
2b6655c3506481eb1fae6b2e2e7c4b8b -
a02e9dbd49039eaf4d6de1fddc5e1a30 -
afb7bc1f6e0c1f23671cb7116f3b0c63 -
dddf3a1ab192f26bb483a3e2778bab13 -
adb8b2b761640418856853f3810ac45a -
-------------------------------
etag_from_aws = c68db040f8a36c164259bcca40c36410-10
etag_calculated = c68db040f8a36c164259bcca40c36410-10
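For anyone who prefers to do that check in code rather than with xxd, here is the same combination step in Python, using the ten per-part ETags listed above (it should reproduce the etag_calculated value):
import hashlib

part_etags = [
    '1330e1275b556ab6702bca9438f62c15', 'ae55d3ddf52e33d45140a5be6dacb925',
    '16dc956e05962b84ad9cd74a05e86797', '64be66992a5110c4b1151a8249258a1a',
    '4926df0200fe24499524176d6a85e347', '2b6655c3506481eb1fae6b2e2e7c4b8b',
    'a02e9dbd49039eaf4d6de1fddc5e1a30', 'afb7bc1f6e0c1f23671cb7116f3b0c63',
    'dddf3a1ab192f26bb483a3e2778bab13', 'adb8b2b761640418856853f3810ac45a',
]
binary = b''.join(bytes.fromhex(e) for e in part_etags)
print('{}-{}'.format(hashlib.md5(binary).hexdigest(), len(part_etags)))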
No.
So far there is no solution for matching a normal-file ETag, a multipart-file ETag, and the MD5 of a local file.