I am having a problem manipulating an image with imagemagick and then uploading it to S3. The resulting object has a different (bigger) size and appears to be corrupt. If I add an intermediate step (save the output to a local tmp file first, read it back, then upload that), everything works fine. This is the code that does NOT work:
im.resize({
  srcData: imageObject.Body,
  width: variant.width,
  height: variant.height,
  customArgs: ['-auto-orient']
}, function(err, stdout, stderr) {
  if (err) {
    // The resize failed; bail out
    log.error('Failed calling imageMagick, bail out', err);
    callback(err);
    return;
  }
  var fileName = cfg.aws.s3.uploadDir +
                 photo.imageId + '/' +
                 variant.width + 'x' + variant.height + '.jpg';
  log.info('Storing image at S3 ' + fileName);
  //fs.writeFileSync('/tmp/xxx.jpg', stdout, 'binary');
  //stdout = fs.readFileSync('/tmp/xxx.jpg');
  var x = new Buffer(stdout);
  console.log(x);
  s3.putObject(
    {
      Bucket: cfg.aws.s3.bucket,
      Key: fileName,
      Body: x,
      ContentType: 'image/jpeg',
      ACL: 'public-read'
    },
    function(err, data) {
      if (err) {
        // Failed saving to S3
        log.error('Failed saving to S3', err);
      }
      callback(err);
    }
  );
});
Uncomment the fs.writeFileSync and fs.readFileSync lines and it works properly.
Here is the output of console.log(x) in the two cases.

BAD:

<Buffer c3 bf c3 98 c3 bf c3 a0 00 10 4a 46 49 46 00 01 01 01 00 01 00 01 00 00 c3 bf c3 9b 00 43 00 06 04 05 06 05 04 06 06 05 06 07 07 06 08 0a 10 0a 0a 09 09 ...>

GOOD:

<Buffer ff d8 ff e0 00 10 4a 46 49 46 00 01 01 01 00 01 00 01 00 00 ff db 00 43 00 06 04 05 06 05 04 06 06 05 06 07 07 06 08 0a 10 0a 0a 09 09 0a 14 0e 0f 0c 10 ...>
As you can see, the good one is a proper JPEG. The bad one contains similar sequences (e.g. 4a 46 49 46 = "JFIF"), but some bytes are off and there are shifts; the whole file is about 20% bigger in the bad case.
Something to do with encoding? I've tried several things but I am lost at this point.
Thanks!
Update #1: Apparently it is related to UTF encoding, but I still don't completely understand what happens in this case. Apparently c3 bf c3 98 c3 bf c3 a0 00 10 4a 46 49 46 00 01 is the UTF encoding of:
U+00FF LATIN SMALL LETTER Y WITH DIAERESIS character (ÿ)
U+00D8 LATIN CAPITAL LETTER O WITH STROKE character (Ø)
U+00FF LATIN SMALL LETTER Y WITH DIAERESIS character (ÿ)
U+00E0 LATIN SMALL LETTER A WITH GRAVE character (à)
U+0000 <control> character
U+0010 <control> character
U+004A LATIN CAPITAL LETTER J character
U+0046 LATIN CAPITAL LETTER F character
U+0049 LATIN CAPITAL LETTER I character
U+0046 LATIN CAPITAL LETTER F character
U+0000 <control> character
U+0001 <control> character
whereas FF D8 FF .. is exactly what I was expecting.
I know how to make the code work without temporary files: replace var x = new Buffer(stdout); with var x = new Buffer(stdout, 'binary').
However, I still can't say I completely understand what happened here. Shouldn't this work without the Buffer() wrapping? Which component has the problem: imagemagick or Buffer?
Not sure if the OP solved the problem, but I too had a similar problem where the images uploaded to S3 were bigger than the original files. Could you tell me what stdout contains (i.e., is it a raw byte stream?)
I solved the problem by setting my Body: parameter to use a base64-encoded buffer instead of the binary buffer. I'm not entirely sure why this solved the problem, but I suspect it has something to do with this:
source: MDN