Google Cloud Storage - Error during upload: gcs-resumable-upload.json renaming operation not permitted - gcloud-node

I'm simply trying to follow this tutorial on how to upload files to GCS with Node and Express. But the following error keeps causing my app to crash. Usually, I am able to upload one file without a problem on the first run, but I get this error after running a few requests, even with different files. When I try to upload, say, 5 files at a time, this error crashes my app even on the first run. I see the process is trying to rename a file in the .config folder. Is this normal behavior? If so, is there a workaround?
Windows: v10.0.10586
Node: v4.3.1
Express: v4.13.1
Error: EPERM: operation not permitted, rename 'C:\Users\James Wang.config\configstore\gcs-resumable-upload.json.2873606827' -> 'C:\Users\James Wang.config\configstore\gcs-resumable-upload.json'
at Error (native)
at Object.fs.renameSync (fs.js:681:18)
at Function.writeFileSync [as sync]
at Object.create.all.set (C:\Users\James Wang\gi-cms-backend\node_modules\configstore\index.js:62:21)
at Object.Configstore.set (C:\Users\James Wang\gi-cms-backend\node_modules\configstore\index.js:93:11)
at Upload.set (C:\Users\James Wang\gi-cms-backend\node_modules\gcs-resumable-upload\index.js:264:20)
at C:\Users\James Wang\gi-cms-backend\node_modules\gcs-resumable-upload\index.js:60:14
at C:\Users\James Wang\gi-cms-backend\node_modules\gcs-resumable-upload\index.js:103:5
at Request._callback (C:\Users\James Wang\gi-cms-backend\node_modules\gcs-resumable-upload\index.js:230:7)
at Request.self.callback (C:\Users\James Wang\gi-cms-backend\node_modules\request\request.js:199:22)
at emitTwo (events.js:87:13)
at Request.emit (events.js:172:7)
at Request.<anonymous> (C:\Users\James Wang\gi-cms-backend\node_modules\request\request.js:1036:10)
at emitOne (events.js:82:20)
at Request.emit (events.js:169:7)
at IncomingMessage.<anonymous> (C:\Users\James Wang\gi-cms-backend\node_modules\request\request.js:963:12)
[nodemon] app crashed - waiting for file changes before starting...
UPDATE:
After setting {resumable: false} as suggested by #stephenplusplus in this post, I am no longer getting the "EPERM: operation not permitted" error. But I've started running into the { [ERROR:ETIMEDOUT] code: 'ETIMEDOUT', connection: false } error while trying to upload multiple files at a time when the largest file is greater than 1.5 MB; the other files get uploaded successfully.
For more information: I am able to upload files one by one when the files are no greater than ~2.5 MB. If I try to upload 3 files at a time, I can only do so with files no greater than ~1.5 MB.
Is the "operation not permitted" issue described in the question a Windows-specific thing, and does the timeout issue happen only after I set resumable = false?
I'm using Express and Multer with Node.
This is the code I'm using now:
// Express middleware that handles a set of files. req.files is an object of
// files received from multer's .fields([{ name, maxCount }]) function. This
// middleware handles the upload of the files asynchronously.
function sendFilesToGCS(req, res, next) {
  if (!req.files) { return next(); }

  function stream(file, key, folder) {
    var gcsName = Date.now() + file.originalname;
    var gcsFile = bucket.file(gcsName);
    var writeStream = gcsFile.createWriteStream({ resumable: false });
    console.log(key);
    console.log('Start uploading: ' + file.originalname);
    writeStream.on('error', function(err) {
      console.log(err);
      res.status(501).send(err);
    });
    writeStream.on('finish', function() {
      folder.incrementFinishCounter();
      req.files[key][0].cloudStorageObject = gcsName;
      req.files[key][0].cloudStoragePublicUrl = getPublicUrl(gcsName);
      console.log('Finish uploading: ' + req.files[key][0].cloudStoragePublicUrl);
      folder.beginUploadNext();
    });
    writeStream.end(file.buffer);
  }

  var Folder = function(files) {
    var self = this;
    self.files = files;
    self.reqFilesKeys = Object.keys(files); // reqFilesKeys is an array of keys parsed from req.files
    self.nextInQueue = 0;   // Index of the next file to upload; must stay below reqFilesKeys.length
    self.finishCounter = 0; // Number of files uploaded so far; must stay below reqFilesKeys.length
    console.log(this.reqFilesKeys.length + ' files to upload');
  };

  // This function initiates the upload process. It is also called in the
  // on-finish listener of a file's write stream, which starts uploading the
  // next file in the queue.
  Folder.prototype.beginUploadNext = function() {
    // If there are still files left to upload,
    if (this.finishCounter < this.reqFilesKeys.length) {
      // and if there are still files left in the queue,
      if (this.nextInQueue < this.reqFilesKeys.length) {
        // upload the file
        var fileToUpload = this.files[this.reqFilesKeys[this.nextInQueue]][0];
        stream(fileToUpload, this.reqFilesKeys[this.nextInQueue], this);
        // Increment the nextInQueue counter and get the next one ready
        this.nextInQueue++;
      }
    } else {
      console.log('Finish all upload!!!!!!!!!!!!!!!!!!!!!!');
      next();
    }
  };

  Folder.prototype.incrementFinishCounter = function() {
    this.finishCounter++;
    console.log('Finished ' + this.finishCounter + ' files');
  };

  var folder = new Folder(req.files);

  // Begin upload with 3 streams
  /*for (var i = 0; i < 3; i++) {
    folder.beginUploadNext();
  }*/

  // Upload files one by one
  folder.beginUploadNext();
}
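For reference, a minimal sketch of how this middleware might be mounted (assumptions: app is the Express app, memoryStorage() is used because the code reads file.buffer, and the route path and field names are placeholders):

var multer = require('multer');
var upload = multer({ storage: multer.memoryStorage() }); // file.buffer requires memory storage

app.post('/upload',
  upload.fields([{ name: 'doc1', maxCount: 1 }, { name: 'doc2', maxCount: 1 }]), // placeholder fields
  sendFilesToGCS,
  function(req, res) {
    // By this point each req.files[key][0] carries cloudStoragePublicUrl
    res.json(Object.keys(req.files).map(function(key) {
      return req.files[key][0].cloudStoragePublicUrl;
    }));
  });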

I had the same issue with Bower. Run the following command: bower cache clean --allow-root
If this does not solve the problem, try again after disabling your antivirus software.

Related

React-Native AWS-SDK : Can't upload image to S3 buckets Error: "SignatureDoesNotMatch"

I'm using the AWS-SDK with react-native to upload an image to an S3 bucket.
First of all, I want to say that my access and connectivity work well: uploading plain text works, and listing objects and buckets works too.
Here is my code:
async function handleImage(capturedImage) {
  setImage(capturedImage);
  setScreenState(ScreenStates.LOADING);
  try {
    const result = await classifyImage(capturedImage);
    console.log(result.tensor_);
    // {dtype:"float32",shape:[…]}
    // dtype:"float32"
    // shape:[1,3,273,224]
    const blob_jpeg = new Blob([result.tensor_], {type: "image/jpeg"});
    console.log(typeof blob_jpeg._data);
    // object
    console.log(blob_jpeg._data);
    // {blobId:"e7a667ad-4363-4a2e-9850-8695f103e9e0",offset:0,size:1489546,type:"image/jpeg",__collector:{}}
    try {
      const keyName = 'image.jpeg';
      const putCommand = new PutObjectCommand({
        Bucket: "mybucket",
        ContentType: "image/jpeg",
        Key: "myimage",
        Body: blob_jpeg._data,
      });
      await s3.send(putCommand);
      console.log('Successfully uploaded data to ' + bucketName + '/' + keyName);
    } catch (e) {
      console.log(e, e);
    }
  } catch (e) {
    console.log(e);
  }
}
My error:
Error: "The request signature we calculated does not match the signature you provided. Check your key and signing method." in SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. << at construct (native) << at apply (native) << at i (#aws-sdk/client-s3.js:3:461197)
Any ideas about how I can solve this problem and successfully upload my image?
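One thing worth checking (an assumption, not a confirmed fix): in React Native, blob_jpeg._data is the blob's internal descriptor ({blobId, offset, size, ...}), not the image bytes, so the payload that gets signed may not match what is actually sent. A minimal sketch that passes raw bytes as the Body instead; the region below is an assumption, and the bucket and key are placeholders from the question:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' }); // region is an assumption

// bytes: a Uint8Array containing the encoded JPEG data
async function uploadJpegBytes(bytes) {
  const putCommand = new PutObjectCommand({
    Bucket: 'mybucket',  // placeholder from the question
    Key: 'myimage',      // placeholder from the question
    ContentType: 'image/jpeg',
    Body: bytes,         // raw bytes, so the signed payload matches the wire payload
  });
  await s3.send(putCommand);
}

Separately, note that result.tensor_ holds raw float32 tensor data, so even a successful upload would not produce a decodable JPEG without encoding the tensor to an image first.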

Mapbox - Failed to resolve tileset descriptors: Loading request canceled

I have the following code for downloading map regions. This function is called on multiple regions asynchronously. In debug/development mode I rarely get this error, but in the production release app I get it sometimes, and the whole offline user experience is ruined. Notice that the same function is used to download regions for both users and fields, yet it fails mostly for fields and not for users.
const downloadPack = async (packName, bounds) => {
  console.log("bounds", packName, bounds);
  // Delete old pack
  await MapboxGL.offlineManager.deletePack(packName);
  const progressListener = (offlineRegion, status) => {
    if (status.state === "complete") {
      console.log(
        `Pack: ${packName}`,
        formatBytes(status.completedResourceSize),
        status.percentage,
        status.state,
      );
    }
  };
  const errorListener = (offlineRegion, err) => {
    console.log("pack error", offlineRegion, err);
  };
  const packConfig = {
    name: packName,
    styleURL: styleURL,
    minZoom: MAP_CONFIG.MIN_ZOOM_ALLOWED,
    maxZoom: MAP_CONFIG.MAX_ZOOM_ALLOWED,
    bounds,
  };
  await MapboxGL.offlineManager.createPack(
    packConfig,
    progressListener,
    errorListener,
  );
  // console.log("packConfig", packConfig);
};
I am getting the following errors for the following regions:
Regions:
LOG bounds field-region-74 [[67.05473691050432, 24.859774187835313], [67.05879400202377, 24.864996599470516]]
LOG bounds field-region-73 [[67.05603555964123, 24.863974062407152], [67.06011966003771, 24.86902696726922]]
LOG bounds field-region-72 [[67.05860259878298, 24.868365914318616], [67.06177770104551, 24.871585472853567]]
LOG bounds field-region-71 [[67.0527601437771, 24.865385527088804], [67.06248793623786, 24.878936670953124]]
LOG bounds field-region-70 [[67.0575708775574, 24.859917106865936], [67.06827409941002, 24.868984873055723]]
Errors:
LOG pack error {"pack":{"metadata":"{\n \"name\" : \"field-region-74\"\n}","bounds":"{\"type\":\"FeatureCollection\",\"features\":[{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.05473691050432,24.859774187835313]}},{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.05879400202377,24.864996599470516]}}]}"},"_metadata":null} {"message":"Failed to resolve tileset descriptors: Loading request canceled","name":"field-region-74"}
LOG pack error {"pack":{"metadata":"{\n \"name\" : \"field-region-73\"\n}","bounds":"{\"type\":\"FeatureCollection\",\"features\":[{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.05603555964123,24.863974062407152]}},{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.06011966003771,24.86902696726922]}}]}"},"_metadata":null} {"name":"field-region-73","message":"Failed to resolve tileset descriptors: Loading request canceled"}
LOG pack error {"pack":{"metadata":"{\n \"name\" : \"field-region-72\"\n}","bounds":"{\"type\":\"FeatureCollection\",\"features\":[{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.05860259878298,24.868365914318616]}},{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.06177770104551,24.871585472853567]}}]}"},"_metadata":null} {"name":"field-region-72","message":"Failed to resolve tileset descriptors: Loading request canceled"}
LOG pack error {"pack":{"metadata":"{\n \"name\" : \"field-region-71\"\n}","bounds":"{\"type\":\"FeatureCollection\",\"features\":[{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.0527601437771,24.865385527088804]}},{\"type\":\"Feature\",\"properties\":{},\"geometry\":{\"type\":\"Point\",\"coordinates\":[67.06248793623786,24.878936670953124]}}]}"},"_metadata":null} {"name":"field-region-71","message":"Failed to resolve tileset descriptors: Loading request canceled"}
LOG Pack: field-region-70 2.18 MB 100 complete
Notice that the last pack is downloaded successfully. I even tried awaiting the function to make the calls sequential, but I still get the error.
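For what it's worth, a mitigation sketch (an assumption suggested by the "Loading request canceled" wording, not a confirmed fix): create the packs strictly one after another instead of firing all the calls at once, so a later deletePack/createPack cannot cancel requests still in flight for an earlier region. Here regions is a hypothetical array of { packName, bounds } entries:

const downloadAllPacks = async (regions) => {
  for (const { packName, bounds } of regions) {
    try {
      // Reuse the existing downloadPack from above, one region at a time
      await downloadPack(packName, bounds);
    } catch (err) {
      console.log("pack failed", packName, err);
    }
  }
};

Note that createPack resolving does not necessarily mean the tiles have finished downloading; completion is reported through the progress listener, so fully serializing the downloads would mean waiting for the "complete" state before starting the next pack.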

Cypress AWS S3 List/Upload/Download Objects

I am trying to list objects and, if this works, later download/upload files to AWS S3. The code below throws an error. What am I doing incorrectly that this doesn't work? I've passed the accessKeyId and accessSecretKey in all the possible ways below. On Mac I have a config and a credentials file; on Windows I have just one AWS credentials file, and I also set this on my Windows machine:
setx AWS_SDK_LOAD_CONFIG=1
CODE
const AWS = require('aws-sdk');

function listS3Objects(file, name, type) {
  const s3bucket = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    accessSecretKey: process.env.AWS_SECRET_ACCESS_KEY,
    // accessKeyId: 'my actual key in credentials file', //aws_access_key_id
    // accessSecretKey: 'my actual secret key in credentials file', //aws_secret_access_key
    region: "ap-southeast-1"
  });
  const params = {
    Bucket: 'testbucketName',
  };
  s3bucket.listObjects(params, (err, data) => {
    if (err) { throw err; }
    /* eslint-disable no-console */
    console.log('Success!');
    console.log(data);
    return data;
    /* eslint-enable no-console */
  });
}

const objs = listS3Objects();

//Test AWS Credentials
it('Tests', () => {
  cy.log(objs);
});
ERROR
The following error originated from your test code, not from Cypress.
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure.
node_modules/aws-sdk/lib/config.js:400:1
398 |
399 | function credError(msg, err) {
400 | return new AWS.util.error(err || new Error(), {
| ^
401 | code: 'CredentialsError',
402 | message: msg,
403 | name: 'CredentialsError'
I'd troubleshoot it this way:
Can you run the script to upload or download separately? If not, then the problem is with the credentials.
If the credentials are fine and the script works on its own, can it be imported into your spec and run? Let the script resolve and return a promise, then use the return value in your spec.
Refer to this blog post.
Another option you could consider is cy.exec("aws command goes here").
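A sketch of the promise-based approach suggested above (bucket name and region are copied from the question). Note also that in the AWS SDK v2 constructor the option is spelled secretAccessKey; accessSecretKey, as written in the question, is not a recognized option, which by itself can produce a "Missing credentials in config" error:

const AWS = require('aws-sdk');

function listS3Objects() {
  const s3bucket = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, // correct v2 option name
    region: 'ap-southeast-1'
  });
  return new Promise((resolve, reject) => {
    s3bucket.listObjects({ Bucket: 'testbucketName' }, (err, data) => {
      if (err) return reject(err);
      resolve(data);
    });
  });
}

it('lists objects in the bucket', () => {
  // cy.wrap() waits for the promise to resolve before the test continues
  cy.wrap(listS3Objects()).then((data) => {
    cy.log('Found ' + data.Contents.length + ' objects');
  });
});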

s3Zip getting error - cannot read property of null

I have an Express.js back end and I'm trying to download/stream PDFs from S3 into a zip file. This is the error in the Heroku log:
TypeError: Cannot read property 'on' of null
at Object.s3Zip.archiveStream (/app/node_modules/s3-zip/s3-zip.js:49:6)
at Object.s3Zip.archive (/app/node_modules/s3-zip/s3-zip.js:33:24)
It was working on localhost with "s3-zip": "^2.0.4"; however, I never deployed that version to Heroku. Now I'm using "s3-zip": "^3.1.2".
Also note: in my IDE the .archive statement is crossed out with a hint that a deprecated symbol is being used.
public getZipFiles(req, res) {
  const get_file = req.headers['pdfs'].split(',');
  console.log("get file: ", get_file);
  // first param is the zip file name
  const zip_file_name = get_file.shift() + '.zip';
  const user_downloads = downloadsFolder();
  console.log(downloadsFolder()); // value is /tmp/
  const output = fs.createWriteStream(join(user_downloads, zip_file_name));
  // note: get_file is an array - ['file1.pdf', 'file2.pdf']
  s3Zip
    .archive({ region: aws_region, bucket: aws_bucket, debug: true }, '', get_file)
    .pipe(output);
}
I expected to see the downloaded zip file, but got the error instead.
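Not a confirmed fix, but a debugging sketch that may help surface where the null comes from: attach error handlers to both streams before piping, so the failure is reported with context instead of crashing inside s3-zip:

// Same call as above, with explicit error handling on both streams
const archive = s3Zip
  .archive({ region: aws_region, bucket: aws_bucket, debug: true }, '', get_file);
archive.on('error', (err) => console.error('archive error:', err));
output.on('error', (err) => console.error('write error:', err));
archive.pipe(output);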

Loopback error - value is not an object

I am using LoopBack on the back end. I am getting this error:
Unhandled error for request POST /api/meetups/auth: Error: Value is not an object.
at errorNotAnObject (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/types/object.js:80:13)
at Object.validate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/types/object.js:51:14)
at Object.fromTypedValue (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/types/object.js:14:22)
at Object.fromSloppyValue (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/types/object.js:41:17)
at HttpContext.buildArgs (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/http-context.js:193:22)
at new HttpContext (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/http-context.js:59:20)
at restStaticMethodHandler (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/rest-adapter.js:457:15)
at Layer.handle [as handle_request] (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/layer.js:95:5)
at next (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/route.js:137:13)
at Route.dispatch (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/route.js:112:3)
at Layer.handle [as handle_request] (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/layer.js:95:5)
at /Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/index.js:281:22
at Function.process_params (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/index.js:335:12)
at next (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/index.js:275:10)
at Function.handle (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/index.js:174:3)
at router (/Users/ankursharma/Documents/projects/meetupz/node_modules/express/lib/router/index.js:47:12)
I have already searched Stack Overflow, but I didn't find an answer. Basically, I was trying to use body-parser. I went through one of the Stack Overflow threads and implemented its solution, and I was able to use body-parser successfully, so that error has been solved. But now this error is giving me a tough time.
server.js file:
'use strict';

var loopback = require('loopback');
var boot = require('loopback-boot');
var bodyParser = require('body-parser');
var multer = require('multer');

var app = module.exports = loopback();

// code for body parsing
app.use(bodyParser.json()); // for parsing application/json
app.use(bodyParser.urlencoded({ extended: true })); // for parsing application/x-www-form-urlencoded
//app.use(multer()); // for parsing multipart/form-data
// code for body parsing ends

app.start = function() {
  // start the web server
  return app.listen(function() {
    app.emit('started');
    var baseUrl = app.get('url').replace(/\/$/, '');
    console.log('Web server listening at: %s', baseUrl);
    if (app.get('loopback-component-explorer')) {
      var explorerPath = app.get('loopback-component-explorer').mountPath;
      console.log('Browse your REST API at %s%s', baseUrl, explorerPath);
    }
  });
};

// Bootstrap the application, configure models, datasources and middleware.
// Sub-apps like REST API are mounted via boot scripts.
boot(app, __dirname, function(err) {
  if (err) throw err;
  // start the server if `$ node server.js`
  if (require.main === module)
    app.start();
});
In middleware.json, I have updated the parse property as well:
"parse": {"body-parser#json": {},
"body-parser#urlencoded": {"params": { "extended": true }}},
For some reason, that error is gone. I'm not sure; maybe it will come back. But now this is the error I am seeing:
Unhandled error for request POST /api/meetups/auth: TypeError: cb is not a function
at Function.Meetups.auth (/Users/ankursharma/Documents/projects/meetupz/common/models/meetups.js:117:3)
at SharedMethod.invoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/shared-method.js:270:25)
at HttpContext.invoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/http-context.js:297:12)
at phaseInvoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:677:9)
at runHandler (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/lib/phase.js:135:5)
at iterate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:146:13)
at Object.async.eachSeries (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:162:9)
at runHandlers (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/lib/phase.js:144:13)
at iterate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:146:13)
at /Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:157:25
at /Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:154:25
at execStack (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:522:7)
at RemoteObjects.execHooks (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:526:10)
at phaseBeforeInvoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:673:10)
at runHandler (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/lib/phase.js:135:5)
at iterate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:146:13)
Check if your filters or your URL have blank spaces (i.e. %20), or anything else wrong with the filter or URL.
LoopBack 3 distinguishes between array and object, so you have to check your data type. See https://github.com/strongloop/strong-remoting/issues/360 for more information.
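Expanding on the type-checking point above: in LoopBack 3, a remote method's JavaScript signature has to line up with its remoting metadata, and a mismatch can surface as "Value is not an object" (an argument failing strong-remoting's object validation) or "cb is not a function" (the callback not arriving where the method expects it). A minimal sketch of a correctly wired remote method; the method name comes from the stack trace, everything else is illustrative:

// common/models/meetups.js - illustrative wiring, not the asker's actual code
module.exports = function(Meetups) {
  // strong-remoting supplies the callback as the last parameter
  Meetups.auth = function(credentials, cb) {
    // ... authenticate here ...
    cb(null, { token: 'example-token' });
  };

  Meetups.remoteMethod('auth', {
    accepts: { arg: 'credentials', type: 'object', http: { source: 'body' } },
    returns: { arg: 'result', type: 'object' },
    http: { path: '/auth', verb: 'post' }
  });
};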
I was facing the same error, but in my case it was due to a model property's data length. The property was of type object with a small dataLength, which caused faulty records in my SQL model table. I had to manually delete those faulty records and also increase the dataLength of that property.
Restart the app.