I am trying to list objects and, if that works, later download/upload files to AWS S3. The code below throws an error. What am I doing incorrectly? I've passed the accessKeyId and accessSecretKey in all the ways shown below. On macOS I have a config and a credentials file; on Windows I have just one credentials file, and I also set this on Windows:
setx AWS_SDK_LOAD_CONFIG=1
CODE
const AWS = require('aws-sdk');

function listS3Objects(file, name, type) {
  const s3bucket = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    accessSecretKey: process.env.AWS_SECRET_ACCESS_KEY,
    // accessKeyId: 'my actual key in credentials file', // aws_access_key_id
    // accessSecretKey: 'my actual secret key in credentials file', // aws_secret_access_key
    region: "ap-southeast-1"
  });
  const params = {
    Bucket: 'testbucketName',
  };
  s3bucket.listObjects(params, (err, data) => {
    if (err) { throw err; }
    /* eslint-disable no-console */
    console.log('Success!');
    console.log(data);
    return data;
    /* eslint-enable no-console */
  });
}

const objs = listS3Objects();

// Test AWS Credentials
it('Tests', () => {
  cy.log(objs);
});
ERROR
The following error originated from your test code, not from Cypress.
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure.
node_modules/aws-sdk/lib/config.js:400:1
398 |
399 | function credError(msg, err) {
400 | return new AWS.util.error(err || new Error(), {
| ^
401 | code: 'CredentialsError',
402 | message: msg,
403 | name: 'CredentialsError'
I'd troubleshoot it this way:
Can you run the script to upload or download on its own, outside of Cypress? If not, then the problem is with the credentials.
If the credentials are fine and the script works on its own, can it be imported into your spec and run? Let the script resolve and return a promise, then use the return value in your spec, as sketched below.
Refer to this blog post.
Another option you could consider is cy.exec("aws command goes here").
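For illustration, a minimal sketch of that promise approach (the bucket name and region are placeholders taken from the question, and credentials are assumed to be picked up from the environment or the shared credentials file):

const AWS = require('aws-sdk');

function listS3Objects() {
  // listObjects(params).promise() resolves with the listing instead of using a callback
  const s3 = new AWS.S3({ region: 'ap-southeast-1' });
  return s3.listObjects({ Bucket: 'testbucketName' }).promise();
}

it('Tests AWS credentials', () => {
  // cy.wrap() waits for the promise and yields the resolved data to the test
  cy.wrap(listS3Objects()).then((data) => {
    cy.log(JSON.stringify(data.Contents));
  });
});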
Related
I'm using the AWS-SDK with react-native to upload an image to S3 Bucket.
First of all, I want to say that my access and connectivity work well: uploading plain text works, and listing the objects and the buckets works too.
Here is my code:
async function handleImage(capturedImage) {
  setImage(capturedImage);
  setScreenState(ScreenStates.LOADING);
  try {
    const result = await classifyImage(capturedImage);
    console.log(result.tensor_)
    // {dtype:"float32",shape:[…]}
    // dtype:"float32"
    // shape:[1,3,273,224]
    const blob_jpeg = new Blob([result.tensor_], {type: "image/jpeg"});
    console.log(typeof blob_jpeg._data)
    // object
    console.log(blob_jpeg._data)
    // {blobId:"e7a667ad-4363-4a2e-9850-8695f103e9e0",offset:0,size:1489546,type:"image/jpeg",__collector:{}}
    try {
      const keyName = 'image.jpeg';
      const putCommand = new PutObjectCommand({
        Bucket: "mybucket",
        ContentType: "image/jpeg",
        Key: "myimage",
        Body: blob_jpeg._data,
      });
      await s3.send(putCommand);
      console.log(
        'Successfully uploaded data to ' + bucketName + '/' + keyName);
    } catch (e) {
      console.log(e, e);
    }
My error:
Error: "The request signature we calculated does not match the signature you provided. Check your key and signing method." in SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. << at construct (native) << at apply (native) << at i (#aws-sdk/client-s3.js:3:461197)
Any ideas about how I can solve this problem and successfully upload my image?
I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object .
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
<Code>InvalidAccessKeyId</Code>
<Message>The AWS Access Key Id you provided does not exist in our records.</Message>
<AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
<RequestId>DKQ55DK3XJBYGKQ6</RequestId>
<HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`);
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure you have the correct order of signature_v4 headers/form data in your request.
I had the same exact issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
The error message, and other misleading error messages, can occur if you don't use the correct order of security headers. In my case I was using the endpoint to create a presigned URL for POSTing a file, to upload it. In this case, you need to make sure that you have the correct order of security-relevant data in your form data. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL, to upload a file, it's important to place your file AFTER the security data.
After that, the request works as expected.
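For illustration, a rough sketch of that ordering when building the multipart body for a presigned POST upload (here presigned is assumed to be the { url, fields } object returned by your presigned-POST call, and the exact field names may differ in your setup):

const form = new FormData();

// Security-relevant fields first, in the order listed above
[
  'key',
  'x-amz-algorithm',
  'x-amz-credential',
  'x-amz-date',
  'policy',
  'x-amz-security-token',
  'x-amz-signature'
].forEach((name) => {
  if (presigned.fields[name] !== undefined) {
    form.append(name, presigned.fields[name]);
  }
});

// The file goes AFTER the security data
form.append('file', file);

fetch(presigned.url, { method: 'POST', body: form })
  .then((response) => console.log('Upload status:', response.status));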
I can't say for certain, but I'm guessing this may have something to do with you using the old SDK. Here it is with v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
  'Bucket': 'my-bucket-name',
  'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
getSignedUrl(client, command, { expiresIn: 3600 }).then((url) => {
  res.send(`<p>${url}</p>`);
});
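Since getSignedUrl in v3 returns a promise, the surrounding Express handler can also be made async; here is a sketch reusing the route from the question (the expiresIn value is just an example):

router.get('/', async (req, res) => {
  const command = new GetObjectCommand({
    Bucket: 'my-bucket-name',
    Key: 'my-file-name'
  });
  const url = await getSignedUrl(client, command, { expiresIn: 3600 }); // URL valid for 1 hour
  res.send(`<p>${url}</p>`);
});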
I am using scp2 to copy a file to targetPath. config contains host, username, privateKey, path and port.
const client = require('scp2');

export function scpAsync(config, targetPath) {
  return new Promise((resolve, reject) => {
    client.scp(config, targetPath, err => {
      if (!err) {
        resolve();
      } else {
        const errorMessage = err;
        reject(errorMessage);
      }
    });
  });
}
When doing so I am getting the error:
Error: Timed out while waiting for handshake
I also tried to pass
promptForPass: false
but it did not change anything. Besides that, I used debug mode, which told me that I am connected to the server, and I set a higher setTimeout, but then the error just comes later. I checked the documentation of scp2 and its GitHub repo. I use the function as explained there (https://www.npmjs.com/package/scp2), and regarding the error they were able to fix it with a higher setTimeout (https://github.com/spmjs/node-scp2/issues/107). I tried with a local FTP server, ngrok, and FTP on an EC2 instance, all with the same problem.
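For reference, a sketch of the config shape I am passing (placeholder values); readyTimeout is the underlying ssh2 connection option which, assuming scp2 forwards its options to ssh2, should give the handshake more time:

const fs = require('fs');

const config = {
  host: 'example.com',                             // placeholder
  port: 22,
  username: 'deploy',                              // placeholder
  privateKey: fs.readFileSync('/path/to/id_rsa'),
  path: '/remote/target/dir',
  readyTimeout: 60000                              // ssh2 handshake timeout in ms (assumption: scp2 passes it through)
};

scpAsync(config, './localfile.txt');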
I would be happy to get help. I asked this question also on superuser but did not get an answer:
https://superuser.com/questions/1576964/error-timed-out-while-waiting-for-handshake
Issue:
I am trying to get TestCafe to open an internal website that triggers a Windows authentication pop-up on opening, but TestCafe's built-in authentication feature doesn't work for some weird reason, and the website complains with "401 - Unauthorized: Access is denied due to invalid credentials."
Note that I can open the website manually with the same login credentials.
Also note that the authentication feature does work for other websites, and I am doing this at work, so there is a work proxy.
The site that TestCafe is failing to open:
Is made up of a server name & port only and lives internally, e.g.
http://[ServerName]:[Port]
The Code:
Made up of 2 files....
The Test Script:
import { Selector } from 'testcafe';

fixture('My Test')
  .page('http://[ServerIP]:[Port]/') // Also tried ('http://[ServerName]:[Port]')
  .httpAuth({
    username: 'domain\name',
    password: 'password_here'
  });

test('Opening this internal site', async t => {
  await t
    .debug();
});
The Runner file:
const createTestCafe = require('testcafe');

let testcafe = null;

createTestCafe('localhost', '8081')
  .then(tc => {
    testcafe = tc;
    const runner = testcafe.createRunner();
    return runner
      .src([
        'My_Test.js'
      ])
      .browsers('chrome')
      .useProxy('webproxy01:8080', '[ServerIP]:[Port]') // I tried including the website that I want to test in case it needs to be bypassed
      .run({
        skipJsErrors: true,
        concurrency: 1
      });
  })
  .then(failedCount => {
    console.log(`Tests failed: ` + failedCount);
    testcafe.close();
  });
Just to add, I've tried this too and it doesn't work either:
.httpAuth({
  username: 'name',
  password: 'password_here',
  domain: 'domain_here',
  workstation: 'computer_name'
})
Many thanks!
I'm trying to follow the API.AI tutorial for building a weather bot for Google Assistant (the one here: https://dialogflow.com/docs/getting-started/basic-fulfillment-conversation).
I did everything successfully: created the bot within API.AI, created the fulfillments, installed Node.js on my PC, connected Google Cloud Platform, etc.
Then I created the index.js file by copying it exactly as stated in the API.AI tutorial, with my API key from the World Weather Organisation (see below).
But when I use the bot, it doesn't work. On the Google Cloud Platform the error is always the same:
Error: getaddrinfo ENOTFOUND api.worldweatheronline.com
api.worldweatheronline.com:80
at errnoException (dns.js:28)
at GetAddrInfoReqWrap.onlookup (dns.js:76)
No matter how often I try, I get the same error, so I don't actually reach the API. I checked whether anything changed on WWO's side (URL, etc.), but apparently not. I updated Node.js and still have the same issue. I refreshed the Google Cloud Platform project completely, and it didn't help.
That one I really can't debug. Could anyone help?
Here's the code from API.ai:
'use strict';

const http = require('http');
const host = 'api.worldweatheronline.com';
const wwoApiKey = '[YOUR_API_KEY]';

exports.weatherWebhook = (req, res) => {
  // Get the city and date from the request
  let city = req.body.result.parameters['geo-city']; // city is a required param
  // Get the date for the weather forecast (if present)
  let date = '';
  if (req.body.result.parameters['date']) {
    date = req.body.result.parameters['date'];
    console.log('Date: ' + date);
  }
  // Call the weather API
  callWeatherApi(city, date).then((output) => {
    // Return the results of the weather API to Dialogflow
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify({ 'speech': output, 'displayText': output }));
  }).catch((error) => {
    // If there is an error let the user know
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify({ 'speech': error, 'displayText': error }));
  });
};

function callWeatherApi (city, date) {
  return new Promise((resolve, reject) => {
    // Create the path for the HTTP request to get the weather
    let path = '/premium/v1/weather.ashx?format=json&num_of_days=1' +
      '&q=' + encodeURIComponent(city) + '&key=' + wwoApiKey + '&date=' + date;
    console.log('API Request: ' + host + path);
    // Make the HTTP request to get the weather
    http.get({host: host, path: path}, (res) => {
      let body = ''; // var to store the response chunks
      res.on('data', (d) => { body += d; }); // store each response chunk
      res.on('end', () => {
        // After all the data has been received parse the JSON for desired data
        let response = JSON.parse(body);
        let forecast = response['data']['weather'][0];
        let location = response['data']['request'][0];
        let conditions = response['data']['current_condition'][0];
        let currentConditions = conditions['weatherDesc'][0]['value'];
        // Create response
        let output = `Current conditions in the ${location['type']}
          ${location['query']} are ${currentConditions} with a projected high of
          ${forecast['maxtempC']}°C or ${forecast['maxtempF']}°F and a low of
          ${forecast['mintempC']}°C or ${forecast['mintempF']}°F on
          ${forecast['date']}.`;
        // Resolve the promise with the output text
        console.log(output);
        resolve(output);
      });
      res.on('error', (error) => {
        reject(error);
      });
    });
  });
}
Oh boy, in fact the reason was the most stupid one ever. I hadn't enabled billing on Google Cloud Platform, and that's why it blocked everything (even though I'm using a free test of the API). They just wanted my credit card number. It works now.
I had the same issue trying to hit my DB. Billing wasn't the fix, as I already had billing enabled.
For me it was the knexfile.js setup for MySQL, specifically the connection object. In that object, you should replace the host key with socketPath and prepend /cloudsql/ to the value. Here's an example:
connection: {
  // host: process.env.APP_DB_HOST, // The problem
  socketPath: `/cloudsql/${process.env.APP_DB_HOST}`, // The fix
  database: process.env.APP_DB_NAME,
  user: process.env.APP_DB_USR,
  password: process.env.APP_DB_PWD
}
Where process.env.APP_DB_HOST is your Instance connection name.
PS: I imagine that even if you're not using Knex, the host or server parameter of a typical DB connection string will have to be called socketPath when connecting to Google Cloud SQL.
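For example, a minimal sketch of the same idea without Knex, using the plain mysql package (the env variable names are carried over from the example above; the query is just a connectivity check):

const mysql = require('mysql');

const connection = mysql.createConnection({
  // The instance connection name, not a hostname, goes after /cloudsql/
  socketPath: `/cloudsql/${process.env.APP_DB_HOST}`,
  database: process.env.APP_DB_NAME,
  user: process.env.APP_DB_USR,
  password: process.env.APP_DB_PWD
});

connection.query('SELECT 1', (err, rows) => {
  if (err) throw err;
  console.log(rows);
});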