Where can I find the key.json when running a BigQuery job in the terminal? - google-bigquery

I'm trying to run the following code, but I get this error:
{ [Error: ENOENT: no such file or directory, open '/mypath/key.json'] }
I know it has something to do with no key.json file in the directory I'm running the code from, but where can I find this file?
I've tried searching with find / -name "key.json" and using some of the paths it returned, but I still get the same error. Thanks
const BigQuery = require('@google-cloud/bigquery');
const bigquery = new BigQuery({
  projectId: 'XXXXX',
  keyFilename: 'key.json'
});
const query = `SELECT total_amount, pickup_datetime, trip_distance
FROM \`nyc-tlc.yellow.trips\`
ORDER BY total_amount DESC
LIMIT 1;`
bigquery.createQueryJob(query).then((data) => {
  const job = data[0];
  return job.getQueryResults({timeoutMs: 10000});
}).then((data) => {
  const rows = data[0];
  console.log(rows[0]);
}).catch(e => {
  // handle exception
  console.log(e);
});

The key.json file is the file you download in order to authenticate against GCP services. There are several ways to authenticate, and one of them is using a service account; key.json is the file that contains that service account's credentials.
How you create and use the key.json file is explained in Google's authentication documentation for the client libraries.
The Console guide for creating service account credentials via the UI is also a good reference.
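In other words, key.json isn't already sitting somewhere on your machine: you create a service account key in the Cloud Console (IAM & Admin > Service Accounts) or with gcloud iam service-accounts keys create, download the resulting JSON file, and point keyFilename at wherever you saved it. A minimal sketch, assuming you place the downloaded key next to the script (the project ID and filename are placeholders):

const path = require('path');
const BigQuery = require('@google-cloud/bigquery');

// Resolve the key file relative to this script, so it is found no matter
// which directory the job is started from.
const bigquery = new BigQuery({
  projectId: 'XXXXX',
  keyFilename: path.join(__dirname, 'key.json')
});

Alternatively, drop keyFilename entirely and set GOOGLE_APPLICATION_CREDENTIALS to the key's absolute path before running the script; the client library picks up credentials from that environment variable by default.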

Related

Cypress AWS S3 List/Upload/Download Objects

I am trying to list objects and, if this works, later download/upload files to AWS S3. The code below throws an error. What am I doing incorrectly? I've passed the accessKeyId and accessSecretKey in all possible ways below. I have a config and a credentials file on Mac, and on Windows I have just one credentials file; I also set this on my Windows machine:
setx AWS_SDK_LOAD_CONFIG=1
CODE
const AWS = require('aws-sdk');
function listS3Objects(file, name, type) {
  const s3bucket = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    accessSecretKey: process.env.AWS_SECRET_ACCESS_KEY,
    // accessKeyId: 'my actual key in credentials file', //aws_access_key_id
    // accessSecretKey: 'my actual secret key in credentials file', //aws_secret_access_key
    region: "ap-southeast-1"
  });
  const params = {
    Bucket: 'testbucketName',
  };
  s3bucket.listObjects(params, (err, data) => {
    if (err) { throw err; }
    /* eslint-disable no-console */
    console.log('Success!');
    console.log(data);
    return data;
    /* eslint-enable no-console */
  });
}
const objs = listS3Objects()

//Test AWS Credentials
it('Tests', () => {
  cy.log(objs)
})
ERROR
The following error originated from your test code, not from Cypress.
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure.
node_modules/aws-sdk/lib/config.js:400:1
398 |
399 | function credError(msg, err) {
400 | return new AWS.util.error(err || new Error(), {
| ^
401 | code: 'CredentialsError',
402 | message: msg,
403 | name: 'CredentialsError'
I'd troubleshoot it this way:
Can you run the script to upload or download on its own, outside Cypress? If not, the problem is with the credentials.
If the credentials are fine and the script works on its own, can it be imported into your spec and run there? Let the script resolve and return a promise, then use the resolved value in your spec, as in the sketch below.
Another option you could consider is cy.exec("aws command goes here").
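A minimal sketch of that promise-based approach, assuming the AWS SDK can be bundled into your spec (otherwise the same call can live in a cy.task); note it uses the SDK's actual option name secretAccessKey rather than accessSecretKey, and the bucket name is a placeholder:

// s3List.js
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: 'ap-southeast-1'
});

// Returns a promise that resolves with the bucket listing.
function listS3Objects(bucket) {
  return s3.listObjects({ Bucket: bucket }).promise();
}

module.exports = { listS3Objects };

// In the spec file:
// const { listS3Objects } = require('./s3List');
// it('lists bucket objects', () => {
//   cy.wrap(listS3Objects('testbucketName'), { timeout: 10000 }).then((data) => {
//     cy.log(JSON.stringify(data.Contents.map((o) => o.Key)));
//   });
// });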

Can't upload to Azure Storage after deploy Koa app to Azure App Service

When the app runs locally, I can upload files to Azure Blob Storage.
But when I deploy the app to Azure App Service, it always responds with 502 Bad Gateway.
I use the koa-body library to handle the file upload:
const koaBody = require('koa-body')({ multipart: true, uploadDir: '.' });
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
const videoContainerClient = blobServiceClient.getContainerClient(AZURE_VIDEO_STORAGE_CONTAINER);
const blobVideoName = `${fileName}.${fileExtension}`;
const blockBlobVideo = videoContainerClient.getBlockBlobClient(blobVideoName);
const uploadVid = blockBlobVideo.uploadFile(filePath);
I think the problem is that I don't have permission to write files to os.tmpdir(), so I went to the Kudu Debug Console and tried chmod -R 0755 on home/site/wwwroot/public/tmp, the folder I think os.tmpdir() points to. But it said I don't have permission.
Any help is appreciated.
You can refer to my sample code.
Test result:
POST test URL: https://yourappname.azurewebsites.net/user/upload
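One way to avoid writing uploads into a directory the App Service sandbox won't let you touch (a sketch of the idea, not the original sample; the route path, container name, field name, and connection-string variable are all assumptions) is to point koa-body's temp directory at os.tmpdir() and then stream that temp file to Blob Storage:

const os = require('os');
const Koa = require('koa');
const Router = require('koa-router');
const { BlobServiceClient } = require('@azure/storage-blob');

// write multipart temp files somewhere the sandbox allows
const koaBody = require('koa-body')({
  multipart: true,
  formidable: { uploadDir: os.tmpdir() }
});

const app = new Koa();
const router = new Router();
const blobServiceClient = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);

router.post('/user/upload', koaBody, async (ctx) => {
  const file = ctx.request.files.file;            // field name is an assumption
  const containerClient = blobServiceClient.getContainerClient('videos');
  const blockBlob = containerClient.getBlockBlobClient(file.name);
  await blockBlob.uploadFile(file.path);          // temp file path written by koa-body
  ctx.body = { uploaded: file.name };
});

app.use(router.routes());
app.listen(process.env.PORT || 3000);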

Botkit slackbot error "Could not load team while processing webhook"

I have created a simple express server and added a /slack/receive route to handle webhook requests from the Slack events API:
// routes.js (which is used by my app defined in server.js)
...
let slack = require('./controllers/slack');
router.post('/slack/receive', slack.receive);
...
I then use Botkit to create a simple Slack application:
// controllers/slack.js
'use strict';
const logger = require('../config/winston');

// initialise firebase storage for botkit
const admin = require('firebase-admin');
var serviceAccount = require('../config/firebase.json');
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount)
});
var db = admin.firestore();
db.settings({
  timestampsInSnapshots: true
})

// initialise botkit for slack
const botkit = require('botkit');
const controller = botkit.slackbot({
  storage: require('botkit-storage-firestore')({ database: db }),
  clientId: process.env.SLACK_CLIENT_ID,
  clientSecret: process.env.SLACK_CLIENT_SECRET,
  clientSigningSecret: process.env.SLACK_SIGNING_SECRET,
  redirectUri: process.env.SLACK_REDIRECT,
  disable_startup_messages: true,
  send_via_rtm: false,
  debug: true,
  scopes: ['bot', 'chat:write:bot'],
})

controller.hears('Hello', 'direct_mention,direct_message', (bot, message) => {
  logger.info(message);
  bot.reply(message, 'I heard a message!');
})

exports.receive = (req, res, next) => {
  res.sendStatus(200);
  logger.debug(req.body);
  controller.handleWebhookPayload(req, res);
};
The server initialises correctly, but as soon as the slack webhook receives a request the following error happens:
Could not load team while processing webhook: Error: could not find team T5VDRMWKX
at E:\Documents\upper-revolutions\node_modules\botkit\lib\SlackBot.js:169:24
at firebaseRef.doc.get.then.catch.err (E:\Documents\upper-revolutions\node_modules\botkit-storage-firestore\src\index.js:86:13)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:118:7)
So far I have found that:
Having/not having storage in the botkit slackbot makes no difference
The error happens within the handleWebhookPayload method as code within controller.hears() does not get executed
This error occurs because Botkit needs some form of storage where it can store all the teams (channels and users too) and retrieve them later on.
So, when your handleWebhookPayload method gets executed, it calls another method called findAppropriateTeam that queries the storage you provided (it might be MongoDB, a JSON file, or something else) for the specified team record. The error is saying that you do not have any record in the storage with the id provided.
So this can mean one of two things:
You did not provide a storage for Botkit to work with
You did not save the team id in the storage
The solution to the first problem is quite simple. You just need to install MongoDB on your machine and then give Botkit a storage adapter (such as botkit-storage-mongo) pointed at your MONGO_URL.
NOTE: I see that you are using the Botkit simple storage and this might be the problem, since I have also experienced some trouble with this kind of storage not saving records.
const mongoStorage = require('botkit-storage-mongo')({ mongoUri: 'mongodb://localhost:27017/yourdb' });
const controller = botkit.slackbot({
  storage: mongoStorage,
})
//OR
const controller = botkit.slackbot({
  storage: require('botkit-storage-mongo')({ mongoUri: process.env.MONGO_URL }),
})
The possible solution to the second problem:
I will assume you are running Botkit locally, so you must be using some tunneling such as ngrok or localtunnel. In that case make sure:
You provided the redirect URL to Slack (e.g. https://your_url/oauth)
You accessed the https://your_url/login page
Botkit saves your team id in the provided storage when you access the /login route and authorize the app. So if you skipped that part, Botkit won't save your team id and will therefore throw an error when you receive events later on; a minimal sketch of exposing those routes is below.
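For reference, this is roughly how Botkit 0.x exposes the /oauth and /login endpoints through its own webserver helpers (the port is a placeholder; if you keep your existing Express routes, you would mount equivalent handlers there instead):

// after creating `controller` as above
controller.setupWebserver(process.env.PORT || 3000, (err, webserver) => {
  // receives the Slack events/webhooks
  controller.createWebhookEndpoints(webserver);
  // creates the /login and /oauth routes that store the team in your storage
  controller.createOauthEndpoints(webserver, (oauthErr, req, res) => {
    if (oauthErr) {
      res.status(500).send('ERROR: ' + oauthErr);
    } else {
      res.send('Success!');
    }
  });
});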
Check this link https://github.com/howdyai/botkit/issues/938 for discussions on the topic.
I hope this helps!

How to send multiple images in an Expressjs API GET request with sendFile()

I'm looking for a way to send multiple images in one GET request from an Expressjs server through an API.
I want to create an image gallery of each user's uploaded images in a MEAN stack. When images are uploaded using multer, the image information is saved to MongoDB, including the userid of whoever uploaded it.
On the Angularjs side, I want the user to have access to any of the images they have previously uploaded. Currently I'm sending one file per GET request based on user id. Is there any way of sending multiple files back in one JSON response? I'm currently using Expressjs's res.sendFile, but haven't found any info about sending multiple files back yet.
https://expressjs.com/en/api.html#res.sendFile
Here is my current get request:
exports.getUpload = function(req, res) {
  Upload.find({createdby: req.params.Id}).exec(function(err, upload) {
    errorhandle.errorconsole(err, 'file found');
    console.log(upload[0]);
    var options = {
      root: '/usr/src/app/server/public/uploads/images'
    };
    var name = "" + upload[0].storedname + "";
    console.log(name);
    res.sendFile(name, options, function(err) {
      errorhandle.errorconsole(err, 'file sent');
    });
  });
};
You can't with res.sendFile. In fact I don't think you can at all. Maybe with HTTP/2 Server Push, but I'm not sure.
What you can do is send a JSON response with a link to all the images:
exports.getUpload = async (req, res) => {
  const uploads = await Upload.find({ createdby: req.params.Id }).exec()
  // wrap the object literal in parentheses so the arrow function returns it
  const response = uploads.map(image => ({ name: `https://example.com/uploads/images/${image.storedname}` }))
  res.json(response)
}
Note error handling omitted.
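For the links in that JSON to resolve, the uploads directory has to be reachable over HTTP. One way to do that (an assumption about your setup, reusing the paths from the question) is to serve it statically:

const express = require('express');
const app = express();

// expose the stored images so the URLs returned above can be fetched directly
app.use('/uploads/images', express.static('/usr/src/app/server/public/uploads/images'));

The Angular side can then simply bind each returned name to an img tag.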

getaddrinfo ENOTFOUND API Google Cloud

I'm trying to follow the API.AI tutorial for building a weather bot for Google Assistant (the one here: https://dialogflow.com/docs/getting-started/basic-fulfillment-conversation)
I made everything successfully, created the bot within API, created the Fulfillments, installed NodeJS on my pc, connected Google Cloud Platform, etc.
Then I created the index.js file by copying it exactly how it's stated on API.ai tutorial with my API key from World Weather Organisation (see below).
But when I use the bot, it doesn't work. On the Google Cloud Platform the error is always the same:
Error: getaddrinfo ENOTFOUND api.worldweatheronline.com
api.worldweatheronline.com:80
at errnoException (dns.js:28)
at GetAddrInfoReqWrap.onlookup (dns.js:76)
No matter how often I try, I get the same error, so I don't actually reach the API. I checked whether anything changed on WWO's side (URL, etc.), but apparently not. I updated NodeJS and still have the same issue. I refreshed the Google Cloud Platform completely and it didn't help.
That one I really can't debug. Could anyone help?
Here's the code from API.ai:
'use strict';

const http = require('http');
const host = 'api.worldweatheronline.com';
const wwoApiKey = '[YOUR_API_KEY]';

exports.weatherWebhook = (req, res) => {
  // Get the city and date from the request
  let city = req.body.result.parameters['geo-city']; // city is a required param
  // Get the date for the weather forecast (if present)
  let date = '';
  if (req.body.result.parameters['date']) {
    date = req.body.result.parameters['date'];
    console.log('Date: ' + date);
  }
  // Call the weather API
  callWeatherApi(city, date).then((output) => {
    // Return the results of the weather API to Dialogflow
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify({ 'speech': output, 'displayText': output }));
  }).catch((error) => {
    // If there is an error let the user know
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify({ 'speech': error, 'displayText': error }));
  });
};

function callWeatherApi (city, date) {
  return new Promise((resolve, reject) => {
    // Create the path for the HTTP request to get the weather
    let path = '/premium/v1/weather.ashx?format=json&num_of_days=1' +
      '&q=' + encodeURIComponent(city) + '&key=' + wwoApiKey + '&date=' + date;
    console.log('API Request: ' + host + path);
    // Make the HTTP request to get the weather
    http.get({host: host, path: path}, (res) => {
      let body = ''; // var to store the response chunks
      res.on('data', (d) => { body += d; }); // store each response chunk
      res.on('end', () => {
        // After all the data has been received parse the JSON for desired data
        let response = JSON.parse(body);
        let forecast = response['data']['weather'][0];
        let location = response['data']['request'][0];
        let conditions = response['data']['current_condition'][0];
        let currentConditions = conditions['weatherDesc'][0]['value'];
        // Create response
        let output = `Current conditions in the ${location['type']}
${location['query']} are ${currentConditions} with a projected high of
${forecast['maxtempC']}°C or ${forecast['maxtempF']}°F and a low of
${forecast['mintempC']}°C or ${forecast['mintempF']}°F on
${forecast['date']}.`;
        // Resolve the promise with the output text
        console.log(output);
        resolve(output);
      });
      res.on('error', (error) => {
        reject(error);
      });
    });
  });
}
Oh boy, the reason was in fact the most stupid one ever. I hadn't enabled billing on the Google Cloud Platform, and that's why it blocked everything (even though I'm using a free trial of the API). They just wanted my credit card number. It works now.
I had the same issue trying to hit my db. Billing wasn't the fix, as I had billing enabled already.
For me it was the knexfile.js setup for MySQL, specifically the connection object. In that object, you should replace the host key with socketPath and prepend /cloudsql/ to the value. Here's an example:
connection: {
  // host: process.env.APP_DB_HOST, // The problem
  socketPath: `/cloudsql/${process.env.APP_DB_HOST}`, // The fix
  database: process.env.APP_DB_NAME,
  user: process.env.APP_DB_USR,
  password: process.env.APP_DB_PWD
}
Where process.env.APP_DB_HOST is your Instance connection name.
PS: I imagine that even if you're not using Knex, the host or server parameter of a typical DB connection string will have to become socketPath when connecting to Google Cloud SQL.
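For example, with the plain mysql2 driver (just an illustration of the same idea, not part of the original answer; the environment variable names mirror the Knex snippet above):

const mysql = require('mysql2');

// Connect over the Cloud SQL unix socket instead of a hostname.
const connection = mysql.createConnection({
  socketPath: `/cloudsql/${process.env.APP_DB_HOST}`, // instance connection name
  database: process.env.APP_DB_NAME,
  user: process.env.APP_DB_USR,
  password: process.env.APP_DB_PWD
});

connection.query('SELECT 1', (err, rows) => {
  if (err) throw err;
  console.log(rows);
});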