Google Cloud Pub/Sub - Cloud Function & BigQuery - Data insert is not happening

I am using a Google Cloud Platform Function that listens to a Pub/Sub topic and inserts the data into BigQuery.
The input data I am passing from the Pub/Sub console is in JSON format, {"NAME", "ABCD"}, but in the console log I can see the message arriving as {NAME, ABCD}, and execution fails as well. The two most common errors I faced are:
"SyntaxError: Unexpected token n in JSON at position 1 at Object.parse (native) at exports.helloPubSub"
"ERROR: { Error: Invalid value at 'rows[0].json'"
Input given:
gcloud pubsub topics publish pubsubtopic1 --message {"name":"ABCD"}
I tried various input formats with single quotes, square brackets, and other variations, but nothing helped.
I also tried workarounds such as JSON.parse and JSON.stringify, which avoid the first issue mentioned above but still end in the rows[0] error.
When I pass the JSON input data as hard-coded values inside the Cloud Function, like {"NAME": "ABCD"}, the data is inserted properly.
/** This is working code since I hard-coded the data in JSON format; the lines I tried that did not help are commented out. **/
/**
 * Triggered from a message on a Cloud Pub/Sub topic.
 *
 * @param {!Object} event Event payload and metadata.
 * @param {!Function} callback Callback function to signal completion.
 */
exports.helloPubSub = (event, callback) => {
  const pubsubMessage = event.data;
  console.log(Buffer.from(pubsubMessage.data, 'base64').toString());
  const {BigQuery} = require('@google-cloud/bigquery');
  const bigquery = new BigQuery();
  //console.log(Buffer.from(pubsubMessage.data, 'base64').toString());
  //console.log(JSON.parse(Buffer.from(pubsubMessage.data, 'base64').toString()));
  var myjson = '{"NAME":"ABCD","STATE":"HHHH","AGE":"12"}';
  console.log(myjson);
  bigquery
    .dataset("DEMO")
    .table("EMP")
    .insert(JSON.parse(myjson),
      {'ignoreUnknownValues': true, 'raw': false})
    //.insert(JSON.parse(Buffer.from(pubsubMessage.data, 'base64').toString()),
    .then((data) => {
      console.log('Inserted 1 rows');
      console.log(data);
    })
    .catch(err => {
      if (err && err.name === 'PartialFailureError') {
        if (err.errors && err.errors.length > 0) {
          console.log('Insert errors:');
          err.errors.forEach(err => console.error(err));
        }
      } else {
        console.error('ERROR:', err);
      }
    });
};

I ran a quick test using gcloud to publish and to pull the message as well.
Using the syntax you mentioned, I get the following result:
gcloud pubsub topics publish pubsubtopic1 --message {"name":"ABCD"}
gcloud pubsub subscriptions pull pubsubsubscription1
The result is:
DATA | {name:ABCD}
If you use this syntax instead:
gcloud pubsub topics publish pubsubtopic1 --message "{\"name\":\"ABCD\"}"
gcloud pubsub subscriptions pull pubsubsubscription1
The result is:
DATA | {"name":"ABCD"}
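With the message now arriving as valid JSON, the commented-out parse-and-insert lines from the question's function should work. A minimal sketch (reusing the DEMO.EMP table from the question, error handling omitted):
// Sketch: parse the base64-encoded Pub/Sub payload and insert it as a row.
// Assumes the published message body is valid JSON such as {"name":"ABCD"}.
const row = JSON.parse(Buffer.from(pubsubMessage.data, 'base64').toString());
bigquery
  .dataset('DEMO')
  .table('EMP')
  .insert(row, {ignoreUnknownValues: true, raw: false});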
EDIT 2019-04-01
The workaround above is for test purposes; the need to use escape characters is a caveat of using the command line. To publish from your real application, you may use a REST call or a client library as listed here. Please note the Pub/Sub API expects the message to be base64-encoded. For example:
POST https://pubsub.googleapis.com/v1/projects/{YOUR_PROJECT_ID}/topics/{YOUR_TOPIC}:publish?key={YOUR_API_KEY}
{
  "messages": [
    {
      "data": "eyJuYW1lIjoiQUJDRCJ9"
    }
  ]
}
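For instance, a minimal sketch using the @google-cloud/pubsub Node.js client, which base64-encodes the payload for you (depending on the library version the publish method is topic().publish(buffer) or topic().publishMessage({data})):
// Sketch: publish a JSON payload from application code instead of the CLI.
const {PubSub} = require('@google-cloud/pubsub');
const pubsub = new PubSub();

async function publishRow() {
  const payload = Buffer.from(JSON.stringify({name: 'ABCD'}));
  // Newer client versions use publishMessage({data: ...}); older ones use publish(buffer).
  const messageId = await pubsub.topic('pubsubtopic1').publishMessage({data: payload});
  console.log(`Message ${messageId} published.`);
}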

Related

Google Cloud Function -- Convert BigQuery Data to Gzip (Compressed) Json then Load to Cloud Storage

For context, this script is largely based on the one found in this guide from Google: https://cloud.google.com/bigquery/docs/samples/bigquery-extract-table-json#bigquery_extract_table_json-nodejs
I have the below script, which is functioning. However, it writes a plain JSON file to Cloud Storage. To be a bit more optimized for file transfer and storage, I wanted to use const {pako} = require('pako'); to compress the files before loading.
I haven't been able to figure out how to accomplish this, unfortunately, after numerous attempts.
Anyone have any ideas?
I'm assuming it has something to do with the options in .extract(storage.bucket(bucketName).file(filename), options);, but again, I'm pretty lost on how to figure this out, unfortunately...
Any help would be appreciated! :)
The intent of this function is:
It is a Google Cloud function
It gets data from BigQuery
It writes that data in JSON format to Cloud Storage
My goal is to integrate Pako (or another means of compression) to compress the JSON files to gzip format prior to moving into storage.
const {BigQuery} = require('@google-cloud/bigquery');
const {Storage} = require('@google-cloud/storage');
const functions = require('@google-cloud/functions-framework');

const bigquery = new BigQuery();
const storage = new Storage();

functions.http('extractTableJSON', async (req, res) => {
  // Exports my_dataset:my_table to gcs://my-bucket/my-file as JSON.
  // https://cloud.google.com/bigquery/docs/samples/bigquery-extract-table-json#bigquery_extract_table_json-nodejs
  const DateYYYYMMDD = new Date().toISOString().slice(0, 10).replace(/-/g, "");
  const datasetId = "dataset-1";
  const tableId = "example";
  const bucketName = "domain.appspot.com";
  const filename = `/cache/${DateYYYYMMDD}/example.json`;

  // Location must match that of the source table.
  const options = {
    format: 'json',
    location: 'US',
  };

  // Export data from the table into a Google Cloud Storage file
  const [job] = await bigquery
    .dataset(datasetId)
    .table(tableId)
    .extract(storage.bucket(bucketName).file(filename), options);

  console.log(`Job ${job.id} created.`);
  res.send(`Job ${job.id} created.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    res.send(errors);
  }
});
If you want to gzip-compress the result, simply use that option:
// Location must match that of the source table.
const options = {
  format: 'json',
  location: 'US',
  gzip: true,
};
Job done ;)
From guillaume blaquiere: "Ah, you are looking for an array of rows! OK, you can't have that out of the box. BigQuery exports a JSONL file (JSON Lines, with one valid JSON object per line, each representing a row in BQ)."
It turns out I had a misunderstanding of the expected output. I was expecting a JSON array, whereas the output is individual JSON lines, as Guillaume mentioned above.
So, if you're looking for a JSON array output, you can still use the helper found below to convert the output, but it turns out that was in fact the expected output and I was mistakenly thinking it was inaccurate (sorry - I'm new ...)
// Step 1: Use the below options to export to compressed JSON (as per guillaume blaquiere's note)
const options = {
  format: 'json',
  location: 'US',
  gzip: true,
};

// Step 2 (if you're looking for a JSON array): use the below helper function to convert the response.
function convertToJsonArray(text: string): any {
  // Wrap in an array, add a comma at the end of each line, and remove the last comma.
  const wrappedText = `[${text.replace(/\r?\n|\r/g, ",").slice(0, -1)}]`;
  const jsonArray = JSON.parse(wrappedText);
  return jsonArray;
}
For reference / in case it's helpful, I created this function that will handle both compressed and uncompressed JSON that's returned.
The application of this is that I'm writing the BigQuery table to JSON in Cloud Storage to act as a "cache", then requesting that file from a React app and using the below to parse the file in the React app for use on the frontend.
import pako from 'pako';

function convertToJsonArray(text: string): any {
  const wrappedText = `[${text.replace(/\r?\n|\r/g, ",").slice(0, -1)}]`;
  const jsonArray = JSON.parse(wrappedText);
  return jsonArray;
}

async function getJsonData(arrayBuffer: ArrayBuffer): Promise<any> {
  try {
    const Uint8Arr = pako.inflate(arrayBuffer);
    const arrayBuf = new TextDecoder().decode(Uint8Arr);
    const jsonArray = convertToJsonArray(arrayBuf);
    return jsonArray;
  } catch (error) {
    console.log("Error unzipping file, trying to parse as is.", error);
    const parsedBuffer = new TextDecoder().decode(arrayBuffer);
    const jsonArray = convertToJsonArray(parsedBuffer);
    return jsonArray;
  }
}
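For context, a rough sketch of how getJsonData() could be called from the React app, assuming the exported file is fetched over HTTP (the URL below is a placeholder, not a real bucket path):
// Hypothetical usage: fetch the cached export and parse it with getJsonData().
async function loadCachedTable(fileUrl) {
  const response = await fetch(fileUrl);
  const buffer = await response.arrayBuffer();
  // getJsonData() tries to gunzip first, then falls back to plain-text parsing.
  return getJsonData(buffer);
}

// e.g. loadCachedTable("https://storage.googleapis.com/<bucket>/cache/<YYYYMMDD>/example.json");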

How does node-redis method setex() convert Buffer to string?

I'm using node-redis and I was hoping that someone could help me figure out how this library converts a Buffer to a string. I gzip my data before I store it in Redis with node-gzip, and this call returns a Promise<Buffer>:
const data = JSON.stringify({ data: 'test' });
const compressed = await gzip(data, { level: 9 });
I tested the following two approaches for saving Buffer data into Redis:
Without .toString(): I pass the Buffer to the library and it takes care of the conversion
const result = await redisClient.setex('testKey', 3600, compressed);
and with .toString()
const result = await redisClient.setex('testKey', 3600, compressed.toString());
When I try these two approaches I don't get the same value saved in Redis. I tried different parameters for .toString() to match the output of 1), but it didn't work.
The reason I need the value saved in format 1) is that I'm matching the value format that one of the PHP pages generates.
My code works fine without .toString(), but I would like to know how node-redis handles it internally.
I've tried to find the answer in the source code and to debug and step into the library calls, but I didn't find the answer I was looking for, and I hope someone can help me with this.
It looks like it happens in the utils.js file:
utils.js
if (reply instanceof Buffer) {
  return reply.toString();
}
Also, use the proper option (i.e. return_buffers):
node-redis README
redis.createClient({ return_buffers: true });
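To see the difference end to end, here is a minimal sketch assuming a promisified node-redis v2/v3-style client and the node-gzip package from the question (option and method names may differ in newer node-redis major versions):
const { promisify } = require('util');
const redis = require('redis');
const { gzip, ungzip } = require('node-gzip');

// return_buffers makes the client hand replies back as Buffers instead of strings.
const client = redis.createClient({ return_buffers: true });
const setexAsync = promisify(client.setex).bind(client);
const getAsync = promisify(client.get).bind(client);

async function roundTrip() {
  const compressed = await gzip(JSON.stringify({ data: 'test' }), { level: 9 });
  await setexAsync('testKey', 3600, compressed);  // stored as raw gzip bytes
  const reply = await getAsync('testKey');        // a Buffer, because of return_buffers: true
  const original = JSON.parse((await ungzip(reply)).toString());
  console.log(original); // { data: 'test' }
}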

Google Apps Script/URLFetchApp and using returned data

I am very new to this, so please bear with me. I currently have an operational Google Apps Script on the backend of a Google Sheet that is populated from Google Form answers. I am essentially setting up a ticket form in Google Forms that triggers the data in the corresponding sheet to be sent via an API call to our ticketing system. It works great, but I am trying to optimize it. The goal is to take the JSON response I get using:
Logger.log(response.getContentText());
which provides me the following info:
Aug 9, 2020, 11:44:40 AM Info {"_url":"https://testticketingsystem.com/REST/2.0/ticket/123456","type":"ticket","id":"123456"}
and send another API call to send data to that new ticket.
Here's a code snippet:
var payload = {
"Subject": String(su),
"Content": String(as),
"Requestor": String(em),
"Queue": String(qu),
"CustomFields": {"CustomField1": String(vn), "CustomField2": String(vb), "CustomField3":
String(vg), "CustomField4": String(av), "CustomField5": String(ov), "CustomField6":
String(sd)}
}
var options = {
'method': 'post',
"contentType" : "application/json",
'payload': JSON.stringify(payload),
'muteHttpExceptions': true
}
var url = "https://testticketingsystem.com/REST/2.0/ticket?token=****************";
var response = UrlFetchApp.fetch(url,options);
Logger.log(response.getContentText());
} catch (error) {
Logger.log(error.toString());
}
}
After the ticket is created, how do I script the use of that ID number as a variable into my next api call?
Thank you!
UrlFetchApp.fetch returns an HTTPResponse, and if you expect JSON then you should be able to just use JSON.parse() to create an object from the text. (The JSON object is a standard JavaScript global object like Math; it is not specific to Google Apps Script.)
If all goes well, you should just be able to use
var response = UrlFetchApp.fetch(url,options);
var data = JSON.parse(response.getContentText());
var id = data.id;
and then use that id for your next fetch().
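For example, a rough sketch of chaining the second request; the /comment path and its payload below are hypothetical placeholders, so substitute whatever your ticketing system actually expects:
var response = UrlFetchApp.fetch(url, options);
var data = JSON.parse(response.getContentText());
var ticketId = data.id; // e.g. "123456"

// Hypothetical follow-up endpoint and payload; adjust to your API.
var commentUrl = "https://testticketingsystem.com/REST/2.0/ticket/" + ticketId + "/comment?token=****************";
var commentOptions = {
  'method': 'post',
  'contentType': 'application/json',
  'payload': JSON.stringify({ "Content": "Additional details for ticket " + ticketId }),
  'muteHttpExceptions': true
};
var commentResponse = UrlFetchApp.fetch(commentUrl, commentOptions);
Logger.log(commentResponse.getContentText());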
Notes
If your literal response is indeed
Aug 9, 2020, 11:44:40 AM Info {"_url":"https://testticketingsystem.com/REST/2.0/ticket/123456","type":"ticket","id":"123456"}
you will run into trouble as everything until the { is invalid JSON (use a linter if you need to check yourself). But I'm assuming that was added by the console when you logged JSON, and not in the actual response itself.
JSON.parse() throws an error with invalid JSON, so you can use try/catch if needed.
You can also check the headers before you try to JSON.parse().
Here's an example that checks and handles issues, should they arise.
var type = response.getHeaders()["Content-Type"];
var text = response.getContentText();
if (type === "application/json") {
  try {
    var data = JSON.parse(text);
  } catch (error) {
    return Logger.log("Invalid JSON: " + text);
  }
} else {
  return Logger.log("expected JSON, but got response of type: " + type);
}
// if we get to this line, data is an object we can use

Converge API Error Code 4000

I am attempting to POST to the Converge demo API and I am getting a 4000 error. The message is "The VirtualMerchant ID was not supplied in the authorization request."
I am using axios inside Vuex. I am making the POST from Vuex for now since it's a demo. I am sending it over HTTPS with TLSv1.2_2018.
Here's a simplified version of the code I am using.
let orderDetails = {
  ssl_merchant_id: '******',
  ssl_user_id: '***********',
  ssl_pin: '****...',
  ssl_transaction_type: 'ccsale',
  ssl_amount: '5.47',
  ssl_card_number: '4124939999999990',
  ssl_cvv2cvc2: '123',
  ssl_exp_date: '1219',
  ssl_first_name: 'No Named Man',
  ssl_test_mode: true
}
let orderJSON = JSON.stringify(orderDetails)
let config = {
  headers: {
    'Access-Control-Allow-Methods': 'PUT, POST, PATCH, DELETE, GET',
    'Content-Type': 'application/x-www-form-urlencoded'
  }
}
axios.post('https://api.demo.convergepay.com/VirtualMerchantDemo/process.do', orderJSON, config)
  .then(res => {
    console.log('res', res.data)
  })
  .catch(e => {
    console.log('e', e)
  })
Has anyone solved this and/or able to share some wisdom?
I think you are sending the values the wrong way, and that's why you receive the message about a missing parameter. The endpoint process.do expects to receive a key-value-pair formatted request:
ssl_merchant_id=******&ssl_user_id=***********&ssl_pin=****&ssl_transaction_type=ccsale&ssl_amount=5.47&ssl_card_number=4124939999999990&ssl_cvv2cvc2=123&ssl_exp_date=1219&ssl_first_name=No Named Man&ssl_test_mode=true
From the Converge website (https://developer.elavon.com):
Converge currently supports two different ways to integrate:
Key value pairs formatted request using process.do (for a single transaction) or processBatch.do (for a batch file) with the following syntax: ssl_name_of_field = value of field (example: ssl_amount = 1.00)
Or
XML formatted request using processxml.do (for a single transaction) or accountxml.do (for an Admin request); the transaction data formatted in XML syntax must include all supported transaction elements nested between one beginning and ending element, and the data is contained within the xmldata variable.
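A minimal sketch of how the same axios call could send the fields as URL-encoded key/value pairs instead of a JSON string, assuming the orderDetails object from the question and an environment where URLSearchParams is available:
// Sketch: form-encode the existing orderDetails object instead of JSON.stringify-ing it.
const params = new URLSearchParams();
Object.entries(orderDetails).forEach(([key, value]) => params.append(key, String(value)));

axios.post(
  'https://api.demo.convergepay.com/VirtualMerchantDemo/process.do',
  params.toString(),
  { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } }
)
  .then(res => console.log('res', res.data))
  .catch(e => console.log('e', e))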

Meteor.http.get issue with Twitter API

I am using Meteor and the Twitter API for a project. I want to get information on a user from Twitter. I wrote a function that, for example, returns only the location of a user from Twitter. I believe this is the proper way to do a request in Meteor. Here it is:
Meteor.methods({
  getTwitterLocation: function (username) {
    Meteor.http.get("https://api.twitter.com/1/users/show.json?screen_name=" + username + "&include_entities=true", function(error, result) {
      if (result.statusCode === 200) {
        var respJson = JSON.parse(result.content);
        console.log(respJson.location);
        console.log("location works");
        return (respJson.location);
      } else {
        return ("Unknown user ");
      }
    });
  }
});
This function logs to the console in my Git Bash; I get someone's location by doing a Meteor.call. But I want to display what that function returns on a page, in my case on a user's profile, and that doesn't work. The console.log(respJson.location) prints the location in my Git Bash, but nothing is displayed on the profile page. This is what I did on my profile page:
profile.js :
Template.profile.getLocation = function() {
  return Meteor.call("getTwitterLocation", "BillGates");
};
profile.html :
<template name="profile">
from {{getLocation}}
</template>
With that I get "Seattle, WA" and "location works" in my Git Bash, but nothing on the profile page. If anyone knows what I can do, that'd be really appreciated. Thanks.
Firstly, when data is returned from the server you need to use a synchronous call, because the callback will return the data after the server already thinks the Meteor method has completed. (The callback fires later, when the data comes back from Twitter, by which time the Meteor client would have already received a response.)
var result = Meteor.http.get("https://api.twitter.com/1/users/show.json?screen_name=" + username + "&include_entities=true");
if (result.statusCode === 200) {
  var respJson = JSON.parse(result.content);
  console.log(respJson.location);
  console.log("location works");
  return (respJson.location);
} else {
  return ("Unknown user ");
}
Secondly, you need to use a Session variable to return the data from the template helper. This is because it takes time to get the response, and getLocation expects an instant result (without a callback). At the moment, client-side JavaScript can't use synchronous API calls the way the server can.
Template.profile.getLocation = function() {
  return Session.get("twitterlocation");
};
Use the template's created event to fire the Meteor call:
Template.profile.created = function() {
  Meteor.call("getTwitterLocation", "BillGates", function(err, result) {
    if (result && !err) {
      Session.set("twitterlocation", result);
    } else {
      Session.set("twitterlocation", "Error");
    }
  });
};
Update:
Twitter has since updated its API to 1.1, so a few modifications are required:
You now need to swap over to the 1.1 API by using 1.1 instead of 1 in the URL. In addition, you need to OAuth your requests; see https://dev.twitter.com/docs/auth/authorizing-request. The sample below contains placeholder data, but you need to get proper keys.
var authkey = "OAuth oauth_consumer_key="xvz1evFS4wEEPTGEFPHBog",
oauth_nonce="kYjzVBB8Y0ZFabxSWbWovY3uYSQ2pTgmZeNu2VS4cg",
oauth_signature="tnnArxj06cWHq44gCs1OSKk%2FjLY%3D",
oauth_signature_method="HMAC-SHA1",
oauth_timestamp=""+(new Date().getTime()/1000).toFixed(0)+"",
oauth_token="370773112-GmHxMAgYyLbNEtIKZeRNFsMKPR9EyMZeS9weJAEb",
oauth_version="1.0"";
Be sure to remove the newlines, I've wrapped it to make it easy to read.
var result = Meteor.http.get("https://api.twitter.com/1.1/users/show.json?screen_name=" + username + "&include_entities=true", {headers: {Authorization: authkey}});
If you find this a bit troublesome it might be easier to just use a package like https://github.com/Sewdn/meteor-twitter-api via meteorite to OAuth your requests for you.