how to use BigQueryHook to read data from another project - google-bigquery

We have an Airflow Composer instance in GCP project A, and our BQ dataset is in project B. When we try to read data from BQ we get an error. We are writing the hook as hook = BigQueryHook(bigquery_conn_id='bigquery_default', delegate_to=None, use_legacy_sql=False)
How can we pass the project B name to this BigQueryHook function?

First, bigquery_conn_id is deprecated. You should use gcp_conn_id.
Project Id is a property of the GCP connection, as you can see in the docs. If you set it on the connection level, Airflow uses it. However, if needed, you can override it in the hook:
BigQueryHook(gcp_conn_id='bigquery_default', project_id='other_project')
You can check the source code that does the override here.

How to use two different types of storage at the same time

I'm using the latest vue2 and the latest vue2-storage.
I'd like to have local storage and session storage at the same time.
The goal is to store the app config in local storage and temporary user data in session storage.
As far as I can tell, the plugin initializes only one storage for the whole app and, aside from passing options each time you need it, there's no way to use a second storage on demand.
I've checked the source code and done some manual tests, but it seems impossible to import the constructor class to initialize a second storage.
Disclaimer 1: bare local/sessionStorage is not enough.
Disclaimer 2: I'm pretty new to vue2.
Hi)) I have updated the package to version 6.0.0, which implements the ability to separately import the class and the plugin. See issues/49.
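If you go the v6 route, a rough sketch of running two storages might look like the code below. Note that the export names (Plugin, Vue2Storage) and the option names (prefix, driver) are assumptions based on the package README, so verify them against the v6 documentation:
// main.js -- export and option names are assumed, check the vue2-storage v6 README
import Vue from 'vue';
import { Plugin, Vue2Storage } from 'vue2-storage';

// App-wide storage for config, backed by localStorage
Vue.use(Plugin, { prefix: 'app_', driver: 'local' });

// A second, standalone instance for temporary user data, backed by sessionStorage
const sessionStore = new Vue2Storage({ prefix: 'tmp_', driver: 'session' });

sessionStore.set('draft', { step: 1 });
console.log(sessionStore.get('draft'));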
Would it be easier if you just used plain JavaScript?
You can easily use localStorage.setItem("test", "value") and localStorage.getItem("test"), as well as sessionStorage.setItem("test", "value") and sessionStorage.getItem("test").
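A minimal sketch of that plain approach, keeping the app config in localStorage and temporary user data in sessionStorage (the key names and the JSON wrapping are just illustrative):
// App config survives across tabs and browser restarts
localStorage.setItem('appConfig', JSON.stringify({ theme: 'dark', locale: 'en' }));
var config = JSON.parse(localStorage.getItem('appConfig'));

// Temporary user data lives only for the current tab/session
sessionStorage.setItem('tmpUser', JSON.stringify({ id: 42, draft: 'hello' }));
var tmpUser = JSON.parse(sessionStorage.getItem('tmpUser'));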

How do I keep Flow annotations intact through Babel/Webpack?

I have a private NPM module for utility functions with Flow type annotations.
I develop in Node v7 and, prior to npm publish, I use Babel/Webpack to transform it to earlier Node versions to run in environments like AWS Lambda.
I use the transform-flow-strip-types Babel plugin to compile, but as I understand it, that means I lose static type checking of my exported functions when I import the module into another project.
I tried babel-plugin-syntax-flow, but it throws unexpected token errors, so I'm assuming this isn't its intended use.
Can I transform my src/ with Babel while keeping Flow types intact?
The type annotations are simple (string, number, mostly), so I'd like to avoid writing typedefs to export with every function.
I came across an article explaining how to achieve what you are describing:
http://javascriptplayground.com/blog/2017/01/npm-flowjs-javascript/
By publishing both the JavaScript with the Flow types stripped and the original Flow-typed file, you get proper Flow type checking when using your library.
This is achieved by publishing the original file as file.js.flow and the babelified file as file.js.
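A minimal sketch of that layout (the file names and build script are illustrative; flow-copy-source is one helper commonly used for the copy step, but any copy mechanism works):
// @flow
// src/add.js -- the Flow-typed source
export function add(a: number, b: number): number {
  return a + b;
}

// A build script in package.json (illustrative) could be:
//   "build": "babel src -d lib && flow-copy-source src lib"
// After the build, lib/add.js is the Babel output with types stripped (what Node runs)
// and lib/add.js.flow is a verbatim copy of the typed source (what Flow reads).
// Publishing lib/ gives consumers working code plus full type checking.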

Is it possible to add additional logic to the Parse Server?

I want to perform some actions on X value depending on input received in Y value. Can I perform such actions by writing server-side code in Parse Server?
Any pointers will be helpful.
Thanks.
Custom server-side code can be achieved via Cloud Code. Cloud Code allows you to create custom functions that are written in NodeJS, and those functions can perform various operations such as querying the database, integrating with other services (social, sending emails) and more. The big advantage of parse-server is that you can use any npm module you like from within a Cloud Code function, and because there are millions of modules out there you have unlimited options.
Another very cool feature of Cloud Code is server-side hooks.
Server-side hooks allow you to write code that will be triggered by the parse-server core when an object is being saved or deleted. Such events include:
beforeSave - do something before the object is saved to the database
afterSave - do something after the object has been saved
beforeDelete - do something before the object is deleted
and more.
In order to define a new Cloud Code function, you will need to use the following code:
Parse.Cloud.define("{YOUR_FUNCTION_NAME}", function (request, response) {
// write your code, require some npm module and more...
});
In order to create a server-side hook, you can write the following code:
Parse.Cloud.beforeSave("{PARSE_OBJECT_NAME}", function (request, response) {
// write your code and handle before saving an object
});
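For example, tying this back to the original question, a beforeSave hook could set X based on the incoming Y. The class and field names below are just placeholders:
Parse.Cloud.beforeSave("MyObject", function (request, response) {
  // read the incoming value of Y and derive X from it
  var y = request.object.get("Y");
  if (y === "someValue") {
    request.object.set("X", "derivedValue");
  }
  // let the save continue (or call response.error("message") to reject it)
  response.success();
});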
Triggering Cloud Code functions can be done easily via the parse-server REST API or via the parse-server client SDKs (iOS, Android, JavaScript and more).
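For example, from the JavaScript SDK a function can be called like this (the parameter name is illustrative):
Parse.Cloud.run("{YOUR_FUNCTION_NAME}", { someParam: "someValue" }).then(function (result) {
  // handle the value returned by the cloud function
  console.log(result);
}, function (error) {
  console.error(error);
});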
There is a great guide on Cloud Code here:
http://parseplatform.github.io/docs/cloudcode/guide/
Good Luck :)

Update globalChannelMap in Mirth Connect

I inherited a Mirth Connect (v2.2.1) instance and am learning how it works. I'm now learning how globalChannelMap variables work, and I'm stumped by a misbehaving filter on a source connector.
In theory I can edit a csv text file in the Mirth Connect folders directory to update the globalChannelMap that is called by the filter.
But in practice the csv file is updated, yet the source connector filter continues to use a prior globalChannelMap for the txt file. What step am I missing to update the globalChannelMap? Is there a simple way to output the current contents of a globalChannelMap?
You may need to redeploy. If you see that you're using an old global channel map (using calKno's method), it means you need to redeploy the channel.
Channels need to be redeployed anytime their code content changes, be it an internal library (such as a code template), a transformer, or a global channel map.
You can get the map at the beginning of your filter and update it at the end or wherever it makes sense.
//get map
var map = globalChannelMap.get('mapName');
//log map value
logger.info('This is your map content: ' + map);
//update map value
globalChannelMap.put('mapName', value);

Calling A Custom API From Table's Insert Function Windows Azure

I'm using Windows Azure in order to manage my application's data.
I have a custom API called 'shared' that contains all the code that handles push notifications.
From another API, I can call this method using this code:
var operations = require('./shared').operations;
operations["sendPush"](/*parameters*/);
When I call the same code from a table's 'insert' script I get this error:
Error in callback for table '*****'. Error: Cannot find module './shared'
[external code]
at Object.sendPush [as success] (</table/*****.insert.js>:57:30)
[external code]
Does somebody know how to fix it?
I think the secret is in the path './shared': from an API it is on the same path, but from a table script the path is different.
Does anyone know what the path is for the scripts that handle requests to add a table row?
I succeeded.
By creating a Git repo I could access the shared folder.
This folder is used for things just like this.
You can see documentation in the readme file inside the shared folder.
I wonder if it could be a scoping issue because you are in the callback for your insert script?
You could try moving var operations = require('./shared').operations; to the start of your script, before the insert operation.
Shared scripts should reside in the service/shared folder. Then you can require them from other scripts using a relative path, like so:
require('../shared/mysharedscript.js')
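A minimal sketch of that layout, using hypothetical file and function names that mirror the question:
// service/shared/push.js -- hypothetical shared module
exports.operations = {
  sendPush: function (message) {
    // push notification logic goes here
  }
};

// service/table/mytable.insert.js -- a table insert script that requires it
var operations = require('../shared/push.js').operations;

function insert(item, user, request) {
  operations["sendPush"]("item created");
  request.execute();
}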