Token management in Karate parallel execution

Scenario: All the endpoints in my API test need authentication, and hence an authorization header needs to be passed. I have an Authentication.feature file where I read the refresh token from a file, generate a new access token, and write the new refresh token back to the file. After each scenario runs, I need to update the refresh token in the file so it can be consumed by the next feature. Authentication.feature is called from karate-config.js, and the authentication header is set as shown below:
var response = karate.call('classpath:Test/features/Authentication.feature', config).response;
var token = response.access_token;
karate.configure('headers', { Authorization: 'Bearer ' + token });
Everything up to this point works fine, but when I use the JUnit 5 parallel runner, it causes issues with the authentication token: the latest refresh token is not the one written to the file. I tried making the file read/write part synchronized, but that does not solve the problem. I also tried the @parallel=false annotation in Authentication.feature, still no luck. How can I run my tests in parallel while correctly updating the file with the latest refresh token?

The recommended way to do this is to use karate.callSingle() - please read about it if you haven't already: https://github.com/karatelabs/karate#hooks
Note that the code example below is JS, in karate-config.js:
var result = karate.callSingle('classpath:some/package/my.feature');
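Building on that, here is a minimal sketch of how the asker's karate-config.js could look, assuming the same Authentication.feature path and response shape as in the question. callSingle() executes the feature only once for the entire test run, even across parallel threads, so the refresh-token file is read and written by exactly one caller instead of racing threads:
// karate-config.js (sketch, not the library's exact recipe)
function fn() {
  var config = { env: karate.env };
  // runs Authentication.feature once and caches the result for all threads
  var result = karate.callSingle('classpath:Test/features/Authentication.feature', config);
  var token = result.response.access_token;
  karate.configure('headers', { Authorization: 'Bearer ' + token });
  return config;
}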
Also see this answer for some other ideas: https://stackoverflow.com/a/53516885/143475

Related

Headers in Azure Data Factory HTTP Copy data source

We are using Azure Data Factory to source data from an On-Premise JIRA installation. I've managed to get a number of pipelines to work using the JIRA API, but am hitting a wall when trying to source the Organization object.
There's an undocumented API call that can be made, though:
/jira/rest/servicedeskapi/organization
This will display the following message when attempting to run from a browser:
"This API is experimental. Experimental APIs are not guaranteed to be stable within the preview period. You must set the header 'X-ExperimentalApi: opt-in' to opt into using this API."
Using Postman, I set things up with the additional header, and I managed to get a result set.
Using the same ADF copy data job I used for all my other API calls, however, does not seem to work. I'm using the "Additional Headers" field to add a Bearer token we retrieve from our keyvault, like so:
@{concat(
'Authorization: Bearer '
, activity('Get Bearer token from Keyvault').output.value
)}
This works fine for all other API calls. I figured adding the extra header would be as simple as appending another line, like so:
@{concat(
'Authorization: Bearer '
, activity('Get Bearer token from Keyvault').output.value,
', X-ExperimentalApi: opt-in')
}
However, that ends up throwing an error:
"ErrorCode=UserErrorInvalidHttpRequestHeaderFormat,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to set addtional http header,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.ArgumentException,Message=Specified value has invalid HTTP Header characters.
Parameter name: name,Source=System,'"
I tried wrapping double quotes (and escaping them) around the key/value pairs, but that did not work. I tried removing the comma, but somehow that leads to the REST API thinking the extra header is part of the Bearer token, as it then throws an "Unauthorized" exception.
I can get the API to return data if I use a WEB component without any issues, but it'd be nice if I somehow would get this to work within the Copy data activity.
Any help is greatly appreciated!
The approaches you have tried are not the correct way to provide multiple headers in a Copy data activity.
I used an HTTP source with a sample URL that accepts an Authorization: Bearer token. Passing an additional header (even though it is not required in my case) works the same as using just the Authorization header.
To pass multiple headers, separate each header with a new line. I used dynamic content with string interpolation (@{...}) instead of @concat.
Authorization: Bearer @{pipeline().parameters.token}
X-ExperimentalApi: opt-in
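Applied to the asker's pipeline, the same two-line value in Additional Headers would look something like this, reusing the Keyvault activity output from the question:
Authorization: Bearer @{activity('Get Bearer token from Keyvault').output.value}
X-ExperimentalApi: opt-in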
You can see in the debug input how the additional headers are passed.
As an alternative, since there are only two headers to be given, you can configure the Authorization header in the linked service itself and pass X-ExperimentalApi as a single additional header in the Additional Headers section of the Copy data activity.

Is it possible to set the environment variable "GOOGLE_APPLICATION_CREDENTIALS" to an uploaded JWT File in Flowground?

I am trying to use the "google-api-nodejs-client" library (https://github.com/googleapis/google-api-nodejs-client) with a JSON Web Token in a flowground connector implementation. Is there a way to make the environment variable "GOOGLE_APPLICATION_CREDENTIALS" point to a configurable JWT file that the user can upload into a flow?
Example of client usage from the library page:
const {google} = require('googleapis');

// This method looks for the GCLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS
// environment variables.
const auth = new google.auth.GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});
Let's see if I understand correctly what you want to do:
- create a flow that can be triggered from outside and accesses any Google API via the google-api-nodejs-client module
- every time you trigger the flow, you will post a valid JWT for accessing any Google API
- you want to store the JWT in the local file system; the mentioned environment variable contains the path to the persisted JWT
Generally speaking, this is a valid approach for the moment.
You can create a file in the local file-system:
fs.writeFile(process.env.HOME + '/jwt.token', ...)
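A minimal sketch of that idea, assuming the JWT JSON is posted in the incoming message (msg.body.jwt is a hypothetical field name; adapt it to your flow's payload):
// flowground step (sketch): persist the posted JWT and point the Google
// client library at it, all within the same step execution.
const fs = require('fs');
const {google} = require('googleapis');

async function processMessage(msg) {
  const keyPath = process.env.HOME + '/jwt.json';
  // msg.body.jwt is a hypothetical field holding the service-account JSON key
  fs.writeFileSync(keyPath, JSON.stringify(msg.body.jwt));
  process.env.GOOGLE_APPLICATION_CREDENTIALS = keyPath;

  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform']
  });
  return auth.getClient(); // resolves credentials from the file just written
}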
Sebastian already explained how to define the needed environment variables.
Please keep in mind that writing and reading the JWT file must take place in the same step of the flow execution. The file is not persisted after that step finishes executing.
Why is this a valid approach for the moment only?
I assume that we will prevent writing to the local file system in the near future, which would break the described solution as well.
From my point of view, the better solution would be to use the OAuth2 mechanism built into flowground.
For more information regarding this approach, see:
https://github.com/googleapis/google-api-nodejs-client#oauth2-client
https://doc.flowground.net/getting-started/credential.html
You can set environment variables in flowground on the "ENV vars" page for your connector.

Ember.js Authentication Token for Ember-Data + AMS => JSON or HTTP Header?

CONTEXT:
I have an Ember.js 1.1.0-beta.1 application that exchanges JSON data with a Rails-API server (Rails 4). JSON data exchange is accomplished with Ember-Data 1.0.0-beta.2 and Active Model Serializers 0.8.1 (AMS). I'm using the default recommended configurations for both Ember-Data and AMS, and am compliant with the JSON-API spec.
On any given RESTful call, the client passes the current authentication token to the server. The authentication token is verified and retired, and a new authentication token is generated and sent back to the client. Thus, every RESTful call accepts an authentication token in the request, and provides a new authentication token in the response that the client can cache and use for the next RESTful call.
QUESTION:
Where do I put the authentication token in each request and response?
Should it be part of each object's JSON in the request and response? If so, where would the token be placed in each object's existing JSON structure (which otherwise has nothing to do with authentication)?
Or should they be placed in the HTTP header for each request and response object?
What is "The Ember Way" that one might eventually expect to find in the new Ember Guides Cookbook?
MORE CONTEXT:
I'm already familiar with the following links:
#machty 2 Embercasts: http://www.embercasts.com/episodes/client-side-authentication-part-2
#wycats tweet: https://twitter.com/wycats/status/376495062709854209
#cavneb 3 blog posts: http://coderberry.me/blog/2013/07/08/authentication-with-emberjs-part-1
#simplabs blog post: http://log.simplabs.com/post/53016599611/authentication-in-ember-js
...and am looking for answers that go beyond these, and are specific to Ember-Data + AMS.
With the exception of the need to pass a new token back to the client in the response via Ember-Data, assume my client code is otherwise similar to the #machty Embercast example on GitHub: https://github.com/embercasts/authentication-part-2/blob/master/public/js/app.js
Thank you very much!
I've got a similar stack - ember, ember-data and rails-api with AMS. Right now, I'm just passing the authentication token (which I store in localStorage) in a header (though you could pass it on the query string) by modifying the RESTAdapter's ajax method.
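For illustration, here is a sketch of that header approach for the Ember-Data beta of the time, using the adapter's headers property rather than overriding ajax directly. X-Auth-Token is a hypothetical header name that your server would need to expect:
// Sketch: send the locally stored token on every Ember-Data request.
App.ApplicationAdapter = DS.RESTAdapter.extend({
  // volatile() forces the property to be re-read on each request, so a
  // token updated in localStorage is picked up without restarting the app
  headers: function () {
    return { 'X-Auth-Token': localStorage.getItem('authToken') };
  }.property().volatile()
});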
My initial thought would be to avoid resetting the token on every request. If you're particularly concerned about the token being sniffed, it might be easier to just reset the token on the server at a regular interval (say, every 10 minutes). Then, if a request from the client fails due to an old token, just fetch the new token (by passing a 'reset token' that your server gives you at login) and replay the initial request.
As for where to put the token, there isn't really an "Ember Way" - I prefer passing it in a header since passing it in the query string can mess with caching and is also more likely to be logged somewhere along the way. I'd definitely avoid passing it in the request body - that would go against what ember-data expects, I'd imagine.
I have built something similar, although I do not reset the token unless the user signs out.
I would not put it in the request body itself - you are just going to pollute your models. There probably is no Ember way, since this is more of a transport issue. I pass the token using a custom HTTP header and/or a cookie. The cookie is needed to authorize file downloads, which cannot be done through ajax, although the cookie works for ajax calls too. In your case I would use a cookie and have the server set it to the new value each time. However, your scheme of resetting the token on each JSON request is not going to work with simultaneous requests. Is this really necessary? If you use TLS you probably don't need to worry so much. You could also time out the token, so that if there are no requests for 10 minutes a new token is generated.

SignedJwtAssertionCredential refresh

Each thread in my client initializes with
self.credentials = oauth2client.client.SignedJwtAssertionCredentials(...)
http = httplib2.Http()
http = self.credentials.authorize(http)
self.http = http
This works fine initially, and each client is able to do appropriate work.
As the hour approaches and the token nears expiration, what is the best way to refresh the credentials so that each thread can continue to make progress? I tried
self.credentials.refresh(self.http)
just before the hour but am seeing
File "/usr/lib64/python2.6/httplib.py", line 355, in _read_status
raise BadStatusLine(line)
BadStatusLine
OAuth 2.0 Service Account Access Tokens cannot be refreshed in the same way that regular OAuth 2.0 access tokens are. Instead, you need to rebuild the credentials from scratch and request another access token.
So effectively, you would just reuse your initialization code.
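A minimal sketch of that, in the same oauth2client style as the initialization code above (the argument names are placeholders for whatever your constructor call already passes):
# Sketch: rebuild the credentials and re-authorize a fresh Http object
# instead of calling credentials.refresh() on a service account.
import httplib2
from oauth2client.client import SignedJwtAssertionCredentials

def rebuild_http(service_account_email, private_key, scope):
    credentials = SignedJwtAssertionCredentials(
        service_account_email, private_key, scope)
    return credentials.authorize(httplib2.Http())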
Regarding threads, please read this: https://developers.google.com/api-client-library/python/guide/thread_safety
There is no need for a manual refresh, since it is done automatically in the apiclient library.
The code should be as follows:
{Shared code}
self.credentials = oauth2client.client.SignedJwtAssertionCredentials(...)
{Thread code}
service = self.credentials.authorize(httplib2.Http())
Hope it helps.

Phantomjs write scraped data to database

I have written a PhantomJS script to scrape Hoover.
Following is my flow:
1. Get data from the database using a Node.js API.
2. Fetch 10 rows at a time and pass these rows one at a time to the website to scrape it (the problem is here: I somehow want to store the scraped results in an array or something, then pass this data back to the Node API to update the database in Azure).
Right now I am able to get data from Azure using the Node.js API and am also able to scrape using PhantomJS; my only problem is how to store the results in temporary storage or an array, which can then be passed to the Node.js API for updating the database in Azure.
(I'm using CasperJS - it adds a layer on PhantomJS, but I think it might also work in PhantomJS)
You can have CasperJS do an AJAX call to your backend with the data you want to store.
Make CasperJS include a content script to each page it visits:
var casper = require('casper').create({ clientScripts: ['content.js'] });
Then, in content.js:
function sendToServer(theData) {
  // your_server_url is a placeholder for your Node API endpoint
  var xhr2 = new XMLHttpRequest();
  xhr2.open('POST', your_server_url, true);
  xhr2.send(theData);
}
Now you can call sendToServer with casper.evaluate from your script.
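For example (a sketch; the selector and data shape are hypothetical and should be adapted to the target page):
// In the CasperJS script: extract data in the page context, then call the
// injected sendToServer() helper with it.
casper.then(function () {
  var scrapedData = this.evaluate(function () {
    return document.querySelector('h1').textContent; // hypothetical selector
  });
  this.evaluate(function (data) {
    sendToServer(JSON.stringify({ result: data }));
  }, scrapedData);
});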
And remember to include this in your receiving app (or see this module):
res.writeHead(200, {
  'Access-Control-Allow-Origin': '*'
});
otherwise your ajax call will fail. You may also have to add an OPTIONS route that returns CORS headers, so that preflight requests succeed. Another solution is to disable cross-origin checks in PhantomJS with the --web-security=false command-line switch.
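A sketch of such an OPTIONS route, assuming an Express app (app) on the receiving side:
// Answer CORS preflight requests so the browser allows the POST to go through.
app.options('*', function (req, res) {
  res.writeHead(200, {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'POST, GET, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type'
  });
  res.end();
});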