I am copying data from a JSON source over HTTP into my SQL DW. I am using the Copy Activity in Data Factory v2 (DF2), where my datasets are HTTP-Source and SQLDW-Target.
Everything works fine until I set up a URL parameter on my HTTP LinkedService connection, which I then want to feed from a Lookup activity.
Is it possible to parameterize LinkedServices?
The ‘Linked Service parameterization’ feature is currently in preview.
I have developed a script that executes against one DB instance, e.g. db1. The code to connect to the DB is written in the Background section. Now I want to execute the same test script against a different DB instance, e.g. db2.
Feature: Execution against multiple DB instances.
##############################################
Background:
* def db_properties = { db_username: '<value>', db_password: '<value>', db_connection_string: '<value>', driver: '<value>' }
* def createConnection = Java.type('<fully qualified name of the .java helper class>')
* def readFromDB = new createConnection(db_properties)
##############################################
In * def db_properties, I have hard-coded the actual values of username, password, connection string and driver. What I want to do is validate my API response against another DB instance, e.g. when the build is deployed in another environment, the DB properties I have mentioned belong to a different environment. How can I do it?
This has nothing to do with Karate. Maybe the solution is to have 2 sets of DB connection values in your karate-config.js. Please figure out a solution that is appropriate for your situation.
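For illustration only (the class name, constructor signature and property keys below are assumptions based on the Background snippet, not part of Karate), the Java helper could simply accept whatever property map the feature hands it, so each environment can supply its own values:

// Hypothetical Java helper used from the Background section; the class name,
// property keys and JDBC calls are illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;

public class DbUtils {

    private final Connection connection;

    // db_properties is whatever map the feature passes in, so each
    // environment can provide its own username, password, URL and driver.
    public DbUtils(Map<String, Object> props) throws Exception {
        Class.forName((String) props.get("driver"));
        this.connection = DriverManager.getConnection(
                (String) props.get("db_connection_string"),
                (String) props.get("db_username"),
                (String) props.get("db_password"));
    }

    public Connection getConnection() {
        return connection;
    }
}

Since karate-config.js can return different values based on karate.env, the two sets of DB connection values can live there and the right one can be passed into db_properties at run time.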
I have static data in a config.properties file (e.g. email and password), and I have used a data table in the feature file for the email and password. I am passing the email and password in the data table.
So I need to use the data from the config file instead of putting it directly in the data table. I have created a property-reader file and am able to use some data from the config file, like the URL, but I am unable to use the email and password.
Is there any way I can combine a data table with data from the config file?
You can take a look at the Cucumber extension for BDD2, which supports properties from configuration as well as external examples.
You can use properties in a step call as well as in external examples. You can also use properties in Examples with BDD2.
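If a plain property reader is all that is missing, here is a minimal sketch in Java (the file name config.properties and the email/password keys come from the question; the class and method names are illustrative):

// Illustrative property reader: loads config.properties from the classpath
// so step definitions can pull email/password instead of hard-coding them
// in the data table.
import java.io.InputStream;
import java.util.Properties;

public class ConfigReader {

    private static final Properties PROPS = new Properties();

    static {
        try (InputStream in = ConfigReader.class.getClassLoader()
                .getResourceAsStream("config.properties")) {
            PROPS.load(in);
        } catch (Exception e) {
            throw new RuntimeException("Could not load config.properties", e);
        }
    }

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}

A step definition could then call ConfigReader.get("email") and ConfigReader.get("password") while still reading the other columns from the data table.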
I am using the following URL to get the iteration data from Rally.
I then parse the JSON data received.
def query = URLEncoder.encode("(Project.Name contains \"1 Prime Infrastructure\")", "UTF-8")
def rallyURL = "https://us1.rallydev.com/slm/webservice/v2.0/iteration?query="+query+"&fetch=true&start=1&pagesize=200"
The issue is that it gives 0 records, but when I change the name to some other project, the data comes back.
It is probably because of the default workspace for my username and password. I want project data from a different workspace.
I have access to all these workspaces.
Can someone tell me how to set the workspace before making an API call so that I can get the iteration data?
Thanks,
You can simply include a workspace parameter in your URL to override the default (see the sketch after these examples):
&workspace=/workspace/12345
You can also always further refine your results to a specific project:
&project=/project/12345
Or to a specific hierarchy:
&projectScopeUp=true
&projectScopeDown=true
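Putting that together with the query-building code from the question, here is a sketch in Java (the workspace OID 12345 is a placeholder; substitute the OID of the workspace you actually want):

import java.net.URLEncoder;

public class RallyUrlBuilder {
    public static void main(String[] args) throws Exception {
        String query = URLEncoder.encode(
                "(Project.Name contains \"1 Prime Infrastructure\")", "UTF-8");
        // Appending workspace (and optionally project) overrides the default
        // workspace associated with the credentials.
        String rallyURL = "https://us1.rallydev.com/slm/webservice/v2.0/iteration"
                + "?query=" + query
                + "&workspace=/workspace/12345"   // placeholder workspace OID
                + "&fetch=true&start=1&pagesize=200";
        System.out.println(rallyURL);
    }
}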
I have data in Spark which I want to save to S3. The recommended method is to use the saveAsTextFile method on the RDD, and the call succeeds. I expect the data to be saved as 'parts'.
My problem is that when I go to S3 to look at my data, it has been saved in a folder named _temporary, with a subfolder 0, and then each part or task saved in its own folder.
For example,
data.saveAsTextFile("s3://kirk/data");
results in files like
s3://kirk/data/_SUCCESS
s3://kirk/data/_temporary/0/_temporary_$folder$
s3://kirk/data/_temporary/0/task_201411291454_0001_m_00000_$folder$
s3://kirk/data/_temporary/0/task_201411291454_0001_m_00000/part-00000
s3://kirk/data/_temporary/0/task_201411291454_0001_m_00001_$folder$
s3://kirk/data/_temporary/0/task_201411291454_0001_m_00001/part-00001
and so on. I would expect and have seen something like
s3://kirk/data/_SUCCESS
s3://kirk/data/part-00000
s3://kirk/data/part-00001
Is this a configuration setting, or do I need to 'commit' the save to resolve the temporary files?
I had the same problem with Spark Streaming; it was because my Spark master was set up with conf.setMaster("local") instead of conf.setMaster("local[*]").
Without the [*], Spark can't execute saveAsTextFile during the stream.
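For reference, a minimal sketch of that configuration in Java (the app name is illustrative and the rest of the streaming setup is omitted):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalMasterConfig {
    public static void main(String[] args) {
        // "local[*]" gives the driver all available cores; with plain "local"
        // a streaming receiver can occupy the only core, so output actions
        // such as saveAsTextFile never get to run.
        SparkConf conf = new SparkConf()
                .setAppName("s3-output-example")   // illustrative name
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.stop();
    }
}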
Try using coalesce() to reduce the RDD to 1 partition before you export.
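A sketch of that in Java (the sample data and local master are just to make it self-contained; the output path is the bucket from the question and needs S3 credentials configured):

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CoalesceBeforeSave {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("coalesce-before-save")
                .setMaster("local[*]");   // local master just for the sketch
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> data = sc.parallelize(Arrays.asList("a", "b", "c"));
        // coalesce(1) merges all partitions, so a single part-00000 file is
        // written next to _SUCCESS instead of one part per task.
        data.coalesce(1).saveAsTextFile("s3://kirk/data");
        sc.stop();
    }
}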
Good luck!
I know that we can normally give the parameters while running the jar file on an EC2 instance.
But how do we give inputs through code?
I am doing this because I am trying to call my Java code from a JSP, so in the Java code I want to directly pick up data from S3 and proceed. I tried it like this, but in vain:
DataExtractor.getRelevantData("s3n://syamk/revanthinput/", "999999", "94645", "20120606",
"s3n://revanthufl/gen/testoutput" + "interm");
Here I was using s3n://syamk/revanthinput/ as the input and s3n://revanthufl/gen/testoutput as the output, and these are the same strings (s3n://syamk/revanthinput/ and s3n://revanthufl/gen/testoutput) that I pass as parameters when running the jar. But doing it like this from code throws an exception,
[java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).] with root cause
Based on my usage of Flume, it would appear that you need to format your URL like s3n://AWS_ACCESS_KEY:AWS_SECRET_KEY@syamk/revanthinput/ when calling S3 from within code.
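Alternatively, following the property names mentioned in the exception itself, here is a sketch of setting the credentials in code (the key values are placeholders and the input path is the one from the question; whether this helps depends on whether DataExtractor.getRelevantData lets you pass in or reuse this Configuration):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3CredentialsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Property names are taken from the exception message above;
        // the values are placeholders for your real keys.
        conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        // With the credentials on the Configuration, the plain s3n:// path
        // from the question can be used without embedding keys in the URL.
        FileSystem fs = FileSystem.get(URI.create("s3n://syamk/revanthinput/"), conf);
        System.out.println(fs.exists(new Path("s3n://syamk/revanthinput/")));
    }
}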