Read JSON key values and store them in XSLT variables - xslt-1.0

Please help me with XSLT code that will work in DataPower for the following input.
Input:
{
  "Timestamp": "2018-12-19T10:52:21.0870605-05:00",
  "ResponseType": "Success",
  "Name": [
    {
      "Code": "1001",
      "Description": "ABC",
      "Number": "123"
    },
    {
      "Code": "1002",
      "Description": "XYZ",
      "Number": "123"
    },
    {
      "Code": "1003",
      "Description": "PQA",
      "Number": "123"
    },
    {
      "Code": "1004",
      "Description": "MNO",
      "Number": "123"
    }
  ]
}
Output (two XSLT variables):
xsl:variable "code" = 1001,1002,1003,1004
xsl:variable "name" = ABC,XYZ,PQA,MNO

XSLT will not work with this format natively (XSLT input is always XML, but output can be whatever).
There are ways to get around this.
1 - Use a GatewayScript transformation instead. You can find examples in your own DataPower "samples" folders; the files end with ".js".
2 - You can still do it in XSLT, but you need to auto-convert the JSON to XML using the input settings and a special, hidden, magic variable.
How-to:
In your object (XML Firewall or Multi-Protocol Gateway), specify the input as "JSON".
At the step in the rule where you want to use XSLT to interpret this input, do not use the variable "PIPE" or "INPUT" as input, but "__JSONASJSONX".
This will allow you to navigate the JSON file after conversion to XML.
Here is an example of the conversion, followed by an XSLT sketch.
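The input above converts to JSONx (IBM's XML representation of JSON) along these lines:

<json:object xmlns:json="http://www.ibm.com/xmlns/prod/2009/jsonx">
  <json:string name="Timestamp">2018-12-19T10:52:21.0870605-05:00</json:string>
  <json:string name="ResponseType">Success</json:string>
  <json:array name="Name">
    <json:object>
      <json:string name="Code">1001</json:string>
      <json:string name="Description">ABC</json:string>
      <json:string name="Number">123</json:string>
    </json:object>
    <!-- the remaining three objects are converted the same way -->
  </json:array>
</json:object>

And a minimal XSLT 1.0 sketch that builds the two comma-separated variables (untested on a live appliance; the variable names "code" and "name" are illustrative):

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:json="http://www.ibm.com/xmlns/prod/2009/jsonx">
  <!-- comma-separated list of all Code values -->
  <xsl:variable name="code">
    <xsl:for-each select="/json:object/json:array[@name='Name']/json:object/json:string[@name='Code']">
      <xsl:value-of select="."/>
      <xsl:if test="position() != last()">,</xsl:if>
    </xsl:for-each>
  </xsl:variable>
  <!-- comma-separated list of all Description values -->
  <xsl:variable name="name">
    <xsl:for-each select="/json:object/json:array[@name='Name']/json:object/json:string[@name='Description']">
      <xsl:value-of select="."/>
      <xsl:if test="position() != last()">,</xsl:if>
    </xsl:for-each>
  </xsl:variable>
  <!-- example use of the two variables -->
  <xsl:template match="/">
    <result code="{$code}" name="{$name}"/>
  </xsl:template>
</xsl:stylesheet>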
The rest is just normal XSLT programming on the DataPower... you can create JSON or XML output... your choice!

Related

JSON Schema Array Validation Woes Using oneOf

Hope I might find some help with this validation issue: I have a JSON array that can have multiple object types (video, image). Within that array, the objects have a rel field value. The case I'm working on requires that only one object with "rel": "primaryMedia" be allowed, either a video or an image.
Here are the object representations for the image and video case, both showing the "rel": "primaryMedia".
{
  "media": [
    {
      "caption": "Caption goes here",
      "id": "ncim87659842",
      "rel": "primaryMedia",
      "type": "image"
    },
    {
      "description": "Shaima Swileh arrived in San Francisco after fighting for 17 months to get a waiver from the U.S. government to be allowed into the country to visit her son.",
      "headline": "Yemeni mother arrives in U.S. to be with dying 2-year-old son",
      "id": "mmvo1402810947621",
      "rel": "primaryMedia",
      "type": "video"
    }
  ]
}
Here's a stripped-down version of the schema I created to validate this using oneOf (assuming it would handle my case). It doesn't, however, work as intended.
{
  "$id": "http://example.com/schema/rockcms/article.json",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "definitions": {},
  "properties": {
    "media": {
      "items": {
        "oneOf": [
          {
            "additionalProperties": false,
            "properties": {
              "caption": {
                "type": "string"
              },
              "id": {
                "type": "string"
              },
              "rel": {
                "enum": [
                  "primaryMedia"
                ],
                "type": "string"
              },
              "type": {
                "enum": [
                  "image"
                ],
                "type": "string"
              }
            },
            "required": [
              "caption",
              "id",
              "rel",
              "type"
            ],
            "type": "object"
          },
          {
            "additionalProperties": false,
            "properties": {
              "description": {
                "type": "string"
              },
              "headline": {
                "type": "string"
              },
              "id": {
                "type": "string"
              },
              "rel": {
                "enum": [
                  "primaryMedia"
                ],
                "type": "string"
              },
              "type": {
                "enum": [
                  "video"
                ],
                "type": "string"
              }
            },
            "required": [
              "description",
              "headline",
              "id",
              "rel",
              "type"
            ],
            "type": "object"
          }
        ]
      },
      "type": "array"
    }
  }
}
Using the JSON Schema validator at https://www.jsonschemavalidator.net, the schema validates when data is correctly presented, but it doesn't work when trying to catch errors.
In the case below, headline is added for a video, and it's missing id. This should fail because headline isn't allowed on video, and id is required.
{
  "media": [
    {
      "headline": "Yemeni mother arrives in U.S. to be with dying 2-year-old son",
      "rel": "primaryMedia",
      "type": "video"
    }
  ]
}
The results I get from the validator, however, aren't entirely expected. Seems it's conflating the two object schemas in its response.
In addition to this, I've separately found that the schema will allow the population of BOTH a video and image object in media, which isn't expected.
Been trying to figure out what I've done wrong, but am stumped. Would very much appreciate some feedback, if anyone has to offer. Thanks in advance!
Validator Response:
Message: JSON is valid against no schemas from 'oneOf'.
Schema path: #/properties/media/items/oneOf
Message: Property 'headline' has not been defined and the schema does not allow additional properties.
Schema path: #/properties/media/items/oneOf/0/additionalProperties
Message: Value "video" is not defined in enum.
Schema path: #/properties/media/items/oneOf/0/properties/type/enum
Message: Required properties are missing from object: description, id.
Schema path: #/properties/media/items/oneOf/1/required
Message: Required properties are missing from object: caption, id.
Schema path: #/properties/media/items/oneOf/0/required
Your question is formed of three parts, but the first two are linked, so I'll address those, although they don't have a "solution" as such.
How can I validate that only one object in an array has a specific key, and the others do not?
You cannot do this with JSON Schema.
The set of JSON Schema keywords applicable to arrays does not have a means for expressing "one of the values must match a schema"; rather, they apply either to ALL of the items in the array, or to A SPECIFIC item in the array (if items is an array as opposed to an object).
https://datatracker.ietf.org/doc/html/draft-handrews-json-schema-validation-01#section-6.4
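For illustration, a sketch of the array form of items, which validates positionally (the first element against the first schema, the second against the second):

{
  "items": [
    { "type": "string" },
    { "type": "number" }
  ]
}

Even this form cannot express "exactly one element, anywhere in the array, matches a schema".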
The validation output is not what I expect. I expect to only see the failing branch of oneOf which relates to my object type, which is defined by the type key.
The JSON Schema specification (as of draft-7) does not specify any format for returning errors, however the error structure you get is pretty "complete" in terms of what it's telling you (and is similar to how we are specifying errors should be returned for draft-8).
Consider: the validator knows nothing about your schema or your JSON instance in terms of your business logic.
When validating a JSON instance, a validator may step through all values, and test validation rules against all applicable subschemas. Looking at your errors, both of the schemas in oneOf are applicable to all items in your array, and so all are tested for validation. If one does not satisfy the condition, the others will be tested also. The validator cannot know, when using a oneOf, what your intent was.
You MAY be able to get around this issue by using an if / then combination. If your if schema is simply a const of the type, and your then schema is the full object type, you may get a cleaner error response, but I haven't tested this theory.
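Something like this (untested, as noted above; imageMedia and videoMedia are hypothetical definitions entries holding the two object schemas from the question's oneOf):

{
  "items": {
    "allOf": [
      {
        "if": { "properties": { "type": { "const": "image" } }, "required": ["type"] },
        "then": { "$ref": "#/definitions/imageMedia" }
      },
      {
        "if": { "properties": { "type": { "const": "video" } }, "required": ["type"] },
        "then": { "$ref": "#/definitions/videoMedia" }
      }
    ]
  }
}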
I want ALL items in the array to be one of the types. What's going on here?
From the spec...
items:
If "items" is a schema, validation succeeds if all elements in the array successfully validate against that schema.
oneOf:
An instance validates successfully against this keyword if it validates successfully against exactly one schema defined by this keyword's value.
What you have done is say: each item in the array should be valid according to [schemaA]. SchemaA: the object should be valid according to one of these schemas: [the schemas inside the oneOf].
I can see how this is confusing when you think "items must be one of the following", but items applies the value schema to each of the items in the array independently.
To make it do what you mean, move the oneOf above items, and then give each media type its own items schema, as sketched below.
Here's a pseudo-schema.
oneOf: [items: properties: type: const: image], [items: properties: type: const: video]
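Fleshed out as real JSON Schema, that would look something like this (a sketch; each branch would still need the full object schema from the question in place of the abbreviated type property):

{
  "properties": {
    "media": {
      "type": "array",
      "oneOf": [
        { "items": { "properties": { "type": { "const": "image" } } } },
        { "items": { "properties": { "type": { "const": "video" } } } }
      ]
    }
  }
}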
Let me know if any of this isn't clear or you have any follow-up questions.

Multistorage with avro?

I have a single file containing multiple avro records. Each record contains a unique "name". How do I load and store files such that each file represents a record that corresponds with a given name?
Here is my avro schema:
{
  "type": "record",
  "name": "XXItem",
  "namespace": "com.xxx.xxx",
  "fields": [
    {
      "name": "data",
      "type": {"type": "map", "values": ["string", "long", "int"]}
    }
  ]
}
A quick check seems to indicate that Avro is simply using JSON for data storage.
By looking for solutions for handling JSON in general, you should be able to come up with something that works for you.
This could be a starting point: Hadoop for JSON files

Error loading file stored in Google Cloud Storage to Big Query

I have been trying to create a job to load a compressed JSON file from Google Cloud Storage to a Google BigQuery table. I have read/write access in both Google Cloud Storage and Google BigQuery. Also, the uploaded file belongs to the same project as the BigQuery one.
The problem happens when I access the resource behind this URL, https://www.googleapis.com/upload/bigquery/v2/projects/NUMERIC_ID/jobs, by means of a POST request. The content of the request to that resource is as follows:
{
  "kind": "bigquery#job",
  "projectId": NUMERIC_ID,
  "configuration": {
    "load": {
      "sourceUris": ["gs://bucket_name/document.json.gz"],
      "schema": {
        "fields": [
          {
            "name": "id",
            "type": "INTEGER"
          },
          {
            "name": "date",
            "type": "TIMESTAMP"
          },
          {
            "name": "user_agent",
            "type": "STRING"
          },
          {
            "name": "queried_key",
            "type": "STRING"
          },
          {
            "name": "user_country",
            "type": "STRING"
          },
          {
            "name": "duration",
            "type": "INTEGER"
          },
          {
            "name": "target",
            "type": "STRING"
          }
        ]
      },
      "destinationTable": {
        "datasetId": "DATASET_NAME",
        "projectId": NUMERIC_ID,
        "tableId": "TABLE_ID"
      }
    }
  }
}
However, the error doesn't make any sense to me; it can be found below:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
      }
    ],
    "code": 400,
    "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
  }
}
I know the problem lies neither in the project ID nor in the access token placed in the authentication header, because I have successfully created an empty table before. Also, I specify the Content-Type header as application/json, which I don't think is the issue here, because the body content is JSON encoded.
Thanks in advance
Your HTTP request is malformed -- BigQuery doesn't recognize this as a load job at all.
You need to look into the POST request, and check the body you send.
You need to ensure that all of the above (which seems correct) is the body of the POST call. The JSON should be on a single line, and if you are manually creating the multipart message, make sure there is an extra newline between the headers and body of each MIME part.
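For reference, the general shape of a multipart upload request is sketched below (boundary_marker is a placeholder; note the blank line separating each part's headers from its body):

POST /upload/bigquery/v2/projects/NUMERIC_ID/jobs?uploadType=multipart HTTP/1.1
Content-Type: multipart/related; boundary=boundary_marker

--boundary_marker
Content-Type: application/json; charset=UTF-8

{"kind":"bigquery#job","configuration":{"load":{ ... }}}
--boundary_marker
Content-Type: application/octet-stream

(inline file bytes would go here)
--boundary_marker--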
If you are using some sort of library, make sure the body is not expected in some other form, like resource, content, or body. I've seen libraries that use these differently.
Try out the BigQuery API explorer: https://developers.google.com/bigquery/docs/reference/v2/jobs/insert and ensure your request body matches the one the explorer produces.
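Also worth noting: since your data is already in Cloud Storage (sourceUris is set), you may not need the media-upload endpoint at all; a plain jobs.insert may be simpler. A minimal sketch, assuming an OAuth token in ACCESS_TOKEN and the job configuration saved in job.json (both hypothetical names):

# jobs.insert without media upload (the load source is already in GCS);
# ACCESS_TOKEN and job.json are placeholders for your token and request body
curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d @job.json \
  "https://www.googleapis.com/bigquery/v2/projects/NUMERIC_ID/jobs"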

AWS Data pipeline CSV data from S3 to DynamoDB

I am trying to transfer CSV data from an S3 bucket to DynamoDB using AWS Data Pipeline. The following is my pipeline script; it is not working properly.
CSV file structure:
Name,Designation,Company
A,TL,C1
B,Prog,C2
DynamoDB: N_Table, with Name as the hash key.
{
  "objects": [
    {
      "id": "Default",
      "scheduleType": "cron",
      "name": "Default",
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole"
    },
    {
      "id": "DynamoDBDataNodeId635",
      "schedule": {
        "ref": "ScheduleId639"
      },
      "tableName": "N_Table",
      "name": "MyDynamoDBData",
      "type": "DynamoDBDataNode"
    },
    {
      "emrLogUri": "s3://onlycsv/error",
      "id": "EmrClusterId636",
      "schedule": {
        "ref": "ScheduleId639"
      },
      "masterInstanceType": "m1.small",
      "coreInstanceType": "m1.xlarge",
      "enableDebugging": "true",
      "installHive": "latest",
      "name": "ImportCluster",
      "coreInstanceCount": "1",
      "logUri": "s3://onlycsv/error1",
      "type": "EmrCluster"
    },
    {
      "id": "S3DataNodeId643",
      "schedule": {
        "ref": "ScheduleId639"
      },
      "directoryPath": "s3://onlycsv/data.csv",
      "name": "MyS3Data",
      "dataFormat": {
        "ref": "DataFormatId1"
      },
      "type": "S3DataNode"
    },
    {
      "id": "ScheduleId639",
      "startDateTime": "2013-08-03T00:00:00",
      "name": "ImportSchedule",
      "period": "1 Hours",
      "type": "Schedule",
      "endDateTime": "2013-08-04T00:00:00"
    },
    {
      "id": "EmrActivityId637",
      "input": {
        "ref": "S3DataNodeId643"
      },
      "schedule": {
        "ref": "ScheduleId639"
      },
      "name": "MyImportJob",
      "runsOn": {
        "ref": "EmrClusterId636"
      },
      "maximumRetries": "0",
      "myDynamoDBWriteThroughputRatio": "0.25",
      "attemptTimeout": "24 hours",
      "type": "EmrActivity",
      "output": {
        "ref": "DynamoDBDataNodeId635"
      },
      "step": "s3://elasticmapreduce/libs/script-runner/script-runner.jar,s3://elasticmapreduce/libs/hive/hive-script,--run-hive-script,--hive-versions,latest,--args,-f,s3://elasticmapreduce/libs/hive/dynamodb/importDynamoDBTableFromS3,-d,DYNAMODB_OUTPUT_TABLE=#{output.tableName},-d,S3_INPUT_BUCKET=#{input.directoryPath},-d,DYNAMODB_WRITE_PERCENT=#{myDynamoDBWriteThroughputRatio},-d,DYNAMODB_ENDPOINT=dynamodb.us-east-1.amazonaws.com"
    },
    {
      "id": "DataFormatId1",
      "name": "DefaultDataFormat1",
      "column": [
        "Name",
        "Designation",
        "Company"
      ],
      "columnSeparator": ",",
      "recordSeparator": "\n",
      "type": "Custom"
    }
  ]
}
Out of the four steps in the pipeline, two finish, but the pipeline does not execute completely.
Currently (2015-04) the default import pipeline template does not support importing CSV files.
If your CSV file is not too big (under 1 GB or so) you can create a ShellCommandActivity to convert the CSV to the DynamoDB JSON format first, and then feed that to an EmrActivity that imports the resulting JSON file into your table.
As a first step you can create a sample DynamoDB table including all the field types you need, populate it with dummy values, and then export the records using a pipeline (Export/Import button in the DynamoDB console). This will give you an idea of the format that is expected by the import pipeline. The type names are not obvious, and the import activity is very sensitive about the correct case (e.g. you should have bOOL for a boolean field).
Afterwards it should be easy to create an awk script (or any other text converter; at least with awk you can use the default AMI image for your shell activity), which you can feed to your ShellCommandActivity; a sketch follows. Don't forget to enable the "staging" flag, so your output is uploaded back to S3 for the import activity to pick it up.
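For illustration only, assuming the exported format turns out to be one JSON object per line like {"Name":{"s":"A"},...} (verify against a real export first, as suggested above), the converter could be as simple as:

# convert.awk - naive CSV-to-import-format converter for the sample table
# (no handling of quoted fields or escaping; column names are hard-coded)
BEGIN { FS = "," }
NR > 1 {   # skip the header row
  printf "{\"Name\":{\"s\":\"%s\"},\"Designation\":{\"s\":\"%s\"},\"Company\":{\"s\":\"%s\"}}\n", $1, $2, $3
}

Run it as: awk -f convert.awk data.csv > data.out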
If you are using the template data pipeline for importing data from S3 to DynamoDB, these data formats won't work. Instead, use the format described in the link below to store the input S3 data file: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-importexport-ddb-pipelinejson-verifydata2.html
This is the format of the output file generated by the template data pipeline that exports data from DynamoDB to S3.
Hope that helps.
I would recommend using the CSV data format provided by Data Pipeline instead of a custom one.
For debugging errors on the cluster, you can look up the job flow in the EMR console and check the log files for the tasks that failed.
See the link below for a solution that works (in the question section), albeit on EMR 3.x. Just change the delimiter to "columnSeparator": ",". Personally, I wouldn't use CSV unless you are certain the data is sanitized correctly.
How to upgrade Data Pipeline definition from EMR 3.x to 4.x/5.x?

How to parse JSON String?

In my app, I send a request to the server using the POST method, and the following JSON string is printed on the console in response:
[
  {
    "title": "American Heart Association",
    "link": "www.americanheart.org/"
  },
  {
    "title": "EverydayHealth.com",
    "link": "www.everydayhealth.com"
  },
  {
    "title": "GetFitSlowly.com",
    "link": "www.getfitslowly.com"
  }
]
So, how do I parse it and extract title and link from this string?
Yes, as Jhaliya posted above...
See the tutorial JSON Over HTTP On The iPhone: download the code from there, copy the JSON library into your project, and simply call:
object = [yourResponseInString JSONValue];
In object you'll get an array or a dictionary. ;)
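If you'd rather avoid the third-party library, here is a minimal sketch using Foundation's NSJSONSerialization (available since iOS 5; yourResponseInString stands in for your response string):

// Convert the response string to NSData and parse it as a JSON array
NSData *data = [yourResponseInString dataUsingEncoding:NSUTF8StringEncoding];
NSError *error = nil;
NSArray *items = [NSJSONSerialization JSONObjectWithData:data
                                                 options:0
                                                   error:&error];
// Each element is a dictionary with "title" and "link" keys
for (NSDictionary *item in items) {
    NSLog(@"title: %@, link: %@",
          [item objectForKey:@"title"],
          [item objectForKey:@"link"]);
}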