KQL Regex Extraction for a string

I am trying to pull the Description value from the parameter below.
"Condition":null,"ConditionVersion":null,"Description":"ASR 2729"}}","eventCategory":"Administrative"
I only need to extract the value ASR 2729.
I tried extracting it with several different conditions but could not work out the correct regex.

If the input is a string and isn't a valid JSON payload, you can use the parse operator.
see:
the parse operator
for example:
print input = '"Condition":null,"ConditionVersion":null,"Description":"ASR 2729"}}","eventCategory":"Administrative"'
| parse input with * '"Description":"' description '"' *
input: "Condition":null,"ConditionVersion":null,"Description":"ASR 2729"}}","eventCategory":"Administrative"
description: ASR 2729
If the input is dynamic, or is a string that is a valid JSON payload, you can use dynamic object accessors.
see:
the dynamic data type
the parse_json() function
for example:
print input = '{"Condition":null,"ConditionVersion":null,"Description":"ASR 2729","eventCategory":"Administrative"}'
| extend input = parse_json(input)
| extend description = tostring(input.Description)
input: { "Condition": null, "ConditionVersion": null, "Description": "ASR 2729", "eventCategory": "Administrative"}
description: ASR 2729
print input = dynamic({"Condition":null,"ConditionVersion":null,"Description":"ASR 2729","eventCategory":"Administrative"})
| extend description = tostring(input.Description)
input: { "Condition": null, "ConditionVersion": null, "Description": "ASR 2729", "eventCategory": "Administrative"}
description: ASR 2729
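For a quick sanity check outside Kusto, the same extraction can be sketched with a plain Python regex; the pattern mirrors the KQL parse pattern (everything between `"Description":"` and the next quote). The sample string is copied from the question.

```python
import re

# raw log fragment from the question (not valid JSON, so regex/parse is appropriate)
raw = '"Condition":null,"ConditionVersion":null,"Description":"ASR 2729"}}","eventCategory":"Administrative"'

# grab what sits between "Description":" and the next double quote
match = re.search(r'"Description":"([^"]*)"', raw)
description = match.group(1) if match else None
print(description)  # ASR 2729
```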

Karate - String concatenation of JSON value with a variable [duplicate]

The embedded expressions are not replaced when appended, prepended, or surrounded by other characters, as in the following simplified and very basic scenario:
* def jobId = '0001'
* def out =
"""
{
"jobId": "#(jobId)",
"outputMetadata": {
"fileName_OK": "#(jobId)",
"fileName_Fail_1": "some_text_#(jobId)",
"fileName_Fail_2": "#(jobId)-and-some-more-text",
"fileName_Fail_3": "prepend #(jobId) and append"
}
}
"""
* print out
Executing the scenario returns:
{
"jobId": "0001",
"outputMetadata": {
"fileName_OK": "0001",
"fileName_Fail_1": "some_text_#(jobId)",
"fileName_Fail_2": "#(jobId)-and-some-more-text",
"fileName_Fail_3": "prepend #(jobId) and append"
}
}
Is it a feature, a limitation, or a bug? Or, did I miss something?
This is as designed! You can do this:
"fileName_Fail_2": "#(jobId + '-and-some-more-text')"
Any valid JS expression can be placed inside an embedded expression, so this is not a limitation. Substitution happens only within JSON string values, or when the entire right-hand side is a quoted string; this keeps the parsing simple. Hope that helps!
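The whole-string rule above can be modeled with a toy Python sketch (this is not Karate's real implementation, just an illustration of the rule): a placeholder is evaluated only when the `#(...)` spans the entire string value, which is why putting the concatenation inside the expression works.

```python
import re

def substitute(value, env):
    # Toy model of the rule: evaluate only when the ENTIRE string is "#(expr)";
    # otherwise the value is left untouched, exactly as in the question's output.
    m = re.fullmatch(r'#\((.+)\)', value)
    return str(eval(m.group(1), {}, env)) if m else value

env = {'jobId': '0001'}
print(substitute("#(jobId)", env))                          # 0001
print(substitute("some_text_#(jobId)", env))                # some_text_#(jobId)
print(substitute("#(jobId + '-and-some-more-text')", env))  # 0001-and-some-more-text
```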

Pentaho - Generate UUID based on input fields

Is there a way to generate UUID in pentaho step using input fields?
Example:
Input: Name, Address.
Output: UUID = UUID(Name + Address)
You can add a User Defined Java Class step and use code similar to this:
String input = "Some name" + "Some address";
byte[] serialized = input.getBytes("UTF-8");
UUID yourId = UUID.nameUUIDFromBytes(serialized);
This will generate a deterministic UUID based on the given input you have.
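For comparison, Python's standard library offers the same deterministic MD5-based idea via uuid.uuid3. Note that uuid3 additionally mixes in a namespace UUID, so the result will not match Java's UUID.nameUUIDFromBytes byte-for-byte; the sketch only illustrates the determinism property.

```python
import uuid

# deterministic MD5 name-based (version 3) UUID from concatenated fields;
# NAMESPACE_OID is an arbitrary choice here, not part of the Java answer
name = "Some name" + "Some address"
first = uuid.uuid3(uuid.NAMESPACE_OID, name)
second = uuid.uuid3(uuid.NAMESPACE_OID, name)
print(first == second)  # True: same input always yields the same UUID
print(first.version)    # 3
```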
You can use the Add a checksum step of Pentaho Data Integration; it creates a unique code for a combination of fields.
UUID.nameUUIDFromBytes() generates MD5-based UUIDs. SHA-1 is preferred over MD5; you can create SHA-1-based UUIDs with UuidCreator.getNameBasedSha1().
In this example, the variables name and address are concatenated to generate a SHA1 UUID:
// Create a name based UUID
String name = "localhost";
String address = "127.0.0.1";
UUID uuid = UuidCreator.getNameBasedSha1(name + address);
In this other example, a custom name space called "network" is used along with name and address:
// Create a custom namespace called 'network'
UUID namespace = UuidCreator.getNameBasedSha1("network");
// Create a name based UUID inside the 'network'
String name = "localhost";
String address = "127.0.0.1";
UUID uuid = UuidCreator.getNameBasedSha1(namespace, name + address);
Project page: https://github.com/f4b6a3/uuid-creator
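The same namespace-plus-name pattern exists in Python's standard library as uuid.uuid5 (SHA-1, version 5), so no extra dependency is needed there; deriving the "network" namespace from NAMESPACE_DNS below is my own assumption, not something uuid-creator prescribes.

```python
import uuid

# derive a custom namespace, then mint SHA-1 name-based UUIDs inside it
namespace = uuid.uuid5(uuid.NAMESPACE_DNS, "network")  # namespace derivation is an assumption
name = "localhost"
address = "127.0.0.1"
u = uuid.uuid5(namespace, name + address)
print(u.version)  # 5 (SHA-1 name-based)
```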

Dataweave check if a value is contained within a YAML list

I want to check if a value is present in a YAML list.
I have product.yaml
intGrp:
- "A"
- "CD"
- "EF"
- "ABC"
- "CDEF"
From the Transform Message component I want to check:
if (intGrp contains payload.myvalue) this else that
Tried
%dw 2.0
var prop = Mule::p('intGrp')
output application/json
---
{
a: prop contains ("A")
}
But that doesn't solve my problem, because I want an exact string match; i.e. if I give
a: prop contains ("AB")
I should get false, as there is no product "AB".
Any help would be highly appreciated.
Thank you
The problem is that the YAML array is interpreted as a comma-separated string in the property. The contains() function works differently on strings than on arrays: on strings it searches for a matching substring, hence "AB" returns true. You can convert the string back to an array using the splitBy() DataWeave function. I'm showing both side by side to highlight the difference:
%dw 2.0
var prop = Mule::p('intGrp')
var propArray = Mule::p('intGrp') splitBy ','
output application/json
---
{
raw: prop,
array: propArray,
a: propArray contains ("A"),
ab: propArray contains ("AB")
}
The output is:
{
"raw": "A,CD,EF,ABC,CDEF",
"array": [
"A",
"CD",
"EF",
"ABC",
"CDEF"
],
"a": true,
"ab": false
}
Note that if any of the entries contains a comma it will be split too.
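The substring-versus-membership distinction is language-agnostic; the same behavior can be sketched in plain Python, where `in` on a string is a substring search and `in` on a list is an exact membership test:

```python
prop = "A,CD,EF,ABC,CDEF"     # the property as Mule sees it: one comma-separated string
prop_array = prop.split(",")  # the analogue of DataWeave's splitBy ','

print("AB" in prop)        # True  -- substring search ("ABC" contains "AB")
print("AB" in prop_array)  # False -- exact membership in the list
print("A" in prop_array)   # True
```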

How to validate that a variable can be either NULL or a String

I'm starting to use Karate for testing. I need to validate one JSON response.
JSON schema design:
response {
id* Integer, not null
Name* String, can be null
}
Now I need to verify id and Name against the constraints below:
id should be an integer and should not be null.
Name can be either a string or null.
What expression can we use in Karate?
Thanks in advance
* def jsonValidate = { id: '#integer', Name: '#present' }
If I use #present here, it means Name can be null or can have a value of any data type, but I need to check that Name can only be either a string or null.
Read the docs, and try this: https://github.com/intuit/karate#optional-fields
* def jsonValidate = { id: '#number', name: '##string' }
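The semantics of the '##string' marker (value may be null or a string, nothing else) can be modeled with a tiny Python check; this is only an illustration of the rule, not Karate's implementation:

```python
def matches_optional_string(value):
    # Toy model of Karate's '##string': null (None) or a string passes,
    # any other type fails validation.
    return value is None or isinstance(value, str)

print(matches_optional_string(None))   # True
print(matches_optional_string("Bob"))  # True
print(matches_optional_string(42))     # False
```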

What's a good way to parse the incoming URL in NiFi?

When using HandleHttpRequest, I want to set up a structure to operate on different objects through the same handler:
/api/foo/add/1/2...
How do I easily parse that out into
object = foo
operation = add
args = [1, 2, ...]
?
Why not use the Expression Language function getDelimitedField?
From the Expression Language documentation:
getDelimitedField
Description: Parses the Subject as a delimited line of text and returns just a single field from that delimited text.
Subject Type: String
Arguments:
index : The index of the field to return. A value of 1 will return the first field, a value of 2 will return the second field, and so on.
delimiter : Optional argument that provides the character to use as a field separator. If not specified, a comma will be used. This value must be exactly 1 character.
quoteChar : Optional argument that provides the character that can be used to quote values so that the delimiter can be used within a single field. If not specified, a double-quote (") will be used. This value must be exactly 1 character.
escapeChar : Optional argument that provides the character that can be used to escape the Quote Character or the Delimiter within a field. If not specified, a backslash (\) is used. This value must be exactly 1 character.
stripChars : Optional argument that specifies whether or not quote characters and escape characters should be stripped. For example, if we have a field value "1, 2, 3" and this value is true, we will get the value 1, 2, 3, but if this value is false, we will get the value "1, 2, 3" with the quotes. The default value is false. This value must be either true or false.
This is just an example you can try by sticking an ExecuteScript processor on NiFi's canvas:
from urlparse import parse_qs, urlparse

def parse(uri2parse):
    o = urlparse(uri2parse)
    d = parse_qs(o.query)
    return (o.path[1:], d['year'][0], d['month'][0], d['day'][0])

# get the flow file from the incoming queue
flowfile = session.get()
if flowfile is not None:
    source_URI = flowfile.getAttribute('source_URI')
    destination_URI = flowfile.getAttribute('destination_URI')
    current_time = flowfile.getAttribute('current_time')
    # expand each URI into smaller pieces
    src_table, src_year, src_month, src_day = parse(source_URI)
    dst_table, dst_year, dst_month, dst_day = parse(destination_URI)
    flowfile = session.putAllAttributes(flowfile, {'src_table': src_table, 'src_year': src_year, 'src_month': src_month, 'src_day': src_day})
    flowfile = session.putAllAttributes(flowfile, {'dst_table': dst_table, 'dst_year': dst_year, 'dst_month': dst_month, 'dst_day': dst_day})
    session.transfer(flowfile, REL_SUCCESS)
else:
    flowfile = session.create()
    session.transfer(flowfile, REL_FAILURE)
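The script above parses query strings, but the question's URLs are path-based (/api/foo/add/1/2). A minimal sketch of that split, in plain Python that also runs under the Jython used by ExecuteScript, could look like this (the path and the object/operation/args layout are taken from the question):

```python
# split a HandleHttpRequest path like /api/foo/add/1/2 into its pieces
path = "/api/foo/add/1/2"
parts = path.strip("/").split("/")  # ['api', 'foo', 'add', '1', '2']

obj = parts[1]        # object the handler should operate on
operation = parts[2]  # operation to perform
args = parts[3:]      # remaining segments are the arguments

print(obj)        # foo
print(operation)  # add
print(args)       # ['1', '2']
```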