What format should a REST API use for ULIDs: Base32 or RFC 4122?

The ULID Base32 string 01GMEX2SA207FNV8E19QM8EJ4M represents the same 128-bit value as the RFC 4122 form 01851dd1-6542-01df-5da1-c14de8874894.
PostgreSQL appears to store the ULIDs in RFC 4122 format.
Both GET /assets/01GMEX2SA207FNV8E19QM8EJ4M/resource_acl and GET /assets/01851dd1-6542-01df-5da1-c14de8874894/resource_acl return the same successful response:
{
  "id": "/assets/01GMEX2SA207FNV8E19QM8EJ4M/resource_acl",
  "asset": "/assets/01GMEX2SA207FNV8E19QM8EJ4M",
  "members": [
    "/resource_acl_members/acl=01GMEX2SA207FNV8E19QM8EJ4M;user=01GMEX2SA1FEZHVQJYTW48P7FE"
  ]
}
GET /resource_acl_members/acl=01851dd1-6542-01df-5da1-c14de8874894;user=01851dd1-6541-7bbf-1dde-5ed7088b1dee returns the desired successful response.
{
  "id": "/resource_acl_members/acl=01GMEX2SA207FNV8E19QM8EJ4M;user=01GMEX2SA1FEZHVQJYTW48P7FE",
  "acl": "/assets/01GMEX2SA207FNV8E19QM8EJ4M/resource_acl",
  "user": "/tenant_users/01GMEX2SA1FEZHVQJYTW48P7FE",
  "roles": [
    "ROLE_MANAGE_PROJECT"
  ]
}
GET /resource_acl_members/acl=01GMEX2SA207FNV8E19QM8EJ4M;user=01GMEX2SA1FEZHVQJYTW48P7FE, however, returns a PostgreSQL error: Invalid text representation: 7 ERROR: invalid input syntax for type uuid: "01GMEX2SA207FNV8E19QM8EJ4M" CONTEXT: unnamed portal parameter $1 = '...'
Should I be consistent and use solely Base32 or solely RFC 4122? If so, which format?
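For reference, the two strings really are the same 128 bits, just encoded differently. Here is a minimal Python sketch of the round trip between the two encodings; the alphabet is ULID's Crockford Base32, and the example values are the ones from the question:

import uuid

# Crockford Base32 alphabet used by ULID (no I, L, O, U)
ULID_ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid_to_uuid(ulid: str) -> uuid.UUID:
    # Decode 26 Base32 characters into one 128-bit integer.
    value = 0
    for char in ulid.upper():
        value = value * 32 + ULID_ALPHABET.index(char)
    return uuid.UUID(int=value)

def uuid_to_ulid(u: uuid.UUID) -> str:
    # Encode the 128-bit integer back into 26 Base32 characters.
    value = u.int
    chars = []
    for _ in range(26):
        chars.append(ULID_ALPHABET[value % 32])
        value //= 32
    return "".join(reversed(chars))

print(ulid_to_uuid("01GMEX2SA207FNV8E19QM8EJ4M"))
# 01851dd1-6542-01df-5da1-c14de8874894
print(uuid_to_ulid(uuid.UUID("01851dd1-6542-01df-5da1-c14de8874894")))
# 01GMEX2SA207FNV8E19QM8EJ4M

Whichever format the API standardizes on, a conversion like this at the request boundary would let PostgreSQL keep storing plain uuid values.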

Related

How to load a JSONL file into BigQuery when the file has mixed data fields as columns

In my workflow, after extracting the data from an API, the JSON has the following structure:
[
  {
    "fields": [
      {
        "meta": {
          "app_type": "ios"
        },
        "name": "app_id",
        "value": 100
      },
      {
        "meta": {},
        "name": "country",
        "value": "AE"
      },
      {
        "meta": {
          "name": "Top"
        },
        "name": "position",
        "value": 1
      }
    ],
    "metrics": {
      "click": 1,
      "price": 1,
      "count": 1
    }
  }
]
It is then stored as .jsonl and put on GCS. However, when I load it into BigQuery for further extraction, the automatic schema inference returns the following error:
Error while reading data, error message: JSON parsing error in row starting at position 0: Could not convert value to string. Field: value; Value: 100
I want to convert it into the following structure:
app_type | app_id | country | position | click | price | count
ios      | 100    | AE      | Top      | 1     | 1     | 1
Is there a way to define a manual schema in BigQuery to achieve this result? Or do I have to preprocess the JSONL file before loading it into BigQuery?
One of the limitations in loading JSON data from GCS to BigQuery is that it does not support maps or dictionaries in JSON.
An invalid example would be:
"metrics": {
"click": 1,
"price": 1,
"count": 1
}
Your jsonl file should be something like this:
{"app_type":"ios","app_id":"100","country":"AE","position":"Top","click":"1","price":"1","count":"1"}
I already tested it, and it works fine.
So wherever you convert the JSON files to JSONL files and store them on GCS, you will have to do some preprocessing.
You probably have two options:
precreate the target table with an app_id field of type INTEGER
preprocess the JSON file and enclose 100 in quotes, like "100"
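As a sketch of that preprocessing, here is one way to flatten the structure above in Python. The mapping is an assumption inferred from the desired table (app_type comes from a meta key, and position takes "Top" from meta["name"] rather than the numeric value), and the file names are placeholders:

import json

def flatten_record(record):
    flat = {}
    for field in record["fields"]:
        meta = dict(field.get("meta", {}))
        # The desired table shows "Top" (meta["name"]) as the value for
        # position, so prefer meta["name"] over the numeric value.
        flat[field["name"]] = meta.pop("name", field["value"])
        # Hoist any remaining meta keys (e.g. app_type) as extra columns.
        flat.update(meta)
    flat.update(record.get("metrics", {}))
    # Stringify everything so schema inference sees consistent types.
    return {key: str(value) for key, value in flat.items()}

with open("input.json") as src, open("output.jsonl", "w") as dst:
    for record in json.load(src):
        dst.write(json.dumps(flatten_record(record)) + "\n")

Running this on the sample input produces exactly the JSONL line shown above.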

How to extract a field from a JSON object with QueryRecord

I have been struggling with this problem for a long time. I need to create a new JSON flowfile using QueryRecord, taking the array (field ref) from the input JSON field refs and skipping the wrapping object, as shown in the example below:
Input JSON flowfile
{
  "name": "name1",
  "desc": "full1",
  "refs": {
    "ref": [
      {
        "source": "source1",
        "url": "url1"
      },
      {
        "source": "source2",
        "url": "url2"
      }
    ]
  }
}
QueryRecord configuration
JsonTreeReader set up with Infer Schema, and a JsonRecordSetWriter:
select name, description, (array[rpath(refs, '//ref[*]')]) as sources from flowfile
Output JSON (need)
{
  "name": "name1",
  "desc": "full1",
  "references": [
    {
      "source": "source1",
      "url": "url1"
    },
    {
      "source": "source2",
      "url": "url2"
    }
  ]
}
But I got this error:
QueryRecord Failed to write MapRecord[{references=[Ljava.lang.Object;@27fd935f, description=full1, name=name1}] with schema ["name" : "STRING", "description" : "STRING", "references" : "ARRAY[STRING]"] as a JSON Object due to java.lang.ClassCastException: null
Try the following approach; in your case it should work:
1) Read your JSON flowfile fully (I imitated it with a GenerateFlowFile processor containing your example).
2) Add an EvaluateJsonPath processor, which puts the 2 header fields (name, desc) into attributes.
3) Add a SplitJson processor, which splits your JSON by the refs/ref groups (split on "$.refs.ref").
4) Add a ReplaceText processor, which prepends your header fields (name, desc) to the split records (search for the regex [{] and replace it with {"name":"${json.name}","desc":"${json.desc}",).
5) It's done.
Hope this helps.
Solution: use JoltTransformJSON to transform the JSON with a Jolt specification (see the Jolt spec documentation for details).
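For the input and output above, a minimal Jolt shift specification along these lines should do the transform (a sketch, not tested against your exact NiFi version):

[
  {
    "operation": "shift",
    "spec": {
      "name": "name",
      "desc": "desc",
      "refs": {
        "ref": "references"
      }
    }
  }
]

The shift passes name and desc through unchanged and moves the whole refs.ref array to the top-level references key, which sidesteps the ARRAY[STRING] cast that QueryRecord was attempting.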

Problems matching a long value in a Rest Assured JSON body

I have the following response:
[
  {
    "id": 53,
    "fileUri": "abc",
    "filename": "abc.jpg",
    "fileSizeBytes": 578466,
    "createdDate": "2018-10-15",
    "updatedDate": "2018-10-15"
  },
  {
    "id": 54,
    "fileUri": "xyz",
    "filename": "xyz.pdf",
    "fileSizeBytes": 88170994,
    "createdDate": "2018-10-15",
    "updatedDate": "2018-10-15"
  }
]
and I am trying to match the id values against my file objects in JUnit like so:
RestAssured.given()
    .expect()
    .statusCode(HttpStatus.SC_OK)
    .when()
    .get(String.format("%s/%s/file", URL_BASE, id))
    .then()
    .log().all()
    .body("", hasSize(2))
    .body("id", hasItems(file1.getId(), file2.getId()));
But when the match occurs it tries to match an int to a long. Instead I get this output:
java.lang.AssertionError: 1 expectation failed.
JSON path id doesn't match.
Expected: (a collection containing <53L> and a collection containing <54L>)
Actual: [53, 54]
How does one tell Rest Assured that the value is indeed a long even though it might be short enough to fit in an int? I can cast the file's id to an int and it works, but that seems sloppy.
The problem is that when converting from JSON to a Java type, the int type is selected, because the values fit in an int.
One solution is to compare int values.
Instead of
.body("id", hasItems(file1.getId(), file2.getId()));
use
.body("id", hasItems(Long.valueOf(file1.getId()).intValue(), Long.valueOf(file2.getId()).intValue()));

Error when running a query in Backand: not a valid constant

Hi, when working in Backand I try to run the following query:
{
  "object": "dr_persons",
  "q": {
    "person_type": "4"
  },
  "fields": ["first_name", "last_name"]
}
person_type is a field in my MySQL db with "4" as a value.
When I run it I get this error:
Errors in Query
Please fix the following errors in the query:
not a valid constant for field person_type of object dr_persons
The only thing I can see is that when I sync my db, Backand makes the field a "float", which I can't change. Can anyone give me some direction on this?
The error message is due to the constant "4" being a string. According to the field type, float, it should be a number. Hence your query should be:
{
  "object": "dr_persons",
  "q": {
    "person_type": 4
  },
  "fields": ["first_name", "last_name"]
}

Keen-io: I can't delete a specific event using an extraction query filter

Using this extraction query (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxx/queries/extraction?api_key=xxxx&event_collection=dispatched-orders&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
returns:
{
  "result": [
    {
      "mobile": "13185716746",
      "keen": {
        "timestamp": "2015-02-10T07:10:07.816Z",
        "created_at": "2015-02-10T07:10:08.725Z",
        "id": "54d9aed03bc6964a7d311f9e"
      },
      "data": {
        "itemId": 2130,
        "num": 1
      },
      "features": {
        "communityId": 2000,
        "dispatcherId": 39,
        "tradeId": 8581
      }
    }
  ]
}
But if I use the same filters in my delete query URL (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
it returns:
{
  "properties": {
    "data.num": "num",
    "keen.created_at": "datetime",
    "mobile": "string",
    "keen.id": "string",
    "features.communityId": "num",
    "features.dispatcherId": "num",
    "keen.timestamp": "datetime",
    "features.tradeId": "num",
    "data.itemId": "num"
  }
}
Please help!
It looks like you are issuing a GET request for the delete command. If you perform a GET on a collection, you get back the schema that Keen has inferred for that collection.
You'll want to issue the above as a DELETE request. Here's the cURL command to do that:
curl -X DELETE 'https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800'
(The URL is single-quoted so the double quotes inside the filters JSON survive the shell.)
Note that you'll probably need to URL encode that JSON as you mentioned in your above post!
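If cURL is awkward, here is a minimal Python sketch of the same DELETE call; the project ID and API key are the placeholders from the question, and the requests library handles the URL encoding of the filters JSON:

import json
import requests

# Placeholders from the question: substitute your real project ID and key.
url = "https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders"
filters = [{"property_name": "features.tradeId",
            "operator": "eq",
            "property_value": 8581}]

# requests URL-encodes each query parameter, including the serialized filters.
response = requests.delete(url, params={
    "api_key": "xxxxxx",
    "filters": json.dumps(filters),
    "timezone": 28800,
})
print(response.status_code, response.text)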