Transcoding with dynamic arguments and validator problems - mule

I'm receiving the following csv
0000595182;3
0000504290;
0000710842;3
0000754608;1
0000489193;6
0000793814;4
0000580308;5
0000399045;
0000068910;
0000210986;2
0000908352;5
0000130721;
0000097876;2
0000185893;
0000218924;
0000456669;4
0000671520;4
0000796097;1
0000709024;3
0000203990;
0000205763;8
0000218211;4
0000409543;
0000994506;5
I'm trying to transcode the one-digit number on the right (2nd column) to the CodeOT in the following JSON table.
[
{
"CodeSap": null,
"CodeOT": 1
},
{
"CodeSap": 0,
"CodeOT": 1
},
{
"CodeSap": 1,
"CodeOT": 2
},
{
"CodeSap": 2,
"CodeOT": 2
},
{
"CodeSap": 3,
"CodeOT": 2
},
{
"CodeSap": 4,
"CodeOT": 2
},
{
"CodeSap": 5,
"CodeOT": 2
},
{
"CodeSap": 6,
"CodeOT": 3
},
{
"CodeSap": 7,
"CodeOT": 3
},
{
"CodeSap": 8,
"CodeOT": 3
},
{
"CodeSap": 9,
"CodeOT": 3
}
]
So I need to match the digit on the right of the CSV to CodeSap and then replace it with the matching CodeOT to get this output:
{
"listeMaj": [
{
"refSig": "0000595182",
"risqueFinancier": "2"
},
{
"refSig": "0000710842",
"risqueFinancier": "2"
}
]
}
To get there I tried the code below, but I seem to be getting an unusual error, as if I can't pass dynamic arguments.
Here's my code:
%dw 2.0
input payload application/csv header=false,separator=";"
output application/json
var pldFix = payload map ((value, key) ->
    {refSig: value.column_0,
    indicateur: value.column_1}
)
fun Find(val) = (trTable.CodeSap find (if (val == "" or val == null) null else val as Number))
var test = pldFix map ((value, key) ->
    if (value.refSig == null or sizeOf(value.refSig) == 10)
        okLines: {
            refSig: value.refSig,
            risqueFinancierBF: (trTable[Find(value.indicateur)[0]].CodeOT)
        }
    else
        {
            koLines: {
                refSig: value.refSig,
                risqueFinancierBF: if (value.indicateur == "") null else value.indicateur as Number
            }
        }
)
var koLines = flatten([test.*koLines])
var okLines = flatten([test.*okLines])
---
okLines
The error is about dynamic arguments passed to a function, and I don't understand why:
You called the function 'Dynamic Selector' with these arguments:
1: Array ([{CodeSap: null,Libelle: "",CodeOT: 1}, {CodeSap: 0,Libelle: "Elle a demandé ...)
2: Null (null)
But it expects one of these combinations:
(Array, Range)
(Array, Number)
(Array, Name)
(Array, String)
(Binary, Range)
(Binary, Number)
(Date, Name)
(DateTime, Name)
(LocalDateTime, Name)
(LocalTime, Name)
(Object, Name)
(Object, Number)
(Object, String)
(Period, Name)
(String, Range)
(String, Number)
(Time, Name)
14| risqueFinancierBF: (trTable[Find(value.indicateur)[0]].CodeOT)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trace:
at main::main (line: 14, column: 41)
One constraint is that I also have to get the table data from a database, so it might change over time. And I have to check that refSig is exactly 10 characters. I haven't been successful, as I keep getting "Cannot coerce Null (null) to Number" errors too. Any advice or help?

The code that you provided works with the inputs that you have shared. Most likely you are getting an indicateur that does not exist as any CodeSap; because of that, your Find function returns an empty array [], your expression trTable[Find(value.indicateur)[0]] evaluates to trTable[null], and that is what produces the error.
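If you want to keep the lookup-by-index approach, a simple fix is to guard against that empty result before using it as a selector. Here is a minimal sketch (the helper names are hypothetical, and it assumes the table is available as trTable, as in your script):
%dw 2.0
input payload application/csv header=false, separator=";"
output application/json
// index of the matching CodeSap, or [] when there is no match
fun findIdx(ind) = trTable.CodeSap find (if (ind == "" or ind == null) null else ind as Number)
// CodeOT for an indicateur, or null when the indicateur has no matching CodeSap
fun toCodeOT(ind) = if (isEmpty(findIdx(ind))) null else trTable[findIdx(ind)[0]].CodeOT
---
payload map {
    refSig: $.column_0,
    risqueFinancierBF: toCodeOT($.column_1)
}
Rows whose indicateur is not in the table then come back with a null risqueFinancierBF instead of failing.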
As an additional suggestion, you can use join and avoid writing the extra functions. join combines the two arrays based on the criteria that you provide and returns a joined array that you can easily map into your output.
For details on the output structure you can refer to the official doc. The output can be confusing at first, but it works the same way a join works in SQL. You can use leftJoin or outerJoin if you want to also include the records that do not have a corresponding CodeSap (see the sketch after the example below).
%dw 2.0
import join from dw::core::Arrays
input payload application/csv header=false, separator=";"
output application/json
---
join(
    payload filter (sizeOf($.column_0) == 10),
    vars.trTable,
    (record) -> record.column_1,
    (code) -> code.CodeSap default "" // the CSV never contains null, so a null CodeSap is treated as a blank string to match the corresponding CSV record
)
map {
    refSig: $.l.column_0,
    risqueFinancierBF: $.r.CodeOT
}
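If you also need to keep the CSV rows whose indicateur has no corresponding CodeSap, a leftJoin variant might look like the sketch below (leftJoin is also in dw::core::Arrays, in DataWeave 2.2 and later; rows without a match simply have no r field, so the selector yields null):
%dw 2.0
import leftJoin from dw::core::Arrays
input payload application/csv header=false, separator=";"
output application/json
---
leftJoin(
    payload filter (sizeOf($.column_0) == 10),
    vars.trTable,
    (record) -> record.column_1,
    (code) -> code.CodeSap default ""
)
map {
    refSig: $.l.column_0,
    risqueFinancierBF: $.r.CodeOT // null when there was no matching CodeSap
}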

Related

How to remove object by value from a JSONB type array?

I want to remove a JSONB object by its unique 'id' value from a JSONB array. I am no expert at writing SQL code, but I managed to write the concatenate function.
For example: remove the object below from the array.
{
"id": "ad26e2be-19fd-4862-8f84-f2f9c87b582e",
"title": "Wikipedia",
"links": [
"https://en.wikipedia.org/1",
"https://en.wikipedia.org/2"
]
},
Schema:
CREATE TABLE users (
url text not null,
user_id SERIAL PRIMARY KEY,
name VARCHAR,
list_of_links jsonb default '[]'
);
list_of_links format:
[
{
"id": "ad26e2be-19fd-4862-8f84-f2f9c87b582e",
"title": "Wikipedia",
"links": [
"https://en.wikipedia.org/1",
"https://en.wikipedia.org/2"
]
},
{
"id": "451ac172-b93e-4158-8e53-8e9031cfbe72",
"title": "Russian Wikipedia",
"links": [
"https://ru.wikipedia.org/wiki/",
"https://ru.wikipedia.org/wiki/"
]
},
{
"id": "818b99c8-479b-4846-ac15-4b2832ec63b5",
"title": "German Wikipedia",
"links": [
"https://de.wikipedia.org/any",
"https://de.wikipedia.org/any"
]
},
...
]
The concatenate function:
update users set list_of_links=(
list_of_links || (select *
from jsonb_array_elements(list_of_links)
where value->>'id'='ad26e2be-19fd-4862-8f84-f2f9c87b582e'
)
)
where url='test'
returning *
;
Your JSON data is structured such that you have to unpack it, operate on the unpacked data, and then repack it again:
SELECT u.url, u.user_id, u.name,
jsonb_agg(
jsonb_build_object('id', l.id, 'title', l.title, 'links', l.links)
) as list_of_links
FROM users u
CROSS JOIN LATERAL jsonb_to_recordset(u.list_of_links) AS l(id uuid, title text, links jsonb)
WHERE l.id != 'ad26e2be-19fd-4862-8f84-f2f9c87b582e'::uuid
GROUP BY 1, 2, 3
The function jsonb_to_recordset is a set-returning function so you have to use it as a row source, joined to its originating table with the LATERAL clause so that the list_of_links column is available to the function to be unpacked. Then you can delete the records you are not interested in using the WHERE clause, and finally repack the structure by building the record fields into a jsonb structure and then aggregating the individual records back into an array.
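Since your original statement was an UPDATE, the same unpack/filter/repack idea can also be applied in place. A sketch, using the id from your example (jsonb_agg over the elements that survive the filter, with coalesce so an emptied list becomes [] rather than NULL):
UPDATE users
SET list_of_links = (
    SELECT coalesce(jsonb_agg(elem), '[]'::jsonb)
    FROM jsonb_array_elements(list_of_links) AS elem
    WHERE elem->>'id' <> 'ad26e2be-19fd-4862-8f84-f2f9c87b582e'
)
WHERE url = 'test';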
I wrote this in JS, but that does not matter to how it works. Essentially, it gets all the items from the array, then finds the matching id, which returns an index. Using that index, I apply the "-" operator, which takes the index and removes that element from the array.
//req.body is this JSON object
//{"url":"test", "id": "ad26e2be-19fd-4862-8f84-f2f9c87b582e"}
var { url, id } = req.body;
pgPool.query(
`
select list_of_links
from users
where url=$1;
`,
[url],
(error, result) => {
//block code executing further if error is true
if (error) {
res.json({ status: "failed" });
return;
}
if (result) {
// this function just returns the index of the array element where the id matches from request's id
// 0, 1, 2, 3, 4, 5
var index_of_the_item = result.rows[0].list_of_links
.map(({ id: db_id }, index) =>
db_id === id ? index : false
)
.filter((x) => x !== false)[0];
//remove the array element by it's index
pgPool.query(
`
update users
set list_of_links=(
list_of_links - $1::int
)
where url=$2
;
`,
[index_of_the_item, url], (e, r) => {...}
);
}
}
);

'match each' one element in the array [duplicate]

This question already has an answer here:
Using match each contains for json array items assertion
(1 answer)
Closed 1 year ago.
My question is about selective asserting with 'match each'.
Below is a sample JSON body:
* def data =
"""
{
"companies": [
{
"companyDetails": {
"name": "companyName",
"location": {
"address": "companyAddress",
"street": "companyStreet"
}
}
},
{
"companyDetails": {
"name": "companyName",
"location": {
"address": "companyAddress",
"street": "companyStreet"
}
}
}
]
}
"""
How can we assert whether each 'companyDetails' object in the response contains a certain element, e.g. name only?
* match each data.companies contains { companyDetails: { name: '#notnull' } }
When I use the above step, it returns the error below:
$[0] | actual does not contain expected | all key-values did not match, expected has un-matched keys
So, is there any way we can assert only one field inside this object? For example, assert that each companyDetails object contains name, but ignore the other elements such as location? Thanks
This will work:
* match each data.companies == { companyDetails: { name: '#notnull', location: '#notnull' } }
This will also work:
* match data.companies contains deep { companyDetails: { name: '#notnull' } }
Sometimes it is better to transform the JSON before a match:
* match each data..companyDetails contains { name: '#notnull' }

How to create a function that will add a value to an array in an object

I want to create a function that will add a grade for a specific student and subject.
This is how my document looks:
"_id" : ObjectId("5b454b545b4545b"),
"name" : "Bob",
"last_name" : "Bob",
"nr_album" : "222",
"grades" ; {
"IT" : [
3,
3,
5,
4
]
}
This is what I came up with
function addGrade(nr_album, grades, value) {
    db.studenci.update(
        { nr_album: nr_album },
        { $push: { [grades]: value } }
    );
}
addGrade("222","grades.IT",100)
It's working properly, but what I want to achieve is to pass only "IT" in the parameters instead of "grades.IT".
You can use template strings in ES2015.
Pass the arguments like this:
addGrade("222","IT",100)
Take the "IT" parameter and build the required key string dynamically:
function addGrade(nr_album, grades, value) {
    const string = `grades.${grades}`;
    db.studenci.update({
        nr_album: nr_album
    }, {
        $push: { [string]: value }
    });
}
}

How to convert this SQL query to MongoDB

Considering this query written in SQL Server, how would I efficiently convert it to MongoDB:
select * from thetable where column1 = column2 * 2
You can use the aggregation below.
You project a new field comp that compares the two values ($cmp returns 0 when they are equal), followed by a $match to keep the docs where comp is 0, and a $project with exclusion to drop the comp field.
db.collection.aggregate([
{ $addFields: {"comp": {$cmp: ["$column1", {$multiply: [ 2, "$column2" ]} ]}}},
{ $match: {"comp":0}},
{ $project:{"comp":0}}
])
If you want to run your query in the mongo shell, try the code below:
db.thetable.find({}).forEach(function(tt){
    var ttcol2 = tt.column2 * 2;
    var compareCurrent = db.thetable.findOne({ _id: tt._id, column1: ttcol2 });
    if (compareCurrent) {
        printjson(compareCurrent);
    }
});
I liked the answer posted by @Veeram, but it would also be possible to achieve this using $project and $match pipeline operations.
This is just for understanding the flow.
Assume we have the two documents below stored in a math collection.
Mongo Documents
{
"_id" : ObjectId("58a055b52f67a312c3993553"),
"num1" : 2,
"num2" : 4
}
{
"_id" : ObjectId("58a055be2f67a312c3993555"),
"num1" : 2,
"num2" : 6
}
Now we need to find the documents where num2 equals 2 times num1 (in our case the document with _id ObjectId("58a055b52f67a312c3993553") matches this condition).
Query:
db.math.aggregate([
{
"$project": {
"num2": {
"$multiply": ["$num2",1]
},
"total": {
"$multiply": ["$num1",2]
},
"doc": "$$ROOT"
}
},
{
"$project": {
"areEqual": {"$eq": ["$num2","$total"]
},
doc: 1
}
},
{
"$match": {
"areEqual": true
}
},
{
"$project": {
"_id": 1,
"num1": "$doc.num1",
"num2": "$doc.num2"
}
}
])
Pipeline operation steps:
The 1st pipeline operation, $project, calculates the total.
The 2nd pipeline operation, $project, checks whether the total matches num2. This is needed because we cannot compare num2 with total directly in the $match pipeline operation.
The 3rd pipeline operation, $match, keeps the documents where areEqual is true.
The 4th pipeline operation, $project, is just used for projecting the fields.
Note: in the 1st pipeline operation I multiplied num2 by 1 because num1 and num2 are stored as integers and $multiply returns a double. If I did not use $multiply for num2, it would try to match 4 against 4.0, which would not match the document.
There is certainly no need for multiple pipeline stages when a single $redact stage will suffice, as it neatly combines the functionality of the $project and $match steps. Consider running the following pipeline for an efficient query:
db.collection.aggregate([
{
"$redact": {
"$cond": [
{
"$eq": [
"$column1",
{ "$multiply": ["$column2", 2] }
]
},
"$$KEEP",
"$$PRUNE"
]
}
}
])
In the above, $redact returns all documents that match the condition using $$KEEP and discards those that don't match using the $$PRUNE system variable.
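As a side note not covered in the answers above: on MongoDB 3.6 and later the same comparison can also be expressed directly in a plain find() using the $expr operator, which avoids the aggregation pipeline entirely (a sketch):
db.thetable.find({
    $expr: { $eq: ["$column1", { $multiply: ["$column2", 2] }] }
})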

Parsing JSON polymorphic records with Elm

This is probably a beginner's question. I have a JSON data format that holds polymorphic records and I need to parse it. The records are vertices or edges of a graph:
{
"records": [{
"id": 0,
"object": {
"id": "vertex1"
}
}, {
"id": 1,
"object": {
"id": "vertex2"
}
}, {
"id": 2,
"object": {
"from": "vertex1",
"to": "vertex2"
}
}]
}
As you can see they all have an id, but vertices and edges have different record structures.
I tried to find something on parsing such structures; the only thing I found was "Handling records with shared substructure in Elm", but I could not translate that answer to Elm 0.17 (a simple renaming of data to type did not help).
In general there are 2 challenges:
defining a polymorphic record
decode JSON dynamically into a vertex or an edge
This is how far I got:
type alias RecordBase =
{ id : Int
}
type Records = List (Record RecordBase)
type Record o =
VertexRecord o
| EdgeRecord o
type alias VertexRecord o =
{ o | object : {
id : Int
}
}
type alias EdgeRecord o =
{ o | object : {
from : Int
, to : Int
}
}
but the compiler complains with
Naming multiple top-level values VertexRecord makes things
ambiguous.
Apparently the union type already defines the VertexRecord and EdgeRecord names.
I really don't know how to proceed from here. All suggestions are most welcome.
Since you have the label id in multiple places and of multiple types, I think it makes things a little cleaner to have type aliases and field names that indicate each id's purpose.
Edit 2016-12-15: Updated to elm-0.18
type alias RecordID = Int
type alias VertexID = String
type alias VertexContents =
{ vertexID : VertexID }
type alias EdgeContents =
{ from : VertexID
, to : VertexID
}
Your Record type doesn't actually need to include the field name object anywhere. You can simply use a union type. Here is an example; you could shape this a few different ways, but the important part to understand is fitting both kinds of data into a single Record type.
type Record
= Vertex RecordID VertexContents
| Edge RecordID EdgeContents
You could define a function that returns the recordID given either a vertex or edge like so:
getRecordID : Record -> RecordID
getRecordID r =
case r of
Vertex recordID _ -> recordID
Edge recordID _ -> recordID
Now, onto decoding. Using Json.Decode.andThen, you can decode the common record ID field, then pass the JSON off to another decoder to get the rest of the contents:
recordDecoder : Json.Decoder Record
recordDecoder =
    Json.field "id" Json.int
        |> Json.andThen
            (\recordID ->
                Json.oneOf [ vertexDecoder recordID, edgeDecoder recordID ]
            )

vertexDecoder : RecordID -> Json.Decoder Record
vertexDecoder recordID =
    Json.map2 Vertex
        (Json.succeed recordID)
        (Json.map VertexContents (Json.at ["object", "id"] Json.string))

edgeDecoder : RecordID -> Json.Decoder Record
edgeDecoder recordID =
    Json.map2 Edge
        (Json.succeed recordID)
        (Json.map2 EdgeContents
            (Json.at ["object", "from"] Json.string)
            (Json.at ["object", "to"] Json.string))

recordListDecoder : Json.Decoder (List Record)
recordListDecoder =
    Json.field "records" (Json.list recordDecoder)
Putting it all together, you can decode your example like this:
import Html exposing (text)
import Json.Decode as Json
main =
text <| toString <| Json.decodeString recordListDecoder testData
testData =
"""
{
"records": [{
"id": 0,
"object": {
"id": "vertex1"
}
}, {
"id": 1,
"object": {
"id": "vertex2"
}
}, {
"id": 2,
"object": {
"from": "vertex1",
"to": "vertex2"
}
}]
}
"""