Mule DWL 2.0 to convert a number to words

Input: 123
Output: one hundred and twenty three
How do I convert a number to words in DWL?

This answer was inspired by a Java solution to the same problem. It is not fully generic and only handles numbers up to 3 digits, but you can definitely extend the approach.
Script
%dw 2.0
output json
import * from dw::Runtime
var units = [
"",
" one",
" two",
" three",
" four",
" five",
" six",
" seven",
" eight",
" nine"
]
var twoDigits = [
" ten",
" eleven",
" twelve",
" thirteen",
" fourteen",
" fifteen",
" sixteen",
" seventeen",
" eighteen",
" nineteen"
]
var tenMultiples = [
"",
"",
" twenty",
" thirty",
" forty",
" fifty",
" sixty",
" seventy",
" eighty",
" ninety"
]
var placeValues = [
" ",
" thousand",
" million",
" billion",
" trillion"
]
var inputNum = 413
// converts a number below 1000 to words
fun conv(numInp) = do {
    var lastTwo = numInp mod 100
    // words for the last two digits
    var calc = if (lastTwo < 10) units[lastTwo]
        else if (lastTwo < 20) twoDigits[lastTwo mod 10]
        else tenMultiples[floor(lastTwo / 10)] ++ units[lastTwo mod 10]
    ---
    // prepend the hundreds word for three-digit numbers
    if (numInp >= 100) units[floor(numInp / 100)] ++ " hundred" ++ calc else calc
}
// entry point: only handles numbers up to 999
fun wtonum(inp) =
    if ((inp mod 1000) != 0) { a: conv(inp mod 1000) } else ""
---
wtonum(inputNum)
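For inputNum = 413 the script should produce something like {"a": " four hundred thirteen"}; the leading space comes from the lookup tables, so trim it if needed.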

Related

Building a pandas condition query using a loop

I have an object filters which gives me the conditions to be applied to a DataFrame, as shown below:
"filters": [
    {
        "dimension": "dimension1",
        "operator": "IN",
        "value": ["value1", "value2", "value3"],
        "conjunction": None
    },
    {
        "dimension": "dimension2",
        "operator": "NOT IN",
        "value": ["value1", "value2", "value3"],
        "conjunction": "OR"
    },
    {
        "dimension": "dimension3",
        "operator": ">=",
        "value": ["value1", "value2", "value3"],
        "conjunction": None
    },
    {
        "dimension": "dimension4",
        "operator": "==",
        "value": ["value1", "value2", "value3"],
        "conjunction": "AND"
    },
    {
        "dimension": "dimension5",
        "operator": "<=",
        "value": ["value1", "value2", "value3"],
        "conjunction": None
    },
    {
        "dimension": "dimension6",
        "operator": ">",
        "value": ["value1", "value2", "value3"],
        "conjunction": "OR"
    }
]
Here is the logic I used to build the SQL query:
for eachFilter in filters:
    conditionString = ""
    dimension = eachFilter["dimension"]
    operator = eachFilter["operator"]
    value = eachFilter["value"]
    conjunction = eachFilter["conjunction"]
    if len(eachFilter["value"]) == 1:
        value = value[0]
        if operator not in ("IN", "NOT IN"):
            conditionString += f' {dimension} {operator} {value} {conjunction}'
        else:
            conditionString += f' {dimension} {operator} ({value}) {conjunction}'
    else:
        value = ", ".join(value)
        if operator not in ("IN", "NOT IN"):
            conditionString += f' {dimension} {operator} {value} {conjunction}'
        else:
            conditionString += f' {dimension} {operator} ({value}) {conjunction}'
But when it comes to pandas I can't use such queries, so I wanted to know if there's a good way to loop over these filter conditions and apply them based on what is given in filters. Note that these are the only conditions I will be operating with.
If the conjunction is None, it should be treated as "AND".
I used the eval function to build up nested condition strings for pandas filtering and then evaluated them all at the end, as shown below:
evalString = ""      # accumulated expression passed to eval() at the end
filterCheck = 0
for eachFilter in filtersArray:
    valueString = ""
    values = eachFilter[self.queryBuilderMap["FILTERS_MAP"]["VALUE"]]
    dimension = eachFilter[self.queryBuilderMap["FILTERS_MAP"]["DIMENSION"]]
    conjunction = self.defineConjunction(eachFilter[self.queryBuilderMap["FILTERS_MAP"]["CONJUNCTION"]])
    if filterCheck == len(filtersArray) - 1:
        conjunction = ""
    if (eachFilter[self.queryBuilderMap["FILTERS_MAP"]["OPERATOR"]]).lower() == "in":
        for eachValue in values:
            valueString += f"(df['{dimension}'] == {eachValue}) {conjunction} "
        evalString += valueString
    elif (eachFilter[self.queryBuilderMap["FILTERS_MAP"]["OPERATOR"]]).lower() == "not in":
        for eachValue in values:
            valueString += f"(df['{dimension}'] != {eachValue}) {conjunction} "
        evalString += valueString
    else:
        for eachValue in values:
            valueString += f"(df['{dimension}'] {eachFilter[self.queryBuilderMap['FILTERS_MAP']['OPERATOR']]} {eachValue}) {conjunction} "
        evalString += valueString
    filterCheck += 1
    print(valueString)
#print(evalString)
df = eval(f'df.loc[{evalString}]')
#print(df.keys())
return df
Here FILTERS_MAP is the following dictionary of key-value pairs:
"FILTERS_MAP": {
    "DIMENSION": "dimension",
    "OPERATOR": "operator",
    "VALUE": "value",
    "CONJUNCTION": "conjunction",
    "WRAPPER": "wrapper"
}
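As an aside, the same filters can usually be applied without eval by building a boolean mask directly. The following is only a rough sketch, not the code above: it assumes each filter carries the dimension/operator/value/conjunction keys shown in the question, that a filter's conjunction joins it to the next condition (with None meaning AND), and that the comparison operators take a single value.
import pandas as pd

def apply_filters(df, filters):
    # Build one condition per filter, then chain them with the stated conjunctions.
    mask = None
    pending = "AND"   # conjunction carried over from the previous filter
    for f in filters:
        dim, op, values = f["dimension"], f["operator"].upper(), f["value"]
        if op == "IN":
            cond = df[dim].isin(values)
        elif op == "NOT IN":
            cond = ~df[dim].isin(values)
        else:
            # comparison operators are assumed to apply to a single value
            value = values[0] if isinstance(values, list) else values
            compare = {"==": df[dim].eq, "!=": df[dim].ne,
                       ">": df[dim].gt, ">=": df[dim].ge,
                       "<": df[dim].lt, "<=": df[dim].le}[op]
            cond = compare(value)
        mask = cond if mask is None else (mask | cond if pending == "OR" else mask & cond)
        pending = (f.get("conjunction") or "AND").upper()
    return df if mask is None else df.loc[mask]
You would then call it as filtered = apply_filters(df, filters).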

How to search Host Groups in Zabbix API

I want to list all Host Groups that match some search criteria.
I've tried that:
data = '{"jsonrpc": "2.0",
    "method": "hostgroup.get",
    "params": {
        "output": "extend",
        "search": {
            "name": [
                "' + group_name + '"
            ]
        },
    },
    "id": ' + str(msg_id) + ',
    "auth": "' + auth + '"
}'
But that is not correct syntax.
I also tried this:
data = '{"jsonrpc": "2.0",
    "method": "hostgroup.get",
    "params": {
        "output": "extend",
        "filter": {
            "name": [
                "' + group_name + '"
            ]
        },
    },
    "id": ' + str(msg_id) + ',
    "auth": "' + auth + '"
}'
This one works, but it only matches the group name exactly, so it always returns 1 or 0 matches.
I tried adding the "options": "searchWildcardsEnabled" option to this last query, but it didn't make a difference in the result (i.e. it didn't produce multiple groups as output).
I've found the correct way. I'll post it here in case anyone else needs it later.
data = '{"jsonrpc": "2.0",
    "method": "hostgroup.get",
    "params": {
        "output": "extend",
        "search": {
            "name": [
                "' + group_name + '"
            ]
        }
    },
    "id": ' + str(msg_id) + ',
    "auth": "' + auth + '"
}'
You don't need to specify the wildcard option; it's enabled by default. Also, you don't need to put the % inside your query.
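As a side note, building the payload as a dict and serializing it with json.dumps avoids hand-managing the quotes and commas entirely. This is just a sketch reusing the same group_name, msg_id and auth variables:
import json

payload = {
    "jsonrpc": "2.0",
    "method": "hostgroup.get",
    "params": {
        "output": "extend",
        # "search" does a substring match (as described above),
        # whereas "filter" requires an exact name match
        "search": {"name": [group_name]}
    },
    "id": msg_id,
    "auth": auth
}
data = json.dumps(payload)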

Transform a PDI response into a specific structure

I'm new to PDI. I'm retrieving information from an API and I need to transform the response into a specific structure, and I'm not clear how to do it with concatenation or with another transformation step.
This is the response I have to work with:
[
  {
    "457": {
      "1": {
        "value": "4.1",
        "timestamp": 1534159593
      },
      "2": {
        "value": "52.2",
        "timestamp": 1534159593
      },
      "3": {
        "value": "23.0",
        "timestamp": 1534159593
      },
      "4": {
        "value": "250.0",
        "timestamp": 1534159593
      }
    }
  }
]
and I need it to end up looking like this, so I can send it by POST to another API:
{
  "id": "457",
  "type": "greenhouse",
  "1": {
    "value": 4.1,
    "type": "Float",
    "timestamp": 1534159593
  },
  "2": {
    "value": 52.2,
    "type": "Integer",
    "timestamp": 1534159593
  },
  "3": {
    "value": 23.0,
    "type": "Integer",
    "timestamp": 1534159593
  },
  "4": {
    "value": 250.0,
    "type": "Integer",
    "timestamp": 1534159593
  }
}
Thanks for the help.
Edit 01
Hello again,
I'm doing it the way you suggested and I have run into a problem.
This is my code:
// Script here
var data = data2;
var tests = data;
var tests3 = {"457": {"2": {"value": "54.0", "timestamp": 1534246741}, "3": {"value": "22.2", "timestamp": 1534246741}, "4": {"value": "260.0", "timestamp": 1534246741}, "21": {"value": "890.0", "timestamp": 1534246741}, "1": {"value": "4.13", "timestamp": 1534246741}, "17": {"value": "194.04687499999997", "timestamp": 1534246741}, "5": {"value": "35.417", "timestamp": 1534246741}, "6": {"value": "26.299999999999997", "timestamp": 1534246741}, "8": {"value": "4.7", "timestamp": 1534246741}, "15": {"value": "0.78", "timestamp": 1534246741}, "10": {"value": "24.94", "timestamp": 1534246741}, "22": {"value": "0.0", "timestamp": 1534246741}, "23": {"value": "0.0", "timestamp": 1534246741}, "24": {"value": "0.0", "timestamp": 1534246741}, "26": {"value": "0.0", "timestamp": 1534246741}, "653": {"value": "0.0", "timestamp": 1534246741}, "657": {"value": "-98.0", "timestamp": 1518420299}, "43": {"value": "11.892947103200001", "timestamp": 1534246741}, "42": {"value": "403.61749999999995", "timestamp": 1534246741}}};
var key = Object.keys(data)[0];
var finalobj = {};
for (var e in data[key]) {
    finalobj[e] = {
        type: "float"
        , value: parseFloat(data[key][e].value)
        , metadata: {
            timestamp: {
                value: parseInt(data[key][e].timestamp)
                , type: "Integer"
            }
        }
    };
}
var JsonOutput = JSON.stringify(finalobj);
The variable data2 is the one that holds my JSON, and it really contains the same information as tests3. The code works if I swap in tests3 in place of data, which I don't understand, since data has the same value and should work; with data the JsonOutput result is {}, but with tests3 it works correctly.
It looks like the problem is at the point where the variable is retrieved, but when I print data and data2 they show the same information as tests3. I don't understand what is happening.
Can you help me?
Right now there isn't a built-in step for writing nested JSON in Pentaho; you have to use JavaScript to achieve it. There is a really great post here that I'm using as a guide to build my own process.

BeanShell JSON body for POST method in JMeter

Can anyone help me escape the quotation marks with a backslash, like \", in this JSON body:
{
  "firstName": "teo",
  "lastName": "leo",
  "companyName": "abc",
  "restaurantId": "54d34443e4b0382b3208703d",
  "phones": [
    {
      "label": "Mobile",
      "value": "123456789",
      "countryCode": "+123",
      "isPrimary": true
    }
  ],
  "addresses": "haha"
}
I've tried this one, but the BeanShell PreProcessor won't accept it:
String formvalues = "{\"firstName\": \"teo\",\"lastName\": \"leo\",\"companyName\": \"abc\",\"restaurantId\": \"54d34443e4b0382b3208703d\",\"phones\": [{\"label\":\"Mobile\",\"value\": \"123456789\",\"countryCode\": \"+123\",\"isPrimary\": true}],\"addresses\": \"haha\"}"
Thank you so much!
If you want to keep formatting:
String formvalues = "{\n" +
" \"firstName\": \"teo\",\n" +
" \"lastName\": \"leo\",\n" +
" \"companyName\": \"abc\",\n" +
" \"restaurantId\": \"54d34443e4b0382b3208703d\",\n" +
" \"phones\": [\n" +
" {\n" +
" \"label\": \"Mobile\",\n" +
" \"value\": \"123456789\",\n" +
" \"countryCode\": \"+123\",\n" +
" \"isPrimary\": true\n" +
" }\n" +
" ],\n" +
" \"addresses\": \"haha\"\n" +
"}";
If you want a single line (mind that the Content-Length will be different):
String formvalues = "{\"firstName\":\"teo\",\"lastName\":\"leo\",\"companyName\":\"abc\",\"restaurantId\":\"54d34443e4b0382b3208703d\",\"phones\":[{\"label\":\"Mobile\",\"value\":\"123456789\",\"countryCode\":\"+123\",\"isPrimary\":true}],\"addresses\":\"haha\"}";
Full code to generate the body and add it as a parameter:
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.http.util.HTTPArgument;
String formvalues = "{\n" +
" \"firstName\": \"teo\",\n" +
" \"lastName\": \"leo\",\n" +
" \"companyName\": \"abc\",\n" +
" \"restaurantId\": \"54d34443e4b0382b3208703d\",\n" +
" \"phones\": [\n" +
" {\n" +
" \"label\": \"Mobile\",\n" +
" \"value\": \"123456789\",\n" +
" \"countryCode\": \"+123\",\n" +
" \"isPrimary\": true\n" +
" }\n" +
" ],\n" +
" \"addresses\": \"haha\"\n" +
"}";
Arguments arguments = new Arguments();
arguments.addArgument(new HTTPArgument("",formvalues));
sampler.setArguments(arguments);
JavaDoc on relevant classes:
Arguments
HTTPArgument
HTTPSamplerProxy (shorthand for sampler)
See How to Use BeanShell: JMeter's Favorite Built-in Component guide for more information on Beanshell scripting in JMeter.

Replace double quotes with single quotes using awk

BEGIN {
    q = "\""
    FS = OFS = q ", " q
}
{
    split($1, arr, ": " q)
    for (i in arr) {
        if (arr[i] == "name") {
            gsub(q, "'", arr[i+1])
            # print arr[1] ": " q arr[2], $2, $3
        }
    }
}
I have a JSON file with some data like this:
{"last_modified": {"type": "/type/datetime", "value": "2008-04-01T03:28:50.625462"}, "type": {"key": "/type/author"}, "name": "National Research Council. Committee on the Scientific and Technologic Base of Puerto Rico"s Economy.", "key": "/authors/OL2108538A", "revision": 1}
The name's value contains a double quote. I only want to replace that double quote with a single quote, not all of the double quotes. Please tell me how to fix it.
awk '{for(i=1;i<=NF;i++) if($i~/name/){ gsub("\042","\047",$(i+1)) }}1' file
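For reference on the one-liner: \042 and \047 are the octal escapes for the double-quote and single-quote characters, the loop walks the whitespace-separated fields of each line, gsub rewrites the quotes in the field that follows any field containing name, and the trailing 1 prints every (possibly modified) line.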