I have a PG table with a field of type char(10)[].
I need to update a record in the table with values from a Mule flow.
So, I did something like this:
flowVars.test = ['aaa', 'bbb', 'ccc'];
Then, I'm trying to submit an update statement like this:
update tab1 set fld1=#[flowVars.test]
It's failing with the error:
Cannot cast an instance of java.util.ArrayList to type Types.ARRAY
My understanding is that a SQL Array should be used in this scenario, but I can't figure out how to get an instance of such an array in a flow or how to work with it in MEL.
Can someone please advise?
Thank you,
Many sources suggest using Connection#createArrayOf(), but I don't know how to use it with the Database connector.
However, as a workaround I did the following:
1. Convert the ArrayList to a String formatted as: {value1, value2, ...}
2. Change the Database query type from Parameterized to Dynamic.
3. Update the SQL query to: update tab1 set fld1 = '#[flowVars.test]'. The surrounding single quotes are required for this query type.
Finally, with the following configuration I can update a field of type character(10)[]:
<expression-transformer expression="#[flowVars.test = ['aaa', 'bbb', 'ccc'].toString().replace('[', '{').replace(']', '}')]" doc:name="Expression"/>
<db:update config-ref="Postgre_Database_Configuration" doc:name="Database">
<db:dynamic-query><![CDATA[update tab1 set fld1 = '#[flowVars.test]']]></db:dynamic-query>
</db:update>
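For reference, once MEL evaluates the expression, the dynamic query above reaches PostgreSQL as a plain statement containing an array literal. A minimal sketch, assuming the table and values from the question (the ARRAY constructor form is just an equivalent way to write the same value):
update tab1 set fld1 = '{aaa, bbb, ccc}';
-- equivalent form with an explicit ARRAY constructor and cast
update tab1 set fld1 = ARRAY['aaa', 'bbb', 'ccc']::char(10)[];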
OK, I've found an answer in the MuleSoft docs.
Starting with version 3.6, the DB connector supports custom types and allows you to define mappings between SQL arrays/structs and custom user classes.
It's documented here.
How to set an input parameter in a select query in Anypoint Studio
I'm new to Anypoint Studio and I'm trying to get some data from a database, but I don't know how to set input parameters in Anypoint.
Can anyone provide some guides or a cookbook for this?
For example:
my query is "select * from ESPOSSG.xf_salesimport where xf_txdate=:xf_txdate "
I need to set the input parameter in Anypoint,
then I can use "http://localhost:10256/sales/txdate='20190601'" to get the data.
Your input parameters should be a DataWeave expression; instead you have used the kind of expression you would use to print in a logger.
Try this instead:
{
xf_txdate:message.inboundProperties.'http.query.params'.txdate
}
I have a huge dataset where each record has JSON data similar to the below:
{"project":{"id":"2625","createDate":1542597000000,"rank":0,"highlight":false,"isDisplay":true,"isNewProject":true,"propertyId":2231,"districts":{"id":41,"name":"abc","region":"123"}}}
When I try to generate key-value pairs using select kvgen(t.project) from dfs.filePath t in Apache Drill, I get the error below:
DrillRuntimeException: Mappify/kvgen does not support heterogeneous value types. All values in the input map must be of the same type. The field [createDate] has a differing type [minor_type: BIGINT mode: OPTIONAL ]
It looks like Drill expects all values to be of the same type. But how do I do that? Is there any function available in Drill?
My Drill version is 1.9.0.
Try setting session option store.json.all_text_mode to true.
https://drill.apache.org/docs/json-data-model/
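For example, you can enable it for the current session from the Drill shell before running the kvgen query:
ALTER SESSION SET `store.json.all_text_mode` = true;
With this option on, all JSON scalar values are read as text, so kvgen no longer sees heterogeneous value types in the map.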
I figured it out. The KVGEN function doesn't work if the JSON is nested.
To make it work, there are two approaches that can be followed:
1. Move the nested JSON out to the top level:
{"project":{"id":"2625","createDate":1542597000000,"rank":0,"highlight":false,"isDisplay":true,"isNewProject":true,"propertyId":2231},"districts":{"id":41,"name":"abc","region":"123"}}
and then apply kvgen as: select kvgen(t.project) from dfs.filePath t
2. Apply kvgen on the inner JSON first and then use a nested query, as below:
select tbl2.col1.id, tbl2.col2.value from (select tbl1.project as col1, flatten(kvgen(tbl1.project.districts)) col2 from dfs.filePath tbl1) tbl2
And as rightly mentioned by @arina-yelchiyeva, the session option store.json.all_text_mode needs to be set to true.
I have to make some changes to an existing Mule flow with little knowledge of it, and although I've spent some days reading the online documentation and possible solutions, I cannot figure out why this query is failing, especially since I have other dynamic queries in my flow with #[xxx] parameters. The query is as follows:
select times from user_request where
ip_address=SUBSTR(#message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS],2,INSTR(#[message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS], ':')-2)
and request_date=CAST(CURRENT_DATE as varchar2(8))
And the error I got is:
Message : Index: 0
(java.lang.IndexOutOfBoundsException). Payload :
{fecha_solicitud=2016-06-22, moneda=USD, client_id=RIVERA,
user_ip=127.0.0.1, request_times=0} Payload Type :
java.util.LinkedHashMap Element :
/OANDAFlow/processors/3 # oanda:oanda.xml:126 Element XML :
select times from user_requestwhere
ip_address=SUBSTR(#[message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS],2,INSTR(#[message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS],
':')-2)and request_date=CAST(CURRENT_DATE as
varchar2(8))>
Note: the date is cast to varchar because the column request_date is a varchar.
I've tried this query directly in Oracle SQL Developer, replacing #[message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS]
with an example like /127.0.0.1:55406, and it worked fine, so why is it failing through Mule?
In the first expression, #message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS], you are missing a [.
One of the fields in your query expects a string value; try putting single quotes around it and it should work.
Try this:
select times from user_request where
ip_address=SUBSTR('#[message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS]',2,INSTR('#[message.inboundProperties.MULE_REMOTE_CLIENT_ADDRESS]', ':')-2)
and request_date=CAST(CURRENT_DATE as varchar2(8))
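To illustrate, after MEL substitution with the sample address from the question (/127.0.0.1:55406), the database receives a statement like the one below; the quotes ensure the address arrives as a SQL string literal:
select times from user_request where
ip_address=SUBSTR('/127.0.0.1:55406',2,INSTR('/127.0.0.1:55406', ':')-2)
and request_date=CAST(CURRENT_DATE as varchar2(8))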
I am working on a SQL query. I have a pre-created SOQL query and I am looking for a way to convert it to a SQL query. The query is:
SELECT CronJobDetail.Name, Id, CreatedDate, State
FROM CronTrigger
WHERE CronjobDetail.JobType = 'bla bla'
AND CronJobDetail.Name LIKE '%bla bla2%'
But it does not run in the terminal when I try to create a monitoring script in Ruby. The error that I get is:
(Got exception: INVALID_FIELD: No such relation 'CronJobDetail' on
entity 'CronTrigger'. If you are attempting to use a custom field, be
sure to append the '__c' after the custom field name. Please reference
your WSDL or the describe call for the appropriate names. in
/Users/gakdugan/.rvm/gems/ruby-1.9.3-p547/gems/restforce-2.2.0/lib/restforce/middleware/raise_error.rb:18:in
`on_complete'
Do you have any idea how I can fix it and make it run as SQL?
You are trying to access a relation without adding it to your FROM clause. Alternatively, if that is a custom field name, then do what the error message suggests (append __c to the custom field name).
You probably want to do something like this:
SELECT CronJobDetail.Name, CronTrigger.Id, CronTrigger.CreatedDate, CronTrigger.State
FROM CronTrigger
INNER JOIN CronJobDetail ON CronJobDetail.id = CronTrigger.foreign_id -- the join condition you have to work out yourself
WHERE CronjobDetail.JobType = 'bla bla'
AND CronJobDetail.Name LIKE '%bla bla2%'
I have a JDBC connector where I'm calling a stored procedure. It returns the response below, but I'm not sure how to extract the value from the result set.
Please find the response from the DB:
{updateCount1=4, resultSet1=[{XML_F5RYI-11YTR=<Customers><Customer1>John</Customer1><Customer2>Ganesh</Customer2></Customers>}], resultSet2=[{SequenceNumber=94}], updateCount2=1, updateCount3=4}
I have used the expression #[message.payload.get(0)]. It returns the result set below, but not exactly the value required. I need to get the XML value of XML_F5RYI-11YTR.
{XML_F5RYI-11YTR=<Customers><Customer1>John</Customer1><Customer2>Ganesh</Customer2></Customers>}
I also tried the following:
#[message.payload.get(0).XML_F5RYI-11YTR] but I get an error and am not able to extract the XML.
Could you please suggest how I can extract the XML from resultSet1?
In most cases, the way you did it should work. I think what is happening here is that the hyphen in the column name is interpreted by the MEL parser as a subtraction. So you could change yours to this syntax, and it should work:
#[message.payload.get(0)['XML_F5RYI-11YTR']]
Also you can omit "message", as payload is resolvable directly:
#[payload.get(0)['XML_F5RYI-11YTR']]
You could use array bracket syntax to access the first row in the result set, instead of the get method:
#[payload[0]['XML_F5RYI-11YTR']]
Finally, you might want to do something for each row returned from the database. If you use a collection-splitter or a for-each, your payload will be the map that represents the row, instead of a list of maps representing the whole result set:
<collection-splitter />
<logger message="#[payload['XML_F5RYI-11YTR']]" />
EDIT
To access the result set in the payload shown in the question, you would need to access it like so:
#[payload.resultSet1[0]['XML_F5RYI-11YTR']]
The database connector gives you a list of maps. The map keys will be the names of the columns. Therefore, if you want to get updateCount1, you can use something like this:
#[payload.get('updateCount1')]
Rule of thumb: the database connector gives you a list of maps (I'm not sure what format yours carries). If you want the XML_F5RYI.. value, then do the below:
take the first row with #[message.payload.get(0)] and convert it to JSON or a map, from which #[message.payload.get("XML_F5RYI-11YTR")] gives the value.