Splitter group attribute inject variable parameter - apache

Can you inject a value from the config params file into the splitter group attribute? If so, what is the proper way of doing it? Thanks!
I've tried:
<split streaming="true" >
<tokenize token="\n" group="{{noOfLines}}" />
<log message="Split Group Body: ${body}"/>
<to uri="bean:extractHeader" />
<to id="acceptedFileType" ref="pConsumer" />
</split>
<split streaming="true" >
<tokenize token="\n" group={{noOfLines}} />
<log message="Split Group Body: ${body}"/>
<to uri="bean:extractHeader" />
<to id="acceptedFileType" ref="pConsumer" />
</split>
What am I doing wrong?
The second attempt (without quotes) does not even parse:
ERROR: 'Open quote is expected for attribute "group" associated with an element type "tokenize".'
I also tried a simple expression inside the attribute:
<tokenize token="\n" group="<simple>${properties:noOfLines:500}</simple>" />
ERROR: 'The value of attribute "group" associated with an element type "tokenize" must not contain the '<' character.'
and the properties function directly:
<tokenize token="\n" group="${properties:noOfLines:500}" />
Caused by: org.xml.sax.SAXParseException: cvc-datatype-valid.1.2.1: '${properties:noOfLines:500}' is not a valid value for 'integer'.

My quest for info and answers didn't go deep enough. I found this was already encountered and answered by Claus Ibsen. Please see below if you run into this need.
Validation error with integer property (camel)
http://camel.apache.org/using-propertyplaceholder.html
under section "Using Property Placeholders for Any Kind of Attribute in the XML DSL"
Here is what I did, following these instructions.
Added the property placeholder prefix namespace:
xmlns:prop="http://camel.apache.org/schema/placeholder"
Then modified the tokenize attribute:
<tokenize token="\n" prop:group="noOfLines" />
I'm using the property placeholder
<cm:property-placeholder persistent-id="com.digital.passthru.core" />
This works like a charm. Thank you, Claus.

This is what worked for me:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint
    xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:prop="http://camel.apache.org/schema/placeholder"
    xmlns:camel="http://camel.apache.org/schema/blueprint"
    xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://camel.apache.org/schema/blueprint
        http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

    <cm:property-placeholder persistent-id="com.ge.digital.passthru.core" />

    <bean id="deadLetterErrorHandler" class="org.apache.camel.builder.DeadLetterChannelBuilder">
        <property name="deadLetterUri" value="${deadLetterQueue}"/>
        <property name="redeliveryPolicy" ref="redeliveryPolicyConfig"/>
        <property name="useOriginalMessage" value="true" />
    </bean>

    <bean id="redeliveryPolicyConfig" class="org.apache.camel.processor.RedeliveryPolicy">
        <property name="maximumRedeliveries" value="3"/>
        <property name="redeliveryDelay" value="5000" />
    </bean>
    ...
    <camelContext
        id="com.ge.digital.passthru.coreCamelContext"
        trace="true"
        xmlns="http://camel.apache.org/schema/blueprint"
        allowUseOriginalMessage="false"
        streamCache="true"
        errorHandlerRef="deadLetterErrorHandler" >
        ...
        <route
            id="core.predix.accept.file.type.route"
            autoStartup="true" >
            <from uri="{{fileEntranceEndpoint}}" />
            <convertBodyTo type="java.lang.String" />
            <split streaming="true" strategyRef="csvAggregationStrategy">
                <tokenize token="\n" />
                <process ref="toCsvFormat" /> <!-- passthru only we do not allow embedded commas in numeric data -->
            </split>
            <log message="CSV body: ${body}" loggingLevel="INFO"/>
            <choice>
                <when>
                    <simple>${header.CamelFileName} regex '^.*\.(csv|CSV|txt|gpg)$'</simple>
                    <log message="${file:name} accepted for processing..." />
                    <choice>
                        <when>
                            <simple>${header.CamelFileName} regex '^.*\.(CSV|txt|gpg)$'</simple>
                            <setHeader headerName="CamelFileName">
                                <!-- <simple>${file:name.noext}.csv</simple> --> <!-- file:name.noext.single -->
                                <simple>${file:name.noext.single}.csv</simple>
                            </setHeader>
                            <log message="${file:name} changed file name." />
                        </when>
                    </choice>
                    <split streaming="true" >
                        <tokenize token="\n" prop:group="noOfLines" />
                        <log message="Split Group Body: ${body}"/>
                        <to uri="bean:extractHeader" />
                        <to id="acceptedFileType" ref="predixConsumer" />
                    </split>
                    <to uri="bean:extractHeader?method=cleanHeader"/>
                </when>
                <otherwise>
                    <log message="${file:name} is an unknown file type, sending to unhandled repo." loggingLevel="INFO" />
                    <to uri="{{unhandledArchive}}" />
                </otherwise>
            </choice>
        </route>
...
and noOfLines is a property from the placeholder's configuration file.
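For reference, the persistent-id above resolves to a plain properties file under the container's etc/ directory, so the property is defined along these lines (the value here is only an example):
# etc/com.ge.digital.passthru.core.cfg -- ConfigAdmin PID matching the persistent-id above
noOfLines = 500
# deadLetterQueue, fileEntranceEndpoint, unhandledArchive, ... live here as well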
Now, there is an order to all this, as I also found out while doing this.
Please see the link below for the Camel ordering of components in the XML DSL:
Camel DataFormat Jackson using blueprint XML DSL throws context exception
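For example, definition elements such as dataFormats or onException must be declared before the route elements inside camelContext; a rough sketch of the required order (the json data format here is just an illustration):
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <!-- definitions first: dataFormats, onException, ... -->
    <dataFormats>
        <json id="jack" library="Jackson" />
    </dataFormats>
    <onException>
        <exception>java.lang.Exception</exception>
        <handled><constant>true</constant></handled>
        <to uri="{{unhandledArchive}}" />
    </onException>
    <!-- routes last -->
    <route>
        ...
    </route>
</camelContext>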

WSO2 EI: Can I use a mediator to request another API and pass its response to the request body?

In my case, I want to add a dynamic value ("Bearer" + {access-token}) to the header mediator.
So before the header mediator, I want to invoke a get-token API and extract the {access-token} element from its response. How can I do that? Thank you so much.
You can achieve such requirements with mediation sequences. You can refer to this blog for more detailed instructions on how to develop a sequence for your requirement. The blog is written for the API Manager product, but nevertheless you can follow the same approach to get it done in EI.
A sample mediation sequence would be as follows:
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="oauth2-sequence" xmlns="http://ws.apache.org/ns/synapse">
    <!-- token generation to the oauth server's token endpoint -->
    <!-- add the base64 encoded credentials -->
    <property name="client-authorization-header" scope="default" type="STRING" value="MDZsZ3BTMnh0enRhOXBsaXZGUzliMnk4aEZFYTpmdE4yWTdLcnE2SWRsenBmZ1RuTVU1bkxjUFFh" />
    <property name="request-body" expression="json-eval($)" scope="default" type="STRING" />
    <property name="resource" expression="get-property('axis2', 'REST_URL_POSTFIX')" scope="default" type="STRING" />
    <!-- creating a request payload for client_credentials -->
    <payloadFactory media-type="xml">
        <format>
            <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
                <soapenv:Body>
                    <root xmlns="">
                        <grant_type>client_credentials</grant_type>
                    </root>
                </soapenv:Body>
            </soapenv:Envelope>
        </format>
        <args></args>
    </payloadFactory>
    <!-- set related headers to call the token endpoint -->
    <header name="Authorization" expression="fn:concat('Basic ', get-property('client-authorization-header'))" scope="transport" />
    <header name="Content-Type" value="application/x-www-form-urlencoded" scope="transport" />
    <property name="messageType" value="application/x-www-form-urlencoded" scope="axis2" type="STRING" />
    <property name="REST_URL_POSTFIX" value="" scope="axis2" type="STRING" />
    <!-- change the token endpoint -->
    <call blocking="true">
        <endpoint>
            <http method="POST" uri-template="https://localhost:9443/oauth2/token" />
        </endpoint>
    </call>
    <!-- append the acquired access token and make the call to the backend service -->
    <property name="bearer-token" expression="json-eval($.access_token)" scope="default" type="STRING" />
    <property name="REST_URL_POSTFIX" expression="get-property('resource')" scope="axis2" type="STRING" />
    <header name="Authorization" expression="fn:concat('Bearer ', get-property('bearer-token'))" scope="transport" />
    <payloadFactory media-type="json">
        <format>$1</format>
        <args>
            <arg evaluator="xml" expression="get-property('request-body')" />
        </args>
    </payloadFactory>
    <!-- perform a send or call to complete the execution of the backend service call in EI -->
</sequence>
Hope this helps you to get started with the implementation.
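For completeness, the sequence would then be engaged from the API or proxy service that fronts your backend; a rough sketch (the API name, context, and backend URL are placeholders):
<api xmlns="http://ws.apache.org/ns/synapse" name="SampleAPI" context="/sample">
    <resource methods="POST" url-mapping="/*">
        <inSequence>
            <!-- run the token logic from oauth2-sequence, then forward to the backend -->
            <sequence key="oauth2-sequence" />
            <call>
                <endpoint>
                    <http method="POST" uri-template="https://localhost:8243/backend/resource" />
                </endpoint>
            </call>
            <respond />
        </inSequence>
    </resource>
</api>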

Is there any configuration similar to log4net's rollingStyle="once" available for NLog?

I am currently using log4net, and a log file archive is performed every hour.
Now I am changing from log4net to NLog.
Is there a setting available in NLog similar to rollingStyle="once", which was a configuration setting available in log4net?
For example, while using log4net, the files created used to look like:
LogFile.log
LogFile.log.1 <-last archive file
Following is the configuration I used in log4net. I want to use the exact same settings so that the archived file naming remains as it was in log4net:
<appender name="Work" type="RMM.Common.Logger.LogFileRollingAppender, Common">
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<dateTimeStrategy type="log4net.Appender.RollingFileAppender+UniversalDateTime" />
<file type="log4net.Util.PatternString" value="%property{EdgeAPILogPath}\WebAPI_Work.log" />
<param name="AppendToFile" value="true"/>
<rollingStyle value="Once" />
<rollingStyle value="Composite" />
<datePattern value=".yyyyMMdd-HH" />
<maxSizeRollBackups value="300" />
<maximumFileSize value="20MB" />
<Encoding value="UTF-8" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<param name="ConversionPattern" value="%utcdate{yyyy/MM/dd HH:mm:ss,fff} [%-5p] [%3t] %m%n" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="DEBUG" />
<levelMax value="FATAL" />
</filter>
</appender>
You can:
Use fileName="${basedir}/logs/${cached:${date:format=yyyy-MM-dd HH_mm_ss}}.log" as a way to ensure one log file per application instance.
Use archiveFileName="${archiveLogDirectory}/LogFile.log.{####}" to append numbers at the end (feel free to add or remove # as required, depending on your maxArchiveFiles).
Use archiveNumbering="Sequence" to achieve the order you want (higher numbers = newer logs).
Source: this piece of documentation and some personal experience.
Hopefully this basic example will help you get closer to your final target:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
autoReload="true"
internalLogLevel="Error"
internalLogFile="./internal-nlog.txt"
throwExceptions="true">
<variable name="logDirectory" value="./logs"/>
<variable name="archiveLogDirectory" value="./logs/archive"/>
<targets>
<target name="errors"
xsi:type="File"
fileName="${logDirectory}/${cached:${date:format=yyyy-MM-dd HH_mm_ss}}.log"
archiveFileName="${archiveLogDirectory}/LogFile.log.{#}"
maxArchiveFiles="9"
archiveEvery="Hour"
archiveNumbering="Sequence"
/>
</targets>
<rules>
<logger name="*" minlevel="Debug" writeTo="errors"/>
</rules>
</nlog>
Not an expert on log4net, but it sounds like rollingStyle="once" is the same as the NLog FileTarget ArchiveOldFileOnStartup. So maybe something like this:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<variable name="EdgeAPILogPath" layout="${basedir}" />
<targets>
<target xsi:type="file" name="work"
fileName="${EdgeAPILogPath}/WebAPI_Work.log"
encoding="utf-8"
archiveNumbering="DateAndSequence"
archiveFileName="${EdgeAPILogPath}/WebAPI_Work.{#}.log"
archiveDateFormat="yyyyMMdd-HH"
archiveEvery="Hour"
archiveAboveSize="20000000"
archiveOldFileOnStartup="true"
maxArchiveFiles="300" />
</targets>
<rules>
<logger name="*" minLevel="Debug" writeTo="work" />
</rules>
</nlog>
See also: https://github.com/NLog/NLog/wiki/FileTarget-Archive-Examples

How to call many databases without hard-coding?

I have the following scenario:
There are many sites/departments, and every site has its own ServiceMix and a different number of databases. The databases are grouped; for example, there are two groups: production_one and production_two.
On every site I deployed datasource bundles, one datasource for every database.
Datasource: blueprint.xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:cxf="http://camel.apache.org/schema/blueprint/cxf"
    xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

    <!-- Properties -->
    <!-- blueprint property placeholders, that will use etc/datasources.cfg as the properties file -->
    <cm:property-placeholder id="db.placeholder" persistent-id="datasources">
    </cm:property-placeholder>

    <!-- Datasource -->
    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
        <property name="url"
            value="jdbc:sqlserver://${bundle_ds_1_db_host}:${bundle_ds_1_db_port};databaseName=${bundle_ds_1_db_database};" />
        <property name="username" value="${bundle_ds_1_db_username}" />
        <property name="password" value="${bundle_ds_1_db_password}" />
    </bean>

    <!-- Init database -->
    <bean id="initDatabase" class="com.xyz.mssql.ds.DatabaseInitBean"
        init-method="create" destroy-method="destroy">
        <property name="ds" ref="dataSource" />
    </bean>

    <service ref="dataSource" interface="javax.sql.DataSource">
        <service-properties>
            <entry key="datasource.name" value="${bundle_ds_1_ds_name}" />
            <entry key="datasource.type" value="${bundle_ds_1_ds_type}" />
            <entry key="datasource.id" value="${bundle_ds_1_ds_id}" />
            <entry key="datasource.group_id" value="${bundle_ds_1_ds_group_id}" />
        </service-properties>
    </service>
</blueprint>
There is also a bundle which receives a SOAP message via CXF and processes it by calling a database:
CXF: blueprint.xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:cxf="http://camel.apache.org/schema/blueprint/cxf"
    xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://camel.apache.org/schema/blueprint/cxf http://camel.apache.org/schema/blueprint/cxf/camel-cxf.xsd
        http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

    <cxf:cxfEndpoint id="processXEndpoint" address="/mssql-soap1/"
        serviceClass="com.xyz.mssql.soap1.ProcessXService" />

    <bean id="saveDataProcessor" class="com.xyz.mssql.soap1.SaveDataProcessor">
        <property name="ds" ref="dataSource" />
    </bean>

    <bean id="getDataProcessor" class="com.xyz.mssql.soap1.GetDataProcessor">
        <property name="ds" ref="dataSource" />
    </bean>

    <!-- DataSource -->
    <reference id="dataSource" interface="javax.sql.DataSource"
        filter="(datasource.name=mssql-ds-1)">
        <reference-listener bind-method="onBind" unbind-method="onUnbind">
            <bean class="com.xyz.mssql.soap1.ListenerBean" />
        </reference-listener>
    </reference>

    <!-- Camel Context -->
    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route id="cxf">
            <from uri="cxf:bean:processXEndpoint" />
            <recipientList>
                <simple>direct:${header.operationName}</simple>
            </recipientList>
        </route>
        <route id="saveRoute">
            <from uri="direct:saveData" />
            <log message="Calling saveData" />
            <process ref="saveDataProcessor" />
        </route>
        <route id="getRoute">
            <from uri="direct:getData" />
            <log message="Call getData" />
            <process ref="getDataProcessor" />
        </route>
    </camelContext>
</blueprint>
The problem is that the code above injects only one datasource into the bean (e.g. SaveDataProcessor), resolved by its name ("datasource.name=mssql-ds-1"), but I need to call many databases, and every ServiceMix installation will have a different number of them.
I need to call all databases (datasources) which have the same type, for example "datasource.type=production_one".
Is it possible? How to do it?
How can I pass an array of datasources to the bean, or call the bean against every datasource that meets the same conditions (name, type, and so on)?
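To illustrate the goal, something like a Blueprint reference-list that collects every matching DataSource is what I am after (a sketch only; the processor bean and property names here are made up):
<!-- collect every DataSource service whose datasource.type matches -->
<reference-list id="productionOneDataSources"
    interface="javax.sql.DataSource"
    filter="(datasource.type=production_one)"
    availability="optional" />

<bean id="multiDbProcessor" class="com.xyz.mssql.soap1.MultiDbProcessor">
    <!-- injected as a java.util.List of javax.sql.DataSource that tracks services dynamically -->
    <property name="dataSources" ref="productionOneDataSources" />
</bean>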

Mule log is not working if I use sync and async in the same log4j2 file

I am trying to use async and sync loggers in the same log4j2.xml file. I want to set the root level to "INFO" and the specific Mule package (org.mule) to "FATAL", but it automatically picks up the INFO level for the org.mule package and does not restrict the log level for that specific package when I use sync and async loggers in the same file. Please help if you have any idea. Thanks in advance.
Mule ESB - 3.6
<?xml version="1.0" encoding="utf-8"?>
<Configuration>
    <Appenders>
        <RollingFile name="file"
            fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}eztutorial.log"
            filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}eztutorial-%i.log">
            <PatternLayout pattern="%d [%t] %-5p %c - %m%n" />
            <SizeBasedTriggeringPolicy size="10 MB" />
            <DefaultRolloverStrategy max="10" />
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- CXF is used heavily by Mule for web services -->
        <AsyncLogger name="org.apache.cxf" level="WARN" />
        <!-- Apache Commons tend to make a lot of noise which can clutter the log -->
        <AsyncLogger name="org.apache" level="WARN" />
        <!-- Reduce startup noise -->
        <AsyncLogger name="org.springframework.beans.factory" level="WARN" />
        <logger name="org.mule">
            <level value="FATAL" />
        </logger>
        <logger name="com.mulesoft">
            <level value="FATAL" />
        </logger>
        <category name="org.mule">
            <priority value="FATAL" />
        </category>
        <category name="com.mulesoft">
            <priority value="FATAL" />
        </category>
        <!-- Reduce DM verbosity -->
        <AsyncLogger name="org.jetel" level="WARN" />
        <AsyncLogger name="Tracking" level="WARN" />
        <AsyncRoot level="INFO">
            <AppenderRef ref="file" />
        </AsyncRoot>
    </Loggers>
</Configuration>
You can remove the <Category ...> entries from your config. (Log4j2 ignores them.)
For all named <Logger> and <AsyncLogger> entries, add additivity="false". Without this, the INFO level statements will still appear in your log because the root logger picks them up.
See http://logging.apache.org/log4j/2.x/manual/configuration.html#Additivity
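Applied to your config, the Loggers section would then look roughly like this (note that once additivity="false" is set, each named logger needs its own AppenderRef, otherwise its events go nowhere):
<Loggers>
    <AsyncLogger name="org.apache.cxf" level="WARN" additivity="false">
        <AppenderRef ref="file" />
    </AsyncLogger>
    <AsyncLogger name="org.apache" level="WARN" additivity="false">
        <AppenderRef ref="file" />
    </AsyncLogger>
    <AsyncLogger name="org.springframework.beans.factory" level="WARN" additivity="false">
        <AppenderRef ref="file" />
    </AsyncLogger>
    <!-- Category entries removed; plain Loggers with additivity="false" instead -->
    <Logger name="org.mule" level="FATAL" additivity="false">
        <AppenderRef ref="file" />
    </Logger>
    <Logger name="com.mulesoft" level="FATAL" additivity="false">
        <AppenderRef ref="file" />
    </Logger>
    <AsyncLogger name="org.jetel" level="WARN" additivity="false">
        <AppenderRef ref="file" />
    </AsyncLogger>
    <AsyncLogger name="Tracking" level="WARN" additivity="false">
        <AppenderRef ref="file" />
    </AsyncLogger>
    <AsyncRoot level="INFO">
        <AppenderRef ref="file" />
    </AsyncRoot>
</Loggers>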

WSO2 ESB: Display dataService response

How do you display the response returned by calling a webservice endpoint in a sequence?
Below is the sequence that I use. I would like to display the return value from the dataservice called "CDServiceEndpoint" in the wso2carbon.log. Is that possible? If not, how can I get the data displayed?
<sequence xmlns="http://ws.apache.org/ns/synapse" name="ConcurGetADPExtractFlow" onError="GeneralErrorHandler">
    <property xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd" name="current-context-details" expression="concat(get-property('current-context-details'), ', ConcurGetADPExtractFlow')" />
    <property name="scenario" value="ConcurGetADPExtractFlow" />
    <log level="custom">
        <property name="DEBUGGING" value="ConcurGetADPExtractFlow" />
        <property name="start-date" value="2015-02-23" />
        <property xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd" name="End Date" expression="get-property('current-date')" />
    </log>
    <xslt key="Concur_Get_ADP_Extract_Transformation">
        <property name="start-date" value="2015-03-02" />
        <property xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd" name="end-date" expression="get-property('current-date')" />
    </xslt>
    <property name="post-data-service-sequence" value="ConcurTransformADPExtractFlow" />
    <property name="OUT_ONLY" value="false" />
    <send receive="DataServiceInvocationErrorFlow">
        <endpoint key="ConcurDataServiceEndpoint" />
    </send>
    <description>Sends a request to the data service for the ADP extract data.</description>
</sequence>
Below is what my DataServiceInvocationErrorFlow looks like.
<sequence xmlns="http://ws.apache.org/ns/synapse" name="DataServiceInvocationErrorFlow">
    <filter xmlns:ns="http://org.apache.synapse/xsd" xmlns:m="http://ws.wso2.org/dataservice" xmlns:ns3="http://org.apache.synapse/xsd" xpath="//m:DataServiceFault">
        <then>
            <log level="custom">
                <property name="status" value="data-service-fault" />
            </log>
            <property name="error_message" expression="//m:DataServiceFault" />
            <sequence key="GeneralErrorHandler" />
            <drop />
        </then>
        <else>
            <filter source="string-length(get-property('ERROR_MESSAGE'))" regex="0.0">
                <then>
                    <filter xpath="//soapenv:Fault">
                        <then>
                            <log level="custom">
                                <property name="status" value="ERROR" />
                            </log>
                            <property name="error_message" expression="//soapenv:Fault" />
                            <sequence key="GeneralErrorHandler" />
                            <drop />
                        </then>
                        <else>
                            <log level="custom">
                                <property name="status" value="success response from DSS" />
                            </log>
                            <filter source="string-length(normalize-space(get-property('post-data-service-sequence')))" regex="0.0">
                                <then />
                                <else>
                                    <property name="temp-post-data-service-sequence" expression="get-property('post-data-service-sequence')" />
                                    <property name="post-data-service-sequence" value="" />
                                    <sequence key="{get-property('temp-post-data-service-sequence')}" />
                                </else>
                            </filter>
                        </else>
                    </filter>
                </then>
                <else>
                    <property name="error_message" expression="get-property('ERROR_MESSAGE')" />
                    <sequence key="GeneralErrorHandler" />
                    <drop />
                </else>
            </filter>
        </else>
    </filter>
</sequence>
If by CDServiceEndpoint you meant ConcurDataServiceEndpoint, then you are already handling the response from that endpoint in the DataServiceInvocationErrorFlow sequence that you defined on the Send mediator (which, by the way, is confusingly named, since your receive sequence will run whether you get an error response or a good one). All you need to do is log it inside DataServiceInvocationErrorFlow using a Log mediator.
Had you not defined a receive sequence on your Send mediator, the response would have gone to your outSequence, where you could log it with a Log mediator too.
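For example, a Log mediator with level="full" placed in the success branch of DataServiceInvocationErrorFlow (right after the "success response from DSS" log) will print the full data service response to wso2carbon.log; level="full" is verbose, so use it for debugging only:
<log level="full">
    <property name="status" value="DSS response payload follows" />
</log>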