<route>
<from uri="timer://SomeTimer?fixedRate=true&period=60000" />
<to uri="seda:SomeProcessor" />
</route>
<route>
<from uri="seda:SomeProcessor" />
<setHeader headerName="ServiceName">
<constant>SomeService</constant>
</setHeader>
<to uri="bean:serviceConsumer?method=callService" />
</route>
The bean in turn calls a procedure which is supposed to poll and then insert some data into a table. Rows are being duplicated intermittently in the target table, and we can see that the SQL ID is different for the duplicate rows inserted. Lately the DB has been performing badly and taking a lot of time.
I am thinking fixedRate=true is making timer executions bunch up and then fire rapidly, creating a race condition that duplicates the data. Can anyone advise, please?
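If that is the cause, the variant I plan to try is leaving fixedRate at its default (false), since a fixed-rate timer replays missed executions in a rapid burst after a stall (e.g. when the DB is slow), while the default fixed-delay scheduling does not:
<!-- sketch: fixedRate removed, so delayed runs are not "caught up" in a burst -->
<route>
    <from uri="timer://SomeTimer?period=60000" />
    <to uri="seda:SomeProcessor" />
</route>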
I'm using Apache Camel and Spring Boot to implement an integration flow between two tables. The source table contains more than 1,000 records. What I want to do is modify the source data and then insert it into another table in the same database. I'm stuck at the data-insert stage.
<camelContext id="Integrator" xmlns="http://camel.apache.org/schema/spring">
<route id="hello">
<from id="timer" uri="timer:test?period={{timer.period}}"/>
<setBody id="query">
<constant>SELECT * FROM abc WHERE code = 'MDV1'</constant>
</setBody>
<log id="log_1" message="log msg"/>
<to id="jdbc_con" uri="jdbc:dataSource"/>
<process id="changebody" ref="editPayload"/>
<log id="log_2" message="process row ${body}"/>
</route>
</camelContext>
Updated:
This is not the exact answer I wanted, but this flow does insert records from the source table into the target. In this solution, records are inserted into the target table one by one; I wanted to insert them in bulk at the final stage instead.
<camelContext id="Integrator"
xmlns="http://camel.apache.org/schema/spring">
<route id="data_transfer">
<from id="timer" uri="timer:abcStaging?period={{timer.period}}" />
<setBody id="select_query">
<constant>select * from abc</constant>
</setBody>
<to id="jdbc_con_select" uri="jdbc:dataSource" />
<split>
<simple>${body}</simple>
<process id="change_body" ref="editPayload" />
<to id="jdbc_con_insert" uri="sql:{{sql.abcStaging}}" />
<log id="log_1" message="Inserted abcStaging" />
</split>
</route>
</camelContext>
properties file:
sql.abcStaging=insert into abcStaging (id, rate) values (:#id, :#rate)
editPayload Bean:
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class ChangePayload implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // The split hands each row to this processor as a LinkedHashMap of column -> value
        LinkedHashMap linkedHashMap = (LinkedHashMap) exchange.getIn().getBody();
        Map<String, Object> staging = new HashMap<>();
        /* data changing logics */
        // Keys must match the :#id and :#rate named parameters of the insert;
        // the source column names are assumed here
        staging.put("id", linkedHashMap.get("id"));
        staging.put("rate", linkedHashMap.get("rate"));
        exchange.getOut().setBody(staging);
    }
}
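Note that ref="editPayload" assumes a matching bean definition in the Spring context, along these lines (the package name is a placeholder):
<bean id="editPayload" class="com.example.ChangePayload" />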
If your main requirement is to execute a single bulk query, you may have to build the query yourself, in an Aggregator.
If your DBMS supports syntax like:
INSERT INTO T1 (F1, F2) Values (a1, b1), (a2,b2)
then an aggregator could build up that large string. (This would not be feasible if you have, say, 10,000 or 100,000 rows, since it might cross a statement-size limit.) The other downside is that one would need to build the value clauses with knowledge of the data types... I'm not sure one can parameterize something like that.
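A rough sketch of what such a strategy could look like (names are illustrative, Camel 2.x API), used after the split via something like <aggregate strategyRef="bulkInsertStrategy" completionSize="500"> with a constant correlation expression, feeding jdbc:dataSource. It concatenates raw values, so it shares the data-type/escaping caveat just mentioned:
import java.util.Map;
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

// Sketch only: folds each row map into one multi-row INSERT statement.
// Values are concatenated as-is; quoting/escaping of non-numeric types
// is deliberately left out (the caveat discussed above).
public class BulkInsertStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Map<?, ?> row = newExchange.getIn().getBody(Map.class);
        String values = "(" + row.get("id") + ", " + row.get("rate") + ")";
        if (oldExchange == null) {
            // first row: start the statement
            newExchange.getIn().setBody(
                "INSERT INTO abcStaging (id, rate) VALUES " + values);
            return newExchange;
        }
        // subsequent rows: append another value clause
        String sql = oldExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(sql + ", " + values);
        return oldExchange;
    }
}
The aggregated body can then be sent to the jdbc:dataSource endpoint, which executes the message body as a single statement.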
I have a network with two junctions. All cars start from the left, and at the first junction they go down (with probability 0.2) or right (with probability 0.8). That works perfectly fine. The code doing this is below (hello.rou.xml):
<?xml version="1.0" encoding="UTF-8"?>
<routes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/routes_file.xsd">
<vType accel="1.0" decel="5.0" id="Car" length="5.0" minGap="2.0" maxSpeed="50.0" sigma="0" />
<flow id="type1" color="1,1,0" begin="0" end= "7200" period="3" type="Car">
<routeDistribution id="routedist1">
<route id="route0" edges="gneE7 gneE8" probability="0.2"/>
<route id="route1" edges="gneE7 gneE10" probability="0.8"/>
</routeDistribution>
</flow>
</routes>
So we have many cars that come to the gneE10 edge. I want to specify the probability of going up/straight/down at this point, so I attach an additional file with a rerouter:
<?xml version="1.0" encoding="UTF-8"?>
<additional>
<routeDistribution id="reRouteE10">
<route id="routeX0" edges="gneE10 -gneE14" probability="0.34" /><!--UP-->
<route id="routeX1" edges="gneE10 gneE15" probability="0.33" /><!--STRAIGHT-->
<route id="routeX2" edges="gneE10 gneE12" probability="0.33" /><!-- DOWN -->
</routeDistribution>
<rerouter id="rerouterE10" edges="gneE10">
<interval begin="0" end="100000">
<routeProbReroute id="reRouteE10" />
</interval>
</rerouter>
</additional>
With probabilities like the above, I always get this behaviour: every vehicle goes down! Why? Most likely because that route is the last one in the code, to be honest.
I have also specified slightly different densities.
Even with a number as large as 0.7, all cars still go straight. (It changes with 0.9 - then they all go up.)
I'm not sure if this is a bug or I have misunderstood something. Full code is available at GitHub.
The rerouter does not take route distributions as an argument. I agree this would be the logical thing to do, but the SUMO way to do it is:
<rerouter id="rerouterE10" edges="gneE10">
<interval begin="0" end="100000">
<routeProbReroute id="routeX0" probability="0.34" />
<routeProbReroute id="routeX1" probability="0.33" />
<routeProbReroute id="routeX2" probability="0.33" />
</interval>
</rerouter>
In your example, SUMO draws one route from the given distribution when creating the rerouter and then always uses that one.
Hard to believe, but I can't seem to find a straight answer for this: how can I get the SQL statement, including the parameter values, when the statement generates an exception, and only when it generates an exception? I know how to log the statement plus parameters for every SQL statement generated, but that's way too much. When there's a System.Data.SqlClient.SqlException, though, it only provides the SQL, not the parameter values. How can I catch that at a point where I have access to that data, so that I can log it?
Based on a number of responses to various questions (not just mine), I've cobbled something together that does the trick. I think it could be useful to others as well, so I'm including a good deal of it here:
The basic idea is to:
1. have NH log all queries, pretty-printed and with the parameter values in situ;
2. throw all those logs out except the one just prior to the exception.
I use Log4Net, and the setup is like this:
<?xml version="1.0"?>
<log4net>
<appender name="RockAndRoll" type="Util.PrettySqlRollingFileAppender, Util">
<file type="log4net.Util.PatternString" >
<conversionPattern value="%env{Temp}\\%property{LogDir}\\MyApp.log" />
</file>
<DatePattern value="MM-dd-yyyy" />
<appendToFile value="true" />
<immediateFlush value="true" />
<rollingStyle value="Composite" />
<maxSizeRollBackups value="10" />
<maximumFileSize value="100MB" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %-5level %logger - %message%newline" />
</layout>
</appender>
<appender name="ErrorBufferingAppender" type="log4net.Appender.BufferingForwardingAppender">
<bufferSize value="2" />
<lossy value="true" />
<evaluator type="log4net.Core.LevelEvaluator">
<threshold value="ERROR" />
</evaluator>
<appender-ref ref="RockAndRoll" />
<Fix value="0" />
</appender>
<logger name="NHibernate.SQL">
<additivity>false</additivity>
<appender-ref ref="ErrorBufferingAppender" />
<level value="debug" />
</logger>
<logger name="error-buffer">
<additivity>false</additivity>
<appender-ref ref="ErrorBufferingAppender" />
<level value="debug" />
</logger>
<root>
<level value="info" />
<appender-ref ref="RockAndRoll" />
</root>
</log4net>
The NHibernate.SQL logger logs all queries to the ErrorBufferingAppender, which keeps throwing them out and saves only the last one in its buffer. When I catch an exception I log one line at ERROR level to logger error-buffer, which passes it to ErrorBufferingAppender which -- because it's at ERROR level -- pushes it, along with the last query, out to RockAndRoll, my RollingFileAppender.
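The trigger side is just an ordinary ERROR-level call to that logger from the catch block; roughly like this sketch (names illustrative):
try
{
    session.Flush(); // or whatever executes the SQL
}
catch (Exception ex)
{
    // ERROR level trips the LevelEvaluator, so the BufferingForwardingAppender
    // flushes its buffer (the last SQL statement) along with this line
    log4net.LogManager.GetLogger("error-buffer").Error("query failed", ex);
    throw;
}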
I implemented a subclass of RollingFileAppender called PrettySqlRollingFileAppender (which I'm happy to provide if anyone's interested) that takes the parameters from the end of the query and substitutes them inside the query itself, making it much more readable.
If you are using NHibernate for querying the DB (as the tag on your question suggests), and your SQL dialect/driver relies on ADO, you should get a GenericADOException from the failing query.
Its Message property normally already includes the parameter values.
For example, executing the following failing query (provided you have at least one row in the DB):
var result = session.Query<Entity>()
.Where(e => e.Name.Length / 0 == 1);
Yields a GenericADOException with message:
could not execute query
[ select entity0_.Id as Id1_0_, entity0_.Name as Name2_0_ from Entity entity0_ where len(entity0_.Name)/@p0=@p1 ]
Name:p1 - Value:0 Name:p2 - Value:1
The two literals of the query, 0 and 1, have been parameterized, and their values are included in the message (with an index-base mismatch: in the Hibernate query they are 1-based, while in the SQL query, with my setup, they end up 0-based).
So there is nothing special to do to get them. Just log the exception message.
Have you just missed it, or were you indeed asking something else?
Your question was not explicit enough, in my opinion. You should include an MCVE; it would have shown me more precisely in which case you were not able to get those parameter values.
I'm using Apache Camel 2.13.1 to poll a database table which will have upwards of 300k rows in it. I'm looking to use the Idempotent Consumer EIP to filter rows that have already been processed.
I'm wondering, though, whether the implementation is really scalable. My Camel context is:
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="main">
<from
uri="sql:select * from transactions?dataSource=myDataSource&consumer.delay=10000&consumer.useIterator=true" />
<transacted ref="PROPAGATION_REQUIRED" />
<enrich uri="direct:invokeIdempotentTransactions" />
<!-- Any processors here will be executed on all messages -->
</route>
<route id="idempotentTransactions">
<from uri="direct:invokeIdempotentTransactions" />
<idempotentConsumer
messageIdRepositoryRef="jdbcIdempotentRepository">
<ognl>#{request.body.ID}</ognl>
<!-- Anything here will only be executed for non-duplicates -->
<log message="non-duplicate" />
<to uri="stream:out" />
</idempotentConsumer>
</route>
</camelContext>
It would seem that the full 300k rows are going to be processed every 10 seconds (via the consumer.delay parameter), which seems very inefficient. I would expect some sort of feedback loop as part of the pattern, so that the query that feeds the filter could take advantage of the set of rows already processed.
However, the messageid column in the CAMEL_MESSAGEPROCESSED table has the pattern of
{1908988=null}
where 1908988 is the request.body.ID I've set the EIP to key on, so this doesn't make it easy to incorporate into my query.
Is there a better way of using the CAMEL_MESSAGEPROCESSED table as a feedback loop into my select statement so that the SQL server is performing most of the load?
Update:
So, I've since found out that it was my OGNL expression that was causing the odd messageid column value. Changing it to
<el>${in.body.ID}</el>
has fixed it. Now that I have a usable messageid column, I can change my 'from' SQL query to
select * from transactions tr where tr.ID NOT IN (select cmp.messageid from CAMEL_MESSAGEPROCESSED cmp where cmp.processor = 'transactionProcessor')
but I still think I'm corrupting the Idempotent Consumer EIP.
Does anyone else do this? Any reason not to?
Yes, it is. But you need to use scalable storage for holding the set of already-processed messages. You can use either Hazelcast - http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html - or Infinispan - http://java.dzone.com/articles/clustered-idempotent-consumer - depending on which solution is already in your stack. Of course, a JDBC repository would also work, but only if it meets the performance criteria you have selected.
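For the JDBC flavour, camel-sql ships a ready-made repository you can wire up as the messageIdRepositoryRef; a minimal sketch, reusing the route's DataSource (the second argument is the processor name recorded with each message id):
<bean id="jdbcIdempotentRepository"
      class="org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository">
    <constructor-arg ref="myDataSource" />
    <constructor-arg value="transactionProcessor" />
</bean>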
I am trying to find a way to pass a value that comes from the SQL query above into the "destAddr=" option of the SMPP route below, in order to use the sender's number as the SMS destination address, but after much searching I can't find a way. How can I save the value I need from the query and then use it dynamically in the SMPP option? Any suggestions would be much appreciated.
<from uri="sql:{{sql.selectRunRecList}}" />
<to uri="bean:smppBean?method=smsConstruct" />
<to uri="sql:{{sql.markSms}}"/>
<to uri="bean:smppBean?method=smsPrintText" />
<to uri="file:C:/workspace/SMPP/outbox" />
<to uri="smpp://smppclient#localhost:2775?password=password&destAddr= " />
See this FAQ on how to use dynamic values when sending to an endpoint in Camel:
http://camel.apache.org/how-do-i-use-dynamic-uri-in-to.html
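In short, a plain <to> URI is fixed when the route is created, so the FAQ's approach is recipientList with a simple expression that is evaluated per message. A sketch, assuming smsConstruct stores the sender's number in a header named senderNumber (that header name is an assumption, not from your route):
<from uri="sql:{{sql.selectRunRecList}}" />
<to uri="bean:smppBean?method=smsConstruct" />
<!-- the simple expression is evaluated for every exchange -->
<recipientList>
    <simple>smpp://smppclient@localhost:2775?password=password&amp;destAddr=${header.senderNumber}</simple>
</recipientList>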