I have the following schematic implementation of a JAX-RS service endpoint:
@GET
@Path("...")
@Transactional
public Response download() {
    java.sql.Blob blob = findBlob(...);
    return Response.ok(blob.getBinaryStream()).build();
}
Invoking the JAX-RS endpoint fetches a Blob from the database (through JPA) and streams the result back to the HTTP client. The point of using a Blob and a stream instead of e.g. JPA's naive BLOB-to-byte[] mapping is to avoid holding all of the data in memory and instead stream directly from the database to the HTTP response.
This works as intended, and I actually don't understand why. Isn't the Blob handle I get from the database associated with both the underlying JDBC connection and the transaction? If so, I would have expected the Spring transaction to be committed when I return from the download() method, making it impossible for the JAX-RS implementation to later read data from the Blob to stream it back to the HTTP response.
Are you sure that the transaction advice is running? By default, Spring uses the "proxy" advice mode. The transaction advice would only run if you registered the Spring-proxied instance of your resource with the JAX-RS Application, or if you were using "aspectj" weaving instead of the default "proxy" advice mode.
Assuming that a physical transaction is not being re-used as a result of transaction propagation, using @Transactional on this download() method is incorrect in general.
If the transaction advice is actually running, the transaction ends when returning from the download() method. The Blob Javadoc says: "A Blob object is valid for the duration of the transaction in which is was created." [sic] However, §16.3.7 of the JDBC 4.2 specification says: "Blob, Clob and NClob objects remain valid for at least the duration of the transaction in which they are created." Therefore, the InputStream returned by getBinaryStream() is not guaranteed to be valid for serving the response; its validity depends on any guarantees provided by the JDBC driver. For maximum portability, you should rely on the Blob being valid only for the duration of the transaction.
Regardless of whether the transaction advice is running, you potentially have a race condition because the underlying JDBC connection used to retrieve the Blob might be re-used in a way that invalidates the Blob.
EDIT: Testing Jersey 2.17, it appears that the behavior of constructing a Response from an InputStream depends on the specified response MIME type. In some cases, the InputStream is read entirely into memory before the response is sent; in other cases, the InputStream is streamed back.
Here is my test case:
#Path("test")
public class MyResource {
#GET
public Response getIt() {
return Response.ok(new InputStream() {
#Override
public int read() throws IOException {
return 97; // 'a'
}
}).build();
}
}
If the getIt() method is annotated with @Produces(MediaType.TEXT_PLAIN) or has no @Produces annotation at all, then Jersey attempts to read the entire (infinite) InputStream into memory and the application server eventually crashes after running out of memory. If the getIt() method is annotated with @Produces(MediaType.APPLICATION_OCTET_STREAM), then the response is streamed back.
So, your download() method may appear to work simply because the blob is not being streamed back: Jersey might be reading the entire blob into memory.
Related: How to stream an endless InputStream with JAX-RS
EDIT2: I have created a demonstration project using Spring Boot and Apache CXF:
https://github.com/dtrebbien/so30356840-cxf
If you run the project and execute on the command line:
curl 'http://localhost:8080/myapp/test/data/1' >/dev/null
Then you will see log output like the following:
2015-06-01 15:58:14.573 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.transport.http.Headers : Request Headers: {Accept=[*/*], Content-Type=[null], host=[localhost:8080], user-agent=[curl/7.37.1]}
2015-06-01 15:58:14.584 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Trying to select a resource class, request path : /test/data/1
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Trying to select a resource operation on the resource class com.sample.resource.MyResource
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Resource operation getIt may get selected
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Resource operation getIt on the resource class com.sample.resource.MyResource has been selected
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Request path is: /test/data/1
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Request HTTP method is: GET
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Request contentType is: */*
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Accept contentType is: */*
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Found operation: getIt
2015-06-01 15:58:14.595 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Creating new transaction with name [com.sample.resource.MyResource.getIt]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; ''
2015-06-01 15:58:14.595 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Acquired Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]] for JDBC transaction
2015-06-01 15:58:14.596 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Switching JDBC Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]] to manual commit
2015-06-01 15:58:14.602 DEBUG 9362 --- [nio-8080-exec-1] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL query
2015-06-01 15:58:14.603 DEBUG 9362 --- [nio-8080-exec-1] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL statement [SELECT data FROM images WHERE id = ?]
2015-06-01 15:58:14.620 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Initiating transaction commit
2015-06-01 15:58:14.620 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Committing JDBC transaction on Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]]
2015-06-01 15:58:14.621 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Releasing JDBC Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]] after transaction
2015-06-01 15:58:14.621 DEBUG 9362 --- [nio-8080-exec-1] o.s.jdbc.datasource.DataSourceUtils : Returning JDBC Connection to DataSource
2015-06-01 15:58:14.621 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.interceptor.OutgoingChainInterceptor@7eaf4562
2015-06-01 15:58:14.622 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Adding interceptor org.apache.cxf.interceptor.MessageSenderInterceptor@20ffeb47 to phase prepare-send
2015-06-01 15:58:14.622 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Adding interceptor org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor@5714d386 to phase marshal
2015-06-01 15:58:14.622 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Chain org.apache.cxf.phase.PhaseInterceptorChain@11ca802c was created. Current flow:
prepare-send [MessageSenderInterceptor]
marshal [JAXRSOutInterceptor]
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor@20ffeb47
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Adding interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@6129236d to phase prepare-send-ending
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Chain org.apache.cxf.phase.PhaseInterceptorChain@11ca802c was modified. Current flow:
prepare-send [MessageSenderInterceptor]
marshal [JAXRSOutInterceptor]
prepare-send-ending [MessageSenderEndingInterceptor]
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor@5714d386
2015-06-01 15:58:14.627 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSOutInterceptor : Response content type is: application/octet-stream
2015-06-01 15:58:14.631 DEBUG 9362 --- [nio-8080-exec-1] o.apache.cxf.ws.addressing.ContextUtils : retrieving MAPs from context property javax.xml.ws.addressing.context.inbound
2015-06-01 15:58:14.631 DEBUG 9362 --- [nio-8080-exec-1] o.apache.cxf.ws.addressing.ContextUtils : WS-Addressing - failed to retrieve Message Addressing Properties from context
2015-06-01 15:58:14.636 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@6129236d
2015-06-01 15:58:14.639 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.t.http.AbstractHTTPDestination : Finished servicing http request on thread: Thread[http-nio-8080-exec-1,5,main]
2015-06-01 15:58:14.639 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.t.servlet.ServletController : Finished servicing http request on thread: Thread[http-nio-8080-exec-1,5,main]
I have trimmed the log output for readability. The important thing to note is that the transaction is committed and the JDBC connection is returned before the response is sent. Therefore, the InputStream returned by blob.getBinaryStream() is not necessarily valid and the getIt() resource method may be invoking undefined behavior.
EDIT3: A recommended practice for using Spring's @Transactional annotation is to annotate the service method (see Spring @Transactional Annotation Best Practice). You could have a service method that finds the blob and transfers the blob data to the response OutputStream. The service method could be annotated with @Transactional so that the transaction in which the Blob is created remains open for the duration of the transfer. However, it seems to me that this approach could introduce a denial of service vulnerability by way of a "slow read" attack: because the transaction should be kept open for the duration of the transfer for maximum portability, numerous slow readers could lock up your database table(s) by holding open transactions.
One possible approach is to save the blob to a temporary file and stream back the file. See How do I use Java to read from a file that is actively being written? for some ideas on reading a file while it's being simultaneously written, though this case is more straightforward because the length of the blob can be determined by calling the Blob#length() method.
I've spent some time now debugging the code, and all of my assumptions in the question are more or less correct. The @Transactional annotation works as expected: the transaction (both the Spring and the DB transaction) is committed immediately after returning from the download() method, the physical DB connection is returned to the connection pool, and the content of the BLOB is evidently read later and streamed to the HTTP response.
The reason this still works is that the Oracle JDBC driver implements functionality beyond what the JDBC specification requires. As Daniel pointed out, the JDBC API documentation states that "A Blob object is valid for the duration of the transaction in which is was created." [sic] The documentation only states that the Blob is valid during the transaction; it does not state (as claimed by Daniel and initially assumed by me) that the Blob is invalid after the transaction ends.
Using plain JDBC, retrieving the InputStream of two Blobs in two different transactions on the same physical connection, and not reading the Blob data until after the transactions are committed, demonstrates this behaviour:
Connection conn = DriverManager.getConnection(...);
conn.setAutoCommit(false);

// First transaction: obtain a stream from the first Blob, then commit.
ResultSet rs = conn.createStatement().executeQuery("select data from ...");
rs.next();
InputStream is1 = rs.getBlob(1).getBinaryStream();
rs.close();
conn.commit();

// Second transaction: obtain a stream from the second Blob, then commit.
rs = conn.createStatement().executeQuery("select data from ...");
rs.next();
InputStream is2 = rs.getBlob(1).getBinaryStream();
rs.close();
conn.commit();

// Both streams are read only after both transactions have been committed.
int b1 = 0, b2 = 0;
while (is1.read() >= 0) b1++;
while (is2.read() >= 0) b2++;
System.out.println("Read " + b1 + " bytes from 1st blob");
System.out.println("Read " + b2 + " bytes from 2nd blob");
Even though both Blobs were selected on the same physical connection and within two different transactions, they can both be read completely.
Closing the JDBC connection (conn.close()) does however finally invalidate the Blob streams.
I had a similar problem, and I can confirm that, at least in my situation, PostgreSQL throws an exception ("Invalid large object descriptor : 0") with autocommit when using the StreamingOutput approach. The reason for this is that by the time the JAX-RS Response is returned the transaction has been committed, while the streaming method executes later; by then the large object descriptor is no longer valid.
I have created some helper methods so that the streaming part opens a new transaction and can stream the Blob. com.foobar.model.Blob is just a return class encapsulating the blob so that the complete entity does not have to be fetched. findByID is a method that uses a projection on the blob column and fetches only this column.
So StreamingOutput from JAX-RS together with a Blob under JPA and Spring transactions does work, but it must be tweaked. The same presumably applies to JPA with EJB.
// NOTE: has to run inside a transaction to be able to stream from the DB
@Transactional
public void streamBlobToOutputStream(OutputStream outputStream, Class entityClass, String id, SingularAttribute attribute) {
    BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream);
    try {
        com.foobar.model.Blob blob = fooDao.findByID(id, entityClass, com.foobar.model.Blob.class, attribute);
        if (blob.getBlob() == null) {
            return;
        }
        InputStream inputStream;
        try {
            inputStream = blob.getBlob().getBinaryStream();
        } catch (SQLException e) {
            throw new RuntimeException("Could not read binary data.", e);
        }
        IOUtils.copy(inputStream, bufferedOutputStream);
        // NOTE: the buffer must be flushed, otherwise data seems to be missing
        bufferedOutputStream.flush();
    } catch (Exception e) {
        throw new RuntimeException("Could not send data.", e);
    }
}
/**
 * Builds a streaming response for data which can be streamed from a Blob.
 *
 * @param contentType The content type. If <code>null</code>, application/octet-stream is used.
 * @param contentDisposition The content disposition, e.g. naming of the file download. Optional.
 * @param entityClass The entity class to search in.
 * @param id The id of the entity with the blob field to stream.
 * @param attribute The Blob attribute in the entity.
 * @return the response builder.
 */
protected Response.ResponseBuilder buildStreamingResponseBuilder(String contentType, String contentDisposition,
        Class entityClass, String id, SingularAttribute attribute) {
    StreamingOutput streamingOutput = new StreamingOutput() {
        @Override
        public void write(OutputStream output) throws IOException, WebApplicationException {
            streamBlobToOutputStream(output, entityClass, id, attribute);
        }
    };
    MediaType mediaType = MediaType.APPLICATION_OCTET_STREAM_TYPE;
    if (contentType != null) {
        mediaType = MediaType.valueOf(contentType);
    }
    Response.ResponseBuilder response = Response.ok(streamingOutput, mediaType);
    if (contentDisposition != null) {
        response.header("Content-Disposition", contentDisposition);
    }
    return response;
}
/**
 * Streams a blob from the database.
 *
 * @param contentType The content type. If <code>null</code>, application/octet-stream is used.
 * @param contentDisposition The content disposition, e.g. naming of the file download. Optional.
 * @param currentBlob The current blob value of the entity.
 * @param entityClass The entity class to search in.
 * @param id The id of the entity with the blob field to stream.
 * @param attribute The Blob attribute in the entity.
 * @return the response.
 */
@Transactional
public Response streamBlob(String contentType, String contentDisposition,
        Blob currentBlob, Class entityClass, String id, SingularAttribute attribute) {
    if (currentBlob == null) {
        return Response.noContent().build();
    }
    return buildStreamingResponseBuilder(contentType, contentDisposition, entityClass, id, attribute).build();
}
I also have to add to my answer that there might be an issue with the Blob behavior under Hibernate. By default, Hibernate merges the complete entity with the DB, even if only one field was changed; i.e., if you update a name field and leave a large Blob image untouched, the image will be updated as well. Even worse, if the entity is detached, Hibernate has to fetch the Blob from the DB before the merge to determine its dirty status. Because blobs cannot be compared byte-wise (they are too large), they are considered immutable, and the equality comparison is based only on the object reference of the blob. The object reference fetched from the DB will be a different one, so although nothing was changed, the blob is updated again. At least this was the situation for me. I used the @DynamicUpdate annotation on the entity and wrote a user type that handles the blob differently and checks whether the blob must be updated.
Related
I have two different sources of IDs I have to work with. One is a file, the other is a URL. When I create a Flux from the file's lines, I can work on it perfectly well. When I switch the Flux-creating function to one that uses WebClient's get(), I get different results; the WebClient never gets called for some reason.
private Flux<String> retrieveIdListFromFile(String filename) {
    try {
        return Flux.fromIterable(Files.readAllLines(ResourceUtils.getFile(filename).toPath()));
    } catch (IOException e) {
        return Flux.error(e);
    }
}
Here is the WebClient part:
private Flux<String> retrieveIdList() {
    return client.get()
            .uri(uriBuilder -> uriBuilder.path("capdocuments_201811v2/selectRaw")
                    .queryParam("q", "-P_Id:[* TO *]")
                    .queryParam("fq", "DateLastModified:[2010-01-01T00:00:00Z TO 2016-12-31T00:00:00Z]")
                    .queryParam("fl", "id")
                    .queryParam("rows", "10")
                    .queryParam("wt", "csv")
                    .build())
            .retrieve()
            .bodyToFlux(String.class);
}
When I do a subscribe(System.out::println) on the WebClient's flux, nothing happens. When I do a blockLast(), it works (the URL is called, data is returned). I don't get why, how to correct this, or what I'm doing wrong.
With the flux that originates from the file, even the subscribe works fine. I sort of thought that Fluxes are interchangeable...
When I do a retrieveIdList().log().subscribe():
INFO [main] reactor.Flux.OnAssembly.1 | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
INFO [main] reactor.Flux.OnAssembly.1 | request(unbounded)
When I do the same with blockLast() instead of subscribe():
INFO [main] reactor.Flux.OnAssembly.1 | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
INFO [main] reactor.Flux.OnAssembly.1 | request(unbounded)
INFO [reactor-http-nio-4] reactor.Flux.OnAssembly.1 | onNext(id)
... (more onNext entries follow)
Judging from your question update, it seems that nothing is waiting on the processing to finish. I assume this is a batch or CLI application, and not a web application?
Assuming the following:
Flux<User> users = userService.fetchAll();
Calling blockLast on a Flux will trigger the processing and block until the result is there.
Calling subscribe on it will trigger the processing asynchronously; we're seeing the subscriber request elements in your logs, but nothing more. This probably means that the JVM exits before any elements are published - nothing is waiting on the result.
If you're effectively writing some CLI/batch application and not processing requests within a web application, you can block on the final reactive pipeline to get the result. If you wish to write that result to a file or send it to a different service, then you should compose on it with reactor operators.
The async scope is not working with clustering in Mule 3.4.2. We are getting the exception below.
Message : Interrupted while queueing event for "SEDA Stage Main_Flow.async1". Message payload is of type: ConfirmReceiveMessageResponse
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. com.sample.client.ReceiveMessageResponse (java.io.NotSerializableException)
java.io.ObjectOutputStream:1183 (null)
2. java.io.NotSerializableException: com.elexon.bmrs.ecp.client.ReceiveMessageResponse (org.apache.commons.lang.SerializationException)
org.apache.commons.lang.SerializationUtils:111 (null)
3. Interrupted while queueing event for "SEDA Stage Main_Flow.async1". Message payload is of type: ConfirmReceiveMessageResponse (org.mule.api.service.FailedToQueueEventException)
org.mule.processor.SedaStageInterceptingMessageProcessor:92 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/service/FailedToQueueEventException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.io.NotSerializableException: com.sample.client.ReceiveMessageResponse
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
at org.apache.commons.collections.map.AbstractHashedMap.doWriteObject(AbstractHashedMap.java:1182)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
After removing the async scope, we are able to test the application. Could you please help us make the application work with async in a clustered environment?
If the flow being referenced uses an asynchronous processing strategy, I believe Mule will try to persist the event in a cluster, and your object is not Serializable.
You can make com.sample.client.ReceiveMessageResponse implement java.io.Serializable if you want the message to be persisted.
Alternatively, you can try forcing the flow that you are flow-ref'ing to be synchronous with processingStrategy="synchronous".
I have an endpoint that works as a distributor with three other worker endpoints.
The handling endpoint of the received message opens a transaction and tries to import some XML data into an SQL DB. If an exception is thrown during this process, the exception is caught, the transaction is rolled back, and the XML data is written to an error folder.
Simplified, it looks like this:
public void Handle(doSomethingCmd message)
{
    System.Data.SqlClient.BeginTransaction();
    try
    {
        // ... some xml data import
        throw new Exception();
        // Commit if succeeded
    }
    catch (Exception exception)
    {
        System.Data.IDbTransaction.Rollback();
        // ... write file to error folder
    }
}
At first, no retry happens after the transaction rollback. But when the message is sent again, all the worker endpoints (only the workers) get an exception (cannot enlist transaction, failed to send message to control queue; see the stack trace below), and NServiceBus retries the message. This leads to the file appearing several times in the error folder.
It looks like the distributed transaction is in an invalid state. I could just re-throw the exception so that NServiceBus handles the rollback for me, but in that case the file is written to the error folder several times as well (due to the retry mechanism).
Failed raising finished message processing event.|NServiceBus.Unicast.Queuing.FailedToSendMessageException: Failed to send message to address: someEndpoint.distributor.control@SRVPS01 ---> System.Messaging.MessageQueueException: Cannot enlist the transaction.
at System.Messaging.MessageQueue.SendInternal(Object obj, MessageQueueTransaction internalTransaction, MessageQueueTransactionType transactionType)
at System.Messaging.MessageQueue.Send(Object obj, MessageQueueTransactionType transactionType)
at NServiceBus.Transports.Msmq.MsmqMessageSender.Send(TransportMessage message, Address address) in c:\BuildAgent\work\31f8c64a6e8a2d7c\src\NServiceBus.Core\Transports\Msmq\MsmqMessageSender.cs:line 49
--- End of inner exception stack trace ---
at NServiceBus.Transports.Msmq.MsmqMessageSender.ThrowFailedToSendException(Address address, Exception ex) in c:\BuildAgent\work\31f8c64a6e8a2d7c\src\NServiceBus.Core\Transports\Msmq\MsmqMessageSender.cs:line 88
at NServiceBus.Transports.Msmq.MsmqMessageSender.Send(TransportMessage message, Address address) in c:\BuildAgent\work\31f8c64a6e8a2d7c\src\NServiceBus.Core\Transports\Msmq\MsmqMessageSender.cs:line 75
at NServiceBus.Distributor.MSMQ.ReadyMessages.ReadyMessageSender.SendReadyMessage(String sessionId, Int32 capacityAvailable, Boolean isStarting) in c:\BuildAgent\work\c3100604bbd3ca20\src\NServiceBus.Distributor.MSMQ\ReadyMessages\ReadyMessageSender.cs:line 62
at NServiceBus.Distributor.MSMQ.ReadyMessages.ReadyMessageSender.TransportOnFinishedMessageProcessing(Object sender, FinishedMessageProcessingEventArgs e) in c:\BuildAgent\work\c3100604bbd3ca20\src\NServiceBus.Distributor.MSMQ\ReadyMessages\ReadyMessageSender.cs:line 50
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at NServiceBus.Unicast.Transport.TransportReceiver.OnFinishedMessageProcessing(TransportMessage msg) in c:\BuildAgent\work\31f8c64a6e8a2d7c\src\NServiceBus.Core\Unicast\Transport\TransportReceiver.cs:line 435
NServiceBus version: 4.6.0.0
Queueing: MSMQ
The worker ends its unit of work by sending a message back to the distributor. This send joins the existing distributed transaction. The error you get is caused by this new transactional resource trying to join an already-failing transaction: something has marked the distributed transaction for rollback.
This is normally caused by your code: either your database operation is failing somehow, or you have probably exceeded the transaction timeout limit while handling the message (the default is one minute).
Check your logs to see whether processing the message on the worker takes longer than the transaction timeout limit.
In my <catch-exception-strategy>, I write the error payload to a file. But sometimes, when the flow involves web-service calls and the host is unavailable or unknown (e.g. java.net.UnknownHostException is thrown), the payload is no longer an instance of InputStream or String. If I then try to log the error to a file, the following exception is thrown:
exception.AbstractExceptionListener (AbstractExceptionListener.java:299) -
********************************************************************************
Message : Could not find a transformer to transform "SimpleDataType{type=org.apache.commons.httpclient.methods.PostMethod, mimeType='*/*'}" to "SimpleDataType{type=java.io.InputStream, mimeType='*/*'}".
Code : MULE_ERROR-65237
--------------------------------------------------------------------------------
Exception stack is:
1. Could not find a transformer to transform "SimpleDataType{type=org.apache.commons.httpclient.methods.PostMethod, mimeType='*/*'}" to "SimpleDataType{type=java.io.InputStream, mimeType='*/*'}". (org.mule.api.transformer.TransformerException)
org.mule.registry.MuleRegistryHelper:252 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
org.mule.api.transformer.TransformerException: Could not find a transformer to transform "SimpleDataType{type=org.apache.commons.httpclient.methods.PostMethod, mimeType='*/*'}" to "SimpleDataType{type=java.io.InputStream, mimeType='*/*'}".
at org.mule.registry.MuleRegistryHelper.lookupTransformer(MuleRegistryHelper.java:252)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:355)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:313)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
I am thinking of adding a choice block before writing to the file to make sure the payload is writable. Shall I do something like #[payload instanceof java.io.InputStream]? But then what about cases where the payload is a DOM or something else? Please advise.
I would use a transformer inside the catch exception strategy and encapsulate there the logic that considers the input and produces a writable payload.
If you want to check whether there is a transformer available for a specific payload type and output type, I guess you could look up the available transformers from the registry. In Groovy:
transformers = message.getMuleContext().getRegistry().lookupTransformers(
        new org.mule.transformer.types.SimpleDataType(payload.getClass()),
        new org.mule.transformer.types.SimpleDataType(java.io.InputStream))
if (transformers.size() == 0) {
    // set some variable or whatever
}
Using JOliver EventStore 3.0, and just getting started with simple samples.
I have a simple pub/sub CQRS implementation using NServiceBus. A client sends commands on the bus, a domain server receives and processes the commands and stores events to the EventStore, which are then published on the bus by the EventStore's dispatcher. A read-model server then subscribes to those events to update the read model. Nothing fancy; pretty much by the book.
It is working, but in simple tests I am getting lots of concurrency exceptions (intermittently) on the domain server when the event is stored to the EventStore. It properly retries, but sometimes it hits the 5-retry limit and the command ends up in the error queue.
Where could I start investigating to see what is causing the concurrency exception? I removed the dispatcher and just focused on storing events, and it has the same issue.
I'm using RavenDB for persistence of my EventStore. I'm not doing anything fancy, just this:
using (var stream = eventStore.OpenStream(entityId, 0, int.MaxValue))
{
    stream.Add(new EventMessage { Body = myEvent });
    stream.CommitChanges(Guid.NewGuid());
}
The stack trace for the exception looks like this:
2012-03-17 18:34:01,166 [Worker.14] WARN NServiceBus.Unicast.UnicastBus [(null)] <(null)> - EmployeeCommandHandler failed handling message.
EventStore.ConcurrencyException: Exception of type 'EventStore.ConcurrencyException' was thrown.
   at EventStore.OptimisticPipelineHook.PreCommit(Commit attempt) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticPipelineHook.cs:line 55
   at EventStore.OptimisticEventStore.Commit(Commit attempt) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticEventStore.cs:line 90
   at EventStore.OptimisticEventStream.PersistChanges(Guid commitId) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticEventStream.cs:line 168
   at EventStore.OptimisticEventStream.CommitChanges(Guid commitId) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticEventStream.cs:line 149
   at CQRSTest3.Domain.Extensions.StoreEvent(IStoreEvents eventStore, Guid entityId, Object evt) in C:\dev\test\CQRSTest3\CQRSTest3.Domain\Extensions.cs:line 13
   at CQRSTest3.Domain.ComandHandlers.EmployeeCommandHandler.Handle(ChangeEmployeeSalary message) in C:\dev\test\CQRSTest3\CQRSTest3.Domain\ComandHandlers\EmployeeCommandHandler.cs:line 55
I figured it out, though I had to dig through the source code to find it. I wish this were better documented! Here's my new EventStore wireup:
EventStore = Wireup.Init()
    .UsingRavenPersistence("RavenDB")
    .ConsistentQueries()
    .InitializeStorageEngine()
    .Build();
I had to add .ConsistentQueries() in order for the Raven persistence provider to internally use WaitForNonStaleResults on the queries EventStore was making to Raven.
Basically, when I added a new event and then tried to add another before Raven had caught up with indexing, the stream revision was not up to date, and the second event would step on the first one.