We are using NSB 6.0. I have set up RabbitMQTransport and MsmqTransport in my configuration with the following code:
if (this.UseRabbitMQ)
{
config.UseTransport<RabbitMQTransport>().Transactions(TransportTransactionMode.ReceiveOnly);
}
else
{
config.UseTransport<MsmqTransport>().Transactions(TransportTransactionMode.ReceiveOnly);
}
I have no problem with RabbitMQTransport, but as soon as I switch to MsmqTransport I get the following error:
"The given key (RabbitMQ.RoutingTopologySupportsDelayedDelivery) was not present in the dictionary."
I'm not sure what the requirements are for running one transport or the other.
The RabbitMQ transport contains a feature that is picked up during assembly scanning and executed even though RabbitMQ is not configured as the transport. You'll have to explicitly exclude the RabbitMQ transport assembly from scanning using the assembly scanning API, as sketched below.
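A minimal sketch of that exclusion, assuming the NServiceBus 6 assembly scanning API (the endpoint name is illustrative, and the exact assembly file name depends on the version of the transport package you reference):
var config = new EndpointConfiguration("MyEndpoint"); // endpoint name is illustrative
config.UseTransport<MsmqTransport>().Transactions(TransportTransactionMode.ReceiveOnly);
// Keep the scanner from picking up the RabbitMQ transport's features.
var scanner = config.AssemblyScanner();
scanner.ExcludeAssemblies("NServiceBus.Transports.RabbitMQ.dll");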
Related
I can't seem to get the Castle Windsor integration working for MassTransit over RabbitMQ. Everything was working fine until I introduced Windsor into the picture. I referenced Castle.Windsor 3.2 and MassTransit.WindsorIntegration 2.9 and configured the container for use within my application. I'm registering the MassTransit Consumers via:
Container.Register(..., Types.FromThisAssembly().BasedOn<IConsumer>());
When I debug and inspect the container after this line is run, I can see that it has successfully registered all of the consumers along with all of my other components. I then have the following code to initialize and register the service bus:
var serviceBus = ServiceBusFactory.New(sbc =>
{
sbc.UseRabbitMq();
sbc.ReceiveFrom(Config.ServiceBusEndpoint);
sbc.Subscribe(sc => sc.LoadFrom(Container));
});
Container.Register(Component.For<IServiceBus>().Instance(serviceBus));
I am using the LoadFrom(IWindsorContainer container) extension method provided by MassTransit.WindsorIntegration.
All of the examples I've found so far stop here and indicate that this is all you should have to do. Unfortunately, my Consumers are never called, and messages just pile up in the queue (eventually timing out and going to the error queue). The fact that messages show up in the Consumer queue at all (and I can see a single consumer bound to the queue via the RabbitMQ Admin Tool) suggests the consumers are probably being subscribed properly, so I'm not sure where the problem lies.
I added NLog logging for Windsor and MassTransit, but no errors show up in the logs. I'm not sure how to proceed with troubleshooting at this point. Any ideas?
Also, this application is currently just a console application using Topshelf for development. Ultimately it will be installed as a Windows Service. Not sure if that is relevant or not but I thought I'd mention it just in case.
UPDATE
As a test I created a very simple Consumer with a parameterless constructor for processing a very simple test message. This Consumer is successfully called! The "real" Consumers, however, have dependencies that need to be injected via their constructors. I was hoping the Container would resolve these, but apparently it's having some sort of trouble. Weird that nothing shows up in the logs about it. Stay tuned...
Okay I figured it out. Somewhere along the way when I was adding/removing NuGet packages I somehow managed to delete a reference to a DLL (ServiceStack.Text.dll) that one of my components needed (RedisClientsManager).
I started the debugger, let all my components get registered then popped open the Immediate Window and attempted to resolve each component one by one (by calling container.Resolve<RegisteredType>()) until I found the one that threw the exception when I attempted to resolve it.
The Exception message from Windsor at that point told me exactly what the problem was. I'm a little lost as to why this wasn't being logged, or why the Exception was not raised when the container itself attempted to resolve the component. Anyhow, the moral of the story is: make sure your components resolve. A rough way to automate that check is sketched below.
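If it helps anyone else, here is a sketch of how that check could be automated at startup instead of via the Immediate Window (assumes Castle Windsor 3.x and the IConsumer marker from the registration above):
// Eagerly resolve every registered consumer so missing dependencies
// (like my lost ServiceStack.Text.dll) fail loudly at startup.
foreach (var handler in Container.Kernel.GetAssignableHandlers(typeof(IConsumer)))
{
    foreach (var service in handler.ComponentModel.Services)
    {
        Container.Resolve(service); // throws if the dependency graph is broken
    }
}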
I'm using neo4j in a Glassfish server through a modified version of Alex Smirnov's neo4j JCA connector.
My version is available here : https://github.com/Riduidel/neo4j-connector
I'm using this connector with neo4j 1.8.
As a consequence, when I want to use it, I first install the connector in my Glassfish application server, then use it from the applications that need to connect.
It works OK with fresh stores.
But when using it with stores created by a previous version, I encounter weird bugs.
Today, for example, I got the following stack trace:
javax.resource.spi.ResourceAllocationException: Error in allocating a connection. Cause: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@3bbd53b1 from NONE to STOPPED
...
...
.../* JCA internal exception stack */
...
...
Caused by: com.sun.appserv.connectors.internal.api.PoolingException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@494b584c from NONE to STOPPED
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:924)
at com.sun.enterprise.resource.pool.ConnectionPool.createResource(ConnectionPool.java:1185)
at com.sun.enterprise.resource.pool.datastructure.RWLockDataStructure.addResource(RWLockDataStructure.java:98)
... 66 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@494b584c from NONE to STOPPED
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:388)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:82)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:116)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:227)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:79)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:70)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:165)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:127)
at com.netoprise.neo4j.Neo4jManagedConnectionFactory.createManagedConnection(Neo4jManagedConnectionFactory.java:163)
at com.sun.enterprise.resource.allocator.ConnectorAllocator.createResource(ConnectorAllocator.java:160)
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:907)
... 68 more
Caused by: java.lang.AssertionError
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:265)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.<init>(LuceneDataSource.java:185)
at org.neo4j.index.lucene.LuceneIndexProvider.load(LuceneIndexProvider.java:72)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.loadIndexImplementations(InternalAbstractGraphDatabase.java:1171)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.init(InternalAbstractGraphDatabase.java:1143)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:382)
... 78 more
A quick inspection reveals that this exception is linked to an undeletable "write.lock" file. My write.lock file can't be deleted because, I guess, the migration is not over.
How can I make sure the migration is done before using the store, without migrating it outside of Glassfish?
Is there a way to have exclusive store migration in that context? And if so, how?
And is that the solution to my problem?
EDIT 1 Added exception message.
EDIT 2 All this only happens when the loaded graph was previously used with a Neo4j 1.5 connector and is now used with a Neo4j 1.8 one. When the graph is created by the connector, absolutely no error happens.
EDIT 3 Strangely enough, this only happens as long as no debugger is attached to the code: as soon as I try to debug it, the issue stops appearing. This makes me think there may be a migration cleanup mechanism that removes the write lock once migration is done, and that this cleanup is not performed when using my neo4j JCA connector. Is that a valid observation?
I am not too familiar with the JCA connector, but to be sure, I would just write a very small migration Java class that opens the database, lets it migrate, and shuts down. Then try again with the JCA connector. Something along these lines:
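(A sketch assuming the Neo4j 1.8 GraphDatabaseFactory API; pass the store directory your connector points at.)
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.factory.GraphDatabaseSettings;

public class MigrateStore {
    public static void main(String[] args) {
        // Opening the store with upgrades allowed triggers the migration.
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabaseBuilder(args[0])
                .setConfig(GraphDatabaseSettings.allow_store_upgrade, "true")
                .newGraphDatabase();
        // A clean shutdown completes the migration and releases write.lock.
        db.shutdown();
    }
}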
After further investigation, the cause turned out not to lie in multiple calls to the EmbeddedGraphDatabase constructor, but in multiple identical IndexProviders being loaded.
I use neo4j embedded in an open-source JCA connector.
In this connector, the org.neo4j.kernel.Service class is replaced by a custom one containing a workaround for service loading with JBoss non-shared libraries.
Unfortunately, in our context, this workaround implies loading the index provider twice:
once using the EAR classloader
once using the Glassfish library classloader.
Why?
Because our neo4j instance is used both for application data AND for authentication, the neo4j connector jar is put in ${domain}/lib. As a consequence, due to classloader delegation in the application server, the EAR classloader delegates to the Glassfish library classloader and finds the LuceneIndexProvider that way. Then the Glassfish library classloader is directly used to load the same LuceneIndexProvider class.
This leaves us with two LuceneIndexProvider objects, both trying to migrate the Lucene index. That leads to the AssertionError, as the write.lock file created by the first object should be deleted by the second one, which cannot do so.
I've since slightly changed that very specific class to use the JBoss workaround only when the default loading mechanism does not return any class (see commit here). This small change worked like a charm, so I think you can consider this issue fixed.
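For reference, the shape of the change, as an illustrative sketch (the helper names are made up, not the actual ones from the commit):
// Fall back to the JBoss-specific lookup only when default service loading
// finds nothing, so the same provider is never loaded through two classloaders.
private static <T> java.util.Collection<T> loadServices(Class<T> type) {
    java.util.Collection<T> services = loadWithDefaultMechanism(type); // hypothetical helper
    if (services.isEmpty()) {
        services = loadWithJBossWorkaround(type); // hypothetical helper
    }
    return services;
}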
I have a WCF service to consume in .NET. As per the requirements, the Action element in the header has to be "http://abc" and the To element has to be "ws://xyz" for the service to recognize and respond to the request. The soapAction of the operation is, however, blank in the WSDL, and it can't be changed.
My service configuration, built programmatically, is this:
text message encoding binding with Soap11 envelope version and WSAddressing10 addressing version
no security binding
http transport binding
The setup I found that achieves this requirement is "ws://xyz" as the endpoint URL, with Request.Headers.Action set to "http://abc" in BeforeSendRequest using a message inspector added through an endpoint behavior attached to the endpoint. Then I also attach a ClientViaBehavior with the URL of "http://abc".
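The inspector itself is small. Roughly, this is what I have (the class name is mine; the interface is System.ServiceModel.Dispatcher.IClientMessageInspector):
class HeaderRewritingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Force the required Action; To comes from the endpoint address ("ws://xyz").
        request.Headers.Action = "http://abc";
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}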
On my development machine this produces, as required:
<a:Action>http://abc</a:Action>
<a:To>ws://xyz</a:To>
However, on the test server it generates:
<a:Action>http://abc</a:Action>
<a:To>http://xyz</a:To>
I don't know the exact configuration of the server, but I believe it is a Windows server, as is my development box. Can the same code generate different messages on two different machines, or how else would I achieve this? I should also say it worked fine for several weeks and stopped last Monday.
I have since found the following:
The test server has .NET 4.5 on it, as does another machine I tried it on (which also failed). The dev machine where it works fine has just .NET 4.0, which suggests the framework version could have something to do with it. However, I have no evidence it is caused by .NET 4.5, as it was installed several weeks before the problem appeared. Moreover, there have been no Windows updates since it stopped working!
I've also tried to set the To element in my ClientMessageInspector implementation, but the protocol still gets flipped to http.
I think BeforeSendRequest is not called due to a misconfiguration of your service bindings. Check that you have added the behavior extension configuration to the service endpoints that should have the behavior.
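If you're wiring the inspector up in config, a missing or misspelled behavior extension will silently skip it. One way to rule that out is to attach the behavior in code; a sketch, with illustrative type names:
class HeaderRewritingBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Install the inspector on the client runtime so BeforeSendRequest fires.
        clientRuntime.MessageInspectors.Add(new HeaderRewritingInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Attach it when building the channel, e.g.:
// factory.Endpoint.Behaviors.Add(new HeaderRewritingBehavior());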
I'm creating a new WCF service. I initially had only three operations, but after some time I decided to add two more. These operations don't appear in the Microsoft test client, nor in the list of operations when I try to add a service reference from my WPF client. I also tried commenting out one of the initial operations; it still appears in the Microsoft test client and can be invoked. I also tried deleting the DLLs generated by the service and regenerating them. No luck. Is there some kind of "cache" where Visual Studio stores the WCF service libraries that I can delete?
UPDATE: I'm working with the service running in the ASP.NET development server.
You need to understand the order in which things happen.
You change your code, adding methods with [OperationContract] on them, removing them, or changing their parameters or return values (see the sketch after these steps).
You then must build your service, producing a .DLL that contains the changes.
You must then deploy the changed DLL to the server it's going to run on.
You must then restart the service (this may happen automatically depending on the server. For instance, IIS will recycle the service when it sees that the DLL changed)
You must then update your client, either the WCF Test Client, or "Add Service Reference", or the equivalent.
This last will have the effect of sending a request to the service for the new metadata or WSDL. Only then can the client see the changes you made to the definition of the service.
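As a concrete (hypothetical) illustration of the first step, only methods carrying [OperationContract] ever make it into the metadata:
[ServiceContract]
public interface IOrderService // illustrative contract
{
    [OperationContract]
    void PlaceOrder(string orderId); // appears in the WSDL after rebuild and redeploy

    void AuditOrder(string orderId); // compiles fine, but clients will never see it
}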
I don't know why, but I created a new project, copied the operation definitions over from the problematic project, and the problem is gone. One more case for the Microsoft mysteries file.
Make sure you are updating the service references after adding the new operations.
Also make sure they have the attribute [OperationContract].
One thing we have discovered is that when you deploy the DLLs, they must be in the bin folder and cannot reside in the debug or release folder.
What worked for me: just rebuilding the WCF project.
Did you close the client connection on the client side, as shown below for your service?
class Test
{
static void Main()
{
LocationClient client = new LocationClient();
// Use the 'client' variable to call operations on the service.
// Always close the client.
client.Close();
}
}
SOLUTION HERE:
Make sure your DataContract does NOT contain any enum
(you can use an integer instead)
Be sure to reference a project in the solution and not a DLL on your disk
Remove your "bin" and "obj" folders
Recompile
In IIS recycle the application pool
In IIS restart your service
In IIS "Browse" your service
=> You got it
I get this error:
StandardWrapperValve[Vaadin Servlet]: PWC1406: Servlet.service() for servlet Vaadin Servlet threw exception
java.lang.ClassCastException: com.delhi.entities.Category cannot be cast to com.delhi.entities.Category
when I try to run my webapps on glassfish v2.
Category is a JPA entity object.
The offending code, according to the server log, is:
for (Category c : categories) {
mymethod();
}
categories is derived from:
List<Category> categories = q.getResultList();
Any idea what went wrong?
This is a class loader issue. If a class is loaded by different class loaders, its objects cannot be assigned to each other. You have probably passed an object from one WAR into another. There are several options to resolve this (a quick diagnostic snippet follows the list):
Put all your code into a single WAR.
Use some form of remoting between your WARs. Serialization takes care of the class loader problem.
Try putting all your WARs into a single EAR. If that doesn't work, put all the code into JARs that are on the EAR's classpath in the MANIFEST.MF.
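A quick way to confirm the diagnosis, as a hypothetical snippet placed where the cast fails:
// If these print different classloaders, the "same" class was loaded twice
// and the cast is doomed even though the names match.
System.out.println(Category.class.getClassLoader());
System.out.println(categories.get(0).getClass().getClassLoader());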
I once had the same problem, and the environment I had was the following:
I had Glassfish v4
Netbeans with following projects
a webpage WAR project containing the entities
and an EAR project containing that webpage WAR project
The problem was that in the WAR's project settings I had checked [x] Run > Deploy on save. This caused the WAR project to be deployed every time I hit save, which sometimes led to PermGen (memory) problems and an inability to deploy the EAR correctly (because, e.g., in between undeploying and deploying the EAR, this "crazy" Netbeans was deploying the WAR).
Solution: if Netbeans && using an EAR, uncheck Deploy on save in the project properties.
EDIT:
it seems that this error is connected with
SEVERE: The web application [/faces] created a ThreadLocal with key of type [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1] (value [org.glassfish.pfl.dynamic.codegen.impl.CurrentClassLoader$1@249ea63a]) and a value of type [org.glassfish.web.loader.WebappClassLoader] (value [WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I had the same problem today. The solution was closing the EntityManagerFactory after use; a minimal sketch follows below.
This answer helped me:
https://stackoverflow.com/a/13823219/2455506
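A minimal sketch of that fix (javax.persistence types; the persistence unit name is made up):
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit");
try {
    EntityManager em = emf.createEntityManager();
    // ... queries ...
    em.close();
} finally {
    emf.close(); // drops metadata tied to the old webapp classloader on redeploy
}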
I'm experiencing this problem too with Glassfish v2 and Glassfish v3.
Can I ask you a question: Are you attempting to initialize any persistence object when the application is deployed (through a servlet loaded on startup or a context listener)?
Like bguiz, I've noticed this problem only happens on redeploy. A new deploy to a freshly restarted Glassfish server never has this problem.
Like FelixM mentioned, I'm convinced this is a class loader issue; however, I don't believe it's an issue with multiple WARs (I only have one deployed to my server). In Glassfish 3, I can see that my WAR is utilizing two Glassfish "engines": one for the web (war) and one for the JPA. From what I understand, these are different containers, each with its own classloader. I'm guessing Glassfish v2 works in the same manner.
I'm using Spring and (re)initialize some persistence objects on (re)deploy. What I'm thinking is that while the web engine is reinitializing the WAR, the JPA engine is still using the old class definitions. Often, if I retry the redeploy after this initial failure, it may succeed (sometimes it takes more than one retry, but eventually I can get it to succeed without a restart; I've had better success with Glassfish v3 than v2).
At this point I'm thinking that either these two classloaders are out of sync or there is some sort of race condition on redeploy that allows the operation to sometimes succeed. I've tried to force the classloader, writing code like this:
HashMap<Object, Object> properties = new HashMap<Object, Object>();
properties.put(PersistenceUnitProperties.CLASSLOADER, this.getClass().getClassLoader());
entityManagerFactory = Persistence.createEntityManagerFactory(jpaContext, properties);
but it didn't seem to have any effect.
I'm also wondering if eliminating the initialization at startup could fix the problem, giving the appserver time to resynchronize both engines before using any JPA classes (which is why I asked my follow-up question).
My observation is that it only happens when using a hot redeploy or a static redeploy. This only applies, of course, if you get a class cast exception where both the to and from classes are the same.
Workarounds:
Don't use redeploy; undeploy and then deploy instead
Restart app server
Remove static members of the affected classes
Use a remote interface (serialization makes this go away)
IMO the class loader was unable to reload the class, so the old version was reused, resulting in the error.
This article doesn't talk about this error directly, but it is good background info on how the class loader works.