RabbitMQ.Client fails to consume the messages shoveled from Azure Service Bus - rabbitmq

I have a .NET application that connects to RabbitMQ using RabbitMQ.Client 6.4.0.
The messages are shoveled from Azure Service Bus.
RabbitMQ.Client throws the following exception. However, if the message is sent from the RabbitMQ UI, or from RabbitMQ.Client itself, there is no problem.
The code is at https://github.com/wjmirror/HiBunny
The shovel runs from the Azure Service Bus topic portal.quoterequests to the RabbitMQ exchange portal.quoterequests.
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=541, text='INTERNAL_ERROR', classId=0, methodId=0
at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply(TimeSpan timeout)
at RabbitMQ.Client.Impl.ModelBase.BasicConsume(String queue, Boolean autoAck, String consumerTag, Boolean noLocal, Boolean exclusive, IDictionary`2 arguments, IBasicConsumer consumer)
at RabbitMQ.Client.Impl.AutorecoveringModel.BasicConsume(String queue, Boolean autoAck, String consumerTag, Boolean noLocal, Boolean exclusive, IDictionary`2 arguments, IBasicConsumer consumer)
at RabbitMQ.Client.IModelExensions.BasicConsume(IModel model, String queue, Boolean autoAck, IBasicConsumer consumer)
at HiBunny.Program.Main(String[] args) in d:\study\HiBunny\HiBunny\Program.cs:line 64

Related

Handling Custom exception for rabbitmq listener connectivity issues

I have a listener which consumes messages from a third-party producer. I need to send a custom error message in case of queue connection issues.
I'm new to RabbitMQ. May I know what the ways are to handle a custom exception here?
You can use a ConnectionListener callback and register it on the org.springframework.amqp.rabbit.connection.ConnectionFactory configuration:
void addConnectionListener(ConnectionListener listener);
https://docs.spring.io/spring-amqp/docs/current/reference/html/#connection-channel-listeners
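As an illustration, a minimal sketch of wiring such a listener into a Spring `CachingConnectionFactory` could look like the following; the host name and the comment placeholders are assumptions, and the error-reporting logic would be whatever your application needs:

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConnectionConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        // "localhost" is a placeholder; point this at your broker
        CachingConnectionFactory factory = new CachingConnectionFactory("localhost");
        factory.addConnectionListener(new ConnectionListener() {
            @Override
            public void onCreate(Connection connection) {
                // connection (re)established - clear any alert state here
            }

            @Override
            public void onClose(Connection connection) {
                // connection lost - emit your custom error message here
            }
        });
        return factory;
    }
}
```

The listener fires on every connection open and close, so it is a natural place to translate low-level connectivity events into application-specific error reporting.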

Create RabbitMQ queue using spring cloud stream without a consumer or without an active consumer

Is there a way to create a RabbitMQ queue using Spring Cloud Stream without having a consumer for the queue?
Our scenario is that we want to use a delayed-messaging strategy, so messages arriving at the first queue would be held until they expire and are moved to a DLQ.
The application would consume the messages from the DLQ.
I wanted to check whether we can use Spring Cloud Stream to configure the queues when we do not have a consumer for the first queue and it is just there to hold messages until expiry.
Yes; simply add a Queue bean (and binding if needed).
Boot auto configures a RabbitAdmin which will detect such beans when the connection is first established.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#broker-configuration
@Bean
public Queue queue() {
    return QueueBuilder.nonDurable("foo")
            .autoDelete()
            .exclusive()
            .withArgument("foo", "bar")
            .build();
}
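For the delay-queue scenario described in the question, the holding queue can be declared with a message TTL and dead-letter routing so that expired messages land in the queue the application actually consumes. A sketch under assumed names (the queue names, the 60-second TTL, and routing via the default exchange are all illustrative):

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DelayQueueConfig {

    // Holding queue: no consumer attaches to it. Messages sit here until the
    // TTL expires, then are dead-lettered to the work queue.
    @Bean
    public Queue delayQueue() {
        return QueueBuilder.durable("orders.delay")
                .withArgument("x-message-ttl", 60_000)            // 60s hold time
                .withArgument("x-dead-letter-exchange", "")        // default exchange
                .withArgument("x-dead-letter-routing-key", "orders.work")
                .build();
    }

    // The queue the application actually consumes from.
    @Bean
    public Queue workQueue() {
        return QueueBuilder.durable("orders.work").build();
    }
}
```

With this arrangement the auto-configured RabbitAdmin declares both queues on the first connection, and only the work queue needs a listener.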

ActiveMQ broker error "Setup of JMS message listener invoker failed for destination trying to recover. Cause: The Consumer is closed"

We have observed in our environment that consumers shown in the ActiveMQ UI get removed. We have very low traffic; initially we had 3 consumers, and each of them was removed after an interval of a couple of hours. Once we restart our consumer, it refreshes the connection for a couple more hours. We don't see any errors in the logs except:
Setup of JMS message listener invoker failed for destination 'queue-1' - trying to recover. Cause: The Consumer is closed
I use the AWS ActiveMQ broker and don't see any errors in the broker logs.
We use PooledConnectionFactory with ActiveMQConnectionFactory to create a pool of connections for our consumers, as recommended. We are using ActiveMQ 5.15.
@Bean
public PooledConnectionFactory pooledConnectionFactory() {
    ActiveMQConnectionFactory activeMQConnectionFactory =
            new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    activeMQConnectionFactory.setUserName(username);
    activeMQConnectionFactory.setPassword(password);
    activeMQConnectionFactory.setTrustAllPackages(true);
    ActiveMQPrefetchPolicy activeMQPrefetchPolicy = new ActiveMQPrefetchPolicy();
    activeMQPrefetchPolicy.setQueuePrefetch(100);
    activeMQConnectionFactory.setPrefetchPolicy(activeMQPrefetchPolicy);
    PooledConnectionFactory pooledConnectionFactory =
            new PooledConnectionFactory(activeMQConnectionFactory);
    pooledConnectionFactory.setMaxConnections(poolSize);
    return pooledConnectionFactory;
}

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory =
            new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(pooledConnectionFactory());
    factory.setMessageConverter(jacksonJmsMessageConverter());
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setConcurrency("1-1");
    factory.setErrorHandler(activeMQErrorHandler());
    return factory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate jmsTemplate = new JmsTemplate(pooledConnectionFactory());
    jmsTemplate.setMessageConverter(jacksonJmsMessageConverter());
    return jmsTemplate;
}

@Bean
public Queue queue() {
    return new ActiveMQQueue(queueName);
}

@Bean
public ErrorHandler activeMQErrorHandler() {
    return t -> {
        LOGGER.error("JMS_LISTENER_ERROR");
    };
}
Given the information provided, it sounds as though either the connection is dropping and the client isn't reporting it, or the remote is closing the consumer on its end, which the pool will likely not notice until some user action is performed.
This is one of the gotchas of using a JMS pool: the pool doesn't have complete insight into what is going on with the client, so checking out a connection that has been sitting in the pool can yield a stale, no-longer-active connection, because the I/O interruption doesn't bubble up to the pool layer. One way of working around this is to use the ActiveMQ client failover transport, which automatically reconnects to the broker if the connection is dropped.
Another option you could try is the PooledJMS connection pooling library, which does some additional work to validate failed connections and closed resources sooner; pair it with a ConnectionFactory that creates ActiveMQ connections using failover so that remotely closed resources, such as consumers, can be caught in some cases.
Ultimately, though, your code will still need to deal with potential failure cases from the JMS resources and retry where needed, such as senders seeing security exceptions. The pooling bits don't make all your troubles go away, and in some cases they just introduce new ones you hadn't thought of yet.
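To sketch the failover suggestion above: switching the broker URL to the failover transport lets the ActiveMQ client retry broken connections transparently. The host, port, pool size, and tuning options below are illustrative assumptions, not values from the question:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class FailoverConnectionFactoryBuilder {

    public static PooledConnectionFactory create(String host, int port) {
        // failover: wraps the tcp transport and reconnects on failure;
        // maxReconnectAttempts=-1 means retry forever (tune for your environment)
        String url = "failover:(tcp://" + host + ":" + port + ")"
                + "?maxReconnectAttempts=-1&initialReconnectDelay=1000";
        ActiveMQConnectionFactory amqFactory = new ActiveMQConnectionFactory(url);
        PooledConnectionFactory pooled = new PooledConnectionFactory(amqFactory);
        pooled.setMaxConnections(10); // pool size is illustrative
        return pooled;
    }
}
```

With failover in place, a dropped connection is re-established underneath the pool, so a checked-out connection is less likely to be stale; it does not, however, remove the need for application-level retry on failed sends or receives.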

Azure Service Bus Relay Occasional FaultException

We can't determine why the Azure BasicHttpRelay is throwing an occasional FaultException without any details. We've enabled WCF diagnostic tracing, but the available stack trace information is still the same. It seems like the WCF client channel fails for a brief time and then shortly returns.
We do cache the WCF Channel (e.g. CreateChannel), but this is the first time we've experienced this strange behavior. We have other Azure Service Bus relay solutions that work fine with this approach.
Error Message:
There was an error encountered while processing the request.
Stack Trace:
at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at [our WCF method]...
FaultException - FaultCode Details:
Name: ServerErrorFault
Namespace: http://schemas.microsoft.com/netservices/2009/05/servicebus/relay
IsPredefinedFault: false
IsReceiverFault: false
IsSenderFault: false
SOAP Message:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Header />
<s:Body>
<s:Fault>
<faultcode xmlns:a="http://schemas.microsoft.com/netservices/2009/05/servicebus/relay">a:ServerErrorFault</faultcode>
<faultstring xml:lang="en-US">There was an error encountered while processing the request.</faultstring>
<detail>
<ServerErrorFault xmlns="http://schemas.microsoft.com/netservices/2009/05/servicebus/relay" xmlns:i="http://www.w3.org/2001/XMLSchema-instance" />
</detail>
</s:Fault>
</s:Body>
</s:Envelope>
Through debugging, we can see the server properly responds to the message requests (via IDispatchMessageInspector), but the client fails to handle the response appropriately (IClientMessageInspector reports fault). Subsequent relay requests will succeed after the client channel seemingly corrects itself. These failures seem to be intermittent and not load-driven. We never see these FaultException errors using basicHttpBinding outside the Azure relay.
Does anyone have any suggestions? We are using Azure SDK 1.8.
I've tried configuring a new Service Bus Relay namespace using an owner shared secret, but I'm still seeing the same results.
After reaching out to MS, this issue turned out to be an MS bug in the Relay or the SDK, specifically when using the HTTP connectivity mode. At this point, the only workaround is to ensure you have the appropriate outgoing TCP ports opened up for reliable connectivity with the Azure Relay.
Allow outgoing TCP ports: 9350 - 9354
MS has told us that they are still working on resolving the root cause. Hopefully this workaround will help others. Our corporate firewall had these TCP ports blocked, which forced all communication over port 80, which must trigger this issue. The positive side is that opening up these ports enables faster connectivity to the relay when starting up your listeners (AutoDetect doesn't have to check TCP port availability every time).

WCF Remote MSMQ - I can write to a remote queue, but cannot receive

Server: Windows Server 2008 R2
.NET Version: 4.5
I'm using WCF to connect two servers - app and queue. I want app to be able to send/receive messages from queue. For some reason, app can send messages, but CANNOT receive them.
The netMsmq binding looks like:
<binding name="JobExecutionerBinding" receiveErrorHandling="Move">
<security>
<transport msmqAuthenticationMode="None" msmqProtectionLevel="None" />
</security>
</binding>
And the service binding looks like:
Now, the client binding looks like:
<endpoint address="net.msmq://queue/private/jobs"
binding="netMsmqBinding"
bindingConfiguration="JobExecutionerBinding"
contract="JobExecution.Common.IJobExecutionService"
name="SimpleEmailService"
kind=""
endpointConfiguration=""/>
I changed a few names for security's sake.
So, the WCF client can send to the remote queue without a problem. It even properly queues the outgoing message and forwards it on later in the event that the remote queue server is down. But every time I start up the WCF service, I get this:
There was an error opening the queue. Ensure that MSMQ is installed and running, the queue exists and has proper authorization to be read from. The inner exception may contain additional information. ---> System.ServiceModel.MsmqException: An error occurred while opening the queue: The queue does not exist or you do not have sufficient permissions to perform the operation. (-1072824317, 0xc00e0003). The message cannot be sent or received from the queue. Ensure that MSMQ is installed and running. Also ensure that the queue is available to open with the required access mode and authorization.
at System.ServiceModel.Channels.MsmqQueue.OpenQueue()
at System.ServiceModel.Channels.MsmqQueue.GetHandle()
at System.ServiceModel.Channels.MsmqQueue.SupportsAccessMode(String formatName, Int32 accessType, MsmqException& msmqException)
--- End of inner exception stack trace ---
at System.ServiceModel.Channels.MsmqVerifier.VerifyReceiver(MsmqReceiveParameters receiveParameters, Uri listenUri)
at System.ServiceModel.Channels.MsmqTransportBindingElement.BuildChannelListener[TChannel](BindingContext context)
at System.ServiceModel.Channels.Binding.BuildChannelListener[TChannel](Uri listenUriBaseAddress, String listenUriRelativeAddress, ListenUriMode listenUriMode, BindingParameterCollection parameters)
at System.ServiceModel.Description.DispatcherBuilder.MaybeCreateListener(Boolean actuallyCreate, Type[] supportedChannels, Binding binding, BindingParameterCollection parameters, Uri listenUriBaseAddress, String listenUriRelativeAddress, ListenUriMode listenUriMode, ServiceThrottle throttle, IChannelListener& result, Boolean supportContextSession)
at System.ServiceModel.Description.DispatcherBuilder.BuildChannelListener(StuffPerListenUriInfo stuff, ServiceHostBase serviceHost, Uri listenUri, ListenUriMode listenUriMode, Boolean supportContextSession, IChannelListener& result)
at System.ServiceModel.Description.DispatcherBuilder.InitializeServiceHost(ServiceDescription description, ServiceHostBase serviceHost)
at System.ServiceModel.ServiceHostBase.InitializeRuntime()
I've been all over StackOverflow and the internet for 8 hours. Here's what I've done:
Ensured that ANONYMOUS LOGON, Everyone, Network, Network Service, and Local Service have full control
Stopped the remote MSMQ server and observed what the WCF service does; I get a different error, so I'm sure the WCF service is speaking to the MSMQ server at startup
Disabled Windows Firewall on both boxes and opened all ports via EC2 security groups
Set AllowNonauthenticatedRpc and NewRemoteReadServerAllowNoneSecurityClient to 1 in the registry
Configured MS DTC on both servers (the queue is transactional, but I get the same error regardless of whether the queue is transactional or not)
Confirmed that the WCF server starts up fine and receives without a problem if I use a local queue
Help!!! I can't scale my app without a remote queueing solution.
It's not clear from your post which tier cannot read and, more importantly, which queue.
However, reading remote queues transactionally is not supported:
Message Queuing supports sending transactional messages to remote queues, but does not support reading messages from a remote queue within a transaction. This means that reliable, exactly-once reception is not available from remote queues. See
Reading Messages from Remote Queues
I suspect that somewhere your system is still performing transactional remote reads, even though you mentioned you disabled it.
From a best-practice point of view, even if you got it to work, your design would not scale, which is a shame as that is something you mentioned you wanted:
Remote reading is a high-overhead and therefore inefficient process. Including remote read operations in an application limits scaling.
You should always remote write, not remote read.
A better way is to insert a message broker or router service that acts as the central point for messaging. Your app and queue services (confusing names, by the way) should merely read transactionally from their local queues.
i.e.
app should transactionally read its local queue
app should transactionally send to the remote broker
broker transactionally reads local queue
broker transactionally sends to remote queue
Similarly if your queue tier wanted to reply the reverse process to the above would occur.
Later if you wish to improve performance you can introduce a Dynamic router which redirects a message to a different remote queue on another machine based on dynamic rulesets or environmental conditions such as stress levels.
Remote transactional reads are supported as of MSMQ 4.0 (Windows Server 2008). If you are facing this issue, be sure to check out https://msdn.microsoft.com/en-us/library/ms700128(v=vs.85).aspx