Why is WebLogic stopping long-running threads without throwing any exception? - weblogic

I have a Java application (deployed on WebLogic) which I use for polling messages from an Amazon FIFO queue. The problem is that after running for a long time, whatever WebLogic is doing, my thread behaves like a hibernated thread and my application stops polling messages without any exception.
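For what it's worth, WebLogic manages its own thread pools and treats unmanaged or long-parked threads differently from container-managed work, so one way to make polling more robust is to run it in short, repeated bursts on a container-managed executor rather than in one never-ending loop. Below is a minimal sketch, assuming WebLogic 12.2.1+ (Java EE 7 concurrency utilities); receiveMessages(), handle() and the 5-second interval are illustrative placeholders, not part of any real SDK.

```java
import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.enterprise.concurrent.ManagedScheduledExecutorService;

// Sketch only: poll in short, container-scheduled bursts instead of one
// never-ending unmanaged thread, so the container's thread management
// (stuck-thread handling, redeploys) cannot silently park the work.
@Singleton
@Startup
public class QueuePoller {

    @Resource(lookup = "java:comp/DefaultManagedScheduledExecutorService")
    private ManagedScheduledExecutorService scheduler;

    @PostConstruct
    void startPolling() {
        // Poll every 5 seconds; each run should be short and bounded.
        scheduler.scheduleWithFixedDelay(this::pollOnce, 0, 5, TimeUnit.SECONDS);
    }

    private void pollOnce() {
        try {
            // Placeholder for the real FIFO-queue client calls; keep each
            // long-poll bounded so the task returns promptly.
            // for (Message m : receiveMessages()) { handle(m); }
        } catch (Exception e) {
            // Log and swallow: an uncaught exception cancels the scheduled task silently.
        }
    }
}
```

The point of the sketch is that each scheduled run returns to the container quickly, so there is no single long-lived polling thread for the server to quietly park.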

Related

how to resolve "connection.blocked: true" in capabilities on the RabbitMQ UI

"rabbitmqctl list_connections" shows as running but on the UI in the connections tab, under client properties, i see "connection.blocked: true".
I can see that messages are in queued in RabbitMq and the connection is in idle state.
I am running Airflow with Celery. My jobs are not executing at all.
Is this the reason why jobs are not executing?
How do I resolve the issue so that my jobs start running?
I'm experiencing the same kind of issue just using Celery.
It seems that when you have a lot of messages in the queue, and these are fairly chunky, node memory usage climbs until the RabbitMQ memory high watermark is exceeded. That triggers blocking of the consumer connections, so no worker can access that node (and its queues).
At the same time, publishers keep happily sending messages via the exchange, so you end up in a lose-lose situation.
The only solution we found was to avoid hitting that memory watermark and to scale up the number of consumers.
Keep messages/tasks lean so that the signature is kilobytes, not megabytes.
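For reference, the watermark itself is configurable while you slim the messages down; a minimal rabbitmq.conf sketch (new-style config format, RabbitMQ 3.7+), where the 0.6 value is purely illustrative and adding RAM or consumers remains the real fix:

```
# Fraction of system RAM at which RabbitMQ starts blocking publishing
# connections (the default is 0.4).
vm_memory_high_watermark.relative = 0.6
```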

how to stop activemq failover reconnect process

I use the ActiveMQ client 5.10.0 and the failover protocol to connect to an ActiveMQ node. What I face now is that the failover reconnect process executes indefinitely if the ActiveMQ node used in the failover URI goes down, and another thread which is intended to stop the reconnect process just waits.
The thread dump belonging to the stop thread mentioned above is as follows:
I cannot post the whole thread dump here; please get it from the link.
Could anyone help me? I have been stuck on this problem for a long time!
Thanks very much!
By the way, I cannot use any newer version of the ActiveMQ client, because the target environment JDK is 1.6.
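Not a confirmed fix for this exact hang, but the failover transport accepts URI options that bound the reconnect loop, so a dead broker eventually surfaces as an exception instead of retrying forever and blocking whichever thread is trying to stop it. A minimal sketch, where the broker addresses and numeric values are illustrative (maxReconnectAttempts and timeout are documented failover transport options):

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch only: bound the failover transport's retries so a broker outage
// fails fast instead of reconnecting indefinitely.
public class BoundedFailoverExample {
    public static void main(String[] args) throws Exception {
        String url = "failover:(tcp://broker1:61616,tcp://broker2:61616)"
                + "?maxReconnectAttempts=10"   // stop retrying after 10 attempts
                + "&timeout=30000";            // fail blocked operations after 30s while disconnected
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... consume/produce as usual ...
        connection.close();
    }
}
```

With a bounded reconnect, the stop/close path no longer has to wait on a loop that never gives up.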

iis idle timeout and long running request on wcf service

I have to implement a long-running process which is started via a request to a WCF method (not started when the application starts).
I know that this is the wrong solution; a Windows service or something else would be better for a long-running process, but in my situation that is impossible. I have to use a WCF service hosted on IIS.
I have read about AppDomain recycling, but I can't figure out the part about the idle timeout: is the AppDomain restarted if a request runs for over 20 minutes? I know that this issue appears when a background task is started at application start.
So will my AppDomain be killed (the idle timeout is set to 20 minutes) when one long-running request is started and no other request comes in after it?
When a process is started at application start, IIS knows nothing about that task, so it is clear to me that in that situation the AppDomain gets shut down.
But does IIS kill the AppDomain after 20 minutes even though a request is still running? I am confused, because IIS knows about the still-running request and maybe does not do this.
Which is true?
Yes, IIS will kill the process, because it works on a rolling horizon of incoming requests, not on what is currently running. A way around this might be to have the web service call itself while it is running, continually pinging the server to let it know that it is still active. But on the whole, IIS will kill its processes when no requests are coming in.
Taken directly from MSDN: "The worker process shuts down after it finishes processing its existing requests, or after a configured time-out, whichever comes first."
In your case, if your process is longer than the timeout, your process will never finish.
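If disabling idle shutdown for the application pool is acceptable, that is controlled by the pool's processModel settings; a minimal applicationHost.config sketch, where the pool name is illustrative and 00:00:00 means the pool never idles out (this addresses only the idle timeout, not request execution limits):

```xml
<!-- applicationHost.config fragment (also settable in IIS Manager or via appcmd) -->
<system.applicationHost>
  <applicationPools>
    <add name="MyAppPool">
      <!-- 00:00:00 disables the idle timeout for this pool -->
      <processModel idleTimeout="00:00:00" />
    </add>
  </applicationPools>
</system.applicationHost>
```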

WCF Azure long running action

I have a WCF service that needs to be called so that the call triggers 2-3 hours of processing. I'm using a Windows C# client application to call the service and have set the timeouts to their maximum values. When I deployed this to Windows Azure, the WCF process that was triggered by the client seems to stop after a certain point, and the client doesn't get any timeout exceptions. I could use an Azure Worker Role, but the processing can only be completed using the WCF code, because it is a complicated operation. In other words, I can't just schedule a Worker Role that executes a simple edit/insert operation against a database. So I have a bit of a chicken-and-egg problem: the background process needs the WCF code to do the background operation, but WCF seems to stop after a while on Azure. What is a way to execute a long-running call in WCF, and how can I execute a long-running call on Azure that needs the hosted cloud service's WCF code to do the long-running operation?
This is because of the load balancer. The timeout used to be 60 seconds, but a few months ago this was increased to 'more than 60 seconds' (depending on the number of concurrent connections). Anyway, you need to keep the connection alive in order to avoid the timeout.
I suggest you try implementing this in your WCF client/service: WCF Azure Net.TCP Keep Alive
Why not rethink your architecture? Instead of depending on a connection (that can be disconnected for whatever reason), why not simply have your client drop a message in a queue? Your worker role picks up the message from the queue, does the 2-3 hour processing and once it's done it drops a message in another queue. Finally your client polls that other queue and once a message arrives there it knows the process is complete.
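The pattern is roughly as follows; this is a language-agnostic sketch (written in Java here) with a hypothetical QueueClient interface standing in for whatever queue SDK you actually use, so none of these names come from a real API:

```java
import java.util.Optional;
import java.util.UUID;

// Hypothetical stand-in for the real queue SDK (e.g. a cloud storage queue client).
interface QueueClient {
    void send(String queueName, String body);
    Optional<String> receive(String queueName); // empty when no message is waiting
}

// Client side of the pattern: enqueue the work, then poll for the result later.
class LongRunningJobClient {
    private final QueueClient queues;

    LongRunningJobClient(QueueClient queues) {
        this.queues = queues;
    }

    // 1. Drop a work item into the request queue and return immediately.
    String submit(String payload) {
        String jobId = UUID.randomUUID().toString();
        queues.send("work-requests", jobId + ":" + payload);
        return jobId;
    }

    // 3. Poll the response queue periodically instead of holding a connection
    //    open for the 2-3 hours the worker needs.
    Optional<String> pollResult() {
        return queues.receive("work-results");
    }
}

// 2. (Not shown) The worker role loops: receive from "work-requests", run the
//    long operation using the shared processing code, then send to "work-results".
```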
You can place the code required for the long-running operation in a separate project. You can then include this project in both your WCF solution and your Worker Role solution.
The background process will then have all the functionality that it requires to complete the operation.

Accessing AMQP connection from Resque worker when using Thin as Web Server

I'm trying to work past an issue when using a Resque job to process inbound AMQP messages.
I am using an initializer to set up the message consumer at application startup and then feed the received messages to a Resque job for processing. That part is working quite well.
However, I also want to send a response message out of the worker, i.e. publish it back out to a queue, and I am running into the issue of the forking process making the app-wide AMQP connection unaddressable from inside the Resque worker. I would be very interested to see how other folks have tackled this, as I can't believe this pattern is unusual.
Due to message volumes, firing up a new thread and AMQP connection for every response is not a workable solution.
Ideas?
My bust on this: I had my eye off the ball and forgot that Resque forks when it kicks off a worker. Going to go the route suggested by others and daemonize the process instead...