How do you print a "recovery" log message using the RetryBackoffSpec in reactor? - spring-webflux

I have the following code which handles some websocket messages:
Flux.create(sink -> {
    handler //.log("handler")
        .doOnNext(s -> log.info("propagating: {}", s))
        .doOnNext(sink::next)
        .doOnError(sink::error)
        .onErrorComplete() // silences the error here and lets the outer subscriber handle it
        .publishOn(Schedulers.boundedElastic())
        .subscribeOn(Schedulers.boundedElastic(), false)
        .subscribe();
})
.take(10)
.onErrorResume(
    e -> Exceptions.unwrap(e).getClass().isAssignableFrom(Errors.NativeIoException.class),
    t -> {
        log.error("Websocket connection error: {}", t.getMessage());
        log.debug("{}", t.fillInStackTrace().toString());
        // the error is handled and hidden by retryWhen.
        return Mono.error(t);
    })
.retryWhen(RetryBackoffSpec.backoff(10000, Duration.ofSeconds(3))
    .maxBackoff(Duration.ofSeconds(3))
    .transientErrors(true)
    .doBeforeRetry(s -> log.error("Retrying connection to {}", "abc"))
    .doAfterRetry(s -> log.error("Attempt {}/{} to restore connection to {} failed with {}",
        s.totalRetriesInARow(), 10000, "abc", s.failure().getMessage()))
);
Every now and then the connection drops, which is why there's a retryWhen operator in the pipe. As you can see above, I am printing console messages that report the connection drop and how many times it has retried.
I am, however, not able to figure out how to print a "recovered" message (i.e. "Connection to X restored"). Am I missing something in the docs, or am I expected to write a custom RetryBackoffSpec that does it?
Example log output:
03:11:34.205 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991494202,"hello":"world!"}
03:11:34.703 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991494702,"hello":"world!"}
03:11:35.205 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991495203,"hello":"world!"}
03:11:35.704 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991495703,"hello":"world!"}
03:11:46.746 10-02-2023 | ERROR | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | Websocket connection error: recvAddress(..) failed: Connection timed out
03:11:46.749 10-02-2023 | ERROR | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | Retrying connection to abc
03:11:49.752 10-02-2023 | ERROR | parallel-3 | org.example.chaos.simulation.NetworkDisruptionTest | Attempt 0/10000 to restore connection to abc failed with recvAddress(..) failed: Connection timed out
03:11:52.763 10-02-2023 | ERROR | boundedElastic-5 | org.example.chaos.simulation.NetworkDisruptionTest | Retrying connection to abc
03:11:55.764 10-02-2023 | ERROR | parallel-4 | org.example.chaos.simulation.NetworkDisruptionTest | Attempt 1/10000 to restore connection to abc failed with connection timed out: /172.25.0.2:8090
03:11:58.772 10-02-2023 | ERROR | boundedElastic-3 | org.example.chaos.simulation.NetworkDisruptionTest | Retrying connection to abc
03:12:01.773 10-02-2023 | ERROR | parallel-5 | org.example.chaos.simulation.NetworkDisruptionTest | Attempt 2/10000 to restore connection to abc failed with connection timed out: /172.25.0.2:8090
--- A message such as "Connection to abc has been restored." is expected to appear here.
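One approach (there is no built-in hook for this in RetryBackoffSpec, as far as I can tell) is to track retry state in a flag: set it in doBeforeRetry, and on the first successful emission after retryWhen, log the recovery and clear it. A minimal, stdlib-only sketch of such a helper; the class and method names here are hypothetical:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical helper: remembers whether a retry happened since the last
// successful emission, so the first signal after a reconnect can produce a
// "restored" message exactly once.
class RecoveryLogger {
    private final AtomicBoolean retrying = new AtomicBoolean(false);
    private final String target;

    RecoveryLogger(String target) {
        this.target = target;
    }

    // Call this from doBeforeRetry(...)
    void onRetry() {
        retrying.set(true);
    }

    // Call this from a doOnNext(...) placed after retryWhen;
    // returns the recovery message on the first signal after a retry, else null.
    String onSignal() {
        if (retrying.compareAndSet(true, false)) {
            return "Connection to " + target + " has been restored.";
        }
        return null;
    }
}
```

Wired into the pipeline above, this would look roughly like `.doBeforeRetry(s -> recovery.onRetry())` inside the spec, plus a `.doOnNext(v -> { String m = recovery.onSignal(); if (m != null) log.info(m); })` after the retryWhen operator.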

Related

TDengine An offline node cannot be deleted

taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
=================================================================================================================================================
1 | td-1:6030 | 6 | 80 | ready | 2022-12-05 11:20:16.972 | |
2 | td-2:6030 | 2 | 16 | offline | 2022-12-05 11:20:17.342 | status msg timeout |
Query OK, 2 row(s) in set (0.002706s)
taos> drop dnode 2;
DB error: Node is offline (0.138705s)
If you want to delete a TDengine data node, you have to migrate its data off first; before dropping the dnode, it must be online, not offline.

Authorization failed: The resource could not be found. (HTTP 404)

I'm trying to run the monitoring service in openstack, but I'm receiving this error:
~$ monasca metric-list
Authorization failed: The resource could not be found. (HTTP 404)
When I check the log file, this is what I found:
2016-09-20 10:27:36.357 27771 WARNING keystonemiddleware.auth_token [-] Fetch revocation list failed, fallback to online validation.
2016-09-20 10:27:36.376 27771 ERROR keystonemiddleware.auth_token [-] Bad response code while validating token: 403
2016-09-20 10:27:36.377 27771 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "You are not authorized to perform the requested action, identity:validate_token.", "code": 403, "title": "Forbidden"}}
2016-09-20 10:27:36.377 27771 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Failed to fetch token data from identity server
These are the services, projects, users, roles and endpoints in Keystone:
+----------------------------------+----------+------------+
| ID | Name | Type |
+----------------------------------+----------+------------+
| 1c38cf31124d404783561793fc1fb7f0 | monasca | monitoring |
| 1eb72109ea604b6e8f2bd264787ca370 | keystone | identity |
+----------------------------------+----------+------------+
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 733a0a1369f94f6ab31b8875ef19e0ee | service |
| 9e732f1a2aca48e098daf62bb230f85e | monasca |
| f2df2111f893434f83fda7d5bd6cac4a | admin |
+----------------------------------+---------+
+----------------------------------+---------------+
| ID | Name |
+----------------------------------+---------------+
| 3a1b8582a11f4e07b3a21e84e9fb7c23 | monasca-user |
| 559752237e824d81a6133494b63c5789 | monasca-agent |
| 5bcf19af4e8e4067a5679e6a0f2f88f1 | admin |
+----------------------------------+---------------+
+----------------------------------+---------------+
| ID | Name |
+----------------------------------+---------------+
| 1679c1c099b543db96ac4412be21b15a | admin |
| 6ca31578625c49568085284dee72e4b8 | monasca-agent |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| a3267f589e7342ceaedef57ea9e4aac2 | monasca-user |
+----------------------------------+---------------+
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
| 3fbfc68e9f894e47846b896c6c8d3f3e | RegionOne | keystone | identity | True | internal | http://controller:5000/v2.0 |
| 470043a7f6364add902548df6fb7b60e | RegionOne | monasca | monitoring | True | public | http://localhost:8082/v2.0 |
| 9e68606b37084cbeb95106ff1bede0cb | RegionOne | monasca | monitoring | True | internal | http://localhost:8082/v2.0 |
| b4273c72671e4fac99e7d2bc6334156c | RegionOne | monasca | monitoring | True | admin | http://localhost:8082/v2.0 |
| d27bb34d619443658ca745b9fee1c967 | RegionOne | keystone | identity | True | admin | http://controller:35357/v2.0 |
| f736ebca8ac24b78bdf1dff60ac86ab1 | RegionOne | keystone | identity | True | public | http://controller:5000/v2.0 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
this is the keystone section in my api.conf file:
[keystone_authtoken]
identity_uri = http://controller:35357
auth_uri = http://controller:5000/v3
admin_password = PASSWORD
admin_user = monasca-user
admin_tenant_name = monasca
cafile =
certfile =
keyfile =
insecure = false
This is the file used to get the token for the identity service:
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
and for the monitoring service
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=monasca
export OS_TENANT_NAME=monasca
export OS_USERNAME=monasca-user
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL=http://controller:35357/
export OS_IDENTITY_API_VERSION=3
I can't find what is wrong with the configuration.
Are you able to run the following command?
openstack token issue
Why do you have localhost in the endpoint?
You can use the --debug option with monasca commands to get more details.
I think the problem is that you registered the Keystone endpoint with a version specified, while the version in your environment variables is a different one. It is recommended not to include a version number in the endpoint, especially the Keystone one. Please check the install manuals and follow their instructions.

Mule Server 3.6 > Anypoint Studio > CloudHub > 405 Not Allowed

I am encountering the error when trying out my app. It works fine on my local instance as well as on a stand-alone Mule Server, but not on CloudHub. Any reason why this problem occurs?
Update
The app doesn't have any issues running from Studio on my local machine. The problem only occurred after I deployed it to CloudHub. I am not able to access the app from CloudHub and encounter a 405 Not Allowed (nginx) error page.
CloudHub logs shows the app has started successfully as far as I understand it. See the CloudHub deployment logs below:
19:29:02.272 | 07/10/2015 | SYSTEM | Deploying application to 1 workers.
19:29:05.450 | 07/10/2015 | SYSTEM | Provisioning CloudHub worker...
19:29:07.277 | 07/10/2015 | SYSTEM | Deploying application to 1 workers.
19:29:08.325 | 07/10/2015 | SYSTEM | Provisioning CloudHub worker...
19:29:25.696 | 07/10/2015 | SYSTEM | Starting CloudHub worker at 52.74.74.41 ...
19:29:32.862 | 07/10/2015 | SYSTEM | Starting CloudHub worker at 54.169.12.90 ...
19:30:49.027 | 07/10/2015 | SYSTEM | Worker(54.169.12.90): Starting your application...
19:30:50.715 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Initializing app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:30:50.958 | 07/10/2015 | INFO | Monitoring enabled: true
19:30:50.958 | 07/10/2015 | INFO | Registering ping flow injector...
19:30:50.975 | 07/10/2015 | INFO | Creating the PersistentQueueManager. Asynchronous VM queues will be persistent.
19:30:50.984 | 07/10/2015 | INFO | About to create the PersistentQueueManager with environment: scope: 55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626 bucket: ch-persistent-queues-us-east-prod accessKey: AKIAIYBZZZI7M2PAT4WQ region: ap-southeast-1 privateKey: no
19:30:53.393 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.sqs.AmazonSQSClient#5c6c69b6
19:30:53.393 | 07/10/2015 | INFO | New sqsClient is created com.amazonaws.services.sqs.AmazonSQSClient#5c6c69b6
19:30:53.412 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.s3.AmazonS3Client#4a1e0237
19:30:53.469 | 07/10/2015 | INFO | New s3Client is created com.mulesoft.ch.queue.sqs.clients.S3Client#20d271f6
19:30:53.470 | 07/10/2015 | INFO | Done with SQSQueueManagementService() instance
19:30:53.505 | 07/10/2015 | INFO | Initialising RegistryBroker
19:30:53.760 | 07/10/2015 | INFO | Refreshing org.mule.config.spring.MuleArtifactContext#44b5763: startup date [Fri Jul 10 11:30:53 UTC 2015]; root of context hierarchy
19:30:57.438 | 07/10/2015 | WARN | Schema warning: Use of element <response-builder> is deprecated. HTTP transport is deprecated and will be removed in Mule 4.0. Use HTTP module instead..
19:30:57.630 | 07/10/2015 | WARN | Schema warning: Use of element <header> is deprecated. HTTP transport is deprecated and will be removed in Mule 4.0. Use HTTP module instead..
19:30:58.537 | 07/10/2015 | INFO | Using files for tx logs /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-tx-log/tx1.log and /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-tx-log/tx2.log
19:30:58.538 | 07/10/2015 | INFO | Using files for tx logs /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-xa-tx-log/tx1.log and /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-xa-tx-log/tx2.log
19:30:58.632 | 07/10/2015 | INFO | Initialising model: _muleSystemModel
19:30:59.749 | 07/10/2015 | INFO | Initialising flow: myappFlow
19:30:59.749 | 07/10/2015 | INFO | Initialising exception listener: org.mule.exception.DefaultMessagingExceptionStrategy#4fe27195
19:30:59.856 | 07/10/2015 | INFO | Initialising service: myappFlow.stage1
19:31:00.389 | 07/10/2015 | INFO | Registering the queue=55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626_2D1531125C052F65252D3BF0500990 with display name=seda.queue(myappFlow.stage1)
19:31:00.402 | 07/10/2015 | INFO | Configured Mule using "org.mule.config.spring.SpringXmlConfigurationBuilder" with configuration resource(s): "[ConfigResource{resourceName='/opt/mule/mule-3.6.2-R43/apps/myapp/myapp.xml'}]"
19:31:00.403 | 07/10/2015 | INFO | Configured Mule using "org.mule.config.builders.AutoConfigurationBuilder" with configuration resource(s): "[ConfigResource{resourceName='/opt/mule/mule-3.6.2-R43/apps/myapp/myapp.xml'}]"
19:31:00.430 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.sqs.AmazonSQSClient#1c5e47e5
19:31:00.430 | 07/10/2015 | INFO | New sqsClient is created com.amazonaws.services.sqs.AmazonSQSClient#1c5e47e5
19:31:00.448 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.s3.AmazonS3Client#7b8419b2
19:31:00.468 | 07/10/2015 | INFO | New s3Client is created com.mulesoft.ch.queue.sqs.clients.S3Client#2c611f62
19:31:00.473 | 07/10/2015 | INFO | Done with SQSQueueManagementService() instance
19:31:00.504 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Starting app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:31:00.561 | 07/10/2015 | INFO | Starting ResourceManager
19:31:00.561 | 07/10/2015 | INFO | Starting persistent queue manager
19:31:00.562 | 07/10/2015 | INFO | Started ResourceManager
19:31:00.574 | 07/10/2015 | INFO | Successfully opened CloudHub ObjectStore
19:31:00.590 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ DevKit Extensions (0) used in this application +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:31:00.591 | 07/10/2015 | INFO | Starting model: _muleSystemModel
19:31:00.594 | 07/10/2015 | INFO | Starting flow: myappFlow
19:31:00.595 | 07/10/2015 | INFO | Starting service: myappFlow.stage1
19:31:00.881 | 07/10/2015 | WARN | HTTP transport is deprecated and will be removed in Mule 4.0. Use HTTP module instead.
19:31:00.947 | 07/10/2015 | INFO | Initialising connector: connector.http.mule.default
19:31:01.156 | 07/10/2015 | INFO | No suitable connector in application, new one is created and used.
19:31:01.158 | 07/10/2015 | INFO | Connected: HttpConnector
{
name=connector.http.mule.default
lifecycle=initialise
this=5f68f6a9
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[http]
serviceOverrides=<none>
}
19:31:01.158 | 07/10/2015 | INFO | Starting: HttpConnector
{
name=connector.http.mule.default
lifecycle=initialise
this=5f68f6a9
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[http]
serviceOverrides=<none>
}
19:31:01.159 | 07/10/2015 | INFO | Starting connector: connector.http.mule.default
19:31:01.169 | 07/10/2015 | INFO | Initialising flow: ____ping____flow_____76326334
19:31:01.169 | 07/10/2015 | INFO | Initialising exception listener: org.mule.exception.DefaultMessagingExceptionStrategy#66551924
19:31:01.186 | 07/10/2015 | INFO | Initialising service: ____ping____flow_____76326334.stage1
19:31:01.380 | 07/10/2015 | INFO | Starting flow: ____ping____flow_____76326334
19:31:01.381 | 07/10/2015 | INFO | Starting service: ____ping____flow_____76326334.stage1
19:31:01.387 | 07/10/2015 | INFO | Registering listener: ____ping____flow_____76326334 on endpointUri: http://localhost:7557/____ping____
19:31:01.459 | 07/10/2015 | INFO | Loading default response transformer: org.mule.transport.http.transformers.MuleMessageToHttpResponse
19:31:01.542 | 07/10/2015 | INFO | Initialising: 'null'. Object is: HttpMessageReceiver
19:31:01.559 | 07/10/2015 | INFO | Connecting clusterizable message receiver
19:31:01.563 | 07/10/2015 | WARN | Localhost is being bound to all local interfaces as specified by the "mule.tcp.bindlocalhosttoalllocalinterfaces" system property. This property may be removed in a future version of Mule.
19:31:01.575 | 07/10/2015 | INFO | Starting clusterizable message receiver
19:31:01.575 | 07/10/2015 | INFO | Starting: 'null'. Object is: HttpMessageReceiver
19:31:01.644 | 07/10/2015 | INFO | Listening for requests on http://localhost:8085
19:31:01.645 | 07/10/2015 | INFO | Mule is embedded in a container already launched by a wrapper.Duplicates will not be registered. Use the org.tanukisoftware.wrapper:type=WrapperManager MBean instead for control.
19:31:01.675 | 07/10/2015 | INFO | Attempting to register service with name: Mule.myapp:type=Endpoint,service="____ping____flow_____76326334",connector=connector.http.mule.default,name="endpoint.http.localhost.7557.ping"
19:31:01.676 | 07/10/2015 | INFO | Registered Endpoint Service with name: Mule.myapp:type=Endpoint,service="____ping____flow_____76326334",connector=connector.http.mule.default,name="endpoint.http.localhost.7557.ping"
19:31:01.686 | 07/10/2015 | INFO | Registered Connector Service with name Mule.myapp:type=Connector,name="connector.http.mule.default.1"
19:31:01.689 | 07/10/2015 | INFO |
**********************************************************************
* Application: myapp *
* OS encoding: /, Mule encoding: UTF-8 *
* *
* Agents Running: *
* DevKit Extension Information *
* Batch module default engine *
* Clustering Agent *
* JMX Agent *
**********************************************************************
19:31:01.754 | 07/10/2015 | INFO | Registering the queue=55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626_515E160FC244EB5157E8E7C8B162E0 with display name=seda.queue(____ping____flow_____76326334.stage1)
19:31:02.561 | 07/10/2015 | SYSTEM | Worker(54.169.12.90): Your application has started successfully.
19:31:09.295 | 07/10/2015 | INFO | Mule system health monitoring started for your application.
19:31:59.537 | 07/10/2015 | SYSTEM | Worker(52.74.74.41): Starting your application...
19:32:01.200 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Initializing app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:32:01.525 | 07/10/2015 | INFO | Creating the PersistentQueueManager. Asynchronous VM queues will be persistent.
19:32:01.526 | 07/10/2015 | INFO | About to create the PersistentQueueManager with environment: scope: 55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626 bucket: ch-persistent-queues-us-east-prod accessKey: AKIAIYBZZZI7M2PAT4WQ region: ap-southeast-1 privateKey: no
19:32:07.054 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Stopping app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:32:07.056 | 07/10/2015 | INFO | Stopping flow: ____ping____flow_____807729792
19:32:07.056 | 07/10/2015 | INFO | Removing listener on endpointUri: http://localhost:7557/____ping____
19:32:07.057 | 07/10/2015 | INFO | Stopping: 'null'. Object is: HttpMessageReceiver
19:32:07.059 | 07/10/2015 | INFO | Disposing: 'null'. Object is: HttpMessageReceiver
19:32:07.059 | 07/10/2015 | INFO | Stopping service: ____ping____flow_____807729792.stage1
19:32:12.028 | 07/10/2015 | SYSTEM | Worker(52.74.74.41): Your application has started successfully.
19:32:15.041 | 07/10/2015 | INFO | Mule system health monitoring started for your application.
19:32:33.488 | 07/10/2015 | INFO | Stopping flow: myappFlow
19:32:33.489 | 07/10/2015 | INFO | Stopping service: myappFlow.stage1
19:32:37.061 | 07/10/2015 | SYSTEM | Application was updated successfully with zero downtime. The new version of your application has been launched and the old version has been stopped.
19:32:37.545 | 07/10/2015 | SYSTEM | Your application is started.
I found the solution to my problem. From the deployment logs:
19:31:01.644 | 07/10/2015 | INFO | Listening for requests on http://localhost:8085
The host should have been the default (all interfaces, 0.0.0.0), and port 8085 is not allowed on CloudHub; I must use the default port 8081.
I was running multiple Mule apps on my local machine at the same time, which is why I had used a different port for this app, and I wasn't aware that port 8085 is not allowed on CloudHub.
Thank you all for the response.
For the port, please try using ${http.port} or ${https.port}, as per your requirement, when deploying to CloudHub.
It worked for me.

Five second wait in activemq cms MessageProducer.send with zookeeper

I'm testing ActiveMQ 5.9.0 with Replicated LevelDB.
Running against a standalone ActiveMQ with a local LevelDB store, each producer.send(message) call takes about 1 ms. With my replicated setup with 3 zookeepers and 3 activemq brokers, producer.send(message) takes slightly more than 5 seconds to return! This happens even with sync="local_mem" in <replicatedLevelDB ... >. It's always just above 5 seconds, so there seems to be some strange wait/timeout involved.
Does this ring a bell?
It doesn't matter if I set the broker URL to failover:(<all three brokers>) or just tcp://brokerX, where brokerX is in the replicated LevelDB setup. There is no noticeable delay sending messages in the brokerX web UI (hawtio). If I change to tcp://brokerY, where brokerY is an otherwise identical broker with <persistenceAdapter ...> set to <levelDB...> instead of <replicatedLevelDB...>, we're down at 1 ms per send.
Changing zookeeper tickTime etc makes no difference.
Debug log below. As you see, 5 seconds between "sent to queue", but zookeeper ping is quick.
2014-02-19 10:45:34,719 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-227
2014-02-19 10:45:34,724 | DEBUG | localhost Message ID:<hostname>-57776-1392803129562-0:0:1:1:2 sent to queue://IO_stab_test_Q | org.apache.activemq.broker.region.Queue | ActiveMQ Transport: tcp:///<ip address>:54727#61616
2014-02-19 10:45:34,725 | DEBUG | IO_stab_test_Q toPageIn: 1, Inflight: 1, pagedInMessages.size 1, enqueueCount: 27, dequeueCount: 25 | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[localhost] Task-20
2014-02-19 10:45:34,731 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222
2014-02-19 10:45:34,735 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:34,867 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222
2014-02-19 10:45:35,403 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:35,634 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-227
2014-02-19 10:45:36,071 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:36,740 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:37,410 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:38,088 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 8ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:38,623 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222
2014-02-19 10:45:38,750 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:39,420 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:39,735 | DEBUG | localhost Message ID:<hostname>-57776-1392803129562-0:0:1:1:3 sent to queue://IO_stab_test_Q | org.apache.activemq.broker.region.Queue | ActiveMQ Transport: tcp:///<ip address>:54727#61616
2014-02-19 10:45:39,737 | DEBUG | IO_stab_test_Q toPageIn: 1, Inflight: 2, pagedInMessages.size 2, enqueueCount: 28, dequeueCount: 25 | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[localhost] Task-24
2014-02-19 10:45:40,090 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
Set queuePrefetch=0.
Some background on our situation...
Our message sizes are fairly small (<1kb xml) but our consumers vary from fast (<1 sec) to slow (10+ hours). Previously we had set prefetch=1, but even this would cause issues for us when a slow message was being worked and another message got prefetched behind it.
We had noticed that our fast messages would often finish processing before the producer even got the ack! We found that the producer.send() method was taking almost exactly 5 seconds longer than we expected. That is what led me to find this question.
Anyway, the solution for us was to set prefetch=0. This eliminated the 5 second delay completely for us, and resolved the other issue as well.
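For reference, the prefetch limit can be set straight from the client connection URI via ActiveMQ's jms.prefetchPolicy.queuePrefetch option, so no broker-side change is needed. A small sketch of building such a URI; the helper class below is hypothetical, but the URI option itself is a documented ActiveMQ client option:

```java
// Hypothetical helper that appends ActiveMQ's queuePrefetch option to a
// broker URI, preserving any query string that is already there.
class PrefetchUri {
    static String withQueuePrefetch(String brokerUri, int prefetch) {
        // jms.prefetchPolicy.queuePrefetch is an ActiveMQ client connection option
        String sep = brokerUri.contains("?") ? "&" : "?";
        return brokerUri + sep + "jms.prefetchPolicy.queuePrefetch=" + prefetch;
    }
}
```

For example, `withQueuePrefetch("tcp://brokerX:61616", 0)` yields a URI whose consumers fetch messages one at a time instead of buffering a prefetch window.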

Apache tomcat printing random nullpointers

Our Apache Tomcat server is printing random NullPointerExceptions. Normally you would see a stack trace:
INFO | jvm 1 | srvmain | 2012/03/30 13:44:43.733 | SEVERE: Servlet.service() for servlet frontend threw exception
INFO | jvm 1 | srvmain | 2012/03/30 13:44:43.733 | java.lang.NullPointerException
INFO | jvm 1 | srvmain | 2012/03/30 13:44:46.139 | Mar 30, 2012 1:44:46 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO | jvm 1 | srvmain | 2012/03/30 13:44:46.139 | SEVERE: Servlet.service() for servlet frontend threw exception
INFO | jvm 1 | srvmain | 2012/03/30 13:44:46.139 | java.lang.NullPointerException
INFO | jvm 1 | srvmain | 2012/03/30 13:44:47.998 | Mar 30, 2012 1:44:47 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO | jvm 1 | srvmain | 2012/03/30 13:44:47.998 | SEVERE: Servlet.service() for servlet frontend threw exception
INFO | jvm 1 | srvmain | 2012/03/30 13:44:47.998 | java.lang.NullPointerException
INFO | jvm 1 | srvmain | 2012/03/30 13:44:50.623 | Mar 30, 2012 1:44:50 PM org.apache.catalina.core.StandardWrapperValve invoke
You are going to have to restart the server if you want complete stack traces. You need to set the following JVM option:
-XX:-OmitStackTraceInFastThrow
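To illustrate why the traces vanish: under the default -XX:+OmitStackTraceInFastThrow, HotSpot may start reusing a preallocated, stackless exception instance once an implicit exception (like these NullPointerExceptions) is thrown frequently enough in compiled code. A small sketch that counts such stackless throws; whether any appear depends on JIT warm-up and flags, so the count is reported rather than assumed:

```java
public class FastThrowDemo {
    static int npe(String s) {
        return s.length(); // throws an implicit NullPointerException when s is null
    }

    /** Returns {stack trace depth of the first NPE, count of stackless NPEs}. */
    static int[] run(int iterations) {
        int empty = 0;
        int firstDepth = -1;
        for (int i = 0; i < iterations; i++) {
            try {
                npe(null);
            } catch (NullPointerException e) {
                if (firstDepth < 0) firstDepth = e.getStackTrace().length;
                if (e.getStackTrace().length == 0) empty++;
            }
        }
        return new int[] { firstDepth, empty };
    }

    public static void main(String[] args) {
        // The first throw always carries a full trace; later ones may go
        // stackless once the JIT compiles npe() with fast-throw enabled.
        int[] r = run(200_000);
        System.out.println("first trace depth=" + r[0] + ", stackless NPEs=" + r[1]);
    }
}
```

With -XX:-OmitStackTraceInFastThrow the stackless count stays at zero, which is why the option restores full traces in the Tomcat log above.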