Five-second wait in ActiveMQ CMS MessageProducer.send with ZooKeeper

I'm testing ActiveMQ 5.9.0 with Replicated LevelDB.
Running against a standalone ActiveMQ with a local LevelDB store, each producer.send(message) call takes about 1 ms. With my replicated setup of 3 ZooKeeper nodes and 3 ActiveMQ brokers, producer.send(message) takes slightly more than 5 seconds to return! This happens even with sync="local_mem" in <replicatedLevelDB ... >. It's always just above 5 seconds, so there seems to be some strange wait/timeout involved.
Does this ring a bell?
It doesn't matter if I set the brokerURL to failover:(<all three brokers>) or just tcp://brokerX, where brokerX is in the replicated LevelDB setup. There is no noticeable delay when sending messages in the brokerX web UI (hawtio). If I change to tcp://brokerY, where brokerY is an otherwise identical broker with <persistenceAdapter ...> set to <levelDB...> instead of <replicatedLevelDB...>, we're down at 1 ms per send.
Changing the ZooKeeper tickTime etc. makes no difference.
Debug log below. As you can see, there are 5 seconds between the "sent to queue" entries, but the ZooKeeper pings are quick.
2014-02-19 10:45:34,719 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-227
2014-02-19 10:45:34,724 | DEBUG | localhost Message ID:<hostname>-57776-1392803129562-0:0:1:1:2 sent to queue://IO_stab_test_Q | org.apache.activemq.broker.region.Queue | ActiveMQ Transport: tcp:///<ip address>:54727#61616
2014-02-19 10:45:34,725 | DEBUG | IO_stab_test_Q toPageIn: 1, Inflight: 1, pagedInMessages.size 1, enqueueCount: 27, dequeueCount: 25 | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[localhost] Task-20
2014-02-19 10:45:34,731 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222
2014-02-19 10:45:34,735 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:34,867 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222
2014-02-19 10:45:35,403 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:35,634 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-227
2014-02-19 10:45:36,071 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:36,740 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:37,410 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:38,088 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 8ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:38,623 | DEBUG | Handling request for path /jolokia | io.hawt.web.AuthenticationFilter | qtp1217711018-222
2014-02-19 10:45:38,750 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:39,420 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)
2014-02-19 10:45:39,735 | DEBUG | localhost Message ID:<hostname>-57776-1392803129562-0:0:1:1:3 sent to queue://IO_stab_test_Q | org.apache.activemq.broker.region.Queue | ActiveMQ Transport: tcp:///<ip address>:54727#61616
2014-02-19 10:45:39,737 | DEBUG | IO_stab_test_Q toPageIn: 1, Inflight: 2, pagedInMessages.size 2, enqueueCount: 28, dequeueCount: 25 | org.apache.activemq.broker.region.Queue | ActiveMQ BrokerService[localhost] Task-24
2014-02-19 10:45:40,090 | DEBUG | Got ping response for sessionid: 0x244457fceb80003 after 0ms | org.apache.zookeeper.ClientCnxn | main-SendThread(<hostname>:2181)

Set queuePrefetch=0.
Some background on our situation...
Our message sizes are fairly small (<1 kB XML) but our consumers vary from fast (<1 sec) to slow (10+ hours). Previously we had set prefetch=1, but even this would cause issues for us when a slow message is being worked on and another message gets prefetched behind it.
We had noticed that our fast messages would often finish processing before the producer even got the ack! We found that the producer.send() method was taking exactly 5 seconds longer than we expected. That is what led me to find this question.
Anyway, the solution for us was to set prefetch=0. This eliminated the 5-second delay completely and resolved the other issue as well.
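For reference, a minimal sketch of setting a prefetch of 0 with the ActiveMQ Java client. The original question uses the CMS (C++) client, so treat this as an illustration only; the broker URL and queue name are placeholders taken from the question:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ZeroPrefetchExample {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL; point it at your replicated setup.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://brokerX:61616)");

        // Option 1: set the prefetch policy on the connection factory.
        factory.getPrefetchPolicy().setQueuePrefetch(0);

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Option 2: the same thing expressed as a per-destination option.
        MessageConsumer consumer = session.createConsumer(
                session.createQueue("IO_stab_test_Q?consumer.prefetchSize=0"));

        // ... receive and process messages, then clean up.
        consumer.close();
        session.close();
        connection.close();
    }
}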


How do you print a "recovery" log message using the RetryBackoffSpec in reactor?

I have the following code which handles some websocket messages:
Flux.create(sink -> {
        handler //.log("handler")
                .doOnNext(s -> log.info("propagating: {}", s))
                .doOnNext(sink::next)
                .doOnError(sink::error)
                .onErrorComplete() // this allows you to silence the error and let the upstream subscriber handle it.
                .publishOn(Schedulers.boundedElastic())
                .subscribeOn(Schedulers.boundedElastic(), false)
                .subscribe();
    })
    .take(10)
    .onErrorResume(
        e -> Exceptions.unwrap(e)
                .getClass()
                .isAssignableFrom(Errors.NativeIoException.class),
        t -> {
            log.error("Websocket connection error: {}", t.getMessage());
            log.debug("{}", t.fillInStackTrace().toString());
            // the error is handled and hidden by retryWhen.
            return Mono.error(t);
        })
    .retryWhen(RetryBackoffSpec.backoff(10000, Duration.ofSeconds(3))
        .maxBackoff(Duration.ofSeconds(3))
        .transientErrors(true)
        .doBeforeRetry((s) -> log.error("Retrying connection to {}", "abc"))
        .doAfterRetry(s -> log.error("Attempt {}/{} to restore connection to {} failed with {}",
                s.totalRetriesInARow(), 10000, "abc", s.failure().getMessage()))
    );
Every now and then the connection drops, which is why there is a retryWhen operator in the pipeline. As you can see above, I am printing messages to the console that report the connection drop and how many times it has retried.
However, I am not able to figure out how to print a "recovery" message (i.e. "Connection to X restored"). Am I missing something in the docs, or am I expected to write a custom RetryBackoffSpec that does it?
Example log output:
03:11:34.205 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991494202,"hello":"world!"}
03:11:34.703 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991494702,"hello":"world!"}
03:11:35.205 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991495203,"hello":"world!"}
03:11:35.704 10-02-2023 | INFO | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | propagating: {"timestamp":1675991495703,"hello":"world!"}
03:11:46.746 10-02-2023 | ERROR | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | Websocket connection error: recvAddress(..) failed: Connection timed out
03:11:46.749 10-02-2023 | ERROR | boundedElastic-6 | org.example.chaos.simulation.NetworkDisruptionTest | Retrying connection to abc
03:11:49.752 10-02-2023 | ERROR | parallel-3 | org.example.chaos.simulation.NetworkDisruptionTest | Attempt 0/10000 to restore connection to abc failed with recvAddress(..) failed: Connection timed out
03:11:52.763 10-02-2023 | ERROR | boundedElastic-5 | org.example.chaos.simulation.NetworkDisruptionTest | Retrying connection to abc
03:11:55.764 10-02-2023 | ERROR | parallel-4 | org.example.chaos.simulation.NetworkDisruptionTest | Attempt 1/10000 to restore connection to abc failed with connection timed out: /172.25.0.2:8090
03:11:58.772 10-02-2023 | ERROR | boundedElastic-3 | org.example.chaos.simulation.NetworkDisruptionTest | Retrying connection to abc
03:12:01.773 10-02-2023 | ERROR | parallel-5 | org.example.chaos.simulation.NetworkDisruptionTest | Attempt 2/10000 to restore connection to abc failed with connection timed out: /172.25.0.2:8090
--- A message such as "Connection to abc has been restored." is expected to appear here.
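One possible approach (a sketch, not from the original thread): set a flag in doBeforeRetry and log the recovery message on the first element that arrives after a retry cycle. The recovering flag is hypothetical, "abc" follows the question, and Flux.interval stands in for the websocket source:

import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

public class RecoveryLogSketch {
    public static void main(String[] args) {
        // Hypothetical flag: true while the pipeline is in a retry cycle.
        AtomicBoolean recovering = new AtomicBoolean(false);

        // Stand-in for the websocket flux from the question.
        Flux<Long> handler = Flux.interval(Duration.ofMillis(500));

        handler
            .doOnNext(msg -> {
                // First element after a retry cycle means the connection is back.
                if (recovering.compareAndSet(true, false)) {
                    System.out.println("Connection to abc has been restored.");
                }
            })
            .retryWhen(Retry.backoff(10000, Duration.ofSeconds(3))
                .maxBackoff(Duration.ofSeconds(3))
                .transientErrors(true)
                .doBeforeRetry(signal -> {
                    recovering.set(true);
                    System.out.println("Retrying connection to abc");
                }))
            .take(10)
            .blockLast();
    }
}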

Mule Server 3.6 > Anypoint Studio > CloudHub > 405 Not Allowed

I am encountering this error when trying out my app. It works fine on my local instance as well as on a standalone Mule server, but not on CloudHub. Any idea why this problem occurs?
Update
The app doesn't have any issues running from Studio on my local machine. The problem only occurred after I deployed it to CloudHub. I am not able to access the app from CloudHub and encounter a 405 Not Allowed (nginx) error page.
The CloudHub logs show that the app started successfully, as far as I understand. See the CloudHub deployment logs below:
19:29:02.272 | 07/10/2015 | SYSTEM | Deploying application to 1 workers.
19:29:05.450 | 07/10/2015 | SYSTEM | Provisioning CloudHub worker...
19:29:07.277 | 07/10/2015 | SYSTEM | Deploying application to 1 workers.
19:29:08.325 | 07/10/2015 | SYSTEM | Provisioning CloudHub worker...
19:29:25.696 | 07/10/2015 | SYSTEM | Starting CloudHub worker at 52.74.74.41 ...
19:29:32.862 | 07/10/2015 | SYSTEM | Starting CloudHub worker at 54.169.12.90 ...
19:30:49.027 | 07/10/2015 | SYSTEM | Worker(54.169.12.90): Starting your application...
19:30:50.715 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Initializing app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:30:50.958 | 07/10/2015 | INFO | Monitoring enabled: true
19:30:50.958 | 07/10/2015 | INFO | Registering ping flow injector...
19:30:50.975 | 07/10/2015 | INFO | Creating the PersistentQueueManager. Asynchronous VM queues will be persistent.
19:30:50.984 | 07/10/2015 | INFO | About to create the PersistentQueueManager with environment: scope: 55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626 bucket: ch-persistent-queues-us-east-prod accessKey: AKIAIYBZZZI7M2PAT4WQ region: ap-southeast-1 privateKey: no
19:30:53.393 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.sqs.AmazonSQSClient#5c6c69b6
19:30:53.393 | 07/10/2015 | INFO | New sqsClient is created com.amazonaws.services.sqs.AmazonSQSClient#5c6c69b6
19:30:53.412 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.s3.AmazonS3Client#4a1e0237
19:30:53.469 | 07/10/2015 | INFO | New s3Client is created com.mulesoft.ch.queue.sqs.clients.S3Client#20d271f6
19:30:53.470 | 07/10/2015 | INFO | Done with SQSQueueManagementService() instance
19:30:53.505 | 07/10/2015 | INFO | Initialising RegistryBroker
19:30:53.760 | 07/10/2015 | INFO | Refreshing org.mule.config.spring.MuleArtifactContext#44b5763: startup date [Fri Jul 10 11:30:53 UTC 2015]; root of context hierarchy
19:30:57.438 | 07/10/2015 | WARN | Schema warning: Use of element <response-builder> is deprecated. HTTP transport is deprecated and will be removed in Mule 4.0. Use HTTP module instead..
19:30:57.630 | 07/10/2015 | WARN | Schema warning: Use of element <header> is deprecated. HTTP transport is deprecated and will be removed in Mule 4.0. Use HTTP module instead..
19:30:58.537 | 07/10/2015 | INFO | Using files for tx logs /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-tx-log/tx1.log and /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-tx-log/tx2.log
19:30:58.538 | 07/10/2015 | INFO | Using files for tx logs /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-xa-tx-log/tx1.log and /opt/mule/mule-3.6.2-R43/./.mule/myapp/queue-xa-tx-log/tx2.log
19:30:58.632 | 07/10/2015 | INFO | Initialising model: _muleSystemModel
19:30:59.749 | 07/10/2015 | INFO | Initialising flow: myappFlow
19:30:59.749 | 07/10/2015 | INFO | Initialising exception listener: org.mule.exception.DefaultMessagingExceptionStrategy#4fe27195
19:30:59.856 | 07/10/2015 | INFO | Initialising service: myappFlow.stage1
19:31:00.389 | 07/10/2015 | INFO | Registering the queue=55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626_2D1531125C052F65252D3BF0500990 with display name=seda.queue(myappFlow.stage1)
19:31:00.402 | 07/10/2015 | INFO | Configured Mule using "org.mule.config.spring.SpringXmlConfigurationBuilder" with configuration resource(s): "[ConfigResource{resourceName='/opt/mule/mule-3.6.2-R43/apps/myapp/myapp.xml'}]"
19:31:00.403 | 07/10/2015 | INFO | Configured Mule using "org.mule.config.builders.AutoConfigurationBuilder" with configuration resource(s): "[ConfigResource{resourceName='/opt/mule/mule-3.6.2-R43/apps/myapp/myapp.xml'}]"
19:31:00.430 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.sqs.AmazonSQSClient#1c5e47e5
19:31:00.430 | 07/10/2015 | INFO | New sqsClient is created com.amazonaws.services.sqs.AmazonSQSClient#1c5e47e5
19:31:00.448 | 07/10/2015 | INFO | Apply ap-southeast-1 to the client com.amazonaws.services.s3.AmazonS3Client#7b8419b2
19:31:00.468 | 07/10/2015 | INFO | New s3Client is created com.mulesoft.ch.queue.sqs.clients.S3Client#2c611f62
19:31:00.473 | 07/10/2015 | INFO | Done with SQSQueueManagementService() instance
19:31:00.504 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Starting app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:31:00.561 | 07/10/2015 | INFO | Starting ResourceManager
19:31:00.561 | 07/10/2015 | INFO | Starting persistent queue manager
19:31:00.562 | 07/10/2015 | INFO | Started ResourceManager
19:31:00.574 | 07/10/2015 | INFO | Successfully opened CloudHub ObjectStore
19:31:00.590 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ DevKit Extensions (0) used in this application +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:31:00.591 | 07/10/2015 | INFO | Starting model: _muleSystemModel
19:31:00.594 | 07/10/2015 | INFO | Starting flow: myappFlow
19:31:00.595 | 07/10/2015 | INFO | Starting service: myappFlow.stage1
19:31:00.881 | 07/10/2015 | WARN | HTTP transport is deprecated and will be removed in Mule 4.0. Use HTTP module instead.
19:31:00.947 | 07/10/2015 | INFO | Initialising connector: connector.http.mule.default
19:31:01.156 | 07/10/2015 | INFO | No suitable connector in application, new one is created and used.
19:31:01.158 | 07/10/2015 | INFO | Connected: HttpConnector
{
name=connector.http.mule.default
lifecycle=initialise
this=5f68f6a9
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[http]
serviceOverrides=<none>
}
19:31:01.158 | 07/10/2015 | INFO | Starting: HttpConnector
{
name=connector.http.mule.default
lifecycle=initialise
this=5f68f6a9
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[http]
serviceOverrides=<none>
}
19:31:01.159 | 07/10/2015 | INFO | Starting connector: connector.http.mule.default
19:31:01.169 | 07/10/2015 | INFO | Initialising flow: ____ping____flow_____76326334
19:31:01.169 | 07/10/2015 | INFO | Initialising exception listener: org.mule.exception.DefaultMessagingExceptionStrategy#66551924
19:31:01.186 | 07/10/2015 | INFO | Initialising service: ____ping____flow_____76326334.stage1
19:31:01.380 | 07/10/2015 | INFO | Starting flow: ____ping____flow_____76326334
19:31:01.381 | 07/10/2015 | INFO | Starting service: ____ping____flow_____76326334.stage1
19:31:01.387 | 07/10/2015 | INFO | Registering listener: ____ping____flow_____76326334 on endpointUri: http://localhost:7557/____ping____
19:31:01.459 | 07/10/2015 | INFO | Loading default response transformer: org.mule.transport.http.transformers.MuleMessageToHttpResponse
19:31:01.542 | 07/10/2015 | INFO | Initialising: 'null'. Object is: HttpMessageReceiver
19:31:01.559 | 07/10/2015 | INFO | Connecting clusterizable message receiver
19:31:01.563 | 07/10/2015 | WARN | Localhost is being bound to all local interfaces as specified by the "mule.tcp.bindlocalhosttoalllocalinterfaces" system property. This property may be removed in a future version of Mule.
19:31:01.575 | 07/10/2015 | INFO | Starting clusterizable message receiver
19:31:01.575 | 07/10/2015 | INFO | Starting: 'null'. Object is: HttpMessageReceiver
19:31:01.644 | 07/10/2015 | INFO | Listening for requests on http://localhost:8085
19:31:01.645 | 07/10/2015 | INFO | Mule is embedded in a container already launched by a wrapper.Duplicates will not be registered. Use the org.tanukisoftware.wrapper:type=WrapperManager MBean instead for control.
19:31:01.675 | 07/10/2015 | INFO | Attempting to register service with name: Mule.myapp:type=Endpoint,service="____ping____flow_____76326334",connector=connector.http.mule.default,name="endpoint.http.localhost.7557.ping"
19:31:01.676 | 07/10/2015 | INFO | Registered Endpoint Service with name: Mule.myapp:type=Endpoint,service="____ping____flow_____76326334",connector=connector.http.mule.default,name="endpoint.http.localhost.7557.ping"
19:31:01.686 | 07/10/2015 | INFO | Registered Connector Service with name Mule.myapp:type=Connector,name="connector.http.mule.default.1"
19:31:01.689 | 07/10/2015 | INFO |
**********************************************************************
* Application: myapp *
* OS encoding: /, Mule encoding: UTF-8 *
* *
* Agents Running: *
* DevKit Extension Information *
* Batch module default engine *
* Clustering Agent *
* JMX Agent *
**********************************************************************
19:31:01.754 | 07/10/2015 | INFO | Registering the queue=55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626_515E160FC244EB5157E8E7C8B162E0 with display name=seda.queue(____ping____flow_____76326334.stage1)
19:31:02.561 | 07/10/2015 | SYSTEM | Worker(54.169.12.90): Your application has started successfully.
19:31:09.295 | 07/10/2015 | INFO | Mule system health monitoring started for your application.
19:31:59.537 | 07/10/2015 | SYSTEM | Worker(52.74.74.41): Starting your application...
19:32:01.200 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Initializing app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:32:01.525 | 07/10/2015 | INFO | Creating the PersistentQueueManager. Asynchronous VM queues will be persistent.
19:32:01.526 | 07/10/2015 | INFO | About to create the PersistentQueueManager with environment: scope: 55824160e4b0f5ecab93e207_559f73dfe4b02b744ec0f626 bucket: ch-persistent-queues-us-east-prod accessKey: AKIAIYBZZZI7M2PAT4WQ region: ap-southeast-1 privateKey: no
19:32:07.054 | 07/10/2015 | INFO |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Stopping app 'myapp' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
19:32:07.056 | 07/10/2015 | INFO | Stopping flow: ____ping____flow_____807729792
19:32:07.056 | 07/10/2015 | INFO | Removing listener on endpointUri: http://localhost:7557/____ping____
19:32:07.057 | 07/10/2015 | INFO | Stopping: 'null'. Object is: HttpMessageReceiver
19:32:07.059 | 07/10/2015 | INFO | Disposing: 'null'. Object is: HttpMessageReceiver
19:32:07.059 | 07/10/2015 | INFO | Stopping service: ____ping____flow_____807729792.stage1
19:32:12.028 | 07/10/2015 | SYSTEM | Worker(52.74.74.41): Your application has started successfully.
19:32:15.041 | 07/10/2015 | INFO | Mule system health monitoring started for your application.
19:32:33.488 | 07/10/2015 | INFO | Stopping flow: myappFlow
19:32:33.489 | 07/10/2015 | INFO | Stopping service: myappFlow.stage1
19:32:37.061 | 07/10/2015 | SYSTEM | Application was updated successfully with zero downtime. The new version of your application has been launched and the old version has been stopped.
19:32:37.545 | 07/10/2015 | SYSTEM | Your application is started.
I found the solution to my problem. From the deployment logs:
19:31:01.644 | 07/10/2015 | INFO | Listening for requests on http://localhost:8085
The host should have been the default (all interfaces, 0.0.0.0), and port 8085 is not allowed on CloudHub; I must use the default port 8081.
I was running multiple Mule apps on my local machine at the same time, which is why I had used a different port for this app, and I wasn't aware that port 8085 is not allowed on CloudHub.
Thank you all for the responses.
For the port, please try using ${http.port} or ${https.port}, as per your requirement, when deploying to CloudHub.
It worked for me.

compute engine load balancer UDP/DNS responses dropped

I have been testing out GCE and its load-balancing capabilities; however, I have been seeing some unexpected results.
The trial configuration involves two instances acting as DNS resolvers in a target pool, plus a third test instance. There is also an HTTP server running on the hosts. No health checks have been added.
DNS request to individual instance public IP (from ANY) - OK
HTTP request to individual instance public IP (from ANY) - OK
HTTP request to load balance IP (from ANY) - OK
DNS request to load balance IP (from an instance in the target pool) - OK
DNS request to load balance IP (from an instance in the same network - but not in the target pool) - NOK
DNS request to load balance IP (other) - NOK
I can see in the instance logs that the DNS requests arrive in all cases and are distributed evenly, though the replies don't seem to get back to the originator.
The behavior seems unexpected. I've played with session affinity with similar results, though the default behavior is the most desirable option.
I've hit a wall. Are there any ideas to try?
Information on the setup:
$ gcutil listhttphealthchecks
+------+------+------+
| name | host | port |
+------+------+------+
$ gcutil listtargetpools
+----------+-------------+
| name | region |
+----------+-------------+
| dns-pool | us-central1 |
+----------+-------------+
$ gcutil listforwardingrules
+---------+-------------+-------------+
| name | region | ip |
+---------+-------------+-------------+
| dns-tcp | us-central1 | 8.34.215.45 |
+---------+-------------+-------------+
| dns-udp | us-central1 | 8.34.215.45 |
+---------+-------------+-------------+
| http | us-central1 | 8.34.215.45 |
+---------+-------------+-------------+
$ gcutil getforwardingrule dns-udp
+---------------+----------------------------------+
| name | dns-udp |
| description | |
| creation-time | 2013-12-28T12:28:05.816-08:00 |
| region | us-central1 |
| ip | 8.34.215.45 |
| protocol | UDP |
| port-range | 53-53 |
| target | us-central1/targetPools/dns-pool |
+---------------+----------------------------------+
$ gcutil gettargetpool dns-pool
+------------------+-------------------------------+
| name | dns-pool |
| description | |
| creation-time | 2013-12-28T11:48:08.896-08:00 |
| health-checks | |
| session-affinity | NONE |
| failover-ratio | |
| backup-pool | |
| instances | us-central1-a/instances/dns-1 |
| | us-central1-b/instances/dns-2 |
+------------------+-------------------------------+
[#dns-1 ~]$ curl "http://metadata/computeMetadata/v1/instance/network-interfaces/?recursive=true" -H "X-Google-Metadata-Request: True"
[{"accessConfigs":[{"externalIp":"162.222.178.116","type":"ONE_TO_ONE_NAT"}],"forwardedIps":["8.34.215.45"],"ip":"10.240.157.97","network":"projects/763472520840/networks/default"}]
[#dns-2 ~]$ curl "http://metadata/computeMetadata/v1/instance/network-interfaces/?recursive=true" -H "X-Google-Metadata-Request: True"
[{"accessConfigs":[{"externalIp":"8.34.215.162","type":"ONE_TO_ONE_NAT"}],"forwardedIps":["8.34.215.45"],"ip":"10.240.200.109","network":"projects/763472520840/networks/default"}]
$ gcutil getfirewall dns2
+---------------+------------------------------------+
| name | dns2 |
| description | Allow the incoming service traffic |
| creation-time | 2013-12-28T10:35:18.185-08:00 |
| network | default |
| source-ips | 0.0.0.0/0 |
| source-tags | |
| target-tags | |
| allowed | tcp: 53 |
| allowed | udp: 53 |
| allowed | tcp: 80 |
| allowed | tcp: 443 |
+---------------+------------------------------------+
The instances are CentOS and have their iptables firewalls disabled.
Reply from instance in target pool
#dns-1 ~]$ nslookup test 8.34.215.45 | grep answer
Non-authoritative answer:
#dns-1 ~]$
Reply from other instance in target pool
#dns-2 ~]$ nslookup test 8.34.215.45 | grep answer
Non-authoritative answer:
#dns-2 ~]$
No reply from the instance not in the target pool when querying the load-balanced IP. However, it gets a reply from all the other interfaces:
#dns-3 ~]$ nslookup test 8.34.215.45 | grep answer
#dns-3 ~]$
#dns-3 ~]$ nslookup test 8.34.215.162 | grep answer
Non-authoritative answer:
#dns-3 ~]$ nslookup test 10.240.200.109 | grep answer
Non-authoritative answer:
#dns-3 ~]$ nslookup test 10.240.157.97 | grep answer
Non-authoritative answer:
#dns-3 ~]$ nslookup test 162.222.178.116 | grep answer
Non-authoritative answer:
-- Update --
Added a health check so that the instances wouldn't be marked as UNHEALTHY; however, I got the same result.
$ gcutil gettargetpoolhealth dns-pool
+-------------------------------+-------------+--------------+
| instance | ip | health-state |
+-------------------------------+-------------+--------------+
| us-central1-a/instances/dns-1 | 8.34.215.45 | HEALTHY |
+-------------------------------+-------------+--------------+
| us-central1-b/instances/dns-2 | 8.34.215.45 | HEALTHY |
+-------------------------------+-------------+--------------+
-- Update --
Looks like the DNS service is not responding from the same IP that the request came in on. This is surely the reason it doesn't appear to be responding.
0.000000 162.222.178.130 -> 8.34.215.45 DNS 82 Standard query 0x5323 A test.internal
2.081868 10.240.157.97 -> 162.222.178.130 DNS 98 Standard query response 0x5323 A 54.122.122.227

Is it possible to view RabbitMQ message contents directly from the command line?

Is it possible to view RabbitMQ message contents directly from the command line?
sudo rabbitmqctl list_queues lists the queues.
Is there any command like sudo rabbitmqctl list_queue_messages <queue_name>?
You should enable the management plugin.
rabbitmq-plugins enable rabbitmq_management
See here:
http://www.rabbitmq.com/plugins.html
And here for the specifics of management.
http://www.rabbitmq.com/management.html
Finally, once it is set up, you will need to follow the instructions below to install and use the rabbitmqadmin tool, which can be used to fully interact with the system.
http://www.rabbitmq.com/management-cli.html
For example:
rabbitmqadmin get queue=<QueueName> requeue=false
will give you the first message off the queue.
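For comparison, the same peek-and-requeue can also be done programmatically. Here is a minimal sketch using the RabbitMQ Java client (not part of the original answer; the host and queue name are placeholders):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class PeekMessage {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // autoAck=false so the message can be put back afterwards.
        GetResponse response = channel.basicGet("myqueue", false);
        if (response == null) {
            System.out.println("Queue is empty");
        } else {
            System.out.println(new String(response.getBody(), "UTF-8"));
            // Reject with requeue=true, i.e. only look at the message.
            channel.basicReject(response.getEnvelope().getDeliveryTag(), true);
        }

        channel.close();
        connection.close();
    }
}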
Here are the commands I use to get the contents of the queue:
RabbitMQ version 3.1.5 on Fedora linux using https://www.rabbitmq.com/management-cli.html
Here are my exchanges:
eric@dev ~ $ sudo python rabbitmqadmin list exchanges
+-------+--------------------+---------+-------------+---------+----------+
| vhost | name | type | auto_delete | durable | internal |
+-------+--------------------+---------+-------------+---------+----------+
| / | | direct | False | True | False |
| / | kowalski | topic | False | True | False |
+-------+--------------------+---------+-------------+---------+----------+
Here is my queue:
eric@dev ~ $ sudo python rabbitmqadmin list queues
+-------+----------+-------------+-----------+---------+------------------------+---------------------+--------+----------+----------------+-------------------------+---------------------+--------+---------+
| vhost | name | auto_delete | consumers | durable | exclusive_consumer_tag | idle_since | memory | messages | messages_ready | messages_unacknowledged | node | policy | status |
+-------+----------+-------------+-----------+---------+------------------------+---------------------+--------+----------+----------------+-------------------------+---------------------+--------+---------+
| / | myqueue | False | 0 | True | | 2014-09-10 13:32:18 | 13760 | 0 | 0 | 0 |rabbit@ip-11-1-52-125| | running |
+-------+----------+-------------+-----------+---------+------------------------+---------------------+--------+----------+----------------+-------------------------+---------------------+--------+---------+
Cram some items into myqueue:
curl -i -u guest:guest http://localhost:15672/api/exchanges/%2f/kowalski/publish -d '{"properties":{},"routing_key":"abcxyz","payload":"foobar","payload_encoding":"string"}'
HTTP/1.1 200 OK
Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
Date: Wed, 10 Sep 2014 17:46:59 GMT
content-type: application/json
Content-Length: 15
Cache-Control: no-cache
{"routed":true}
See the messages in the queue:
eric@dev ~ $ sudo python rabbitmqadmin get queue=myqueue requeue=true count=10
+-------------+----------+---------------+---------------------------------------+---------------+------------------+------------+-------------+
| routing_key | exchange | message_count | payload | payload_bytes | payload_encoding | properties | redelivered |
+-------------+----------+---------------+---------------------------------------+---------------+------------------+------------+-------------+
| abcxyz | kowalski | 10 | foobar | 6 | string | | True |
| abcxyz | kowalski | 9 | {'testdata':'test'} | 19 | string | | True |
| abcxyz | kowalski | 8 | {'mykey':'myvalue'} | 19 | string | | True |
| abcxyz | kowalski | 7 | {'mykey':'myvalue'} | 19 | string | | True |
+-------------+----------+---------------+---------------------------------------+---------------+------------------+------------+-------------+
I wrote rabbitmq-dump-queue which allows dumping messages from a RabbitMQ queue to local files and requeuing the messages in their original order.
Example usage (to dump the first 50 messages of queue incoming_1):
rabbitmq-dump-queue -url="amqp://user:password@rabbitmq.example.com:5672/" -queue=incoming_1 -max-messages=50 -output-dir=/tmp
If you want multiple messages from a queue, say 10 messages, the command to use is:
rabbitmqadmin get queue=<QueueName> ackmode=ack_requeue_true count=10
You can also see this in the management web interface available at http://localhost:15672.
If you don't want the messages requeued, just change ackmode to ack_requeue_false.
You can use the RabbitMQ API to get the count of messages or the messages themselves:
/api/queues/vhost/name/get
Get messages from a queue. (This is not an HTTP GET as it will alter the state of the queue.) You should post a body looking like:
{"count":5,"requeue":true,"encoding":"auto","truncate":50000}
count controls the maximum number of messages to get. You may get fewer messages than this if the queue cannot immediately provide them.
requeue determines whether the messages will be removed from the queue. If requeue is true they will be requeued - but their redelivered flag will be set.
encoding must be either "auto" (in which case the payload will be returned as a string if it is valid UTF-8, and base64 encoded otherwise), or "base64" (in which case the payload will always be base64 encoded).
If truncate is present it will truncate the message payload if it is larger than the size given (in bytes).
truncate is optional; all other keys are mandatory.
Please note that the publish / get paths in the HTTP API are intended for injecting test messages, diagnostics etc - they do not implement reliable delivery and so should be treated as a sysadmin's tool rather than a general API for messaging.
http://hg.rabbitmq.com/rabbitmq-management/raw-file/rabbitmq_v3_1_3/priv/www/api/index.html
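As an illustration only (not part of the original answer), a minimal Java sketch that posts this body to the endpoint; the host, credentials, vhost (%2f is "/") and queue name are placeholders, and note that newer RabbitMQ versions replace "requeue" with an "ackmode" field, as in the rabbitmqadmin example above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class GetQueueMessages {
    public static void main(String[] args) throws Exception {
        // Placeholder host, credentials, vhost and queue name.
        URL url = new URL("http://localhost:15672/api/queues/%2f/myqueue/get");
        String body = "{\"count\":5,\"requeue\":true,\"encoding\":\"auto\",\"truncate\":50000}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8)));

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Print the JSON array of messages returned by the broker.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            in.lines().forEach(System.out::println);
        }
    }
}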
A bit late to this, but yes, RabbitMQ has a built-in tracer that allows you to see the incoming messages in a log. When enabled, you can just tail -f /var/tmp/rabbitmq-tracing/.log (on Mac) to watch the messages.
The detailed description is here: http://www.mikeobrien.net/blog/tracing-rabbitmq-messages

iOS APNS Messages not arriving until app reinstall

I have an app that is using push notifications with Apple's APNS.
Most of the time it works fine; however, occasionally (at random it seems, I haven't been able to find any verifiable pattern) the messages just don't seem to be getting to the phone.
The messages are being received by APNS but just never delivered. However, when I reinstall the app or restart the iPhone, they seem to arrive.
I'm not sure if this is a problem within my app or not, as even when the app is closed (and handling of the notification should rest completely with the operating system) no notification is received until a restart/reinstall is done.
The feedback service yields nothing, and NSLogging the received notification within the app also yields nothing (as if the notification never makes it to the app).
EDIT:
Some additional information, as nobody seems to know what's going on.
I am using the sandbox server, with the app signed with the developer provisioning profile, so there are no problems there. And the app receives the notifications initially.
The problem seems to be that when the app doesn't receive anything while it is in the background for about 90-120 seconds, it just stops receiving anything until it is reinstalled.
Even double-tapping home and stopping the app that way doesn't allow it to receive notifications in the app-closed state, which I would have thought would have eliminated problems with the app's code entirely, since at that point it's not even running.
I timed it to see how long it takes before it stops receiving notifications. There are three trials here.
==================================Trial 1=====================================
| Notification Number | Time since Last | Total Time | Pass/fail |
| 1 | 6s | 6s | Pass |
| 2 | 30s | 36s | Pass |
| 3 | 60s | 96s | Pass |
| 4 | 120s | 216s | Fail |
==============================================================================
==================================Trial 2=====================================
| Notification Number | Time since Last | Total Time | Pass/fail |
| 1 | 3s | 3s | Pass |
| 2 | 29s | 32s | Pass |
| 3 | 60s | 92s | Pass |
| 4 | 91s | 183s | Fail |
==============================================================================
==================================Trial 3=====================================
| Notification Number | Time since Last | Total Time | Pass/fail |
| 1 | 1s | 1s | Pass |
| 2 | 30s | 61s | Pass |
| 3 | 30s | 91s | Pass |
| 4 | 30s | 121s | Pass |
| 5 | 30s | 151s | Pass |
| 6 | 30s | 181s | Pass |
| 7 | 30s | 211s | Pass |
| 8 | 30s | 241s | Pass |
| 9 | 60s | 301s | Pass |
| 10 | 120s | 421s | Fail |
==============================================================================
Does anyone have any idea what could be going on here?
Another Edit:
Just tested the problem across multiple devices, and it's happening on all of them, so it's definitely not a device issue. The notifications stop coming through even when the app has never been opened. Could the programming within the app affect how the push notifications are received even when it has never been opened?
It appears this may have been an issue outside of my control, as everything is now working fine, with zero changes.
Going to blame Apple or some sort of networking problem somewhere in between.