I'm seeing this message in my ActiveMQ log:
[pid: ][ActiveMQ Transport: ssl:///127.0.0.1:56866] 21 Apr 2022 12:36:27 WARN Queue - Usage(default:store:queue://com.queue.error:store) percentUsage=99%, usage=10746143122, limit=10737418240, percentUsageMinDelta=1%;Parent:Usage(default:store) percentUsage=100%, usage=10746143122, limit=10737418240, percentUsageMinDelta=1%: Persistent store is Full, 100% of 10737418240. Stopping producer (ID:scpfnlp001p.dcswins.com-43963-1647218102788-1:38887:1:1) to prevent flooding queue://com.queue.error. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 1202s)
This doesn't make sense to me, since my queues all report memory usage of 0.
I monitor the queues with JMX, and here is what it reports:
"EnqueueCount":846
"MemoryPercentUsage":0
"ConsumerCount":2
"DequeueCount":846
"ExpiredCount":0
"Name":"com.queue.input.keyword"}
{"QueueSize":42
"EnqueueCount":1690
"MemoryPercentUsage":0
"ConsumerCount":0
"DequeueCount":1686
"ExpiredCount":0
"Name":"com.queue.inflight"}
{"QueueSize":0
"EnqueueCount":840
"MemoryPercentUsage":0
"ConsumerCount":0
"DequeueCount":840
"ExpiredCount":0
"Name":"com.queue.output"}
{"QueueSize":264
"EnqueueCount":77
"MemoryPercentUsage":0
"ConsumerCount":0
"DequeueCount":0
"ExpiredCount":0
"Name":"com.queue.error"}
{"QueueSize":0
"EnqueueCount":845
"MemoryPercentUsage":0
"ConsumerCount":1
"DequeueCount":845
"ExpiredCount":0
"Name":"com.queue.reader.keyword"}
{"QueueSize":4
"EnqueueCount":698
"MemoryPercentUsage":0
"ConsumerCount":1
"DequeueCount":848
"ExpiredCount":0
"Name":"com.queue.reader"}
{"QueueSize":0
"EnqueueCount":849
"MemoryPercentUsage":0
"ConsumerCount":8
"DequeueCount":849
"ExpiredCount":0
"Name":"com.queue.input"}
{"QueueSize":2
"EnqueueCount":843
"MemoryPercentUsage":0
"ConsumerCount":0
"DequeueCount":843
"ExpiredCount":0
"Name":"com.queue.keyword.output"}]
"CurrentStatus":"Good"
"DataDirectory":"\/var\/mqbroker\/bin\/activemq-data"
"Persistent":true
"status":"success"}
The output queue can hold large objects, so I would expect the memory usage to be > 0.
Here is my destination policy:
<policyEntry queue=">" producerFlowControl="false" optimizedDispatch="true" maxPageSize="1000">
Can someone explain why the store would fill up while memory usage stays at 0? This makes no sense to me.
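A note on what the JMX numbers above cover: the per-destination MemoryPercentUsage only reflects messages held in broker memory, while the warning is about the broker-wide persistent store. A minimal sketch for reading both from the broker MBean (the JMX URL and brokerName=localhost are assumptions; adjust for your setup):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerUsageCheck {
    public static void main(String[] args) throws Exception {
        // Assumed JMX URL and broker name ("localhost"); adjust both for your broker.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName broker =
                    new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            // MemoryPercentUsage covers messages held in RAM; StorePercentUsage covers the
            // persistent (KahaDB) store on disk, which is the limit the warning refers to.
            System.out.println("MemoryPercentUsage = "
                    + mbsc.getAttribute(broker, "MemoryPercentUsage"));
            System.out.println("StorePercentUsage  = "
                    + mbsc.getAttribute(broker, "StorePercentUsage"));
            System.out.println("TempPercentUsage   = "
                    + mbsc.getAttribute(broker, "TempPercentUsage"));
        }
    }
}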
My system at work consists of Spring web applications that use Redis as a transaction counter and conditionally block transaction requests.
The transaction is as follows:
Check whether or not the data exists. (HGET)
If it doesn't, save a new entry with count 0 and set an expiration time. (HSET, EXPIRE)
Increase the count value. (INCRBY)
If the increased count reaches a specific configured limit, set the transaction to 'blocked'. (HSET)
The limit value is my company's business policy.
Such read and write operations are requested one after another, immediately.
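For concreteness, here is a rough sketch of that flow with Spring Data Redis and StringRedisTemplate (the key/field names, limit and TTL are placeholders, and the counter is kept inside the hash, so step 3 becomes HINCRBY rather than a plain INCRBY):
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TransactionCounter {
    // Placeholder names and values: not the real business policy.
    private static final String COUNT_FIELD = "count";
    private static final String BLOCKED_FIELD = "blocked";
    private static final long LIMIT = 100;
    private static final long TTL_SECONDS = 60;

    private final StringRedisTemplate redis;

    public TransactionCounter(StringRedisTemplate redis) {
        this.redis = redis;
    }

    /** Returns false once the transaction identified by key has been blocked. */
    public boolean checkAndCount(String key) {
        // 1. Check whether or not the data exists (HGET).
        Object existing = redis.opsForHash().get(key, COUNT_FIELD);
        // 2. If it doesn't, save a new entry with count 0 and set an expiration (HSET, EXPIRE).
        if (existing == null) {
            redis.opsForHash().put(key, COUNT_FIELD, "0");
            redis.expire(key, TTL_SECONDS, TimeUnit.SECONDS);
        }
        // 3. Increase the count (HINCRBY on the same hash).
        Long count = redis.opsForHash().increment(key, COUNT_FIELD, 1);
        // 4. If the count reaches the configured limit, mark the transaction blocked (HSET).
        if (count != null && count >= LIMIT) {
            redis.opsForHash().put(key, BLOCKED_FIELD, "true");
            return false;
        }
        return true;
    }
}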
Currently, I use one Redis instance on one machine (master only, no replicas).
I want Redis HA, so I need slave instances, but at the same time I want all reads and writes to go only to the master instance because of slave replication latency.
After some research, I found that it is a good idea to put a proxy server in front of Redis for HA. However, with a proxy it seems impossible to send requests only to the master instance and keep the slaves purely for failover.
Is it possible?
Thanks in advance.
What you need is Redis Sentinel.
With Redis Sentinel, you get the master address from Sentinel and read/write against the master. If the master goes down, Sentinel performs a failover and elects a new master; you then get the new master's address from Sentinel.
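With Spring Data Redis and Lettuce, a Sentinel-based setup might look roughly like this sketch (the master name "mymaster" and the sentinel addresses are placeholders; ReadFrom.MASTER makes the "master only" routing explicit):
import io.lettuce.core.ReadFrom;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public class SentinelConnectionFactoryBuilder {

    public static LettuceConnectionFactory create() {
        // "mymaster" and the sentinel addresses are placeholders for your own deployment.
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")
                .sentinel("127.0.0.1", 26379)
                .sentinel("127.0.0.1", 26380)
                .sentinel("127.0.0.1", 26381);

        // Route reads to the current master only; the slaves exist purely for failover.
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.MASTER)
                .build();

        LettuceConnectionFactory factory =
                new LettuceConnectionFactory(sentinelConfig, clientConfig);
        factory.afterPropertiesSet();
        return factory;
    }
}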
Since you're going to use Lettuce as the Redis cluster driver, set the read preference to MASTER and things should work fine. Sample code might look like this:
import java.util.ArrayList;
import java.util.List;

import io.lettuce.core.ReadFrom;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisNode;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

// Route all reads to master nodes only.
LettuceClientConfiguration lettuceClientConfiguration =
        LettuceClientConfiguration.builder().readFrom(ReadFrom.MASTER).build();

// Seed the cluster topology with the known nodes.
RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
List<RedisNode> redisNodes = new ArrayList<>();
redisNodes.add(new RedisNode("127.0.0.1", 9000));
redisNodes.add(new RedisNode("127.0.0.1", 9001));
redisNodes.add(new RedisNode("127.0.0.1", 9002));
redisNodes.add(new RedisNode("127.0.0.1", 9003));
redisNodes.add(new RedisNode("127.0.0.1", 9004));
redisNodes.add(new RedisNode("127.0.0.1", 9005));
redisClusterConfiguration.setClusterNodes(redisNodes);

LettuceConnectionFactory lettuceConnectionFactory =
        new LettuceConnectionFactory(redisClusterConfiguration, lettuceClientConfiguration);
lettuceConnectionFactory.afterPropertiesSet();
See it in action at Redis Cluster Configuration.
I have set a maxmemory of 4G on my Redis server, and the eviction policy is set to volatile-lru. It is currently using about 4.41G of memory. I don't know how this is possible: with the eviction policy set, it should start evicting keys as soon as memory hits maxmemory.
I am running Redis in cluster mode with 3 masters and a replication factor of 1. This is happening on only one of the slave nodes.
The output of
redis-cli info memory
is:
# Memory
used_memory:4734647320
used_memory_human:4.41G
used_memory_rss:4837548032
used_memory_rss_human:4.51G
used_memory_peak:4928818072
used_memory_peak_human:4.59G
used_memory_peak_perc:96.06%
used_memory_overhead:2323825684
used_memory_startup:1463072
used_memory_dataset:2410821636
used_memory_dataset_perc:50.93%
allocator_allocated:4734678320
allocator_active:4773904384
allocator_resident:4844134400
total_system_memory:32891367424
total_system_memory_human:30.63G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:4294967296
maxmemory_human:4.00G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.01
allocator_frag_bytes:39226064
allocator_rss_ratio:1.01
allocator_rss_bytes:70230016
rss_overhead_ratio:1.00
rss_overhead_bytes:-6586368
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:102920560
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:1926964
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
It is important to understand that the eviction process works like this:
A client runs a new command, resulting in more data added.
Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
A new command is executed, and so forth.
So we continuously cross the boundary of the memory limit, by going over it and then evicting keys to get back under the limit.
If a command results in a lot of memory being used (like a big set intersection stored into a new key), the memory limit can be surpassed by a noticeable amount for some time.
Reference: https://redis.io/topics/lru-cache
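As a quick sanity check you can compare used_memory with maxmemory programmatically; a minimal sketch with the Lettuce client (the address is a placeholder, point it at the affected slave):
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class MemoryCheck {
    public static void main(String[] args) {
        // Placeholder address: use the slave node that is over its limit.
        RedisClient client = RedisClient.create("redis://127.0.0.1:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            // Same data as `redis-cli info memory` in the question.
            String info = connection.sync().info("memory");
            long used = 0, max = 0;
            for (String line : info.split("\r\n")) {
                if (line.startsWith("used_memory:")) {
                    used = Long.parseLong(line.substring("used_memory:".length()));
                } else if (line.startsWith("maxmemory:")) {
                    max = Long.parseLong(line.substring("maxmemory:".length()));
                }
            }
            System.out.printf("used_memory=%d maxmemory=%d over_limit_by=%d%n",
                    used, max, used - max);
        } finally {
            client.shutdown();
        }
    }
}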
I have a Celery-based task queue with RabbitMQ as the broker. I am processing about 100 messages per day. I have no backend set up.
I start the task master like this:
import os
from celery import Celery
broker = os.environ.get('AMQP_HOST', None)
app = Celery(broker=broker)
# QueueServer and the default_http_* values are defined elsewhere in our own code
server = QueueServer((default_http_host, default_http_port), app)
... and I start the worker like this:
import os
from celery import Celery
broker = os.environ.get('AMQP_HOST', None)
app = Celery('worker', broker=broker)
app.conf.update(
    CELERYD_CONCURRENCY = 1,          # a single worker process
    CELERYD_PREFETCH_MULTIPLIER = 1,  # prefetch only one message at a time
    CELERY_ACKS_LATE = True,          # acknowledge only after the task completes
)
The server runs correctly for quite some time, but after about two weeks it suddenly stops. I have tracked the stoppage down to RabbitMQ no longer receiving messages due to memory exhaustion:
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: vm_memory_high_watermark set. Memory used:252239992 allowed:249239961
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: =WARNING REPORT==== 25-Feb-2016::02:01:39 ===
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: memory resource limit alarm set on node rabbit#e654ac167b10.
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: **********************************************************
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: *** Publishers will be blocked until this alarm clears ***
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: **********************************************************
The problem is that I cannot figure out what needs to be configured differently to prevent this exhaustion. Obviously something somewhere is not being purged, but I don't understand what.
For instance, after about 8 days, rabbitmqctl status shows me this:
{memory,[{total,138588744},
{connection_readers,1081984},
{connection_writers,353792},
{connection_channels,1103992},
{connection_other,2249320},
{queue_procs,428528},
{queue_slave_procs,0},
{plugins,0},
{other_proc,13555000},
{mnesia,74832},
{mgmt_db,0},
{msg_index,43243768},
{other_ets,7874864},
{binary,42401472},
{code,16699615},
{atom,654217},
{other_system,8867360}]},
... when it was first started it was much lower:
{memory,[{total,51076896},
{connection_readers,205816},
{connection_writers,86624},
{connection_channels,314512},
{connection_other,371808},
{queue_procs,318032},
{queue_slave_procs,0},
{plugins,0},
{other_proc,14315600},
{mnesia,74832},
{mgmt_db,0},
{msg_index,2115976},
{other_ets,1057008},
{binary,6284328},
{code,16699615},
{atom,654217},
{other_system,8578528}]},
... even when all the queues are empty (except one job currently processing):
root#dba9f095a160:/# rabbitmqctl list_queues -q name memory messages messages_ready messages_unacknowledged
celery 61152 1 0 1
celery#render-worker-lg3pi.celery.pidbox 117632 0 0 0
celery#render-worker-lkec7.celery.pidbox 70448 0 0 0
celeryev.17c02213-ecb2-4419-8e5a-f5ff682ea4b4 76240 0 0 0
celeryev.5f59e936-44d7-4098-aa72-45555f846f83 27088 0 0 0
celeryev.d63dbc9e-c769-4a75-a533-a06bc4fe08d7 50184 0 0 0
I am at a loss to figure out how to find the reason for memory consumption. Any help would be greatly appreciated.
The logs say that you are using 252239992 bytes, which is about 250MB, which is not that high.
How much memory do you have on this machine, and what is the vm_memory_high_watermark value for RabbitMQ? (You can check it by running rabbitmqctl eval "vm_memory_monitor:get_vm_memory_high_watermark().")
Maybe you should just increase the watermark.
Another option is to make all your queues lazy: https://www.rabbitmq.com/lazy-queues.html
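For reference, both suggestions can be applied with rabbitmqctl (the watermark value and the policy name here are just examples):
# raise the watermark at runtime (persist it in the RabbitMQ config file to survive restarts)
rabbitmqctl set_vm_memory_high_watermark 0.6
# or make every queue lazy via a policy (RabbitMQ 3.6+)
rabbitmqctl set_policy lazy-queues "^.*" '{"queue-mode":"lazy"}' --apply-to queues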
You don't seem to be generating a huge volume of messages, so the 2GB memory consumption seems strangely high. Nonetheless, you could try getting RabbitMQ to delete old messages; in your Celery configuration, set
CELERY_DEFAULT_DELIVERY_MODE = 'transient'
I'm trying to configure ActiveMQ for the following behavior: when the broker exceeds its memory limit, it should store messages in persistent storage. I use the following configuration:
BrokerService broker = new BrokerService();
broker.setBrokerName("activemq");
KahaDBPersistenceAdapter persistence = new KahaDBPersistenceAdapter();
persistence.setDirectory(new File(config.getProperty("amq.persistenceDir", "amq")));
broker.setPersistenceAdapter(persistence);
broker.setVmConnectorURI(new URI("vm://activemq"));
broker.getSystemUsage().getMemoryUsage().setLimit(64 * 1024 * 1024L);
broker.getSystemUsage().getStoreUsage().setLimit(1024 * 1024 * 1024 * 100L);
broker.getSystemUsage().getTempUsage().setLimit(1024 * 1024 * 1024 * 100L);
PolicyEntry policyEntry = new PolicyEntry();
policyEntry.setCursorMemoryHighWaterMark(50);
policyEntry.setExpireMessagesPeriod(0L);
policyEntry.setPendingDurableSubscriberPolicy(new StorePendingDurableSubscriberMessageStoragePolicy());
policyEntry.setMemoryLimit(64 * 1024 * 1024L);
policyEntry.setProducerFlowControl(false);
broker.setDestinationPolicy(new PolicyMap());
broker.getDestinationPolicy().setDefaultEntry(policyEntry);
broker.setUseJmx(true);
broker.setPersistent(true);
broker.start();
However, this does not work: ActiveMQ still consumes as much memory as it needs to hold the full queue. I also tried removing the PolicyEntry; that caused the broker to stop producers once the memory limit was reached. I could find nothing in the documentation about what I am doing wrong.
We use a storeCursor and set the memory limit as follows; this will limit the amount of memory for all queues to 100MB:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="100mb">
        <pendingQueuePolicy>
          <storeCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
Make sure you set the "destinations" your policy should apply to. In my XML example this is done using queue=">", but your example is using a bare new PolicyMap(); try calling policyEntry.setQueue(">") to apply the entry to all queues, or add specific destinations to your PolicyMap, etc.
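Applied to your embedded-broker Java setup, that might look roughly like this sketch (StorePendingQueueMessageStoragePolicy is the Java counterpart of <storeCursor/>; the 100MB limit is just the value from the XML example):
import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.broker.region.policy.StorePendingQueueMessageStoragePolicy;

public class DestinationPolicyConfig {
    public static void apply(BrokerService broker) {
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">");                                   // apply to all queues
        entry.setProducerFlowControl(false);
        entry.setMemoryLimit(100 * 1024 * 1024L);              // 100MB per destination
        entry.setPendingQueuePolicy(new StorePendingQueueMessageStoragePolicy()); // <storeCursor/>

        PolicyMap policyMap = new PolicyMap();
        policyMap.setPolicyEntries(Arrays.asList(entry));
        broker.setDestinationPolicy(policyMap);
    }
}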
See this test for a full example:
https://github.com/apache/activemq/blob/master/activemq-unit-tests/src/test/java/org/apache/activemq/PerDestinationStoreLimitTest.java
I'm currently investigating a memory problem in my broker network.
According to JConsole the ActiveMQ.Advisory.TempQueue is taking up 99% of the configured memory when the broker starts to block messages.
A few details about the config
Default config for the most part. One open stomp+nio connector, one open openwire connector. All brokers form a hypercube (one one-way connection to every other broker; easier to auto-generate). No flow control.
Problem details
The web console shows something like 1974234 enqueued and 45345 dequeued messages across 30 consumers (6 brokers, one consumer, and the rest are clients that use the Java connector). As far as I know, the dequeue count should not be much smaller than enqueued * consumers, so in my case a big bunch of advisories is not being consumed and starts to fill my temp message space (currently I have configured several GB of temp space).
Since no client actively uses temp queues, I find this very strange. After taking a look at the temp queue, I'm even more confused. Most of the messages look like this (msg.toString):
ActiveMQMessage {commandId = 0, responseRequired = false, messageId = ID:srv007210-36808-1318839718378-1:1:0:0:203650, originalDestination = null, originalTransactionId = null, producerId = ID:srv007210-36808-1318839718378-1:1:0:0, destination = topic://ActiveMQ.Advisory.TempQueue, transactionId = null, expiration = 0, timestamp = 0, arrival = 0, brokerInTime = 1318840153501, brokerOutTime = 1318840153501, correlationId = null, replyTo = null, persistent = false, type = Advisory, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence#45290155, dataStructure = DestinationInfo {commandId = 0, responseRequired = false, connectionId = ID:srv007210-36808-1318839718378-2:2, destination = temp-queue://ID:srv007211-47019-1318835590753-11:9:1, operationType = 1, timeout = 0, brokerPath = null}, redeliveryCounter = 0, size = 0, properties = {originBrokerName=broker.coremq-behaviortracking-675-mq-01-master, originBrokerId=ID:srv007210-36808-1318839718378-0:1, originBrokerURL=stomp://srv007210:61612}, readOnlyProperties = true, readOnlyBody = true, droppable = false}
After seeing these messages I have several questions:
Do I understand correctly that the origin of the message is a stomp connection?
If yes, how can a stomp connection create temp queues?
Is there a simple reason why the advisories are not consumed?
Currently I have sort of postponed the problem by deactivating the bridgeTempDestinations property on the network connectors. This way the messages are not spread around and they fill the temp space much more slowly. If I cannot fix the source of these messages, I would at least like to stop them from filling the store:
Can I drop these unconsumed messages after a certain time?
What consequences can this have?
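For reference, the workaround mentioned above is a networkConnector setting along these lines (the connector name and URI are placeholders):
<networkConnectors>
  <!-- bridgeTempDestinations defaults to true; with false, temp-destination traffic
       is no longer spread to the other brokers over this bridge -->
  <networkConnector name="to-broker2"
                    uri="static:(tcp://broker2:61616)"
                    bridgeTempDestinations="false"/>
</networkConnectors>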
UPDATE: I monitored my cluster some more and found out that the messages are consumed. They are enqueued and dispatched, but the consumers (the other cluster nodes as well as Java consumers that use the ActiveMQ lib) fail to acknowledge the messages, so they stay in the dispatched-messages queue and this queue grows and grows.
This is an old thread, but in case somebody runs into the same problem, you might want to check out this post: http://forum.spring.io/forum/spring-projects/integration/111989-jms-outbound-gateway-temporary-queues-never-deleted
The problem in that link sounds similar, i.e. temp queues producing a large amount of advisory messages. In our case, we were using temp queues to implement synchronous request/response messaging, but the volume of advisory messages caused ActiveMQ to spend most of its time in GC and eventually throw a GC overhead limit exceeded error. This was on v5.11.1. Even though we closed the connection, session, producer and consumer, the temp queue would not be GC'd and would continue receiving advisory messages.
The solution was to explicitly delete the temp queues when cleaning up the other resources (see https://docs.oracle.com/javaee/7/api/javax/jms/TemporaryQueue.html).
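A minimal sketch of that cleanup with plain JMS (the request/reply details are omitted; the important part is deleting the temporary queue explicitly):
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TemporaryQueue;

public class TempQueueCleanup {

    public static void requestReply(Connection connection) throws JMSException {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        MessageConsumer replyConsumer = session.createConsumer(replyQueue);
        try {
            // ... send the request with JMSReplyTo = replyQueue and receive the reply ...
        } finally {
            replyConsumer.close();
            replyQueue.delete();   // without this the temp queue lingers and keeps
            session.close();       // generating ActiveMQ.Advisory.TempQueue traffic
        }
    }
}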
If you are not using this advisory topic, you may want to turn it off, as suggested at http://activemq.2283324.n4.nabble.com/How-to-disable-advisory-for-gt-topic-ActiveMQ-Advisory-TempQueue-td2356134.html
Dropping the advisory messages will not have any consequences, since those are just messages meant for system health analysis and statistics.
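The coarse, broker-level switch (which turns off all advisories, not just this topic) looks like this; note that dynamic network-of-brokers routing relies on advisory messages, so only do this if nothing in your setup needs them (brokerName is a placeholder):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" advisorySupport="false">
  <!-- ... rest of the broker configuration ... -->
</broker>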