I have a Celery-based task queue with RabbitMQ as the broker. I am processing about 100 messages per day, and I have no result backend set up.
I start the task master like this:
broker = os.environ.get('AMQP_HOST', None)
app = Celery(broker=broker)
server = QueueServer((default_http_host, default_http_port), app)
... and I start the worker like this:
broker = os.environ.get('AMQP_HOST', None)
app = Celery('worker', broker=broker)
app.conf.update(
    CELERYD_CONCURRENCY = 1,
    CELERYD_PREFETCH_MULTIPLIER = 1,
    CELERY_ACKS_LATE = True,
)
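These are the old-style uppercase setting names; as an aside, Celery 4.x and later spell the same settings in lowercase. A rough equivalent under the newer naming would be:

app.conf.update(
    worker_concurrency=1,          # was CELERYD_CONCURRENCY
    worker_prefetch_multiplier=1,  # was CELERYD_PREFETCH_MULTIPLIER
    task_acks_late=True,           # was CELERY_ACKS_LATE
)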
The server runs correctly for quite some time, but after about two weeks it suddenly stops. I have tracked the stoppage down to RabbitMQ no longer receiving messages due to memory exhaustion:
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: vm_memory_high_watermark set. Memory used:252239992 allowed:249239961
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: =WARNING REPORT==== 25-Feb-2016::02:01:39 ===
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: memory resource limit alarm set on node rabbit@e654ac167b10.
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: **********************************************************
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: *** Publishers will be blocked until this alarm clears ***
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: **********************************************************
The problem is I cannot figure out what needs to be configured differently to prevent this exhaustion. Obviously somewhere something is not being purged, but I don't understand what.
For instance, after about 8 days, rabbitmqctl status shows me this:
{memory,[{total,138588744},
{connection_readers,1081984},
{connection_writers,353792},
{connection_channels,1103992},
{connection_other,2249320},
{queue_procs,428528},
{queue_slave_procs,0},
{plugins,0},
{other_proc,13555000},
{mnesia,74832},
{mgmt_db,0},
{msg_index,43243768},
{other_ets,7874864},
{binary,42401472},
{code,16699615},
{atom,654217},
{other_system,8867360}]},
... when it was first started it was much lower:
{memory,[{total,51076896},
{connection_readers,205816},
{connection_writers,86624},
{connection_channels,314512},
{connection_other,371808},
{queue_procs,318032},
{queue_slave_procs,0},
{plugins,0},
{other_proc,14315600},
{mnesia,74832},
{mgmt_db,0},
{msg_index,2115976},
{other_ets,1057008},
{binary,6284328},
{code,16699615},
{atom,654217},
{other_system,8578528}]},
... even when all the queues are empty (except one job currently processing):
root@dba9f095a160:/# rabbitmqctl list_queues -q name memory messages messages_ready messages_unacknowledged
celery 61152 1 0 1
celery@render-worker-lg3pi.celery.pidbox 117632 0 0 0
celery@render-worker-lkec7.celery.pidbox 70448 0 0 0
celeryev.17c02213-ecb2-4419-8e5a-f5ff682ea4b4 76240 0 0 0
celeryev.5f59e936-44d7-4098-aa72-45555f846f83 27088 0 0 0
celeryev.d63dbc9e-c769-4a75-a533-a06bc4fe08d7 50184 0 0 0
I am at a loss to figure out how to find the reason for memory consumption. Any help would be greatly appreciated.
The logs say you are using 252239992 bytes, which is about 250 MB; that is not especially high.
How much memory do you have on this machine, and what is the vm_memory_high_watermark value for RabbitMQ? (You can check it by running rabbitmqctl eval "vm_memory_monitor:get_vm_memory_high_watermark().")
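If the management plugin is enabled, you can watch the same numbers over HTTP as well. A minimal sketch in Python, assuming the default guest account and the management API on localhost:15672 (adjust to your setup):

import requests

# each node entry reports current memory use, the absolute memory limit
# derived from the watermark, and whether the memory alarm is currently set
for node in requests.get('http://localhost:15672/api/nodes',
                         auth=('guest', 'guest')).json():
    print(node['name'], node['mem_used'], node['mem_limit'], node['mem_alarm'])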
Maybe you should just increase the watermark.
Another option is making all your queues lazy: https://www.rabbitmq.com/lazy-queues.html
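With Celery you can request lazy mode per queue through queue arguments. A rough sketch with kombu, assuming your work goes through the default 'celery' queue and the old-style setting names:

from kombu import Queue

app.conf.update(
    CELERY_QUEUES=[
        # 'x-queue-mode': 'lazy' asks RabbitMQ to keep messages on disk
        # rather than in memory whenever possible
        Queue('celery', routing_key='celery',
              queue_arguments={'x-queue-mode': 'lazy'}),
    ],
)

Note that RabbitMQ refuses to redeclare an existing queue with different arguments, so you would need to delete the queue first or set the queue mode via a policy instead.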
You don't seem to be generating a huge volume of messages, so the memory consumption seems strangely high. Nonetheless, you could try getting RabbitMQ to delete old messages; in your Celery configuration, set
CELERY_DEFAULT_DELIVERY_MODE = 'transient'
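For example, alongside the other settings shown earlier in the question (a sketch using the same old-style setting names):

app.conf.update(
    CELERYD_CONCURRENCY = 1,
    CELERYD_PREFETCH_MULTIPLIER = 1,
    CELERY_ACKS_LATE = True,
    # non-persistent delivery: task messages are not persisted and will
    # not survive a broker restart
    CELERY_DEFAULT_DELIVERY_MODE = 'transient',
)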
Every now and then we receive a large batch of timeouts (around peak time for website traffic), with lots of log entries of the following form:
Timeout performing GET (5000ms)
next: GET ObjectPageView.120.633.0
inst: 21
qu: 0
qs: 0
aw: False
bw: SpinningDown
rs: ReadAsync
ws: Idle
in: 0
last-in: 0
cur-in: 0
sync-ops: 456703
async-ops: 1
conn-sec: 72340.11
mc: 1/1/0
mgr: 10 of 10 available
IOCP: (Busy=0 Free=1800 Min=600 Max=1800)
WORKER: (Busy=720 Free=1080 Min=600 Max=1800)
v: 2.6.90.64945
What do sync-ops and conn-sec stand for? The rest of the numbers seem fine, but these seem high and I'm not entirely sure what they are describing.
These are statistics about the current connection:
"sync-ops" is a count of synchronous operations (as opposed to "async-ops" for asynchronous operations) performed on the current connection.
"conn-sec" is the duration, in seconds, of the current connection (from when it connected until now).
I have a rather busy RabbitMQ setup which at peak times becomes extremely slow at accepting new connections (RabbitMQ 3.9.14).
I've tried fine-tuning /etc/sysctl.conf following a guide found on the RabbitMQ website:
fs.file-max = 10000000
fs.nr_open = 10000000
fs.inotify.max_user_watches=524288
net.core.somaxconn = 4096
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time=30
net.ipv4.tcp_keepalive_intvl=10
net.ipv4.tcp_keepalive_probes=4
net.ipv4.ip_local_port_range = 10000 64000
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.netfilter.nf_conntrack_max=1048576
I also played around with the rabbitmq.conf options to see if anything would have an impact, but unfortunately nothing did:
num_acceptors.tcp = 32
channel_max = 4096
tcp_listen_options.backlog = 512
tcp_listen_options.nodelay = true
tcp_listen_options.linger.on = true
tcp_listen_options.linger.timeout = 0
tcp_listen_options.sndbuf = 196608
tcp_listen_options.recbuf = 196608
collect_statistics_interval = 60000
Due to the nature of my setup (PHP), a new connection is created every time messages are published to RabbitMQ. I wish I could use long-lived connections, but that is beyond what PHP is designed for.
During peak activity, some connections take up to 7 seconds to open; once the connection is established, however, publishing performance is just fine.
I feel like I've exhausted all the obvious options I'm aware of. Are there any other tweaks I can try in order to improve the connection performance of the node? The server load is low-ish, sitting at 15% at peak, and disabling the management interface had negligible impact.
Update: At first, when I upgraded to RabbitMQ 3.10.5, I thought the issue was solved; however, that was not the case, it just gave us a bit more headroom.
The real cause was our high connection churn rate (200+/s); during a conversation in the RabbitMQ Slack channel it became apparent that such a high churn rate can block the event loop and cause the spikes seen above.
The solution for us was to use a proxy to re-use connections instead of opening a new one every time we publish something:
https://github.com/cloudamqp/amqproxy
This has effectively resolved our issue.
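For completeness, here is roughly what publishing through the proxy looks like. This sketch is in Python with pika rather than PHP, and assumes amqproxy is listening on localhost:5673 with a queue named 'my_queue' (both placeholders):

import pika

# connect to the local amqproxy listener instead of RabbitMQ directly;
# the proxy keeps the upstream connection pooled between publishes
conn = pika.BlockingConnection(
    pika.ConnectionParameters(host='127.0.0.1', port=5673))
channel = conn.channel()
channel.basic_publish(exchange='', routing_key='my_queue', body='hello')
conn.close()  # cheap: only the client-to-proxy socket is torn down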
I have an application with a high load for batch read operations. My Aerospike cluster (v 3.7.2) has 14 servers, each one with 7GB RAM and 2 CPUs in Google Cloud.
Looking at the Google Cloud monitoring graphs, I noticed a very unbalanced load between servers: some servers run at almost 100% CPU, while others sit below 50% (image below). Even after hours of operation, the unbalanced pattern doesn't change.
Is there any configuration I could change to make this cluster more homogeneous? How can I optimize node balancing?
Edit 1
All servers in the cluster have the same identical aerospike.conf file:
# Aerospike database configuration file.
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
paxos-recovery-policy auto-reset-master
pidfile /var/run/aerospike/asd.pid
service-threads 32
transaction-queues 32
transaction-threads-per-queue 32
batch-index-threads 32
proto-fd-max 15000
batch-max-requests 200000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
#address any
port 3000
}
heartbeat {
mode mesh
mesh-seed-address-port 10.240.0.6 3002
mesh-seed-address-port 10.240.0.5 3002
port 3002
interval 150
timeout 20
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace test {
replication-factor 3
memory-size 5G
default-ttl 0 # 30 days, use 0 to never expire/evict.
ldt-enabled true
storage-engine device {
file /data/aerospike.dat
write-block-size 1M
filesize 180G
}
}
Edit 2:
$ asinfo
1 : node
BB90600F00A0142
2 : statistics
cluster_size=14;cluster_key=E3C3672DCDD7F51;cluster_integrity=true;objects=3739898;sub-records=0;total-bytes-disk=193273528320;used-bytes-disk=26018492544;free-pct-disk=86;total-bytes-memory=5368709120;used-bytes-memory=239353472;data-used-bytes-memory=0;index-used-bytes-memory=239353472;sindex-used-bytes-memory=0;free-pct-memory=95;stat_read_reqs=2881465329;stat_read_reqs_xdr=0;stat_read_success=2878457632;stat_read_errs_notfound=3007093;stat_read_errs_other=0;stat_write_reqs=551398;stat_write_reqs_xdr=0;stat_write_success=549522;stat_write_errs=90;stat_xdr_pipe_writes=0;stat_xdr_pipe_miss=0;stat_delete_success=4;stat_rw_timeout=1862;udf_read_reqs=0;udf_read_success=0;udf_read_errs_other=0;udf_write_reqs=0;udf_write_success=0;udf_write_err_others=0;udf_delete_reqs=0;udf_delete_success=0;udf_delete_err_others=0;udf_lua_errs=0;udf_scan_rec_reqs=0;udf_query_rec_reqs=0;udf_replica_writes=0;stat_proxy_reqs=7021;stat_proxy_reqs_xdr=0;stat_proxy_success=2121;stat_proxy_errs=4739;stat_ldt_proxy=0;stat_cluster_key_err_ack_dup_trans_reenqueue=607;stat_expired_objects=0;stat_evicted_objects=0;stat_deleted_set_objects=0;stat_evicted_objects_time=0;stat_zero_bin_records=0;stat_nsup_deletes_not_shipped=0;stat_compressed_pkts_received=0;err_tsvc_requests=110;err_tsvc_requests_timeout=0;err_out_of_space=0;err_duplicate_proxy_request=0;err_rw_request_not_found=17;err_rw_pending_limit=19;err_rw_cant_put_unique=0;geo_region_query_count=0;geo_region_query_cells=0;geo_region_query_points=0;geo_region_query_falsepos=0;fabric_msgs_sent=58002818;fabric_msgs_rcvd=57998870;paxos_principal=BB92B00F00A0142;migrate_msgs_sent=55749290;migrate_msgs_recv=55759692;migrate_progress_send=0;migrate_progress_recv=0;migrate_num_incoming_accepted=7228;migrate_num_incoming_refused=0;queue=0;transactions=101978550;reaped_fds=6;scans_active=0;basic_scans_succeeded=0;basic_scans_failed=0;aggr_scans_succeeded=0;aggr_scans_failed=0;udf_bg_scans_succeeded=0;udf_bg_scans_failed=0;batch_index_initiate=40457778;batch_index_queue=0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0;batch_index_complete=40456708;batch_index_timeout=1037;batch_index_errors=33;batch_index_unused_buffers=256;batch_index_huge_buffers=217168717;batch_index_created_buffers=217583519;batch_index_destroyed_buffers=217583263;batch_initiate=0;batch_queue=0;batch_tree_count=0;batch_timeout=0;batch_errors=0;info_queue=0;delete_queue=0;proxy_in_progress=0;proxy_initiate=7021;proxy_action=5519;proxy_retry=0;proxy_retry_q_full=0;proxy_unproxy=0;proxy_retry_same_dest=0;proxy_retry_new_dest=0;write_master=551089;write_prole=1055431;read_dup_prole=14232;rw_err_dup_internal=0;rw_err_dup_cluster_key=1814;rw_err_dup_send=0;rw_err_write_internal=0;rw_err_write_cluster_key=0;rw_err_write_send=0;rw_err_ack_internal=0;rw_err_ack_nomatch=1767;rw_err_ack_badnode=0;client_connections=366;waiting_transactions=0;tree_count=0;record_refs=3739898;record_locks=0;migrate_tx_objs=0;migrate_rx_objs=0;ongoing_write_reqs=0;err_storage_queue_full=0;partition_actual=296;partition_replica=572;partition_desync=0;partition_absent=3228;partition_zombie=0;partition_object_count=3739898;partition_ref_count=4096;system_free_mem_pct=61;sindex_ucgarbage_found=0;sindex_gc_locktimedout=0;sindex_gc_inactivity_dur=0;sindex_gc_activity_dur=0;sindex_gc_list_creation_time=0;sindex_gc_list_deletion_time=0;sindex_gc_objects_validated=0;sindex_gc_garbage_found=0;sindex_gc_garbage_cleaned=0;system_swapping=false;err_replica_null_node=0;err_r
eplica_non_null_node=0;err_sync_copy_null_master=0;storage_defrag_corrupt_record=0;err_write_fail_prole_unknown=0;err_write_fail_prole_generation=0;err_write_fail_unknown=0;err_write_fail_key_exists=0;err_write_fail_generation=0;err_write_fail_generation_xdr=0;err_write_fail_bin_exists=0;err_write_fail_parameter=0;err_write_fail_incompatible_type=0;err_write_fail_noxdr=0;err_write_fail_prole_delete=0;err_write_fail_not_found=0;err_write_fail_key_mismatch=0;err_write_fail_record_too_big=90;err_write_fail_bin_name=0;err_write_fail_bin_not_found=0;err_write_fail_forbidden=0;stat_duplicate_operation=53184;uptime=1001388;stat_write_errs_notfound=0;stat_write_errs_other=90;heartbeat_received_self=0;heartbeat_received_foreign=145137042;query_reqs=0;query_success=0;query_fail=0;query_abort=0;query_avg_rec_count=0;query_short_running=0;query_long_running=0;query_short_queue_full=0;query_long_queue_full=0;query_short_reqs=0;query_long_reqs=0;query_agg=0;query_agg_success=0;query_agg_err=0;query_agg_abort=0;query_agg_avg_rec_count=0;query_lookups=0;query_lookup_success=0;query_lookup_err=0;query_lookup_abort=0;query_lookup_avg_rec_count=0
3 : features
cdt-list;pipelining;geo;float;batch-index;replicas-all;replicas-master;replicas-prole;udf
4 : cluster-generation
61
5 : partition-generation
11811
6 : edition
Aerospike Community Edition
7 : version
Aerospike Community Edition build 3.7.2
8 : build
3.7.2
9 : services
10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.41:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.23:3000
10 : services-alumni
10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.23:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.41:3000
I have a few comments about your configuration. First, transaction-threads-per-queue should be set to 3 or 4 (don't set it to the number of cores).
The second has to do with your batch-read tuning. You're using the (default) batch-index protocol, and the config params you'll need to tune for batch-read performance are:
You have batch-max-requests set very high. This is probably affecting both your CPU load and your memory consumption. Even a slight imbalance in the number of keys you're accessing per node will show up in the graphs you've shown; at the very least, this is possibly part of the issue. It's better to iterate over smaller batches than to try to fetch 200K records per node at a time.
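For example, rather than handing the client one huge key list, you can chunk it on the application side. A rough sketch with the Aerospike Python client; the 'demo' set, the user_ids list, and the 5,000-key chunk size are only illustrative:

import aerospike

client = aerospike.client({'hosts': [('10.240.0.5', 3000)]}).connect()

def chunked(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

keys = [('test', 'demo', user_id) for user_id in user_ids]  # (namespace, set, key)
records = []
for chunk in chunked(keys, 5000):
    # each call stays well below batch-max-requests on every node
    records.extend(client.get_many(chunk))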
batch-index-threads – its default value is 4, and you've set it to 32 (out of a max of 64). You should tune this incrementally, running the same test and benchmarking the performance on each iteration: adjust upward, then back down if performance decreases. For example: test with 32, +8 = 40, +8 = 48, -4 = 44. There's no easy rule of thumb for this setting; you'll need to tune it through iterations on the hardware you'll be using and monitor the performance.
batch-max-buffer-per-queue – this is more directly linked to the number of concurrent batch-read operations the node can support. Each batch-read request will consume at least one buffer (more if the data cannot fit in 128K). If you do not have enough of these allocated to support the number of concurrent batch-read requests you will get exceptions with error code 152 BATCH_QUEUES_FULL . Track and log such events clearly, because it means you need to raise this value. Note that this is the number of buffers per-queue. Each batch response worker thread has its own queue, so you'll have batch-index-threads x batch-max-buffer-per-queue buffers, each taking 128K of RAM. The batch-max-unused-buffers caps the memory usage of all these buffers combined, destroying unused buffers until their number is reduced. There's an overhead to allocating and destroying these buffers, so you do not want to set it too low compared to the total. Your current cost is 32 x 256 x 128KB = 1GB.
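Spelling that estimate out (plain arithmetic, using the values discussed above):

batch_index_threads = 32        # one response queue per worker thread
buffers_per_queue = 256         # batch buffers allowed per queue
buffer_size = 128 * 1024        # 128 KiB per batch buffer

total_bytes = batch_index_threads * buffers_per_queue * buffer_size
print(total_bytes / 2**30)      # -> 1.0 GiB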
Finally, you're storing your data on a filesystem. That's fine for development instances, but not recommended for production. In GCE you can provision either a SATA SSD or an NVMe SSD for your data storage, and those should be initialized, and used as block devices. Take a look at the GCE recommendations for more details. I suspect you have warnings in your log about the device not keeping up.
It's likely that one of your nodes is an outlier with regard to the number of partitions it holds (and therefore the number of objects). You can confirm this with asadm -e 'asinfo -v "objects"'. If that's the case, you can terminate that node and bring up a new one, which will force the partitions to be redistributed. This does trigger a migration, which takes quite a bit longer on the CE server than on the EE one.
For anyone interested, Aerospike Enterprise 4.3 introduced 'uniform-balance', which balances data partitions homogeneously. Read more here: https://www.aerospike.com/blog/aerospike-4-3-all-flash-uniform-balance/
When I check the web server mod_status page (/server-status), I notice there are a bunch of threads in the ..reading.. state.
Doing an strace on such a thread, this is what actually happens while the thread is in ..reading..:
...
...
semop(327681, {{0, 1, SEM_UNDO}}, 1) = 0
gettimeofday({1452260985, 867058}, NULL) = 0
getsockname(156, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("172.31.9.248")}, [16]) = 0
fcntl(156, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(156, F_SETFL, O_RDWR|O_NONBLOCK) = 0
gettimeofday({1452260985, 867479}, NULL) = 0
read(156, 0x558f4c26e9d8, 8000) = -1 EAGAIN (Resource temporarily unavailable)
poll([{fd=156, events=POLLIN}], 1, 300000) = 1 ([{fd=156, revents=POLLIN}])
read(156, "", 8000) = 0
gettimeofday({1452261254, 669634}, NULL) = 0
gettimeofday({1452261254, 669691}, NULL) = 0
shutdown(156, SHUT_WR) = 0
poll([{fd=156, events=POLLIN}], 1, 2000) = 1 ([{fd=156, revents=POLLIN|POLLHUP}])
read(156, "", 512) = 0
close(156) = 0
read(6, 0x7fff901f67e7, 1) = -1 EAGAIN (Resource temporarily unavailable)
gettimeofday({1452261254, 670341}, NULL) = 0
semop(327681, {{0, -1, SEM_UNDO}}, 1) = 0
...
...
When the threads are in ..waiting.., the strace stops at the following line:
poll([{fd=156, events=POLLIN}], 1, 300000) = 1 ([{fd=156, revents=POLLIN}])
The Apache "Timeout" directive governs this wait; the "300000" above is that timeout expressed in milliseconds.
Lowering the configured value changes the number shown above and makes the timeout trigger sooner.
From my limited knowledge of strace, it looks to me like it tries to get a socket to look up an internal IP address, but that is not successful.
The setting "HostnameLookups" is off.
Checking our production environment shows the same pattern when Apache gets stuck in ..reading.., but there it shows an IPv6 address pattern.
Example:
getsockname(154, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::ffff:172.31.3.239", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
It then stops at "poll" and gets the "(Timeout)" as in the example above.
But is there some input why it stops in ..waiting.. ?
Does the "Resource temporarily unavailable" message leave any clue?
If it matters, Apache is running on EC2 instances behind an ELB in the Amazon cloud.
Update:
Here is an image of how a production server looks right now with its thread states. Lots of ..reading..
Image of Apache thread states
We are also running lots of VirtualHosts on the servers, if that gives any clue as to why this happens.
The closest thread I found on the web with the same problem is this one: http://apache-http-server.18135.x6.nabble.com/Apache-Hangs-Server-Status-shows-all-Reading-td4751342.html
The threads stuck in ..reading.. were caused by a mismatch between the "Idle Timeout" connection setting on the ELB and the KeepAliveTimeout setting in httpd.conf.
The connection timeout set on the ELB was a lot longer than the KeepAliveTimeout set in the Apache configuration. As a result, the Elastic Load Balancer tries to keep connections open while Apache wants to close them.
See here http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-idle-timeout.html
After changing the ELB setting to match the Apache configuration (currently 60 seconds), I no longer get a long list of threads stuck in state R (Reading). They are now in state K (Keepalive).
And this looks much more like the expected behavior of the threads.
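If you want to script the ELB side of that change, here is a minimal sketch with boto3, assuming a classic ELB; the load balancer name is a placeholder:

import boto3

elb = boto3.client('elb')
# make the ELB idle timeout match Apache's KeepAliveTimeout (60 s here)
elb.modify_load_balancer_attributes(
    LoadBalancerName='my-load-balancer',
    LoadBalancerAttributes={'ConnectionSettings': {'IdleTimeout': 60}},
)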
It is polling its sockets waiting for one or more of them to become readable, or for the read timeout to expire.
But is there some input why it stops in ..waiting.. ?
There isn't any input. That's why it blocks.
I am using celery with a rabbitmq backend. It is producing thousands of queues with 0 or 1 items in them in rabbitmq like this:
$ sudo rabbitmqctl list_queues
Listing queues ...
c2e9b4beefc7468ea7c9005009a57e1d 1
1162a89dd72840b19fbe9151c63a4eaa 0
07638a97896744a190f8131c3ba063de 0
b34f8d6d7402408c92c77ff93cdd7cf8 1
f388839917ff4afa9338ef81c28aad75 0
8b898d0c7c7e4be4aa8007b38ccc00ea 1
3fb4be51aaaa4ac097af535301084b01 1
This seems inefficient, and what's more, I have observed that these queues persist long after processing is finished.
I have found the task that appears to be doing this:
@celery.task(ignore_result=True)
def write_pages(page_generator):
    g = group(render_page.s(page) for page in page_generator)
    res = g.apply_async()
    for rendered_page in res:
        print rendered_page  # TODO: print to file
It seems that because these tasks are being called in a group, they are being thrown into the queue but never released. However, I am clearly consuming the results (I can see them being printed when I iterate through res), so I do not understand why those tasks are persisting in the queue.
Additionally, I am wondering whether the large number of queues being created is an indication that I am doing something wrong.
Thanks for any help with this!
Celery with the AMQP backend will store task tombstones (results) in an AMQP queue named with the task ID that produced the result. These queues will persist even after the results are drained.
A couple recommendations:
Apply ignore_result=True to every task you can. Don't depend on results from other tasks.
Switch to a different backend (perhaps Redis -- it's more efficient anyway): http://docs.celeryproject.org/en/latest/userguide/tasks.html
Use CELERY_TASK_RESULT_EXPIRES (result_expires in Celery 4.x and later) to have old result data cleaned out of RabbitMQ automatically (see the sketch after the link below).
http://docs.celeryproject.org/en/master/userguide/configuration.html#std:setting-result_expires
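Putting the last two suggestions together, a minimal sketch using the old-style uppercase setting names; the Redis URL is just a placeholder:

from datetime import timedelta

app.conf.update(
    CELERY_RESULT_BACKEND='redis://localhost:6379/0',  # optional: move results off RabbitMQ
    CELERY_TASK_RESULT_EXPIRES=timedelta(hours=1),     # stale results are removed after an hour
)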