I have an application that consists of a set of 20 daemons that are always running, and I have enabled logging/tracing on all of the processes. Sometimes, during service startup, I see that logs roll over unexpectedly before reaching the fixed size: if a log file's maximum size is set to, say, 20 MB, then during startup a new log file is sometimes created even though the current one is still smaller than 20 MB.
Can someone kindly explain the reason?
Thanks in advance.
Regards,
Persistence.
I have a Flume configuration with a RabbitMQ source, a file channel, and a Solr sink. Sometimes the sink becomes so busy that the file channel fills up; at that point the file channel throws a ChannelFullException. After 500 ChannelFullExceptions have been thrown, Flume gets stuck and never responds or recovers on its own. I want to know where that value of 500 comes from and how I can change it. The number 500 is exact: whenever Flume gets stuck, I count the exceptions and find exactly 500 ChannelFullException log lines every time.
You are running into a typical producer-consumer problem, where one side works faster than the other. In your case there are two possibilities (or a combination of both):
RabbitMQ is sending messages faster than Flume can process them.
Solr cannot ingest messages fast enough, so they pile up in Flume.
The solution is either to send messages more slowly (i.e., throttle RabbitMQ) or to tune Flume so that it can process messages faster. I suspect the latter is what you want. Furthermore, the unresponsiveness of Flume is probably caused by the Java heap filling up. Increase the heap size and try again until the error disappears.
# Edit the Flume startup script to raise the maximum Java heap size
vi bin/flume-ng
# Inside the script, change the JAVA_OPTS line, for example:
JAVA_OPTS="-Xmx2048m"
Additionally, you can increase the number of agents and channels, or the capacity of those channels. This naturally increases the footprint on the Java heap, so raise the heap size first.
# Example configuration
agent1.channels = ch1
agent1.channels.ch1.type = memory
# Maximum number of events stored in the channel
agent1.channels.ch1.capacity = 10000
# Maximum number of events per transaction
agent1.channels.ch1.transactionCapacity = 10000
# Percentage of byteCapacity reserved for event headers
agent1.channels.ch1.byteCapacityBufferPercentage = 20
# Maximum total size in bytes of all event bodies in the channel
agent1.channels.ch1.byteCapacity = 800000
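Note that the question describes a file channel rather than a memory channel; the equivalent capacity settings for a file channel would look roughly like this (a sketch; the directory paths are illustrative, and a file channel buffers events on disk rather than on the heap):
# File channel equivalent (paths are illustrative)
agent1.channels = ch1
agent1.channels.ch1.type = file
agent1.channels.ch1.checkpointDir = /var/lib/flume/checkpoint
agent1.channels.ch1.dataDirs = /var/lib/flume/data
# Maximum number of events stored in the channel
agent1.channels.ch1.capacity = 1000000
# Maximum number of events per transaction
agent1.channels.ch1.transactionCapacity = 10000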
I don't know where the exact number 500 comes from; a wild guess would be that after 500 exceptions are thrown the Java heap is full and Flume stops responding. Another possibility is that the default configuration above happens to produce exactly 500. So try tweaking it and see whether the number changes or, better, whether the problem stops occurring altogether.
I'm using an Express app and I need to save whatever logs it produces. Any logging middleware will work (winston, simple-node-logger, etc.), but there are strict requirements: a log file should not exceed 50 MB, and when it reaches this size it should be zipped and stored as history. Only 20 log zips may exist at a time. Logging to a file and limiting its size is easy enough by just setting up the winston config, but how do I set up the size monitoring and zipping AND limit the number of history logs? All of this has to work simultaneously with the Express app running. Thanks!
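For what it's worth, a minimal sketch of how these requirements map onto winston with the winston-daily-rotate-file transport (assuming that package is acceptable; the filenames are illustrative): maxSize triggers rotation at 50 MB, zippedArchive gzips each rotated file, and maxFiles caps the history at 20 archives.
const winston = require('winston');
require('winston-daily-rotate-file'); // registers winston.transports.DailyRotateFile

// Rotate at 50 MB, gzip each rotated file, keep at most 20 archives
const transport = new winston.transports.DailyRotateFile({
  dirname: 'logs',
  filename: 'app-%DATE%.log',
  maxSize: '50m',
  maxFiles: 20,
  zippedArchive: true
});

const logger = winston.createLogger({ transports: [transport] });
logger.info('server started'); // log through `logger` everywhere in the app
Rotation, zipping, and pruning all happen inside the transport, so nothing extra has to run alongside the Express app.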
I have to run a load test for 32,000 users with a duration of 15 minutes, and I ran it in command-line mode with threads = 300, ramp-up = 100, loop count = 1. But after showing some data it freezes, so I can't get the full HTML report. I can't even run it for 50 users. How can I get rid of this? Please let me know.
From 0 to 2147483647 threads, depending on various factors, including but not limited to:
Hardware specifications of the machine where you run JMeter
Operating system limitations (if any) of the machine where you run JMeter
JMeter Configuration
The nature of your test (protocol(s) in use, request/response sizes, presence of pre/post processors, assertions, and listeners)
Application response time
Phase of the moon
etc.
There is no answer like "on my MacBook I can have about 3000 threads", as it varies from test to test: for GET requests returning a small amount of data the number will be higher; for POST requests uploading huge files and receiving huge responses it will be lower.
The approach is the following:
Make sure to follow JMeter Best Practices
Set up monitoring of the machine where you run JMeter (CPU, RAM, swap usage, etc.); if you don't have a better idea, you can go for the JMeter PerfMon Plugin
Start your test with 1 user and gradually increase the load, watching resource consumption as you go
When consumption of any monitored resource starts exceeding a reasonable threshold, e.g. 80% of maximum available capacity, stop your test and see how many users were online at that stage. That is how many users you can simulate from this particular machine for this particular test
For another machine or another test, repeat from the beginning
Most probably, for 32,000 users you will have to go for distributed testing (see the example below)
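For reference, a distributed run in non-GUI mode looks something like this (hostnames are illustrative; -R lists the remote JMeter servers, and -e -o generates the HTML dashboard when the run finishes):
# run the plan on three remote JMeter servers and build the HTML report
# (the ./report directory must not already exist)
jmeter -n -t test_plan.jmx -R host1,host2,host3 -l results.jtl -e -o ./report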
If your test "hangs" even for smaller amount of users (300 could be simulated even with default JMeter settings and maybe even in GUI mode):
take a look at the jmeter.log file
take a thread dump and see what the threads are doing (see below)
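A thread dump can be captured with the standard JDK tools, for example (the PID placeholder is to be filled in):
# find the PID of the JMeter JVM, then dump its threads to a file
jps -l
jstack -l <jmeter-pid> > jmeter-threads.txt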
We are using RabbitMQ (3.6.6) to send analyses (millions of them) to different analyzers. These are very quick, and we were planning to use the delayed-message plugin to schedule monitoring of the analyzed elements.
We were thinking about the rabbitmq-delayed-message-exchange plugin; we have already run some tests and need some clarification.
Currently:
We are scheduling millions of messages
Delays range from a few minutes to 24 hours
As previously said, these are tests, so we are using a machine with one core and 4 GB of RAM, which also has other apps running on it.
What happened with the high memory watermark set to 2.0 GB:
RabbitMQ eventually (after a day or so) starts consuming 100% of the single core and does not respond to the management interface or rabbitmqctl. This goes on for at least 18 hours (we always end up killing it, deleting the delayed-message mnesia file on disk, about 100-200 MB, and restarting).
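For reference, an absolute watermark like this is set in rabbitmq.config (the classic format used by 3.6.x); a rough sketch:
[
  {rabbit, [
    %% absolute limit instead of the default fraction of system RAM
    {vm_memory_high_watermark, {absolute, "2GiB"}}
  ]}
].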
What happened with the high memory watermark set to 3.6 GB:
RabbitMQ was killed by the kernel because of high memory usage (on 4 GB of hardware RAM) about a week after working like this.
The mnesia file for the delayed exchange is about 1.5 GB.
RabbitMQ cannot start anymore, giving the trace below (we assume that, because the process was terminated with a KILL, messages in the delay store somehow ended up corrupted):
{could_not_start,rabbit,
rabbitmq-server[12889]: {{case_clause,{timeout,['rabbit_delayed_messagerabbit@rabbitNode']}},
rabbitmq-server[12889]: [{rabbit_boot_steps,'-run_step/2-lc$^1/1-1-',1,
And right now we are asking ourselves: are we in over our heads using the delayed exchange plugin for this volume of information? If we are, then that is the end of the problem and we will rethink and restart; but if not, what would be an appropriate hardware and/or configuration setup?
The RabbitMQ delayed-message exchange plugin is not designed to store millions of messages.
This is also documented on the plugin page:
Current design of this plugin doesn't really fit scenarios with a high number of delayed messages (e.g. 100s of thousands or millions). See #72 for details.
Read also here: https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/issues/72
This plugin is often used as if RabbitMQ were a database. It is not.
Sessions on my ColdFusion server appear to be timing out every 20 minutes for one of my apps, even though I have high timeouts (on the order of many hours) set for both idletimeout and this.sessionTimeout in the CFC.
These timeouts occur regardless of whether I visit the pages during that 20-minute period. In other words, the sessions are not even idle for 20 minutes; it's just that 20 minutes after login the user becomes unauthenticated again: the value of #IsUserLoggedIn()# becomes NO and the value of #GetAuthUser()# becomes blank.
I'm wondering if anyone has run into this before and if there are any fixes.
Also, it's not clear from the documentation how ColdFusion determines that the user and login session are idle. It would be great to know where this session data is stored and, ideally, to peek at it and see what might be causing this strange behavior.
Do other applications on the same server have longer timeouts that are working?
If not, note that a maximum session timeout can be set in the ColdFusion Administrator, and application code cannot exceed it. This is likely the cause.
From "Configuring and using session variables" (CF9):
Specify a maximum session time-out. Application code cannot set a time-out greater than this value. The default value for this time-out is two days.
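As an illustration, even a minimal Application.cfc along these lines (names are illustrative) is still capped by the Administrator value:
component {
    this.name = "myApp";
    this.sessionManagement = true;
    // Requests 8 hours, but ColdFusion silently reduces it to the
    // Administrator's "Maximum Session Timeout" if that value is lower
    this.sessionTimeout = createTimeSpan(0, 8, 0, 0);
}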
Also, can you edit your question to provide some code? Show us your application configuration.
Also, is there a chance you have another application with the same name and a different timeout configuration that is causing a conflict? Honestly, this is just a ballpark guess, because I'm very careful with application names myself.