jvm 1 | WARN | Store limit is 102400 mb, whilst the data directory: C:\apache-activemq-5.8.0\bin\win32\..\..\data\kahadb only has 44093 mb of usable space
jvm 1 | ERROR | Temporary Store limit is 51200 mb, whilst the temporary data directory: C:\apache-activemq-5.8.0\bin\win32\..\..\data\localhost\tmp_storage only has 44093 mb of usable space
It's telling you that your configured limits don't fit in the amount of disk space available at the store location. In older versions this can lead to broker failure, because the limits are not lowered automatically to match the disk space; in the latest release the broker will lower the limits itself. When you see this, you should either rethink your store location or adjust your broker config.
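For the broker-config route, the limits in question are the storeUsage and tempUsage values inside the systemUsage element of conf/activemq.xml; the 102400 mb and 51200 mb in the log are the 5.8.0 defaults (100 gb and 50 gb). A minimal sketch that lowers both below the 44093 mb reported as usable (the exact limits here are only illustrative, not recommendations):

<!-- inside the <broker> element of conf/activemq.xml; example values only -->
<systemUsage>
  <systemUsage>
    <storeUsage>
      <storeUsage limit="40 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="20 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>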
We are using a cluster of ActiveMQ 5.11.1 nodes (guarded by ZooKeeper). The nodes use ReplicatedLevelDB storage. An application is able to produce and consume messages, but some time ago we noticed a very weird issue.
It seems that ActiveMQ log files are deleted, but their file descriptors are still held open by the ActiveMQ Java process, so Linux cannot reclaim the space. The result is a disk space leak, which is bad.
[root@server dirty.index]# lsof | grep -o "/home/.*" | grep deleted | sort | uniq
/home/activemq/activemq-data/000000126ecb3f49.log (deleted)
/home/activemq/activemq-data/00000012750b4590.log (deleted)
[root@server activemq-data]# lsof | grep -o "/home/.*" | grep deleted | wc -l
280
That happens only on the master node. After node restart, a new master is elected and all those files are removed. The new master has the same issue.
We've enabled the TRACE log level for ActiveMQ - no luck, nothing suspicious (or perhaps we're missing something). The queues aren't big, 5-6 messages at most, and all messages are consumed quickly. There are no obvious ERROR messages, and APM doesn't show anything suspicious either.
ReplicatedLevelDB config:
<persistenceAdapter>
  <replicatedLevelDB
    directory="activemq-data"
    replicas="5"
    bind="tcp://0.0.0.0:61619"
    zkAddress="xx.xxx.xx.30:2181,xx.xxx.xx.31:2181,xx.xxx.xx.32:2181,xx.xxx.xx.33:2181,xx.xxx.xx.34:2181"
    zkPassword=""
    zkSessionTimeout="3s"
    zkPath="/xxx02"
    sync="quorum_mem"
    hostname="some.server"
  />
</persistenceAdapter>
No recent changes in ActiveMQ config.
We're stuck at the moment. What else could we check?
The LevelDB store in ActiveMQ has been deprecated for a couple of years now and has seen no community support or maintenance. You have likely run into a latent bug in the implementation which will not get fixed, as LevelDB will most likely be removed completely in the 5.17.0 release. I'd suggest moving to the KahaDB store, or looking into ActiveMQ Artemis if you need replication and HA.
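For reference, a KahaDB equivalent of the persistenceAdapter above would look roughly like the sketch below (directory name taken from the config in the question; KahaDB itself has no built-in replication, which is why the answer points to Artemis for HA):

<persistenceAdapter>
  <kahaDB directory="activemq-data"/>
</persistenceAdapter>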
Screenshot of my memory status
Hi, I'm getting an error when I try to run the TPC-DS benchmark query:
Memory Limit Exceeded by fragment: 9944e21b4d6634c0:1
HDFS_SCAN_NODE (id=2) could not allocate 1.95 KB without exceeding limit.
Process: memory limit exceeded. Limit=256.00 MB Total=286.62 MB Peak=380.11 MB
My computer has 10 GB of RAM; however, Impala seems to be allocated only 256 MB.
I have tried to increase the memory limit at startup using the mem_limit option, but it doesn't do the trick.
I was able to solve my problem via Cloudera Manager.
Go to Cloudera Manager > Services > Impala > Configuration.
Under Configuration, search for "Memory" in the search bar. You will find the option to increase the memory of the Impala daemon, which can be set appropriately.
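Outside of Cloudera Manager, the same knob is the impalad mem_limit setting the question mentions, and the limit can also be raised per session from impala-shell. A hedged sketch (the 2gb value is just an example; check the exact flag syntax against your Impala version):

# Per-session override from impala-shell, then re-run the TPC-DS query
impala-shell -q "SET MEM_LIMIT=2gb; <your query here>"

# Daemon-wide: restart impalad with a larger limit (the same setting
# Cloudera Manager exposes as the Impala Daemon Memory Limit)
impalad --mem_limit=2gb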
We have a test server which hosts lots of test applications. When there are lots of processes (or threads) running, we find that a new process or thread cannot be created:
for a C program: "cannot fork, resource unavailable"
for a Java program: it throws the exception "OutOfMemory, unable to create native thread"
I think it is due to a hard limit on the maximum number of processes. I tried to set ulimit -u 255085; ulimit shows the following:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 90000
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 255085
virtual memory (kbytes, -v) unlimited
but it doesn't work. I tried to run many processes at the same time with different users, and they all stop with the same error at the same time. Therefore, I think there is a "limit" on the whole system, regardless of which users are logged in.
Your system looks to be out of virtual memory. In that case, there is no point in raising the number of processes.
Increase the swap area size to allow more processes to run.
Make sure you have enough RAM to run all these processes, otherwise performance will suffer.
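A minimal sketch of that suggestion, assuming a Linux host with root access (the 4 GB size and the /swapfile path are arbitrary examples):

# Check current virtual memory; the Swap line shows how much is left
free -m

# Create and enable a 4 GB swap file
fallocate -l 4G /swapfile      # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab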
I have a 64-bit Windows 7 machine with a 32-bit JVM. I have used the following calls to check whether my JVM is 32-bit or 64-bit:
System.getProperty("sun.arch.data.model");
System.getProperty("os.arch");
The application I'm working on requires more heap space than the default 256 MB, so I have set initialHeapSize=256 MB and maximumHeapSize=1490 MB in my WebSphere Application Server 8.0. However, after setting these properties in the admin console, I'm not able to start the application server; it says "Error occured during start-up". If I set maximumHeapSize=1230 MB, the application server starts, but I have a requirement where the heap size needs to be increased to 1900 MB. I'm completely out of options. Please help!
Thanks
Can you tell us what error you are seeing in the WAS logs, probably in the native_stderr.log or native_stdout.log file, when the value is set to 1490 MB?
Are you specifying those values under the "Application servers > server_name > Process definition > Java Virtual Machine" section?
-vt
Note: These opinions are my own
I am using the delayed_jobs 3.0 gem for email notifications in my Rails 3 app. It is deployed on Linode using nginx + Capistrano. My Linode configuration is 512 MB of RAM and 24 GB of storage, and two instances are running on it.
The delayed_jobs worker of the second instance is the one giving the problem: after some time it shuts down and needs a manual restart. There are no errors in production.log or delayed_jobs.log. When I issue the command "free -m", it shows:
                    total       used       free     shared    buffers     cached
Mem:                  496        415         80          0          5         45
-/+ buffers/cache:                364        131
Swap:                 255        130        125
I am not able to find out the reason it is going down; please suggest possible solutions.