I found that KahaDB is installed in ActiveMQ by default. I found a db.data file, and when I open it (it looks like a config file), I see this:
#
#Fri Aug 02 11:55:47 ART 2013
fileType=org.apache.activemq.store.kahadb.disk.page.PageFile
pageSize=4096
freePages=1
cleanShutdown=false
metaDataTxId=7
fileTypeVersion=1
lastTxId=52967
#
#Fri Aug 02 11:55:47 ART 2013
fileType=org.apache.activemq.store.kahadb.disk.page.PageFile
pageSize=4096
freePages=1
cleanShutdown=false
metaDataTxId=7
fileTypeVersion=1
lastTxId=52967
It's repeated twice. Should I edit it with Notepad++ or another program? And if I add persistence attributes in there, should I add them twice too?
Is it possible to persist the message info by using KahaDB?
I would need something like:
TIMESTAMP, MESSAGE_ID, REPLY_TO
That kind of data only.
I tried using log4j and KahaDB, but it didn't actually log what I needed.
This is my log4j.properties file:
# Default log level
log4j.rootLogger=DEBUG, kahadb
# KahaDB configuration
log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=logs/data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.maxBackupIndex=5
log4j.appender.kahadb.append=true
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb
Thanks.
It's not really clear what your problem is here. By default the broker will use its internal KahaDB store to persist all messages sent to a queue or durable topic subscription, provided those messages are sent with the persistent delivery mode. There's no reason to edit any of the KahaDB files such as db.data or db.log, as doing so will corrupt your store. You can read about the store architecture and performance tuning the store here.
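For example, a minimal JMS producer sketch (broker URL, queue name and payload are placeholders) that sends a message with the persistent delivery mode, which is what makes the broker write it to the KahaDB store:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentSend {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and queue name
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("EXAMPLE.QUEUE");
        MessageProducer producer = session.createProducer(queue);
        // PERSISTENT is already the JMS default; it is what causes the broker
        // to write the message to its KahaDB store before acknowledging the send
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        TextMessage message = session.createTextMessage("payload");
        producer.send(message);
        connection.close();
    }
}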
There is a wealth of info out there on KahaDB that can be found with just a simple Google search.
We are using a cluster of ActiveMQ 5.11.1 nodes (coordinated by ZooKeeper). The nodes use ReplicatedLevelDB storage. The application is able to produce and consume messages, but some time ago we noticed a very weird issue.
It seems like ActiveMQ journal logs are deleted, but their file descriptors are still held open by the ActiveMQ Java process, so Linux cannot reclaim the space. In the end we have a disk space leak, which is bad.
[root@server dirty.index]# lsof | grep -o "/home/.*" | grep deleted | sort | uniq
/home/activemq/activemq-data/000000126ecb3f49.log (deleted)
/home/activemq/activemq-data/00000012750b4590.log (deleted)
[root@server activemq-data]# lsof | grep -o "/home/.*" | grep deleted | wc -l
280
That happens only on the master node. After a node restart, a new master is elected and all those files are removed, but the new master then develops the same issue.
We've enabled the TRACE log level for ActiveMQ, but no luck: nothing suspicious (or we're missing something). Queues aren't big, 5-6 messages at most. All messages are consumed quickly. There are no obvious ERROR messages, and APM also doesn't show anything suspicious.
ReplicatedLevelDB config:
<persistenceAdapter>
<replicatedLevelDB
directory="activemq-data"
replicas="5"
bind="tcp://0.0.0.0:61619"
zkAddress="xx.xxx.xx.30:2181,xx.xxx.xx.31:2181,xx.xxx.xx.32:2181,xx.xxx.xx.33:2181,xx.xxx.xx.34:2181"
zkPassword=""
zkSessionTimeout="3s"
zkPath="/xxx02"
sync="quorum_mem"
hostname="some.server"
/>
</persistenceAdapter>
No recent changes in ActiveMQ config.
We're stuck at the moment. What else could we check?
The LevelDB store in ActiveMQ has been deprecated for a couple of years now and has seen no community support or maintenance. You've most likely run into a latent bug in the implementation that will not get fixed, as LevelDB is expected to be removed completely in the 5.17.0 release. I'd suggest moving to the KahaDB store, or looking into ActiveMQ Artemis if you need replication and HA.
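For reference, switching the store is just a matter of using a different persistenceAdapter element in activemq.xml; a minimal sketch (the directory is a placeholder):

<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>

Note that KahaDB cannot read existing LevelDB data, so any messages still in the old store would need to be drained or migrated separately.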
I'm just doing some testing on my local machine and would like somewhere to inspect the messages that are published and persisted by RabbitMQ (deliveryMode = 2), or at least to see the time when a message was actually persisted. My first try was the RabbitMQ management UI; I went through all the options and the closest thing I found is the following:
Database directory: /usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost
There I can find many files with .rdq extensions and many log files, but I can't actually see anything in them.
You can't; RabbitMQ uses a custom database and it is not possible to browse it.
You can only browse the RabbitMQ definitions such as queues, users, exchanges, etc., but not the messages.
By default, the message index is inside:
/usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost/queues/HASHQUEUE
The only way is as suggested by @Johansson:
It's possible to manually inspect a message in the queue via the management interface. Click on the queue that has the message, and then "Get message". If you mark it as "requeue", RabbitMQ puts it back into the queue in the same order.
https://www.cloudamqp.com/blog/2015-05-27-part3-rabbitmq-for-beginners_the-management-interface.html
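If you want to do the same thing from code rather than the UI, here is a small sketch with the RabbitMQ Java client (the queue name and connection details are assumptions); basicGet followed by a nack with requeue roughly mirrors "Get message" with requeue in the management interface:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;
import java.nio.charset.StandardCharsets;

public class PeekMessage {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker with default credentials
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        // Fetch one message without auto-ack so it can be requeued afterwards
        GetResponse response = channel.basicGet("my-queue", false);
        if (response != null) {
            // The timestamp property is only present if the publisher set it
            System.out.println("Timestamp: " + response.getProps().getTimestamp());
            System.out.println("Body: " + new String(response.getBody(), StandardCharsets.UTF_8));
            // Requeue the message, like ticking "requeue" in the management UI
            channel.basicNack(response.getEnvelope().getDeliveryTag(), false, true);
        }
        channel.close();
        connection.close();
    }
}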
When I start up all the redis-server instances of the Redis cluster, all these servers continuously print logs like WSA_IO_PENDING and clusterWriteDone:
[9956] 03 Feb 18:17:25.044 # WSA_IO_PENDING writing to socket fd --------------------------------------------------------
[9956] 03 Feb 18:17:25.062 # clusterWriteDone written 2520 fd 15-------------------------------------------------------------
[9956] 03 Feb 18:17:25.545 # WSA_IO_PENDING writing to socket fd --------------------------------------------------------
[9956] 03 Feb 18:17:25.568 # WSA_IO_PENDING writing to socket fd --------------------------------------------------------
There is no way to specifically turn those "warnings" off in the 3.2.x port of Redis for Windows, as the logging statements use the highest LL_WARNING level. This issue has been reported in my fork of that unmaintained MSOpenTech repo (which I updated to Redis 4.0.2) and has been fixed by decreasing that level to LL_DEBUG. More details: https://github.com/tporadowski/redis/issues/14
This change will be included in the next release (4.0.2.3), or you can get the latest source code and build it yourself.
Current releases can be found here: https://github.com/tporadowski/redis/releases
An issue was opened in the official Redis repo 10 months ago about this problem. Unfortunately it seems to have been abandoned, and it hasn't been solved yet:
Redis cluster print "WSA_IO_PENDING writing to socket..." continuously, does it matter?
However, that issue may not be related to Redis itself, but to the Windows Sockets API, as pointed out by Cy Rossignol in the comments. It's the Winsock API that returns that status to the application, as seen in the documentation:
WSA_IO_PENDING (997)
Overlapped operations will complete later.
The application has initiated an overlapped operation that cannot be completed immediately. A completion indication will be given later when the operation has been completed. Note that this error is returned by the operating system, so the error number may change in future releases of Windows.
Maybe it didn't get much attention because it's not a bug, although it's indeed an inconvenience that floods the system logs. In that case, you may not get help there.
It seems like there's no fix for this at the moment. The Windows Redis fork is archived, and I don't know whether you could get any help there either.
Go to this location: C:\Program Files\Redis
Open the file redis.windows-service.conf in Notepad.
You will find a section like the one below:
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output.
logfile "Logs/redis_log.txt"
Here you can change the value of loglevel to suit your requirement. I think changing it to warning will solve this issue, because it will then log only very important/critical messages.
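For example, after the change suggested above the relevant lines would read as below; the Redis service (or redis-server process) then needs a restart for the new level to take effect:

loglevel warning
logfile "Logs/redis_log.txt"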
This question is very closely related to this and this. With the SFTP connector, streaming is on by default and cannot be turned off (version 3.5.2). If I have a flow like the one below:
1) Inbound SFTP connector with a large polling frequency (in hours for example)
2) File outbound to store the stream into a file in local storage
I would expect the SFTP inbound endpoint to delete the source file after the transfer is complete, but this does not happen.
Reading the documentation, I find the following:
Auto Delete (Applies to inbound SFTP endpoint only) . Click this box to delete the file after it has been read. Note: If an error occurs when Mule is writing to the outbound endpoint, the file is not deleted. Auto Delete only works if both the inbound and outbound endpoints use the SFTP transport.
Is there a workaround for this? Basically, how can I delete the source file once I have downloaded it from the SFTP server?
The above is mostly an issue with the documentation; SFTP auto-delete does work even if the outbound endpoint is not an SFTP connector. I have logged a JIRA about it here.
Update:
I have determined the root cause of the issue: when deployed on CloudHub, the SFTP connector returns an instance of java.io.ByteArrayInputStream; however, when the application is deployed via Anypoint Studio, the output is an instance of org.mule.transport.sftp.SftpInputStream.
To reproduce, make sure the file size is less than 2 MB.
This causes the file not to be deleted when deployed on CloudHub.
Logs from cloudhub
Local deployment logs
2015-04-28 15:37:50 DEBUG SftpConnectionFactory:118 - Successfully connected to: sftp://decision_1:#####XXXXXXX:22/To_DI_Local
2015-04-28 15:37:50 DEBUG SftpClient:121 - Attempting to cwd to: /To_DI_Local
2015-04-28 15:37:50 DEBUG SftpConnector:121 - Successfully changed working directory to: /To_DI_Local
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Routing file: ZCCR_EXTRACT_FINAL.CSV
2015-04-28 15:37:50 INFO LoggerMessageProcessor:193 - InputSFTPEndpoint org.mule.transport.sftp.SftpInputStream
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Routed file: ZCCR_EXTRACT_FINAL.CSV
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Polling. Routed all 1 files found at sftp://decision_1:#####XXXXXX:22/To_DI_Local
2015-04-28 15:37:50 INFO LoggerMessageProcessor:193 - InputSFTPEndpoint org.mule.transport.sftp.SftpInputStream
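The InputSFTPEndpoint lines in those logs come from a logger element roughly like the sketch below (the MEL expression simply prints the payload class; the message prefix is just a label):

<logger message="InputSFTPEndpoint #[payload.getClass().getName()]" level="INFO"/>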
My Flow looks like the below
Update 2:
However, if the file is a big one (I think anything above about 10 MB), then the return type is com.google.common.io.MultiInputStream and the file does get deleted.
Any ideas why CloudHub would behave like this?
I am newer to MuleSoft and I encountered the same issue. What I found was that Mule puts a lock on the file it is reading. In my particular scenario, I was reading a file and then uploading it to Fusion (cloud); Mule would not auto-delete the file, and when it polled the folder path again it would see the same file there and try to read it again. What we had to do to force the release of the file was add a Byte Array to String transformer. That seems to force Mule to read the file in its entirety and transform the payload, which releases the lock on the file.
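As a rough sketch of what that looks like in Mule 3 XML (the host, credentials, paths and property names are placeholders, and attribute names may vary slightly between versions), the workaround amounts to consuming the stream before the file endpoint:

<flow name="sftpToFileFlow">
    <!-- Placeholder connection details; pollingFrequency is in milliseconds -->
    <sftp:inbound-endpoint host="sftp.example.com" port="22" user="user" password="secret"
                           path="/To_DI_Local" autoDelete="true" pollingFrequency="3600000"/>
    <!-- Forces the whole stream to be read and converted, which lets the transport delete the remote file -->
    <byte-array-to-string-transformer/>
    <file:outbound-endpoint path="/local/output" outputPattern="#[message.inboundProperties.originalFilename]"/>
</flow>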
For asynchronous processing of a large number of files, it would be nice to store messages in persistent storage to relieve the JVM heap and avoid data loss in case of a system failure.
I configured a file-queue-store, but unfortunately I cannot see any .msg files in the .mule/queuestore/myqueuename folder.
I feed the flow with files from an smb:endpoint and send them to a CXF endpoint.
When I stop Mule ESB (version 3.2.0) properly during file processing, it writes a lot of .msg files to the queuestore. After a restart it processes them one by one.
But when I kill the JVM (to simulate a system failure, an OutOfMemoryError, etc.), there are no files in the queuestore, so all of the messages are lost.
My question: is it possible to force the queuestore to store the messages on disk and delete them only when they have been fully processed?
Please advise. Thanks in advance.
Mule 3.2.0 was affected by this issue
You should consider upgrading.
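For reference, on a newer 3.x version the kind of persistent VM queue described in the question is configured roughly like this (the connector name and limit are placeholders, and the exact queue-store elements depend on the Mule version):

<vm:connector name="persistentVmConnector">
    <vm:queue-profile maxOutstandingMessages="500">
        <!-- Queued messages are written to disk under .mule/queuestore instead of being kept on the heap -->
        <file-queue-store/>
    </vm:queue-profile>
</vm:connector>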