I know that activemq-admin has an option to purge a queue, so I can easily purge a DLQ if I want to. But what I would like to do is dump the contents of a DLQ to a file before purging it (or manually remove each item and write it to a log). Is it possible to do this with activemq-admin, or is there another similar tool that's readily available? I've done a bit of searching but haven't come across anything, so I figured I'd ask here before trying to implement it on my own.
I would use the tool "A" for that task. It can dump the content of a queue to one file per message.
Example usage to read a queue and dump it to one file per message (dump.txt, dump-1.txt, ..., dump-n.txt):
a -b tcp://example.org:61616 -g -c 1000 -o dump.txt ActiveMQ.DLQ
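Assuming the activemq-admin purge option mentioned in the question, a dump-then-purge pass could look like this sketch (broker URL and queue name as in the example above):
a -b tcp://example.org:61616 -g -c 1000 -o dump.txt ActiveMQ.DLQ
activemq-admin purge ActiveMQ.DLQ
(Whether -g consumes or merely browses the messages depends on the tool's semantics, so double-check what is left on the queue before purging.)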
Disclaimer: I'm the author of this tool.
I'm using redis to store the userId as a key and the socketId as the value. What's more important is that the userId doesn't change, but the socketId constantly changes. So I want to edit the socketId value inside redis, but I'm not sure what node_redis command to use. I'm currently just editing by using .set(userId, mostRecentSocketId).
In addition, I haven't found good node_redis API documentation anywhere with a complete list of commands. I briefly looked at the redis-commands package, but it still doesn't seem to have a complete list.
Any help is appreciated; thanks in advance :)
The full list of Redis commands can be found at https://redis.io/commands. After finding the proper command, it shouldn't be hard to find out how it is proxied in the binding ("API") you use.
Update: to make it clear: you have Redis Server, whose commands are listed in the doc I linked. Then you have redis-commands, a library for working with Redis (I called it a "binding"). My point was that redis-commands may not include all the commands that redis-server can handle, and the names of some commands can differ a bit. Other bindings can offer slightly different sets of commands. So it's better to examine the list of commands that Redis Server handles, and then select a binding that allows calling the command you need (I'd guess all the bindings have a set method).
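To make the overwrite behaviour concrete, here is a minimal redis-cli sketch (the key and values are made up; node_redis's .set() issues this same SET command):
redis-cli SET user:1234 socket-abc
redis-cli SET user:1234 socket-xyz
redis-cli GET user:1234
The GET returns "socket-xyz": SET simply replaces whatever value is stored at the key, so your current .set(userId, mostRecentSocketId) approach is the idiomatic one.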
Given a snapshot of an existing redis database in a dump.rdb (or in .json format) file, I want to restore this data in my own machine to run some tests on it.
Any pointers on how to do this would be greatly appreciated.
I have resorted to trying to parse the data in the dump.rdb and then save it in a redis DB manually. I feel like there is/should be a cleaner way.
If you want to restore the entire file, simply copy it to the right directory specified in redis.conf and restart redis server. But if you want to load a subset of keys/databases, you'd have to parse the dump file.
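For the full-restore case, that amounts to something like this sketch (the data directory and service name are assumptions; check the dir and dbfilename settings in your redis.conf):
sudo systemctl stop redis
sudo cp dump.rdb /var/lib/redis/dump.rdb
sudo systemctl start redis
Stopping the server first matters, because Redis rewrites the dump file on shutdown and would otherwise overwrite your copy.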
So: I continued doing it the "hacky" way, and found that the parser code found here:
https://github.com/sripathikrishnan/redis-rdb-tools
was a great help.
Using the parser sample code I could:
1) set up a Redis client
2) use the parser to parse the data
3) use the client to "set" the parsed data into a new Redis database
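For anyone else doing this: redis-rdb-tools can also skip the hand-written client entirely. As a sketch, assuming its rdb command is installed and a Redis server is listening locally:
rdb --command protocol dump.rdb | redis-cli --pipe
This converts the dump into the Redis wire protocol and pipes it straight into the running instance.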
The rdd tool can also do that.
It works independently of .rdb files and can dump/restore running Redis instances.
It can apply merge, split, rename, search, filter, insert, and delete operations on dumps and/or live Redis instances.
How can we implement a master/slave configuration with RabbitMQ server?
I have read in many places, and have experienced it myself, that
"RabbitMQ nodes in a cluster can't really share the same files, except for the cookie file. The startup script itself makes sure that it creates folders and file names prefixed with "$NODE_ID$" while starting the broker, so that all the files for that node are created inside a single folder. It basically creates two main folders and does the following:
a. db: creates a folder named "$NODE_ID$"-mnesia and creates all db files inside it.
b. log: creates files with names prefixed with "$NODE_ID$".
Even if we tweak the script so that both nodes point to the same mnesia folder, the second instance of the broker will fail to start because of a Mnesia locking issue, with the following error:
{"init terminating in do_boot",
{{nocatch,{error,{cannot_start_application,mnesia,{killed,{mnesia_sup,start,[normal,[]]}}}}},[{init,start_it,1},{init,start_em,1}]}}
Crash dump was written to: erl_crash.dump init terminating in do_boot ()".
All I wanted to know is: in a situation where there are 2 nodes, 'master' and 'slave', in a cluster and the master is down for some time, how can the slave come into the picture during that time to receive and send messages on behalf of the master, since sharing the database is not possible?
Take a look at the guidelines for building a highly available cluster with DRBD and Pacemaker: http://www.rabbitmq.com/pacemaker.html
However, that's a bit difficult to set up, so you might prefer to wait a few more weeks, as the next major release will include built-in support for redundant queues in clusters. See more about that in the attachment here:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2011-June/013304.html
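(For reference, that redundant-queue support did eventually ship as mirrored queues. As a sketch of the policy-based configuration used in later RabbitMQ releases, the following mirrors every queue across all nodes, with "^" matching all queue names:
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Treat this as illustrative; the exact syntax depends on the release you end up running.)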
I have a .bat file, shown below, in which I want to redirect all the output produced in my IDE to a text file.
D:\WindRiver\wrenv.exe -p vxworks653-2.2.3 run
D:\WindRiver\wrenv.exe -p vxworks653-2.2.3>C:\ThreePartition\output.txt
PAUSE
I am only getting partial output, i.e. I am unable to capture the errors thrown during the compilation or build process.
Is this correct, or can anyone suggest another way?
Thanks a lot
Maddy
You can try this:
D:\WindRiver\wrenv.exe -p vxworks653-2.2.3 > C:\ThreePartition\output.txt 2>&1
You can find a good explanation here. Basically you need to redirect both stdout AND stderr to your file.
Best regards.
Your batch is redirecting all messages from wrenv.exe that are sent to the standard output.
I never used WindRiver, but usually IDEs manage the console internally and don't log any messages to the standard output/error streams.
It may be possible to redirect the output of the IDE's console, though. If it is, try setting it to the standard output.
I think you want to combine both those lines into one:
D:\WindRiver\wrenv.exe -p vxworks653-2.2.3 run >C:\ThreePartition\output.txt
OK, looking at your posts here, here and here, it seems you want to log the compilation process. The command for that will be something like (all on one line):
make ThreePartition.mak >C:\ThreePartition\output.txt
Assuming there's a file called ThreePartition.mak.
The command you've been using so far is designed simply to open an interface where you can type commands, which is why you get no output. If you want to log a simulation or a kernel build, there is a file called vxworks_cli_tools_users_guide_6.6.pdf which describes the command-line interface, including vxprj, in full detail.
Also, are you really using a nant script to call a .vbs to call a .bat to call wrenv.exe? I'm sure there's a simpler way to do that.
Is there a way to force a Samba process to close a given file without killing it?
Samba opens a process for each client connection, and sometimes I see it hold open files far longer than needed. Usually I just kill the process, and the (Windows) client will reopen the file the next time it accesses the share; but sometimes the process is actively reading another file for a long time, and I'd like to just 'kill' one file, not the whole connection.
edit: I've tried 'net rpc file close <fileid>', but it doesn't seem to work. Does anybody know why?
edit: this is the best mention I've found of something similar. It seems to be a problem in the Win32 client, something that Microsoft servers have a workaround for, but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why. I'm accepting LuckyLindy's answer, even though it didn't solve the problem, because it's the only useful procedure in this case.
This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):
See which computer is using the file (e.g. lsof | grep -i <file_name>)
Try to open that file from the offending computer, or see if a process is hiding in task manager that we can close
If no luck, have the user exit any important network programs
Kill the user's Samba process from Linux (e.g. kill -9 <pid>)
I wish there was a better way!
I am creating a new answer, since my first answer really just contained more questions and was not a whole lot of help.
After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check out the Samba Bug Report website and create a new bug; this is the simplest way to get someone to suggest ideas on how to fix it, and to have developers look at the issue. LuckyLindy left a comment on my previous answer saying that this is the way it has been for 5 years now. Well, the project is open source, and the best way to fix something that is wrong is by reporting it and/or providing patches.
I have also found one mailing list entry: Samba Open files. They suggest adding posix locking = no to the configuration file. As long as you don't also have the files handed out over NFS, not locking the file should be okay, that is, if the file is being held open because it is locked.
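For reference, that setting goes in smb.conf (a sketch; merge it into your existing global section and reload Samba afterwards):
[global]
    posix locking = no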
If you wanted to, you could write a program that uses ptrace to attach to the process and then unlock and close all of its files. However, be aware that this might leave Samba in an unknown state, which can be more dangerous.
The workaround I have already mentioned is to periodically restart Samba. I know it is not a solution, but it might work temporarily.
This is probably answered here: How to close a file descriptor from another process in unix systems
At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file winds up not being looked at until the file you want to close is done being read.
Unless there is an explicit option in Samba, it is impossible to externally close an open file descriptor using standard Unix interfaces.
Generally speaking, you can't meddle with a process's file descriptors from the outside. As root you can of course do it, as seen in that Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article - I wouldn't recommend doing that on a production system though...
The better question in this case would be why? Why do you want to close a file early? What purpose does it ultimately have to close the file? What are you attempting to accomplish?
Samba provides commands for viewing open files and closing them.
To list all open files:
net rpc file -U ADadmin%password
Replace ADadmin and password with the credentials of a Windows AD domain admin. This gives you a file id, the username of whoever has it open, the lock status, and the filename. You'll frequently want to filter the results by piping them through grep.
Once you've found a file you want to close, copy its file id number and use this command:
net rpc file close fileid -U ADadmin%password
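Put together, a typical session might look like this (the filename and file id below are made up):
net rpc file -U ADadmin%password | grep -i budget.xlsx
net rpc file close 1234 -U ADadmin%password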
I needed to accomplish something like this, so that I could easily unmount devices I happened to be sharing. I wrote this quick bash script:
#!/bin/bash
PIDS_TO_CLOSE=$(smbstatus -L | tail -n +4 | grep "$1" | cut -d' ' -f1 | sort -u | sed '/^$/d')
for PID in $PIDS_TO_CLOSE; do
kill $PID
done
It takes a single argument, the path to close:
smbclose /media/drive
Any path that matches that argument (by grep) is closed, so you should be pretty specific with it. (Only files opened through Samba are affected.) Obviously, you need root to close files opened by other users, but it works fine for files you have open. Note that, as with any other force-closing of a file, data corruption can occur. As long as the files are inactive, though, it should be fine.
It's pretty ugly, but for my use-case (closing whole mount points) it works well enough.