I'm investigating issues with Redis and a high number of clients (above the default limit of 10000). Although the CLIENT LIST command works fine, one can't do much with its output inside redis-cli. I would like to save it to a file, so as to run some metrics (sort by IP, time, etc.).
Unfortunately, it's not possible with redis-cli, since CLIENT LIST > ~/clients.txt throws a syntax error.
Is there a way to save client list for later use?
Try it from the terminal shell:
redis-cli CLIENT LIST > ~/clients.txt
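Each line of the saved file is a set of space-separated key=value fields, so once it is on disk you can run whatever metrics you like over it. Below is a minimal Python sketch, assuming the file path used above and the standard CLIENT LIST field names (addr, age, cmd); adjust the sort key to whatever you are measuring.
import os

# Parse ~/clients.txt (one client per line, space-separated key=value pairs)
clients = []
with open(os.path.expanduser('~/clients.txt')) as f:
    for line in f:
        fields = dict(item.split('=', 1) for item in line.split())
        if fields:
            clients.append(fields)

# Example metric: sort clients by connection age, oldest first
for c in sorted(clients, key=lambda c: int(c.get('age', 0)), reverse=True):
    print(c.get('addr'), c.get('age'), c.get('cmd'))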
I am in the process of migrating a very large multisite installation to newer OS platforms. Running ClearCase 9. In one particular migration stage all the VOBs appear to have migrated correctly, ct lsvob -s -host xxxx shows no VOBs remaining on the old server, but now I am getting packets stuck in the incoming bin on that old server. I assume it has to do with devs who still had views open before the migration, but the problem is that mt lspacket is complaining that it cannot find a VOB with a single UUID in the registry. Packets are piling up, and they are all complaining about the same UUID, so I assume they are all related to one VOB. ct lsvob -uuid xxxx says it cannot find a VOB with that UUID.
How would I go about correcting this?
Looking at multitool lspacket, check whether multitool lspacket -long /usr/tmp/packet1 (one of the packets listed by multitool lspacket) helps (a bit like the old CC7.0 multitool lspacket -l -dump).
If this is linked to dev views, check whether a cleartool rmview -force -vob \avob -uuid an_uuid is still possible, to make sure there is no view referencing the old VOB.
The packets are getting routed to the old server by the other sites. It has nothing to do with developer views.
@VonC's lspacket -long answer will give you the name of the sending replica... where you'll have to describe the target replica to see what it currently thinks the host is for the moved replica.
In the interim, you can copy/move the sync packets to the new server and they should import fine.
Assuming that you use the default jobs and don't use -out to change the default packet names, running multitool lspacket on the receiving host will show you names like "sh_o_sync_P50-rep_2022-11-14T160519-0500_17508". In this case, "P50-rep" is the name of the SENDING replica.
You will also see a line reading:
VOB family identifier is: 19fd6066.dbf111e1.9886.44:37:e6:60:fc:96
cleartool lsvob -family {above UUID} will identify the VOB whose sync packet this is.
* \bc-linuxtest \\this-is-the-vob-server-host\vobstore\bc-linuxtest.vbs public (replicated)
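If many packets are piling up, you can script that first step. The sketch below is only illustrative: it assumes the default packet naming and the "VOB family identifier is:" line shown above, and the packet directory is a placeholder you would replace with your incoming bin.
import re
import subprocess
from pathlib import Path

PACKET_DIR = Path('/usr/tmp')   # placeholder: point this at the incoming bin
UUID_RE = re.compile(r'VOB family identifier is:\s*(\S+)')

for packet in sorted(PACKET_DIR.glob('sh_o_sync_*')):
    # With default names like sh_o_sync_P50-rep_..., field 3 is the sending replica
    sending_replica = packet.name.split('_')[3]
    listing = subprocess.run(['multitool', 'lspacket', '-long', str(packet)],
                             capture_output=True, text=True).stdout
    match = UUID_RE.search(listing)
    if match:
        # Feed this UUID to: cleartool lsvob -family <uuid>
        print(packet.name, sending_replica, match.group(1))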
You can then combine that information to locate the sending site, since describing that replica would look something like this:
replica "P50-rep"
created 2018-04-10T08:50:15-04:00 by CC VOB Admin (vobadm2.ccusers@Bullwinkle)
"Test replica 3."
replica type: unfiltered
master replica: P50-rep@\bc-linuxtest
request for mastership: enabled
owner: PROD\vobadm
group: PROD\ccusers
host: "this-is-the-vob-server-host"
identities: preserved
permissions: preserved
Once you go there, you will be able to see what IT thinks the replica host is, and then we can make it know where the replica is now... By hook or by crook if need be. However, the "by crook" method would mean that you need to open a support case to get the tool and the steps to use it.
My guess is that the problem replica is self-mastering, and does not send updates to at least one "upstream" replica.
I'm trying to run a few simple tasks via Celery. The header of the worker file looks like this:
from celery import Celery, group
from time import sleep
celery = Celery('workerprocess', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')
After passing the jobs, I am trying to read the results like this.
jobresult = group(signatureList).apply_async()
while not jobresult.ready(): sleep(30)  #Line 6
The code runs perfectly on my desktop. The configuration is Python 3.6.7 and 4.15.0-20-generic #21-Ubuntu SMP.
When I try to run the same thing on my staging server (with the worker node running there too in the background), #Line 6 above throws the following error.
kombu.exceptions.DecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
From the log, it appears the task is sent to the Redis queue and is executed correctly, but the main process cannot perform any operation on the jobresult variable to retrieve the output of the calculations. I have checked by pinging Redis from redis-cli, and it echoes PONG. The server configuration (the parts I think are relevant) is Python 3.5.2 and 4.4.0-89-generic #112-Ubuntu.
Since the task already runs on my local desktop, I guess it is a matter of dependencies. But the error does not give any clue about which other libraries I should install, whether via pip or apt-get. I looked up kombu (I don't know what it does, but I guess it's something important) and it is already installed. So how do I resolve this?
This is the output from redis-cli, from a comment on the question. I am not sure what it means, though.
127.0.0.1:6379> lrange celery 0 0
(empty list or set)
127.0.0.1:6379>
If your workerprocess is supposed to return utf8-encoded responses then this likely is not a missing dependency, but either a different library version or something wrong with the celery workerprocess as set up on your server.
There is a known problem with celery returning error messages that are not compatible with utf-8 encoding, though the specs say they should be. There are also multiple documented bugs in older versions (fixed in newer versions) that used wrong or mismatched encodings, especially in handling json.
The unfortunate result is that you are seeing a report complaining that the first character of the result (0x80) is invalid, rather than seeing the actual error (or mis-encoded data) being returned.
To debug this, activate enough logging to see the actual data or error result being returned, and work from there.
Alternatively, you may be able to treat the inbound data as binary, rather than utf8, which would allow bytes to come through unscathed. They still won't be readable as utf8 or ascii characters, but at least you will receive them.
You can see a number of ways in which other folks have handled unexpectedly non-utf8 data here.
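If the root cause turns out to be a serializer mismatch between the two machines (a worker handing back pickled results while the caller expects JSON would be consistent with a payload starting with the 0x80 byte), one experiment is to pin the serializers explicitly in the app configuration on both sides. This is only a sketch reusing the app definition from the question; whether pickle should be allowed at all on your broker is an assumption you have to judge.
from celery import Celery

# Same app definition as in the question
celery = Celery('workerprocess',
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

# Pin serializers so desktop and server behave identically. Listing 'pickle'
# in accept_content lets a pickled reply through instead of failing with a
# utf-8 DecodeError; only enable it if you trust every producer on the broker.
celery.conf.update(
    task_serializer='json',
    result_serializer='json',
    accept_content=['json', 'pickle'],
)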
One possibility is that the default encoding for Python is different between your local machine and the server.
You can get the default encoding by running
python -c 'import sys; print(sys.getdefaultencoding())'
on both your local machine and the server.
If the two are not the same, the most general way to change the default encoding is the environment variable
export PYTHONIOENCODING=new_encoding
But it depends on the environment.
The link below explains more ways to change the encoding.
cf. Changing default encoding of Python?
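If you would rather check this from inside the calling process and the worker themselves (instead of from the shell), a tiny snippet like the one below can be dropped into both and compared; nothing in it is specific to Celery.
import locale
import sys

# Print the encodings most likely to differ between the two machines
print('default encoding :', sys.getdefaultencoding())
print('preferred locale :', locale.getpreferredencoding())
print('stdout encoding  :', sys.stdout.encoding)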
I'm using the ssh.net client to connect to an sftp server that identifies itself as maverick_20, which appears to be the closed-source offering from Sshtools. When I attempt to read bytes out of a file in stream mode, I get a general exception that bubbles up containing the string 'read from 13 for 32755 from 32772 not supported', which I believe is being returned to me from the server. That message is meaningless to me, but the server certainly allows me to seek() to different positions in the file without issue.
Googling the phrase suspiciously returns a list of ssh error codes on the WinSCP site, though that phrase does not occur in the page. As the source code for the NG product is not available, I can't investigate the issue that way.
Is the Maverick server broken in some way? I can't imagine what sort of conditions would allow seeks and complete file reads, but fail in this specific way.
I am trying to implement an online terminal UI with jsch as the backend.
I need to display the user info, i.e. the [username@Machine ~]$ prompt, in the UI.
Since the output stream simply sends the bytes, it is difficult to distinguish the user info from the real command output. Is there any way to do so?
In general, no.
If you have a shell channel, all you see is the output from the user's remote shell, including the prompt and the actual command output. You can try to parse that. In simple cases this will work, but in general it is impossible, as every command could output a prompt-like string.
The username should be known to you (it should be the same one you used for login); the server name is a bit trickier.
An idea worth exploring might be to set a special prompt delimited by character sequences which are unlikely to occur in "normal" command output – set the PROMPT variable in your shell.
You could circumvent that problem by not actually using a shell channel, but an individual exec channel for each command. But then you'll have to interpret commands like cd yourself, keep track of the current directory, and add a cd command before the actual command in each exec channel. You might want to have an sftp channel open in parallel to keep track of the directories (and list files, and so on).
I have this code which gives me all of the information I need regarding tasks, information etc. I have it all shelled into a VB program and I want to be able to run this from one computer and have it return the data from all computers on the domain.
I am lost as to what to add next.
Dim sCommand As String
'all processes here: ipconfig, java info, etc.
'java -version writes to stderr, so capture it with 2>
sCommand = "java.exe -version 2> C:\Info.txt && ipconfig >> C:\Info.txt"
Shell("cmd.exe /c " & sCommand)
I have a script that will list all users on the domain; can I implement that, or is there an easier way?
Edit: If I could search the entire domain for a specific file that would work too.
At the moment I just need all the data returned to a text file, I am not worried about it being sorted, or how long a process like this would take.
thanks a bunch
You could do one of two things.
1) You could use WMI both to get the network config off the remote machines and to execute a process on the remote machine.
Or
2) You could use PsExec to kick off a command on a remote machine and pipe that out. I personally wouldn't use Shell to execute a command, as it's pretty poor really. If I were going to kick off a process locally I'd use this, and use StdOut to grab the output from the shell, then parse it to give you something you can work with instead of piping the output to a file locally and then reading it later.
EDIT
So you want to do all this from one central location? If you don't want to use PsExec, you'll have to use WMI to create a process on the remote machine to run java.exe, but you can't redirect the output directly; you'll have to pipe it to a file on the remote machine and read the file back in another step.