On my machine I've configured 2 instances of the ProFTPd daemon with:
2 different PidFiles
2 different listening ports (21 & 2100)
2 different lists of allowed users/paths/permissions/...
Everything is working as expected, but the problem is that I can't tell them apart in the process list, as both are shown with the same name:
> ps -ef | grep ftp
nobody 22480 1 0 09:31 ? 00:00:00 proftpd: (accepting connections)
nobody 24545 1 0 09:41 ? 00:00:00 proftpd: (accepting connections)
Is there a way to distinguish them with 2 different names in order to be able to kill and restart only one of them and not both?
You might combine the ps output with lsof, e.g. lsof -p <proftpd-pid>, and use lsof's output to see the listening port/address, and so distinguish the processes that way.
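As a sketch of how that lsof output can be turned into a PID-to-port mapping (the sample output below is hypothetical; on the live box, generate the real thing with `lsof -nP -iTCP -sTCP:LISTEN -c proftpd`):

```shell
# Hypothetical sample of what lsof prints for the two daemons; on a live
# system replace this variable with the output of:
#   lsof -nP -iTCP -sTCP:LISTEN -c proftpd
lsof_output='COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
proftpd 22480 nobody    1u  IPv4  12345      0t0  TCP *:21 (LISTEN)
proftpd 24545 nobody    1u  IPv4  12346      0t0  TCP *:2100 (LISTEN)'

# Print "PID port" pairs; field 9 is NAME, e.g. "*:2100"
echo "$lsof_output" | awk 'NR > 1 { split($9, a, ":"); print $2, a[2] }'
```

With the PID-to-port mapping in hand (or simply with the two distinct PidFiles from your configs), you can `kill` just the instance on port 2100 and leave the other running.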
I am trying to connect my RedisInsight v2 client to a cluster of Redis instances.
When a Redis instance hasn't joined the cluster yet, RedisInsight is able to make a connection.
After the cluster is created, however, new connections from the GUI just fail.
I have 3 shards with 1 replica each:
redis-cli -h 10.9.9.4 -p 7001 --cluster create 10.9.9.4:7001 10.9.9.5:7002 10.9.9.6:7003 10.9.9.4:7004 10.9.9.5:7005 10.9.9.6:7006 --cluster-replicas 1 -a Password
The cluster gets successfully created with the right shards and everything.
I can even verify using the CLUSTER NODES command
root ➜ ~ $ redis-cli -h 10.9.9.4 -p 7004 -a Password
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.9.9.4:7004> CLUSTER NODES
5b77b776f0ed08b4f34b0fe3e48b609e4bd8400e 10.9.9.6:7003@17003 master - 0 1662318446553 3 connected 10923-16383
a42f44163b046273ca02b1fc99ed93cf6188f65e 10.9.9.5:7002@17002 master - 0 1662318446755 2 connected 5461-10922
d2b21a37b62283a6cfbd5fb436df505ddc31aea8 10.1.1.10:7001@17001 master - 0 1662318445549 1 connected 0-5460
2cd5783411ceea96b4006b596942cc49484884ab 10.9.9.5:7005@17005 slave d2b21a37b62283a6cfbd5fb436df505ddc31aea8 0 1662318445750 1 connected
61541ad0455539335f27d5a90a5a8e504b3dea5f 10.1.1.11:7004@17004 myself,slave 5b77b776f0ed08b4f34b0fe3e48b609e4bd8400e 0 1662318445000 3 connected
c00d264a625998e89becb9334a1f4ea9d2057a0d 10.9.9.6:7006@17006 slave a42f44163b046273ca02b1fc99ed93cf6188f65e 0 1662318445550 2 connected
10.9.9.4:7004>
However, when trying to connect to any of these in the UI I get the following errors:
9/4/2022, 12:03:31 PM | ERROR | TimeoutInterceptor | Request Timeout. GET /api/instance/9e253e74-0091-44b8-bf8c-29ff0f4f0275/connect | {"stack":[{}]}
9/4/2022, 12:03:41 PM | ERROR | TimeoutInterceptor | Request Timeout. GET /api/instance/9e253e74-0091-44b8-bf8c-29ff0f4f0275/connect | {"stack":[{}]}
OR
9/4/2022, 12:16:17 PM | ERROR | KeysBusinessService | Failed to get keys with details info. Connection is closed.. | {"stack":[{}]}
9/4/2022, 12:16:18 PM | ERROR | ExceptionsHandler | Connection is closed. | {"stack":[{}]}
9/4/2022, 12:16:23 PM | ERROR | ExceptionsHandler | Connection is closed. | {"stack":[{}]}
This is the redis.conf that I use for 10.9.9.5:
port 7002
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisgraph.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/rejson.so
loadmodule /opt/redis-stack/lib/redisbloom.so
cluster-enabled yes
cluster-config-file cluster-node-2.conf
cluster-node-timeout 5000
dbfilename dump-2.rdb
maxmemory 1862mb
maxmemory-policy allkeys-lru
requirepass Password
masterauth Password
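One thing that stands out in the CLUSTER NODES output above: two nodes advertise addresses on 10.1.1.0/24 (10.1.1.10:7001 and 10.1.1.11:7004) rather than 10.9.9.0/24. Cluster-aware clients follow the addresses the nodes advertise, so if those are not reachable from the machine running RedisInsight, connections can time out. On multi-homed hosts you can pin the advertised address with the cluster-announce directives; a hedged sketch for the 10.9.9.5 node (the values are assumptions, adjust per node):

```
# Possible redis.conf additions for the node on 10.9.9.5 (hypothetical):
cluster-announce-ip 10.9.9.5
cluster-announce-port 7002
cluster-announce-bus-port 17002
```

Each node would get its own reachable IP and ports; after changing this, the cluster config files may need to be regenerated.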
I've done a bunch of googling but I'm not able to determine why this is failing. Any help is appreciated!
RedisInsight version: 2.8.0
Running on: Windows 11
Cluster is running on remote machines part of my local network i.e.
10.9.9.0/24
Please specify additional information:
what is your OS
what is the version of RedisInsight? (2.8.0?)
where is your cluster running? (is it local? k8s? any SSH tunnels?)
Can you try and see if you are able to connect using this debug build: https://drive.google.com/file/d/1od2uClDKb0649ixkgyRwXfqj8QLr0GXw/view?usp=sharing
Also, please check and share the logs if it is not working.
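Before digging into RedisInsight itself, it may also be worth verifying that the machine running the GUI can reach every node address (a sketch; the redis-cli line is commented out because it needs the live cluster, and every node should answer PONG):

```shell
# Node addresses taken from the --cluster create command in the question
nodes='10.9.9.4:7001 10.9.9.5:7002 10.9.9.6:7003 10.9.9.4:7004 10.9.9.5:7005 10.9.9.6:7006'
for addr in $nodes; do
  host=${addr%:*}   # part before the colon
  port=${addr#*:}   # part after the colon
  echo "checking $host port $port"
  # redis-cli -h "$host" -p "$port" -a Password --no-auth-warning ping
done
```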
This is a snapshot of my /etc/hosts file.
karpathy is the master & client is the slave.
I have successfully:
set up passwordless SSH
mounted the NFS share: sudo mount -t nfs karpathy:/home/mpiuser/cloud ~/cloud
I can log in to my client simply with ssh client
I have followed this blog
http://mpitutorial.com/tutorials/running-an-mpi-cluster-within-a-lan/
mpirun -np 5 -hosts karpathy ./cpi produces output, but
mpirun -np 5 -hosts client ./cpi
gets this error:
[mpiexec#karpathy] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec#karpathy] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:179): error waiting for event
[mpiexec#karpathy] main (./ui/mpich/mpiexec.c:397): process manager error waiting for completion
I hope you have already found the solution; in case you haven't, I would suggest a couple of things.
1. Disable the firewall on both nodes:
sudo ufw disable
2. Create a file named machinefile (or whatever you like) storing the number of CPUs in each node along with the hostnames.
my machinefile contains:
master:8
slave:4
master and slave are the hostnames while 8 and 4 are the number of CPUs on each node.
To compile, use
mpicc -o filename filename.c
(use mpic++ instead for C++ sources). To run, pass the machinefile as an argument:
mpirun -np 12 -f machinefile ./filename
12 is the number of processes. Since both nodes have 12 CPUs combined, it's best to divide the work across 12 processes.
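Condensing the steps above into one transcript (the cluster-specific lines are commented out since they need the actual nodes; hostnames and CPU counts match the machinefile shown):

```shell
# Build the machinefile programmatically and verify its contents
printf 'master:8\nslave:4\n' > machinefile
cat machinefile

# On the real cluster (commented out here):
# sudo ufw disable                     # run on both nodes
# mpicc -o cpi cpi.c                   # compile with the MPI wrapper
# mpirun -np 12 -f machinefile ./cpi   # 8 + 4 = 12 ranks across both nodes
```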
Inside the Perl script I'm running the following (note: the sigil is @ for an array):
my @lines = `ps -ef`;
And currently, when I output the array into my browser I can only see the following processes:
UID PID PPID C STIME TTY TIME CMD
root 1928 1 0 Feb18 ? 00:00:00 /usr/sbin/abrtd
apache 9198 9121 1 17:23 ? 00:00:00 /usr/bin/perl /var/www/cgi-bin/tbchecker.pl
apache 9199 9198 0 17:23 ? 00:00:00 ps -ef
I think the issue is that the apache user needs to have access to see all the processes running on the server, but am not sure.
Could anyone help point me in the right direction?
(OS is Linux centos 6.4)
Your assumption that the apache user "needs to have access to see all the processes running on the server" is false (on a well-configured Apache2 server).
It is recommended to run the Apache httpd server processes under a special account with restricted privileges.
If you really want to change this and you know the implications, you can configure Apache2 to run under the root account. The Apache documentation at http://httpd.apache.org/docs/2.4/ is your friend.
There is still the possibility of running the Perl script as root with the SUID bit set. Note that I have not tested whether it will give you the desired output; normal shell scripts can't be run in SUID mode (under Linux, at least), and the shell drops the additional privileges before executing a command.
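If it turns out the process listing really is restricted for the apache user, a narrower alternative to SUID is a sudoers rule that lets apache run exactly one command as root (a sketch; the file name is hypothetical, and it should be edited with visudo):

```
# /etc/sudoers.d/tbchecker  (hypothetical file; edit with visudo)
apache ALL=(root) NOPASSWD: /bin/ps -ef
```

The Perl script would then call `sudo /bin/ps -ef` instead of `ps -ef`, gaining root only for that single command rather than for the whole script.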
There are several computers connecting to one machine via telnet. I want to find out which systems/IPs are connected to the machine via telnet. Is it possible to find that out?
The netstat program will tell you what connections are active. You just need to grep the output for those established and connected to the telnet daemon.
sudo netstat --inet -p | grep "/telnetd" | grep ESTABLISHED
(or something very close to that -- I don't have a running telnetd service on my machine to verify the command -- you may have to look at the output of netstat directly and adjust the grep strings)
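A sketch of extracting just the client IPs from that netstat output (the sample lines below are hypothetical; on the live machine, pipe the real command through the same awk):

```shell
# Hypothetical sample of the matched netstat lines; on a live system use:
#   sudo netstat --inet -np | grep '/telnetd' | grep ESTABLISHED
netstat_output='tcp   0   0 192.168.1.10:23   192.168.1.55:51234   ESTABLISHED 1234/telnetd
tcp   0   0 192.168.1.10:23   192.168.1.77:40022   ESTABLISHED 1250/telnetd'

# Field 5 is the foreign (client) address; strip the :port suffix
echo "$netstat_output" | awk '{ split($5, a, ":"); print a[1] }' | sort -u
```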
When starting multiple thin servers running Rails 3, is there any way to tell them
apart in the code?
For example, if I have a configuration like this:
port: 4000
pid: tmp/pids/thin.pid
servers: 2
Is there a way to tell whether the code is running on the process on port 4000 or 4001?
You can start the 2 servers separately:
thin start -p 4000
thin start -p 4001
:D
Supposing that the code you posted is the contents of config/thin-config.yml, to start the server with those parameters just do:
thin start -C config/thin-config.yml
YAML files are the best way to configure the server, but if you do not want to use them you can do this:
thin start -P tmp/pids/thin.pid -p 4000 -s 2
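For what it's worth, when thin runs a cluster (`servers: 2` or `-s 2`) it derives one pid file per instance from the configured pid path, which gives you a per-port handle on each process. A hedged sketch, assuming the config above (check `thin --help` for the exact flags on your version):

```shell
# With servers: 2, thin writes one pid file per instance, typically:
#   tmp/pids/thin.4000.pid
#   tmp/pids/thin.4001.pid
# so each instance can be managed individually (commands commented out
# because they need the thin gem and the Rails app):
# thin stop    -P tmp/pids/thin.4001.pid
# thin restart -C config/thin-config.yml -o 4001   # -o/--only: act on one server
```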