init.d values for separate redis-server instances - redis

I need to clear up a concept. I have two Redis servers running on a single VM. Server #1 accepts connections over TCP, server #2 over a UNIX socket. I'm on the cusp of converting the TCP server to a UNIX socket as well.
An excerpt from the init.d script for server#1 is:
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis.conf
NAME=redis-server
DESC=redis-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis-server.pid
The comparable excerpt from the init.d script for server#2 is (which has its own config):
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis-2.conf
NAME=redis2-server
DESC=redis2-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis2-server.pid
Both servers are currently up and running. My question is: how come DAEMON is kept the same for both servers? Why wasn't a separate executable needed?
I configured the two servers using config gleaned from various internet forums. While it works, I've failed to understand the significance of the DAEMON value, given that it remains the same for both server instances. Is it because the executable is fed different config files, and thus the same DAEMON is able to handle multiple server instances? Being a beginner, I'd really like some expert opinion on this. Thanks in advance.

Open terminal (or cmd). Now open it again. You have two copies open, but they are both using the same executable.
You're doing the same with redis: DAEMON just says where to find the program, and since you're happy to use the same version of redis for both instances, you can use the same path for both DAEMON values. Each running instance then has its own process ID stored in its PIDFILE, which is why the PIDFILE paths need to be different, or the instances will interfere with each other.
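To make that concrete, here is a simplified sketch of what the two init scripts effectively boil down to, using the paths from your excerpts: the same binary is started twice, each time with a different config file, and each daemonized instance records its own PID in its own pidfile.
/usr/bin/redis-server /etc/redis/redis.conf      # instance 1
/usr/bin/redis-server /etc/redis/redis-2.conf    # instance 2
cat /var/run/redis/redis-server.pid              # PID of instance 1
cat /var/run/redis/redis2-server.pid             # PID of instance 2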

Related

Redis Cluster or Replication without proxy

Is it possible to build a one-master (port 6378) + two-slave (read-only, ports 6379 and 6380) "cluster" on one machine, to improve performance (especially reads), without using any proxy? Can the site or code connect to the master instance and read data from the read-only nodes? Or if I use 3 instances of Redis, do I have to use a proxy anyway?
Edit: It seems like the slave nodes don't hold any data; they try to redirect to the master instance, but that doesn't look like the correct behaviour. Am I right?
Definitely. You can code the paths in your app so writes and reads go to different servers. Depending on the programming language that you're using and the Redis client, this may be easier or harder to achieve.
Edit: that said, I'm unsure how you're running a cluster with a single master - the minimum should be 3.
You need to send a READONLY command after connecting to the slave before you can execute any read commands.
READONLY only affects the current socket session, which means you need to send it again on every new TCP connection.
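As a rough sketch with redis-cli (assuming the replica on port 6379 is part of a cluster; the key name is a placeholder), note that each redis-cli invocation is a new connection:
redis-cli -p 6379 READONLY     # applies only to this connection, which ends immediately
redis-cli -p 6379 GET somekey  # new connection, so the read is redirected to the master again
# READONLY and the reads have to share one connection, e.g. by piping commands in:
redis-cli -p 6379 <<'EOF'
READONLY
GET somekey
EOF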

Connect to existing SFTP server instead of starting new SFTP subprocess

I'm thinking about writing a new SFTP server. The current SFTP servers are started once per session: if there are three SFTP users, there are three SFTP server processes. That's not what I want. I want one server that every new SFTP session connects to. How do I do this?
When you log in to the server to start an SFTP session, an SSH process is started and an SFTP subsystem is started as well. The SSH process takes care of the encryption etc. The I/O is done through the standard file descriptors 0, 1 and 2 (stdin, stdout and stderr) of the SFTP process.
This all works when there is a dedicated SFTP process for every session. But how can I make it work when there is one SFTP server that every session should connect to? Via an "ssh-to-sftp-connect-agent"?
More information:
I want to use SFTP version 6, which is better than version 3, the version used by OpenSSH. The OpenSSH community does not want to upgrade its SFTP implementation:
https://bugzilla.mindrot.org/show_bug.cgi?id=1953
A very good open source sftp server is at:
http://www.greenend.org.uk/rjk/sftpserver/
and a very useful overview:
http://www.greenend.org.uk/rjk/sftp/sftpversions.html
This server uses SFTP protocol version 6, but has (b)locking and handling of ACLs not implemented. To implement these, shared tables of all open files are necessary, with their access flags and who holds which blocking mode, for (b)locking to work. When every SFTP session leads to another process with:
Subsystem sftp /usr/libexec/gesftpserver
(which is inevitable when you want to use any protocol higher than 3)
then a shared database is a solution to handle locks and ACLs.
Another solution is that every new SFTP session connects to one existing "super" SFTP server, which is started at boot time. Simultaneous access, locking etc. are then much easier to program.
How do I do this with this line:
Subsystem sftp /usr/libexec/exampleconnectagent
In the ideal case the agent sets up the connection between the dedicated SSH process for the session and the SFTP server, and then terminates.
Long story, but is this possible? Do I have to use the passing of file descriptors described here:
Can I share a file descriptor to another process on linux or are they local to the process?
Thanks in advance.
Addition:
I'm working on an SFTP file server listening on a server socket. Clients can connect using OpenSSH's direct-streamlocal functionality to attach a channel to it. This way I can have one server process for all clients, which is what I wanted in the first place.
The current SFTP servers are started for every session.
What do you mean by "current SFTP servers"? Which one specifically?
OpenSSH (the most widely used SSH/SFTP server) did indeed start a new subprocess for each SFTP session, and there's hardly any problem with that. Recent versions don't anymore, though: with the (new) default configuration, an in-process SFTP server (aka internal-sftp) is used.
See OpenSSH: Difference between internal-sftp and sftp-server.
If you really want to get an answer to your question, you have to tell us, what SFTP/SSH server your question is about.
If it is indeed about OpenSSH:
Nothing needs to be done, the functionality is there already.
If you want to add your own implementation, you have to modify OpenSSH code, there's no way to plug it in. Just check how the internal-sftp is implemented.
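For reference, these are the two ways the SFTP subsystem is typically declared in sshd_config (the external sftp-server path varies by distribution, so treat it as an example):
# in-process server: SFTP is handled inside the sshd session process
Subsystem sftp internal-sftp
# classic external server: one sftp-server subprocess per session
Subsystem sftp /usr/lib/openssh/sftp-server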
The other way is to use the agent architecture, as you have suggested yourself. If you want to take this approach and need some help, you should ask a more specific question about inter-process communication and/or sharing file descriptors, not about SFTP.

Get number of connections from all hosts to my ActiveMQ broker

ActiveMQ broker setup:
Broker is running on machine: hostA
Clients from different hosts can connect to my broker instance running on hostA; there can be any number of clients from any host.
Is there a way to find out how many clients are connected to the broker, and also to get a list telling me how many connections each host has to my broker?
I want to do this without making assumptions about the number of hosts.
I can do this by using the lsof command and some parsing of its output, but I am in a situation where I cannot use that.
Is there any feature for this provided by the ActiveMQ command-line utility activemq-admin?
You can get to pretty much any MBean attribute ActiveMQ exposes via activemq-admin. There are no attributes or operations that give you a quick count of connections from specific clients, so you will have to do some work on your end to get the details you want, but all the raw data is there.
Examples:
Broker Stats:
activemq-admin query --objname type=Broker,brokerName=localhost
Connection Stats
activemq-admin query --objname type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=<transport connector name>,connectionViewType=clientId,connectionName=*
See full doc here.
NOTE: The documentation as of this writing has not been updated to take into account the MBean changes made in AMQ, so the object names referenced in its examples are not correct.
You can get the object name (or example syntax) from JMX (using jconsole or VisualVM, for example) via the MBeanInfo. Each object name will start with something like org.apache.activemq:type. For the script, remove the "org.apache.activemq:" prefix and you should be in business for anything you need from JMX via the script.
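As a rough sketch of the kind of parsing involved (the connector name, the object-name keys and the output format all vary between ActiveMQ versions, so check them in jconsole first and adjust accordingly), you could pull the RemoteAddress attribute of every connection MBean and count by host:
activemq-admin query --view RemoteAddress \
  --objname 'type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=openwire,connectionViewType=remoteAddress,connectionName=*' \
  | grep RemoteAddress | sed -e 's#.*//##' -e 's/:.*//' | sort | uniq -c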
You might also look into using Jolokia with your broker. Although it is not compatible with the activemq-admin script, you can reach everything you can reach from activemq-admin, and you also get access to all of the MBean operations. In the past I've heavily used the activemq-admin script for local monitoring and command-line administration of the broker, but I have started converting everything to hit the Jolokia service. But again, activemq-admin will give you a way to access what you are looking for here.
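For example, reading the broker-wide connection count over HTTP (this assumes the embedded web console on the default port 8161 with default credentials, and it gives a total rather than a per-host breakdown):
curl -u admin:admin 'http://hostA:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost/CurrentConnectionsCount'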

mass-restarting httpd on lots of EC2 instances

I am running a variable number of EC2 instances (CentOS 64) that contain an apache web server that caches a bunch of code in production mode.
Now every time I make some changes to the code (generally on a weekly basis) I have to log into each one of those instances, do a "su" and then "service httpd restart".
Is there a way to automate this so that I can run a single command on one of the instances and have it connect to all the others and restart them? It's getting really time consuming, especially when the application has spawned some 20-30 instances on its own (which happens on days when we get high traffic).
Thanks!
Dancer's shell, dsh, is provided specifically to do this; no scripting required. As @tix3 suggests, you should probably also configure sudo on those machines (edit /etc/sudoers using visudo) so that your restart command is accepted without a password prompt.
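A rough sketch (the hostnames and group name are placeholders; it assumes key-based SSH access to each instance and a passwordless sudo rule for the restart command, e.g. a visudo entry like "youruser ALL=(ALL) NOPASSWD: /sbin/service httpd restart"):
# list the web instances in a dsh group file
printf '%s\n' web1.example.com web2.example.com > ~/.dsh/group/webservers
# restart apache on all of them concurrently, prefixing each output line with the machine name
dsh -g webservers -M -c -- sudo service httpd restart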

Communicating between two processes on heroku (what port to use)

I have a Procfile like so:
web: bundle exec rails server -p $PORT
em: script/eventmachine
The em process fires up an eventmachine with start_server (port ENV['PORT']) and my web process occasionally needs to communicate with it.
My question is: how does the web process know what port to communicate with it on? If I understand Heroku correctly, it assigns you a random port when the process starts up (and that port can change if the ps is killed or restarted). Thanks!
According to the Heroku documentation:
Two processes running on the same dyno can communicate over TCP/IP using whatever ports they want.
Two processes running on different dynos cannot communicate over TCP/IP at all. They need to use memcached, or the database, or one of the Heroku plugins, to communicate.
Since web and em are separate process types in your Procfile, they run on separate dynos, so the second case applies: the processes are isolated and cannot communicate directly with each other.
http://www.12factor.net/processes
There are, however, a few other ways. One is to use a backing service such as Redis or Postgres to act as an intermediary; another is to use a FIFO (named pipe) to communicate.
http://en.wikipedia.org/wiki/FIFO
It is a good thing that your processes are isolated and share nothing, but you do need to architect your application slightly differently to accommodate this.
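As a rough sketch of the backing-service approach, using Redis pub/sub as the intermediary (REDIS_URL is assumed to be provided by a Redis add-on, the channel name is a placeholder, and redis-cli is only standing in for whatever Redis client your app actually uses):
# in the em process: listen for messages from the web process
redis-cli -u "$REDIS_URL" SUBSCRIBE web-to-em
# in the web process: push a message to the em process
redis-cli -u "$REDIS_URL" PUBLISH web-to-em 'do-something'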
I'm reading this while on my commute to work. So I haven't tried anything with it (sorry) but this looks relevant and potentially awesome.
https://blog.heroku.com/archives/2013/5/2/new_dyno_networking_model