Connect to existing SFTP server instead of starting new SFTP subprocess - ssh

I'm thinking about writing a new SFTP server. The current SFTP servers are started for every session: if there are three SFTP users, there are three SFTP server processes. That's not what I want. I want one server that every new SFTP session connects to. How can I do this?
When you log in to the server to start an SFTP session, an SSH process is started and an SFTP subsystem is started as well. The SSH process takes care of the encryption etc. The I/O is done through the standard file descriptors 0, 1 and 2 (stdin, stdout and stderr) of the SFTP process.
This all works when there is a dedicated SFTP process for every session. But how can I make it work when there is one SFTP server I want to connect to? Via some kind of "ssh-to-sftp-connect-agent"?
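For reference, this is how a stock OpenSSH installation wires that up in sshd_config; the exact path to sftp-server varies by distribution:
Subsystem sftp /usr/lib/openssh/sftp-server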
More information:
I want to use SFTP protocol version 6, which is better than version 3, the version used by OpenSSH. The OpenSSH community does not want to upgrade their SFTP implementations:
https://bugzilla.mindrot.org/show_bug.cgi?id=1953
A very good open source SFTP server is at:
http://www.greenend.org.uk/rjk/sftpserver/
and a very useful overview:
http://www.greenend.org.uk/rjk/sftp/sftpversions.html
This server uses SFTP protocol version 6, but has not implemented (b)locking or the handling of ACLs. To implement these, shared tables are necessary covering all open files, their access flags, and who holds which locks, for (b)locking to work. When every SFTP session leads to another process with:
Subsystem sftp /usr/libexec/gesftpserver
(which is inevitable when you want to use any protocol version higher than 3),
then a shared database is one solution for handling locks and ACLs.
Another solution is for every new SFTP session to connect to one existing "super" SFTP server, which is started at boot time. Simultaneous access, locking etc. are much easier to program that way.
How do I do this with this line:
Subsystem sftp /usr/libexec/exampleconnectagent
In the ideal case, the agent wires up the connection between the session's dedicated SSH process and the SFTP server, and then terminates.
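A minimal sketch of such a connect agent, assuming the single long-running SFTP server listens on a Unix-domain socket (the path /var/run/sftpd.sock is a placeholder) and that socat is available:
#!/bin/sh
# exampleconnectagent: splice this session's stdin/stdout onto the socket
# where the one shared SFTP server is listening.
exec socat STDIO UNIX-CONNECT:/var/run/sftpd.sock
Note that this agent does not terminate after connecting; it stays alive to copy bytes for the whole session. Making it hand over the descriptors and exit would require SCM_RIGHTS file-descriptor passing, which is what the question linked below is about.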
Long story, but is this possible? Do I have to use the passing of file descriptors described here:
Can I share a file descriptor to another process on linux or are they local to the process?
Thanks in advance.
Addition:
I'm working on an SFTP file server listening on a server socket. Clients can connect to it by using OpenSSH's direct-streamlocal functionality to attach a channel to that socket. This way I can have one server process for all clients, which is what I wanted in the first place.
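For anyone reproducing this: the OpenSSH client maps Unix-socket forwards onto direct-streamlocal@openssh.com channels, so (assuming sshd permits it and adjusting the placeholder socket paths) the server-side socket can be reached like this:
# sshd_config on the server; the default already allows it:
#   AllowStreamLocalForwarding yes
# Forward a local Unix socket to the socket the shared SFTP server listens on:
ssh -N -L /tmp/sftp-local.sock:/var/run/sftpd.sock user@hostA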

The current SFTP servers are started for every session.
What do you mean by "current SFTP servers"? Which one specifically?
OpenSSH (as the most widely used SSH/SFTP server) did indeed open a new subprocess for each SFTP session, and there's hardly any problem with that. Recent versions don't anymore, though: with the (new) default configuration, an in-process SFTP server (aka internal-sftp) is used.
See OpenSSH: Difference between internal-sftp and sftp-server.
If you really want to get an answer to your question, you have to tell us what SFTP/SSH server your question is about.
If it is indeed about OpenSSH:
Nothing needs to be done; the functionality is there already.
If you want to add your own implementation, you have to modify the OpenSSH code; there's no way to plug it in. Just check how the internal-sftp is implemented.
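For completeness, switching to the in-process server is a one-line change in sshd_config (it replaces the external sftp-server binary):
Subsystem sftp internal-sftp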
The other way is using the agent architecture, as you have suggested yourself. If you want to take this approach and need some help, you should ask a more specific question about inter-process communication and/or sharing file descriptors, not about SFTP.

Related

init.d values for separate redis-server instances

I need to clear up a concept. I have two Redis servers running on a single VM. Server #1 accepts connections via TCP, server #2 via a UNIX socket. I'm on the cusp of converting the TCP server to UNIX sockets as well.
An excerpt from the init.d script for server#1 is:
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis.conf
NAME=redis-server
DESC=redis-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis-server.pid
The comparable excerpt from the init.d script for server#2 is (which has its own config):
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis-2.conf
NAME=redis2-server
DESC=redis2-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis2-server.pid
Both servers are currently up and running. My question is: how come DAEMON is kept the same for both servers? Why wasn't a separate executable needed?
I configured the two servers using config from various internet forums. While it works, I've failed to understand the significance of the DAEMON value, given that it remains the same for both server instances. Is it because the executable is fed different config files, and thus the same DAEMON is able to handle multiple server instances? Being a beginner, I'd really like some expert opinion on this. Thanks in advance.
Open a terminal (or cmd). Now open it again. You have two copies open, but they are both using the same executable.
You're doing the same with Redis: DAEMON just says where to find the program, and since you're happy to use the same version of Redis for both, you can use the same path for both DAEMON values. Each running instance has its own process ID stored in its PIDFILE, which is why those need to be different paths, or the instances will interfere with each other.
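As a rough sketch of what typically differs between the two config files (the paths and values below are placeholders, not your actual configuration):
# /etc/redis/redis.conf  (server #1, TCP)
port 6379
pidfile /var/run/redis/redis-server.pid
logfile /var/log/redis/redis-server.log

# /etc/redis/redis-2.conf  (server #2, UNIX socket)
port 0
unixsocket /var/run/redis/redis-2.sock
unixsocketperm 770
pidfile /var/run/redis/redis2-server.pid
logfile /var/log/redis/redis2-server.log
The same redis-server binary simply reads whichever file it is given as its first argument, which is exactly what DAEMON_ARGS passes to it.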

How to halt provisioning of one VM until another VM is done?

Using Vagrant+Chef Solo I'm setting up two VMs: #1 is a TeamCity server, #2 is a TeamCity agent. Provisioning is done by first installing the TeamCity server package on VM #1, then the agent VM is booted and requests data from the server which is used to install the agent. That whole thing works fine.
But now I want to alter the server after the agent is done provisioning. I want to modify the server's database directly, to change an attribute that is only available after the agent has spun up. Is there a way for one VM's provisioning to trigger another VM's? Once the agent is done, I'd like to somehow resume provisioning the server so I can make the database edit.
Any thoughts, recommendations, or feedback welcomed. I'm new to Vagrant, Chef, and TeamCity, so there's a chance I'm missing a much easier solution.
* Why do I want to edit the DB directly, you may be wondering? TeamCity agents must be authorized before they can be used, and I want to do this programmatically. The solution I've found is to directly edit the DB, because authorization functionality is not exposed via the TeamCity REST API (as far as I can tell).
If you can test whether the agent is installed and answering, you can add a ruby_block that loops over this test before continuing the recipe execution; a sketch follows below.
This loop should have a sleep and a counter to avoid looping forever.
I have no knowledge of TeamCity, so I can't tell whether this is the best way.
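A minimal sketch of that wait loop, written as shell for brevity (the same logic can live inside a ruby_block or be shelled out from one). The TeamCity host, port, credentials and REST locator below are assumptions; adjust them for your setup:
#!/bin/sh
# Wait until the TeamCity server reports at least one registered agent,
# giving up after 30 attempts (roughly 5 minutes).
tries=0
until curl -fsS -u admin:password "http://teamcity:8111/app/rest/agents?locator=authorized:any" | grep -q '<agent '; do
  tries=$((tries + 1))
  [ "$tries" -ge 30 ] && { echo "agent never appeared" >&2; exit 1; }
  sleep 10
done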
In general, Chef is designed to manage your system, not simply provision it (though this is less true in the modern cloud world with "golden image" strategies). Nonetheless, in your case, your best bet is to just set up chef-client as a service that runs every 15 minutes. Once the client has finished provisioning, the next run on the server will be able to authorize it.
If you really want to "trigger" the one from the other, you'd need to either do that externally with something like etcd or consul, or set up an SSH keypair between the boxes and add a ruby_block on the client that either does the database modification directly or calls chef-client on the server.

Get number of connections from all hosts to my ActiveMQ broker

ActiveMQ broker setup:
The broker is running on machine: hostA
Clients from different hosts can connect to my broker instance running on hostA; there can be any number of clients from any host.
Is there a way to find out how many clients are connected to the broker, and also to get a list telling me how many connections there are from each host?
I want to do this without making assumptions about the number of hosts.
I could do this using the lsof command and some parsing of its output, but I am in a situation where I cannot use that.
Is there any feature provided by the ActiveMQ command-line utility activemq-admin?
You can get to pretty much any MBean attribute ActiveMQ exposes via activemq-admin. There are no attributes or operations that give you a quick count of connections from specific clients, so you will have to do some work on your end to get all the details you want, but all the raw data is there.
Examples:
Broker Stats:
activemq-admin query --objname type=Broker,brokerName=localhost
Connection Stats:
activemq-admin query --objname type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=<transport connector name>,connectionViewType=clientId,connectionName=*
See full doc here.
NOTE: The documentation, as of this writing, has not been updated to take into account the MBean changes made in ActiveMQ, so the object names referenced in its examples are not correct.
You can get the object name (or example syntax) from JMX (using jconsole or VisualVM, for example) via the MBeanInfo. Each object name will start with something like org.apache.activemq:type. For the script, remove the "org.apache.activemq:" prefix and you should be in business for anything you need from JMX via the script.
You might also look into using Jolokia with your broker. Although it is not compatible with the activemq-admin script, you can reach everything you can reach from activemq-admin, and you also have access to all of the operations. In the past I've heavily used the activemq-admin script for local monitoring and command-line administration of the broker, but I have started converting everything to hit the Jolokia service. But again, activemq-admin will give you a way to access what you are looking for here.
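As an illustration of the Jolokia route (assuming the broker's web console runs on the default port 8161 with Jolokia enabled, as it is in recent ActiveMQ releases, and that the default admin credentials still apply), a single HTTP read returns the broker-wide connection count:
curl -u admin:admin "http://hostA:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost/CurrentConnectionsCount"
To break the count down per host you would still need to list the individual connection MBeans and group them by their RemoteAddress attribute.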

Perforce replica server that can write to main server and has build capability

I need to customize the Perforce server to achieve the following requirements:
I need a local replica server which stays synced with the main server in a different geographical location. I can have the same time zone settings for the local and main servers.
The client should be able to commit to the replica server.
The replica server will have build capability as well as a test framework that is run whenever a build is successful.
Once the build and tests are successful, the code should get committed to the main server.
I know that the replica server provided by Perforce is used as a read-only server which can't write to the main server, and that a forwarding replica just forwards the commands to the main server.
I can't use a proxy server, as the local server should work even when the main server is offline.
Is it possible to do this? Can anyone point me to some articles which would help me to set up such a server?
I had asked the same question in the Perforce forum, but the question is still under verification by moderators.
An edge/commit setup may meet your requirements, as an Edge Server handles some local operations associated with workspaces and work in progress.
As well as read-only commands, the following operations can be performed on an Edge Server:
syncing, checking out, merging, resolving, and reverting files
More information about the edge/commit architecture is available here:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.distributed.html
You may also want to look at BuildFarm servers:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.replication.html#DB5-72814
Hope this helps,
Jen!
A build farm server doesn't allow build workspaces to submit files. If submitting files is required as part of the build process, consider using an edge server to support your automated build processes.
With the implementation of edge servers in 2013.2, we now recommend that you use an edge server instead of a build farm server.
Edge servers offer all the functionality of build farm servers and yet offload more work from the main server and improve performance, with the additional flexibility of being able to run write commands as part of the build process.
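A very rough sketch of the moving parts, heavily abridged from the distributed-Perforce manual linked above; server names, hosts, ports and paths are placeholders, and required steps such as creating a service user, seeding the edge with a checkpoint and setting up pull threads are omitted:
# On the master, which becomes the commit server, create the two server specs:
p4 server commit-main    # in the spec: Services: commit-server
p4 server edge-local     # in the spec: Services: edge-server
# Point the edge at the commit server (configurables live on the commit server):
p4 configure set edge-local#P4TARGET=commit-host:1666
# On the edge machine, record its server ID and start it:
p4d -r /p4/edge-local -xD edge-local
p4d -r /p4/edge-local -p 1667 -d
# Clients, including the build/test framework, then submit against the edge:
p4 -p edge-host:1667 info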

How can I securely transfer files

I need to automatically transfer an XML file from server A over the internet to server B. FTP works fine, but should I be using a message queue instead?
It should be secure in the sense that I won't lose messages and will be able to log what is transferred.
You could use a message queue as well, but not to transfer the files, just for keeping a queue of the files to be transferred. Then you can write a service that uses SFTP, HTTPS, SSH, or whatever other secure method to transfer the files. There are plenty of options. A common scenario is:
- Write a file to a given folder and a message to the message queue.
- The service polls the message queue, which will have a message with the filename to be transferred. If there is a file, use the secure method you chose (see the links below) and do the transfer.
Alternatively, you could simply avoid the message queue and use a secure client to connect to server B from server A and do the transfer; here are some links that can help you:
How do I upload a file to an SFTP server in C# / .NET?
http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/bee2ae55-5558-4c5d-9b5c-fe3c17e3a190
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/f5d22700-552f-4214-81f5-fa43bfcc723d
Hope that helps
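A bare-bones sketch of the "queue of filenames plus a transfer service" idea, using a spool directory as the queue and OpenSSH's sftp in batch mode; the paths, key and host below are placeholders:
#!/bin/sh
# Send each spooled XML file to server B, then move it aside and log it.
for f in /var/spool/outgoing/*.xml; do
  [ -e "$f" ] || continue
  echo "put $f /incoming/" | sftp -i /etc/keys/transfer_key -b - user@serverB \
    && mv "$f" /var/spool/sent/ \
    && echo "$(date -u +%FT%TZ) sent $f" >> /var/log/xml-transfer.log
done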
Use sftp whenever possible.
Use a POST over HTTPS - an implementation is available on every imaginable platform.
Of course, you need to check certificate validity, but this is also a part of the protocol itself; your part is to keep the certificates correct and secure.
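As a sketch of the HTTPS option with curl (the URL and certificate path are placeholders): -F sends the file as a multipart POST, and --cacert pins the CA used to verify server B's certificate:
curl --fail --cacert /etc/ssl/serverB-ca.pem -F "file=@/data/export.xml" https://serverB.example.com/upload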