How can I securely transfer files?

I need to automatically transfer an XML file from server A over the internet to server B. FTP works fine, but should I use a message queue instead?
It should be secure in the sense that I won't lose messages, and I should be able to log what is transferred.

You could use a message queue as well, but not to transfer the files, just to keep a queue of the files to be transferred. Then you can write a service that uses SFTP, HTTPS, SSH, or whatever other secure method to transfer the files. There are plenty of options. A common scenario is:
- Write a file to a given folder and a message to the message queue.
- The service polls the message queue, which will contain a message with the filename to be transferred. If there is a file, transfer it using the secure method you chose (see the links below and the sketch after this list).
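For illustration, here is a sketch of such a transfer service in Java using the JSch SFTP library. The in-process BlockingQueue stands in for whatever message queue you pick, and the host, user, paths and credentials are placeholders:

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class TransferWorker {
        // Stand-in for the real message queue: each message is a filename to transfer.
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        public void run() throws Exception {
            while (true) {
                String fileName = queue.take();  // block until a message arrives
                JSch jsch = new JSch();
                jsch.setKnownHosts("/home/app/.ssh/known_hosts");  // verify server B's host key
                Session session = jsch.getSession("appuser", "serverB.example.com", 22);
                session.setPassword(System.getenv("SFTP_PASSWORD"));
                session.connect();
                try {
                    ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                    sftp.connect();
                    sftp.put("/outgoing/" + fileName, "/incoming/" + fileName);
                    sftp.disconnect();
                    System.out.println("transferred " + fileName);  // audit logging goes here
                } finally {
                    session.disconnect();
                }
            }
        }
    }

With a real message queue you would acknowledge the message only after the transfer succeeds, so a crash mid-transfer leaves it queued for retry; that covers the "don't lose messages" requirement.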
Alternatively, you could skip the message queue and simply use a secure client to connect from server A to server B and do the transfer. Here are some links that can help you:
How do I upload a file to an SFTP server in C# / .NET?
http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/bee2ae55-5558-4c5d-9b5c-fe3c17e3a190
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/f5d22700-552f-4214-81f5-fa43bfcc723d
Hope that helps

Use sftp whenever possible.

Use a POST over HTTPS - an implementation is available on every imaginable platform.
Of course, you need to check certificate validity, but that is part of the protocol itself; your part is to keep the certificates correct and secure.
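A minimal sketch with Java's built-in HttpClient (Java 11+; the URL and file path are placeholders). Certificate validation is on by default, and the handshake fails if the server's certificate doesn't check out:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class HttpsUpload {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();  // uses the default trust store
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://serverB.example.com/upload"))
                    .header("Content-Type", "application/xml")
                    .POST(HttpRequest.BodyPublishers.ofFile(Path.of("/outgoing/data.xml")))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("server answered " + response.statusCode());  // log the outcome
        }
    }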

Related

Connect to existing SFTP server instead of starting new SFTP subprocess

I'm thinking about writing a new SFTP server. Current SFTP servers are started once per session: if there are three SFTP users, there are three SFTP server processes. That's not what I want. I want one server that every new SFTP session connects to. How can I do this?
When you log in to the server to start an SFTP session, an SSH process is started and an SFTP subsystem is started as well. The SSH process takes care of the encryption etc. The I/O is done through the standard file descriptors 0, 1 and 2 (stdin, stdout and stderr) of the SFTP process.
This all works when every session has a dedicated SFTP process. But how can I make it work when there is one SFTP server that all sessions connect to? Via an "ssh-to-sftp-connect-agent"?
More information:
I want to use SFTP protocol version 6, which is better than version 3, the one OpenSSH uses. The OpenSSH community does not want to upgrade its SFTP implementation:
https://bugzilla.mindrot.org/show_bug.cgi?id=1953
A very good open-source SFTP server is at:
http://www.greenend.org.uk/rjk/sftpserver/
and a very useful overview is at:
http://www.greenend.org.uk/rjk/sftp/sftpversions.html
This server uses SFTP protocol version 6, but (b)locking and handling of ACLs are not implemented. To implement those, shared tables are necessary for all open files, recording their access flags and who holds which (b)lock. When every SFTP session leads to a separate process with:
Subsystem sftp /usr/libexec/gesftpserver
(which is inevitable when you want to use any protocol higher than 3)
then a shared database is a solution for handling locks and ACLs.
Another solution is for every new SFTP session to connect to one existing "super" SFTP server, started at boot time. Simultaneous access, locking, etc. are then much easier to program.
How can I do this with this line:
Subsystem sftp /usr/libexec/exampleconnectagent
In the ideal case, the agent establishes the connection between the dedicated SSH process for the session and the SFTP server, and then terminates.
Long story short: is this possible? Do I have to use the passing of file descriptors described here:
Can I share a file descriptor to another process on linux or are they local to the process?
Thanks in advance.
Addition:
I'm now working on an SFTP file server listening on a server socket. Clients can connect to it using OpenSSH's direct-streamlocal functionality, which connects a channel to the socket. This way I can have one server process for all clients, which is what I wanted in the first place.
The current SFTP servers are started for every session.
What do you mean by "current SFTP servers"? Which one specifically?
OpenSSH (the most widely used SSH/SFTP server) did indeed open a new subprocess for each SFTP session, and there's hardly any problem with that. Recent versions no longer do, though: with the (new) default configuration, an in-process SFTP server (aka internal-sftp) is used.
See OpenSSH: Difference between internal-sftp and sftp-server.
If you really want to get an answer to your question, you have to tell us what SFTP/SSH server your question is about.
If it is indeed about OpenSSH:
Nothing needs to be done, the functionality is there already.
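For reference, the in-process server is what this sshd_config line selects:
Subsystem sftp internal-sftp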
If you want to add your own implementation, you have to modify OpenSSH code, there's no way to plug it in. Just check how the internal-sftp is implemented.
The other way is the agent architecture, as you have suggested yourself. If you want to take this approach and need some help, you should ask a more specific question about inter-process communication and/or sharing file descriptors, not about SFTP.
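For what it's worth, here is a minimal sketch of such a connect agent in Java (Java 16+ for Unix-domain socket channels; the socket path is made up). sshd would start it as the subsystem, and it shuttles bytes between its stdin/stdout and the long-running server's socket. Note that this agent relays for the whole session rather than terminating after a hand-over; a true hand-off would need the file-descriptor passing mentioned in the question:

    import java.net.UnixDomainSocketAddress;
    import java.nio.channels.Channels;
    import java.nio.channels.SocketChannel;

    public class ConnectAgent {
        public static void main(String[] args) throws Exception {
            // Assumed rendezvous point of the long-running "super" SFTP server.
            var addr = UnixDomainSocketAddress.of("/run/sftp-super-server.sock");
            try (SocketChannel sock = SocketChannel.open(addr)) {
                var toServer = Channels.newOutputStream(sock);
                var fromServer = Channels.newInputStream(sock);
                Thread pump = new Thread(() -> {
                    try {
                        System.in.transferTo(toServer);  // sshd's decrypted client stream
                        sock.shutdownOutput();
                    } catch (Exception ignored) { }
                });
                pump.start();
                fromServer.transferTo(System.out);  // server's replies back to sshd
                pump.join();
            }
        }
    }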

How to determine whether a file has passed anti-virus detection?

We have to develop a Java web service running on WebLogic Server 12.2.1 on a Windows Server 2008 R2 machine. The web service allows clients to send it files in BASE64 format, which it then decodes, creating actual files on the server from the decoded binary.
The server has Trend Micro OfficeScan Client installed, which I was told will scan any file that is copied to the server. If the binary I am writing to disk contains a virus, will the I/O write fail immediately because of the virus detection? I am not exactly sure when the virus scanning takes place: while a file is in the midst of being created on the server, or after the file has already been created?
I need to know this because we want the web service to send an alert back to the client if the file sent contains malware. So how can the web service determine whether Trend Micro OfficeScan Client has detected a virus?
Thanks.
If "realtime protection" option is enabled in the AV, then it will immediately detect the virus "after" the writing operation is completed.
The best way I can think of for your scenario is to programmatically invoke the AV to scan the file, using the AV's command-line options. Then you'll know for sure that the AV has finished the scan, and you'll get the scan results as well.
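A rough sketch of that idea in Java; the scanner path, flags and exit-code convention below are hypothetical placeholders, not the real OfficeScan command line, so check your Trend Micro documentation for the actual invocation:

    import java.nio.file.Path;

    public class VirusScan {
        // Runs a command-line scanner against one file and returns true if it is clean.
        // "scanner.exe" and its arguments are placeholders, NOT the real OfficeScan CLI.
        static boolean isClean(Path file) throws Exception {
            Process p = new ProcessBuilder("C:\\AV\\scanner.exe", "/scan", file.toString())
                    .redirectErrorStream(true)
                    .start();
            p.getInputStream().transferTo(System.out);  // keep the scanner's report in our log
            int exitCode = p.waitFor();
            return exitCode == 0;  // assumed convention: 0 = clean, non-zero = infected
        }
    }

The web service would write the decoded file, call isClean(), and only then report success or an alert back to the client.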

how to stream with mule sftp connector

We have a SOAP service that accepts a small file, which we receive in memory, and we then need to SFTP it off to another server. What would the configuration be for that? How do we take our String (an XML file) and send it to the server? (I assume the SFTP connector is the best way to go here, but how do we configure it? It looks like it takes a file as a parameter, and I need it to be fed bytes instead, with a filename that we specify.)
thanks,
Dean
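For comparison, the raw operation in plain Java with the JSch library looks like the sketch below (this is not a Mule connector configuration, and the host, credentials and paths are placeholders). ChannelSftp's put(InputStream, String) overload takes the bytes and the remote filename directly, with no intermediate file:

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    public class InMemorySftpUpload {
        static void upload(String xml, String remoteName) throws Exception {
            JSch jsch = new JSch();
            jsch.setKnownHosts("/home/app/.ssh/known_hosts");  // host-key verification
            Session session = jsch.getSession("user", "other-server.example.com", 22);
            session.setPassword(System.getenv("SFTP_PASSWORD"));
            session.connect();
            try {
                ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                sftp.connect();
                // Stream the in-memory XML straight to the remote filename we choose.
                sftp.put(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                         "/incoming/" + remoteName);
                sftp.disconnect();
            } finally {
                session.disconnect();
            }
        }
    }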

WCF Streaming across proxy servers etc

All
Sorry if this is an obvious question, but does WCF streaming work correctly from a client to a web server (using basicHttpBinding) if a proxy server is in the way?
I seem to remember reading that proxy servers can buffer requests until they are complete (hence why a download sometimes doesn't respond for ages and then suddenly completes), and I'm not sure whether this will stop streaming from working correctly.
Thanks
Probably too late for you, but from my interpretation of the web page below: no, streaming does not work when a proxy server is in the way.
http://msdn.microsoft.com/en-us/library/ms733742.aspx
The decision to use either buffered or streamed transfers is a local decision of the endpoint. For HTTP transports, the transfer mode does not propagate across a connection or to proxy servers and other intermediaries. Setting the transfer mode is not reflected in the description of the service interface. After generating a WCF client to a service, you must edit the configuration file for services intended to be used with streamed transfers to set the mode. For TCP and named pipe transports, the transfer mode is propagated as a policy assertion.
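For example, the edit the documentation refers to is a one-attribute change to the binding in the generated client config (names and sizes here are illustrative):

    <basicHttpBinding>
      <binding name="fileTransfer"
               transferMode="Streamed"
               maxReceivedMessageSize="67108864" />
    </basicHttpBinding>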

Getting result of a long running task with RabbitMQ

I have a scenario where a client sends an HTTP request to download a file. The file needs to be dynamically generated and typically takes 5-15 seconds to produce. Therefore I am looking into a solution that splits this operation into 3 HTTP requests:
- The first request triggers the generation of the file.
- The client polls the server every 5 seconds to check if the file is ready to download.
- When the response to the poll request is positive, the client starts downloading the file.
To implement this I am looking into message queue solutions like RabbitMQ. They seem to provide a reliable framework for running long-running tasks asynchronously. However, after reading the tutorials on RabbitMQ, I am not sure how I will receive the result of the operation.
Here is what I've in mind:
A front-end server receives requests from clients and posts messages to RabbitMQ as required. This front-end server will have 3 endpoints:
/generate
/poll
/download
When a client invokes /generate with a GET parameter, say request_uid=AAA, the front-end server will post a message to RabbitMQ with the request_uid in the payload. Any free worker will then receive this message and start generating the file corresponding to AAA.
The client will keep polling /poll with request_uid=AAA to check whether the task is complete.
When the task is complete, the client will call /download with request_uid=AAA, expecting to download the file.
The question is: how will the /poll and /download handlers of the front-end server come to know the status of the file-generation job? How can RabbitMQ communicate the result of the task back to the producer? Or do I have to implement such a mechanism outside RabbitMQ (e.g. the consumer putting its results in a file /var/completed/AAA)?
The easiest way to get started with AMQP is to use a topic exchange and create queues that carry control messages. For instance, you could have a file.ready queue and send messages with the file pathname when a file is ready for pickup, and a file.error queue to report when you were unable to create a file for some reason. Then the client could use a file.generate queue to send the GET information to the server.
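A minimal sketch of that layout with the RabbitMQ Java client; the exchange and routing keys follow the naming above, while the host and payload are placeholders:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class FileEventsDemo {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection()) {
                Channel ch = conn.createChannel();
                ch.exchangeDeclare("files", "topic", true);

                // Front-end side: listen for file.ready / file.error control messages.
                String q = ch.queueDeclare().getQueue();
                ch.queueBind(q, "files", "file.*");
                ch.basicConsume(q, true, (tag, delivery) -> {
                    String key = delivery.getEnvelope().getRoutingKey();
                    String body = new String(delivery.getBody());
                    System.out.println(key + " -> " + body);  // record job status here
                }, tag -> { });

                // Worker side: announce a finished file by its pathname.
                ch.basicPublish("files", "file.ready", null,
                        "/var/completed/AAA".getBytes());

                Thread.sleep(1000);  // let the demo consumer fire before the demo exits
            }
        }
    }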
You hit the nail on the head with your last line:
(Consumer putting its results in a file /var/completed/AAA)
Your server has to coordinate multiple jobs and the results of their work. Therefore you will need some form of "master repository" which contains an authoritative record of what has been finished already. Copying completed files into a special directory is a reasonable and simple way of doing exactly that.
It doesn't necessarily need RabbitMQ or any messaging solution, either. Your server can farm out jobs to those workers any way it wishes: by spawning processes, using a thread pool, or indeed by producing AMQP events which end up in a broker and get sucked down by "worker" queue consumers. It's up to your application and what is most appropriate for it.
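A sketch of that "master repository" check as the /poll and /download handlers might do it (the directory and naming are assumed from the question):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class JobStatus {
        private static final Path COMPLETED = Paths.get("/var/completed");

        // /poll handler: the job is done once a worker has dropped the file here.
        static boolean isReady(String requestUid) {
            return Files.exists(COMPLETED.resolve(requestUid));
        }

        // /download handler: hand this path to the file-serving code.
        static Path completedFile(String requestUid) {
            return COMPLETED.resolve(requestUid);
        }
    }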