Continuously sync changes from web server - ssh

I'm searching for a way to get my files synchronized (as a recurring task) from a web server (Ubuntu 14) to a local server (Windows Server). The web server creates small files which the local server needs. The web server is in a DMZ, accessible through SSH. Only the local server is able to access folders on the web server. I tried using programs like WinSCP, but I'm not able to set up a "get" job.
Is there a way to do this over SSH on the Windows server without logging in every few seconds? Or is there a better solution? Web services are possible in the future, but at the moment I need a quick solution.

Either you need to schedule a frequent job that connects and downloads changes,
or you need a continuously running process that keeps the connection open and regularly checks for changes.
There's hardly a better solution (that's still quick and easy to implement).
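For the first option, a sketch using the Windows Task Scheduler and WinSCP scripting (the task name, script path, and one-minute interval are illustrative; this assumes winscp.com is on the PATH):

rem Create a task that runs a WinSCP synchronization script every minute
schtasks /create /tn SyncFromWeb /sc minute /mo 1 /tr "winscp.com /script=C:\scripts\sync.txt"

where C:\scripts\sync.txt contains:

open sftp://user:mypassword@example.com/ -hostkey="ssh-rsa 2048 xxxxxxxxxxx...="
synchronize local C:\local\path /remote/path
exit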
An example of the second option, a continuous process implemented using the WinSCP .NET assembly:
using System;
using System.Threading;
using WinSCP;

// Set up session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Sftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
    SshHostKeyFingerprint = "ssh-rsa 2048 xxxxxxxxxxx...="
};

// Local and remote directories to keep in sync (hypothetical paths)
string localPath = @"C:\local\path";
string remotePath = "/remote/path";

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    while (true)
    {
        // Download changes
        session.SynchronizeDirectories(
            SynchronizationMode.Local, localPath, remotePath, false).Check();

        // Wait 10 seconds
        Thread.Sleep(10000);
    }
}
You will need to add better error handling and reconnect if the connection breaks.
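A minimal sketch of such a retry loop, wrapping the code above (the broad catch and the fixed 10-second retry interval are illustrative choices, not WinSCP requirements):

while (true)
{
    try
    {
        using (Session session = new Session())
        {
            session.Open(sessionOptions);
            while (true)
            {
                session.SynchronizeDirectories(
                    SynchronizationMode.Local, localPath, remotePath, false).Check();
                Thread.Sleep(10000);
            }
        }
    }
    catch (Exception e)
    {
        // The session is disposed here, so the next iteration reconnects cleanly.
        Console.WriteLine("Error: {0} - retrying in 10 seconds", e.Message);
        Thread.Sleep(10000);
    }
}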
If you do not want to implement this as a (C#) application, you can use a PowerShell script. For a complete solution, see
Keep local directory up to date (download changed files from remote SFTP/FTP server).

Related

Apache VFS SFTP Connection hangs

I am using Apache VFS to upload a file to an SFTP server if the file is newer than the file on the server or doesn't exist there yet. The server connection uses SSH keys for authentication.
I am using the following Java code (plus error handling, etc.) to connect to the server and check the file modification date-time:
DefaultFileSystemManager manager = new DefaultFileSystemManager();
manager.addProvider("sftp", new SftpFileProvider());
manager.init();
FileSystemOptions opts = createDefaultOptions();
BytesIdentityInfo identityInfo = new BytesIdentityInfo(server.sshKey.getBytes(), null);
SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, identityInfo);
remoteFileObject = manager.resolveFile(
    new URI("sftp", server.UserName, server.HostName, server.Port, remoteFilePath, null, null).toString(),
    createDefaultOptions(server.Key));
FileContent content = remoteFileObject.getContent();
return content.getLastModifiedTime();
The SSH key is in the format -----BEGIN RSA PRIVATE KEY----- etc., as exported by PuTTYgen under Conversions -> Export OpenSSH Key (i.e. the old OpenSSH key format, not the new one).
I have tested this code on Windows, with a locally hosted SFTP server (i.e. also on the same Windows machine), and it works successfully.
I am now wanting to use this in a Linux environment (RHEL), connecting to an AWS Transfer SFTP server, secured using SSH keys as described.
I can connect successfully using the SFTP command from the Linux OS shell:
sftp -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx
But, when I try to run the java code, the code hangs on the call to manager.resolveFile.
After half an hour (I think - this might not be related), I get the following in /var/log/messages:
systemd-logind[1297]: Session 115360 logged out. Waiting for processes to exit.
systemd[1]: session-115360.scope: Succeeded.
systemd-logind[1297]: Removed session 115360.
I don't have SELinux enabled, so I don't think that's interfering in any way.
Can anyone help suggest what might be causing this?
There were a couple of things, as it turns out:
Timeout
The timeout can be set when you configure the SftpFileSystemConfigBuilder, using the .setSessionTimeout(FileSystemOptions, Duration) method. Reducing the timeout, if nothing else, makes the issue easier to debug.
Exec channel
The session entries in the messages log turned out to be unrelated. The real issue was that the exec channel is disabled on the SFTP server, but VFS tries to use it. At a simple level, the probe can be turned off with setDisableDetectExecChannel on the SftpFileSystemConfigBuilder object - but you should understand the implications of this before doing so.
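A sketch of both settings applied together, mirroring the question's createDefaultOptions() helper (this assumes Commons VFS 2.8+ for the Duration overload; the 30-second value is an illustrative choice):

import java.time.Duration;

import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;

public class SftpOptions {
    static FileSystemOptions createDefaultOptions() {
        FileSystemOptions opts = new FileSystemOptions();
        SftpFileSystemConfigBuilder builder = SftpFileSystemConfigBuilder.getInstance();
        // Fail fast instead of hanging indefinitely on a stuck session.
        builder.setSessionTimeout(opts, Duration.ofSeconds(30));
        // Skip the exec-channel probe that hangs when the server disables exec.
        builder.setDisableDetectExecChannel(opts, true);
        return opts;
    }
}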

Using Ratchet WebSockets in a Secure Environment is not working

I am using Ratchet WebSocket in a Windows-based server project that is entirely working in an insecure environment. That is to say that when I navigate my browser to http://www.example.com and connect to the websocket server using ws:// on port 8686 everything works spectacularly.
The server doesn't run through IIS - instead, it is executed via php.exe at a command prompt, like this:
php wsocket-server.php [...parameters...]
However, if I run the Ratchet server and try to connect from https://www.example.com using wss://, the browser simply will not connect to the WebSocket server, despite the fact that the server starts up fine and the insecure site can connect via ws://.
Now, I realize I need some additional code to include my SSL certificate. This is the relevant code I have in place:
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

$websocket_server = new WsServer();

if ($site_secure){
    // RUN WSS (SECURE) SERVER
    $options = [
        'local_cert' => 'c:\inetpub\ssl\2c6fa1928847451c.crt',
        'local_pk' => 'c:\inetpub\ssl\2c6fa1928847451c.key',
        'allow_self_signed' => true,
        'verify_peer' => false
    ];
    $loop = React\EventLoop\Factory::create();
    $websocket_server->enableKeepAlive($loop);
    $app = new HttpServer($websocket_server);
    $insecure_websockets = new \React\Socket\Server('0.0.0.0:'.$port, $loop);
    $secure_websockets = new \React\Socket\SecureServer($insecure_websockets, $loop, $options);
    $secure_websockets_server = new \Ratchet\Server\IoServer($app, $secure_websockets, $loop);
    $secure_websockets_server->run();
} else {
    // RUN WS (INSECURE) SERVER
    $http_server = new HttpServer($websocket_server);
    $server = IoServer::factory($http_server, $port);
    $websocket->log("Initializing ".(($site_secure) ? "Secure " : "Insecure ")."Server ($port)");
    $server->run();
}
What I have tried
I have ensured the correct ports are all open in the Windows Firewall.
I have ensured nothing else is listening on the port, using netstat.
I have tried using nginx, on a minimal level. I'd prefer not to use this method if possible, and I had some initial problems getting it started, so I did not dedicate 100% to it at this time. Ideally, I'd like to use Ratchet's native abilities.
I have searched other similar posts, both here and elsewhere, such as this.
I have tried a number of different ports, even the same 8686 that I use for the insecure connection.
I am hoping someone can help me with an issue that has been driving me crazy for two weeks. At this point I feel like I'm just trying things for the sake of trying them, and I may be coding myself in circles.
Thank you in advance.
A browser is never going to connect to anything running on port 465. Especially not a WebSocket.
Establishing a WebSocket connection is specified in terms of the Fetch standard. As such, the specific exclusion of this port is found within the latter:
A port is a bad port if it is listed in the first column of the following table.

    Port    Typical service
    …       …
    465     submission
    …       …
Now, why are some ports blacklisted? This is a protection against cross-protocol scripting attacks, as once demonstrated (warning: NSFW links) against Firefox and against Safari. Port 465 has been (and still sometimes is) used for SMTP over (pure) TLS, so in this case, an XPS attack might trick a browser into sending mail on the user’s behalf. Blocking those ports is meant to prevent it. Of course, all bets are off when a service runs on a non-standard port.
To make the service available in a browser, all you need to do is change the port number.
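For example (hypothetical URLs; 8443 is just an arbitrary allowed port):

try {
    // 465 is on the Fetch "bad ports" list, so the constructor
    // throws a SecurityError before any network traffic happens.
    new WebSocket('wss://www.example.com:465');
} catch (e) {
    console.log(e.name); // "SecurityError"
}

// 8443 is not a blocked port, so the handshake proceeds normally.
const ws = new WebSocket('wss://www.example.com:8443');
ws.onopen = () => console.log('connected');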

PouchDB using remoteDB and not local DB instance by default in online mode but not in offline mode

I have attachments saved in a CouchDB database server-side that I want to replicate on a client-side app using PouchDB.
In my app.js file where I handle my application logic, this is at the top:
import PouchDB from 'pouchdb'

var remoteDB = new PouchDB('http://dev04:5984/documentation')
var db = new PouchDB('documentation')

remoteDB.replicate.to(db).on('complete', function () {
  // yay, we're done!
}).on('error', function (err) {
  // boo, something went wrong!
});
When I turn off access to port 5984 in my firewall, my file can no longer be retrieved and I lose access to my index.html file.
What I am expecting is for all contents of the database 'documentation' to be copied to my local browser storage -- including PDFs and such -- so that when I turn off port 5984 and then hit refresh, I should still have access to the contents. What am I doing wrong? (See the edit -- I figured out that the db is actually replicating, but the local instance isn't being preferred.)
EDIT:
I've determined that the database is actually being replicated, but the local storage is only 'preferred' when the browser is in offline mode. So when I refresh while in online mode, with port 5984 blocked, the page is not found. But when I do the same in offline mode (once the contents have been cached already, of course), then the contents can be retrieved.
It would not be an ideal solution to ask my users to always work in offline mode. Is there some way to make the local PouchDB instance the default, even in online mode?
I write to the local db first and "sync" that to a remote db when it's available.
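A minimal sketch of that pattern, using the database names from the question (live and retry are standard PouchDB sync options):

import PouchDB from 'pouchdb'

// The app always reads from and writes to the local database,
// so it never blocks on the network.
var db = new PouchDB('documentation')
var remoteDB = new PouchDB('http://dev04:5984/documentation')

// Continuous two-way sync in the background; retry keeps attempting
// to reconnect, so a blocked port 5984 only pauses replication.
db.sync(remoteDB, { live: true, retry: true }).on('error', function (err) {
  console.log('sync error:', err)
});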
I also use AppCache to provide offline functionality and configure the app to run "offline first".
I read last week that Apple has finally implemented Service Workers in Safari, so I'll be looking into implementing that as well now.

Dart integration test with VM server and dartium browser

I'm making a library that implements both server and client parts that interact with each other via WebSockets:
Server use example (run in the CLI):
Server srv = await new Server("localhost:1234");
srv.onNewClientConnected.listen((client) => print("client connected"));
Client use example (run in the browser):
Client cli = await new Client("localhost:1234");
cli.sendCommand(...);
(Just by creating the instances, the client should be connected and the server notified of that connection.)
I'd like to know: what would be the best way to test their interactions? Could I check both objects' internals with that method?
I would like something like this:
test(".echo should receive same input from server", (){
cli.echo("message");
expect(srv.lastMessageReceived, equals("echo: message"));
expect(cli.lastResponseReceived, equals("echo: message"));
expect(srv.amountMessagesReceived, equals(1));
});
If I understand correctly, I'm guessing you are trying to encapsulate https://www.dartlang.org/dart-vm/dart-by-example#websockets into helpers so that you only have instances once they are connected. However, both operations (server-side binding/listening/upgrade, client-side connection) are asynchronous, so you will never reach the state you want just by creating the instances (or you would need additional asynchronous methods to be notified). I would suggest creating asynchronous helpers.
Assuming you accept only one client in your server
Server server = await Server.accept("localhost:1234");
Client side:
Client client = await Client.connect("localhost:1234");
By doing so, you will have server and client instances only once they are connected.
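A minimal sketch of such helpers using dart:io (the class names mirror the suggestion above; splitting the address into host and port, and accepting exactly one client, are my simplifying assumptions):

import 'dart:io';

class Server {
  final WebSocket socket; // the single connected client
  Server._(this.socket);

  // Binds, waits for the first WebSocket upgrade, then completes.
  static Future<Server> accept(String host, int port) async {
    HttpServer http = await HttpServer.bind(host, port);
    HttpRequest request = await http.first;
    WebSocket ws = await WebSocketTransformer.upgrade(request);
    return Server._(ws);
  }
}

class Client {
  final WebSocket socket;
  Client._(this.socket);

  // Completes only once the connection is established.
  static Future<Client> connect(String url) async {
    WebSocket ws = await WebSocket.connect(url);
    return Client._(ws);
  }
}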
I like the https://pub.dartlang.org/packages/web_socket_channel package, which provides a good abstraction and allows me to test the WebSocket client logic that will run in the browser in a simple io test.
As for testing recommendations, I personally start my WebSocket server in setUpAll and create my client in setUp, and use logic similar to what you propose (don't forget the await, though, as you will need to wait for the echo response). Again, the web_socket_channel package has some good testing examples that you can look at (https://github.com/dart-lang/web_socket_channel/tree/master/test).
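For instance, with the hypothetical Server/Client helpers sketched above, a test could look like this (the assertions are adapted to raw sockets rather than the question's echo API):

import 'package:test/test.dart';

void main() {
  late Server srv;
  late Client cli;

  setUp(() async {
    // Start accepting before the client connects, but only await it
    // afterwards: accept() does not complete until a client arrives.
    final pending = Server.accept('localhost', 1234);
    cli = await Client.connect('ws://localhost:1234');
    srv = await pending;
  });

  test('message round-trip', () async {
    cli.socket.add('message');
    expect(await srv.socket.first, equals('message'));
  });
}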

FTP 425 Can't open data connection behind IIS

I need to write an application which connects to an FTP server. This FTP server does not allow passive-mode connections. I can connect to the FTP server using FileZilla.
I have developed a C# WCF service which connects to this FTP server using the FtpWebRequest class.
Here are the basic settings of the FtpWebRequest object:
ftpreq.Proxy = null;
ftpreq.KeepAlive = true;
ftpreq.UsePassive = false;
When I run the WCF service from Visual Studio (Ctrl+F5), it connects to the FTP server and downloads the required files without any issues.
But when I host the service in my local IIS 7.5, it fails to connect to the FTP server with the following error:
The remote server returned an error: (425) Can't open data connection.
After some googling, I tried playing around with the firewall settings, but it was of no use. I'm not sure if it is related to some IIS security issue or something else.
Any help would be highly appreciated.
This is my way:
Start \ Control Panel \ Windows Firewall \ Allowed Programs.
Here, tick the checkbox next to your application to allow it through the Windows Firewall.
Remember to tick the "Home/Work (private)" checkbox as well as the "Public" one.
Good luck!
Here is a more involved solution.
Step 1
Plain FTP (not FTPS) uses two network channels: one port (tcp:21) for authentication and control, and a second port range used as the data channel. When configuring FTP on IIS, open port 21 for the auth/control channel; then, in the IIS control panel under "FTP Firewall Support", specify the port range for data (e.g. 5000-6000) and the external IP of the network interface bound to the FTP server.
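If you prefer the command line, the same data-channel range can be set with appcmd (a sketch; to my knowledge this is the system.ftpServer/firewallSupport section in IIS 7.x, but verify against your version):

appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:6000 /commit:apphost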
Step 2
Finally, open tcp:21 in the Windows Firewall and create a custom rule allowing connections on tcp:5000-6000. Restart the system (or just restart the services).
That's all.
PS: Just remember that FTP on IIS uses tcp:21 for control/auth and tcp:5000-6000 for data. The tcp:5000-6000 range must be changed in IIS Manager first and opened in the Windows Firewall second.
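For example, both firewall rules can be created from an elevated command prompt (the rule names are arbitrary):

netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
netsh advfirewall firewall add rule name="FTP data" dir=in action=allow protocol=TCP localport=5000-6000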
I commented out these settings:
UseBinary, Proxy, KeepAlive, UsePassive
and now it works fine:
FtpWebRequest reqFTP = (FtpWebRequest)WebRequest.Create(new Uri(ftpLocation + remoteDir));
//reqFTP.UseBinary = true;
reqFTP.Credentials = new NetworkCredential(ftpUser, ftpPassword);
reqFTP.Method = WebRequestMethods.Ftp.ListDirectory;
//reqFTP.Proxy = null;
//reqFTP.KeepAlive = false;
//reqFTP.UsePassive = false;
WebResponse response = reqFTP.GetResponse();
StreamReader reader = new StreamReader(response.GetResponseStream());
string line = reader.ReadLine();
This code is from a Microsoft forum thread: https://social.msdn.microsoft.com/Forums/en-US/079fb811-3c55-4959-85c4-677e4b20bea3/downloading-all-files-in-directory-ftp-and-c?forum=ncl
The same issue happened to me, so I did the following and it fixed it:
- I rebooted the whole FTP server.
- After the reboot, I launched Internet Information Services (IIS) 6.0 Manager. The Default FTP Site was stopped; I started it, and that fixed it!