PouchDB using remoteDB and not local DB instance by default in online mode, but not in offline mode - replication

I have attachments saved in a CouchDB database server-side that I want to replicate on a client-side app using PouchDB.
In my app.js file where I handle my application logic, this is at the top:
import PouchDB from 'pouchdb'

var remoteDB = new PouchDB('http://dev04:5984/documentation')
var db = new PouchDB('documentation')

remoteDB.replicate.to(db).on('complete', function () {
  // yay, we're done!
}).on('error', function (err) {
  // boo, something went wrong!
});
When I turn off access to port 5984 in my firewall, the files can no longer be retrieved and I lose access to my index.html file.
What I am expecting is for all contents of the database 'documentation' to be copied to my local browser storage -- including PDFs and such -- so that when I turn off port 5984 and then hit refresh, I should still have access to the contents. What am I doing wrong? (see edit -- I figured out that the DB is actually replicating, but the local instance isn't being preferred)
EDIT:
I've determined that the database is actually being replicated, but the local storage is only 'preferred' when the browser is in offline mode. So when I refresh while in online mode, with port 5984 blocked, the page is not found. But when I do the same in offline mode (once the contents have been cached already, of course), then the contents can be retrieved.
It would not be an ideal solution to ask my users to always work in offline mode. Is there some way to make the local PouchDB instance the default, even in online mode?

I write to the local DB first and sync that to a remote DB when it's available.
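A minimal sketch of that local-first pattern with PouchDB's sync API (database names borrowed from the question; this is an illustration, not my exact code):

import PouchDB from 'pouchdb'

// All application reads and writes go to the local database only.
var db = new PouchDB('documentation')
var remoteDB = new PouchDB('http://dev04:5984/documentation')

// Two-way, continuous sync; 'retry' keeps trying to reconnect
// whenever the remote becomes unreachable (e.g. port 5984 blocked).
db.sync(remoteDB, { live: true, retry: true })
  .on('paused', function () {
    // Replication has caught up, or the remote is offline.
  })
  .on('error', function (err) {
    // An unrecoverable replication error occurred.
  });

Because the app only ever reads from the local instance, losing the connection to port 5984 affects replication but not page loads.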
And I use AppCache (the Application Cache) to provide offline functionality and configure the app to run "offline first".
I read last week that Apple has finally implemented Service Workers in Safari, so I'll be looking into implementing those as well now.
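For reference, a minimal sketch of the Service Worker version of that offline-first setup (the cache name and cached file list are illustrative):

// sw.js - cache the app shell at install time, serve cache-first afterwards
var CACHE = 'app-shell-v1';

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll(['/', '/index.html', '/app.js']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Try the cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});

And register it from the page:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}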

Related

How can I replace the server in Web Component Tester

I have a project set up based around the Polymer Starter Kit, which includes Web Component Tester (WCT).
The project also includes PHP server code, which I would like to test by writing tests that run in the browser and exercise the PHP code through Ajax calls.
This implies replacing the server that Web Component Tester uses, but ONLY when testing server-side code. I hope to make a separate gulp task for this.
Unfortunately, I don't understand the relationship between WCT, Selenium and whatever server is currently run. I can see that the WCT command starts Selenium, but I can't find out what the web server is and how it is started. I suspect it is WCT itself, because there is configuration of the mapping of directories to URLs, but other than that I haven't a clue, despite trying to read the code.
Can someone explain how I go about making it run its own server when testing the client, but rely on an already-set-up web server (nginx) when testing the server code? I can set nginx to run from localhost, or another domain if that is a way to choose a different configuration.
EDIT: I have now found that runner/webserver.js starts an express server, and that URLs get mapped so the base directory for the test runner and the bower_components directory both get mapped to the /components URL; my sketch of what I think that amounts to is below.
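Conceptually, I believe the mapping amounts to something like this (my own sketch for understanding, not WCT's actual code):

var express = require('express');
var app = express();

// The test runner's base directory and bower_components are both
// served under the /components URL, so components can resolve
// ../some-element/ style paths.
app.use('/components', express.static(process.cwd()));
app.use('/components', express.static(process.cwd() + '/bower_components'));

app.listen(0); // WCT picks a free port at runtime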
What is currently confusing me is in what circumstances this gets run. It appears that loading plugins somehow does it, but my understanding from reading the code for this is tenuous.
The answer is that Web Component Tester itself has a comment in the runner/config.js file.
In wct.conf.js, you can add a registerHooks key to the object that gets returned, with a function that does the following:
registerHooks: function(wct) {
  wct.hook('prepare:webserver', function(app, done) {
    var proxy = require('express-http-proxy');
    app.use('/api',
      proxy('pas.dev', {
        forwardPath: function(req, res) {
          return require('url').parse(req.url).path;
        }
      })
    );
    done();
  });
}
This registerHooks function allows you to provide a route (/api in my case) that is proxied to a server that can run the PHP scripts.
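For context, here is roughly how that hook sits in the object exported from wct.conf.js (the suites entry is a placeholder, not from my project):

module.exports = {
  suites: ['test/index.html'], // placeholder
  registerHooks: function(wct) {
    wct.hook('prepare:webserver', function(app, done) {
      var proxy = require('express-http-proxy');
      // Proxy /api/* on WCT's express server to the PHP-capable host.
      app.use('/api', proxy('pas.dev', {
        forwardPath: function(req, res) {
          return require('url').parse(req.url).path;
        }
      }));
      done();
    });
  }
};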

Simulate the Access Disabled feature in Worklight when the Worklight Server itself is down

I am trying to show end users a maintenance message such as "We are down, please try later" and disable the application. But my problem is: what if my Worklight Server itself is down and not reachable, so that I cannot use the feature provided by the Worklight Console?
Is there a way to make my app talk to a different server that returns the JSON data below when an app is disabled? Can I simulate this behaviour? Is this possible?
JSON received on access disabled in Worklight:
/*-secure-
{"WL-Authentication-Failure":{"wl_remoteDisableRealm":{"message":"We are down, Please try again soon","downloadLink":null,"messageType":"BLOCK"}}}*/
I have some conceptual problems with this question.
Typically a production environment (simplified) would not consist of a single server serving your end users; rather, there would be a cluster of nodes, each node being a Worklight Server, and this cluster would sit behind a load balancer that directs the incoming requests. In a situation where a node is down for maintenance, as in your scenario, there would still be more servers able to serve - there would be no downtime.
And thus your suggestion to simulate a Remote Disable by sending it from another(?) Worklight Server seems not quite the correct path to take (it may even be simply wrong). If you had this second Worklight Server, why wouldn't it just serve the apps, business as usual? See again my first paragraph about clustering.
Now let's assume there is still downtime that affects all servers. The application's client logic should be able to handle failed connections to the Worklight Server. In such a case you should handle this in the WL.Client.connect()'s onFailure callback function and display a WL.SimpleDialog that looks just like a Remote Disable's dialog... or perhaps via the initOptions.js onConnectionFailure callback.
Bottom line: you cannot simulate the JSON that is sent back for the wl_remoteDisableRealm realm; it is part of a larger security mechanism.
Additionally, though, perhaps a better way to handle maintenance mode on your server is to have the HTTP server return a specific HTTP status code, then check for that code in the client and display a proper message based on it.
To check for this code in a simple example:
Note: the getStatus method is available starting MobileFirst Platform Foundation 7.0 (formerly "Worklight").
function wlCommonInit() {
    WL.Client.connect({onSuccess: success, onFailure: failure});
}

function success(response) {
    // ...
}

function failure(response) {
    if (response.getStatus() == "503") {
        // Site is down for maintenance - display a proper message.
    }
    // else if ... (handle other status codes)
}
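To make that failure path resemble the Remote Disable dialog, the failure callback could show a WL.SimpleDialog - a sketch (the title and message text are illustrative):

function failure(response) {
    if (response.getStatus() == "503") {
        // Mimic the look of a Remote Disable dialog.
        WL.SimpleDialog.show(
            "Maintenance",
            "We are down, please try again soon.",
            [{text: "Close", handler: function() {}}]
        );
    }
}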

Continuously sync changes from web server

I'm searching for a way to keep files synchronized (as a recurring task) from a web server (Ubuntu 14) to a local server (Windows Server). The web server creates small files which the local server needs. The web server is in a DMZ, accessible through SSH; only the local server is able to access folders on the web server. I tried using programs like WinSCP, but I'm not able to set up a "get" job.
Is there a way to do this with SSH on the Windows server without logging in every few seconds? Or is there a better solution? In the future, web services are possible, but at the moment I need a quick solution.
Either you need to schedule a regular, frequent job that connects and downloads changes, or you need a continuously running process that keeps the connection open and regularly watches for changes. There's hardly a better solution (that is still quick and easy to implement).
An example of a continuous process implemented using the WinSCP .NET assembly:
// Requires a reference to the WinSCP .NET assembly (WinSCPnet.dll)
using System.Threading;
using WinSCP;

// Set up session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Sftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
    SshHostKeyFingerprint = "ssh-rsa 2048 xxxxxxxxxxx...="
};

// Placeholder paths - adjust to your environment
string localPath = @"C:\local\path";
string remotePath = "/remote/path";

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    while (true)
    {
        // Download changes from the remote directory to the local one
        session.SynchronizeDirectories(
            SynchronizationMode.Local, localPath, remotePath, false).Check();

        // Wait 10 seconds
        Thread.Sleep(10000);
    }
}
You will need to add better error handling and reconnect if the connection breaks.
If you do not want to implement this as a (C#) application, you can use a PowerShell script. For a complete solution, see
Keep local directory up to date (download changed files from remote SFTP/FTP server).

Web Deploy API (deploy .zip package) Clarification

I'm using the Web Deploy API to programmatically roll out a web package (a .zip file, created by MSDeploy.exe) to a server (we need to do some other things before we release the package, which is why we're not doing it all in one go using MSDeploy.exe).
Here's the code I have. My question is really to clarify what happens when this is executed. In the package parameters XML file I have the application name specified ("Default Web Site"), but that's about it; no other parameters are specified in there. From testing, the package appears to get deployed successfully, but my question is: are any other settings on the server I'm deploying to getting changed without my knowledge, are any default settings published, etc.? Things like security settings, directory browsing, etc. that I might not be aware of? The code here seems to deploy the package, but I'm anxious about using this in a production environment when I'm so unsure of how this API works. The MS documentation is not helpful (more like non-existent, actually).
// Requires a reference to Microsoft.Web.Deployment
using Microsoft.Web.Deployment;

DeploymentChangeSummary changes;
string packageToDeploy = "C:/MyPackageLocation.zip";
string packageParametersFile = "C:/MyPackageLocation.SetParameters.xml";

DeploymentBaseOptions destinationOptions = new DeploymentBaseOptions()
{
    UserName = "MyUsername",
    Password = "MyPassword",
    ComputerName = "localhost"
};

using (DeploymentObject deploymentObject = DeploymentManager.CreateObject(
    DeploymentWellKnownProvider.Package, packageToDeploy))
{
    // Apply the values from the SetParameters.xml file.
    deploymentObject.SyncParameters.Load(packageParametersFile);

    DeploymentSyncOptions syncOptions = new DeploymentSyncOptions();
    syncOptions.WhatIf = false;

    // Deploy the package to the server.
    changes = deploymentObject.SyncTo(destinationOptions, syncOptions);
}
If anyone could confirm that this snippet should deploy a package to a web site application on a server without changing any existing server settings (unless specified in the SetParameters.xml file), that would be really helpful. Any good resources on using the API, or an explanation of how Web Deploy works behind the scenes, would also be much appreciated!
The SetParameters file just controls the values for the parameters defined in the package; a package might be doing much more than that. Web Deploy has a concept of providers, and any given package can have one or more providers.
If you want to make sure that the package does not change server-side settings, the best approach you can take is to use the API but have the packages deployed via the Web Management Service. This will give you two benefits:
You can control which providers you allow through.
You can add users and give them restricted permissions to deploy to their site or their folder, etc.
The alternate approach is to:
Manually inspect the archive.xml in the package and look for the providers it uses. As long as you don't see any providers that can cause server settings to change, such as appHostConfig, webServer or regKey (this is not a comprehensive list), you should be good. runCommand is a provider that allows you to execute batch scripts or commands; while it is a good provider for admins themselves, you need to consider whether you want to allow packages with such providers to run.
You can do the above-mentioned inspection in code by calling GetChildren on the deployment object you create from the package and inspecting the providers and the provider paths.

Kohana Auth Library Deployment

My Kohana app runs perfectly on my local machine.
When I deployed my app to a server (and adjusted the config files appropriately), I could no longer log into the app.
I've traced through the app's login routine on both my local version and the server version, and they agree with each other all the way until the auth.php controller's logged_in() routine, where suddenly, at line 140 - the is_object($this->user) test - the $user object no longer exists!?
The login() function call that invokes the logged_in() function successfully passes the following test, which causes a redirect to logged_in():
if(Auth::instance()->login($user, $post['password']))
Yes, the password and hash, etc all work perfectly.
Here is the offending code:
public function logged_in()
{
    if ( ! is_object($this->user))
    {
        // No user is currently logged in
        url::redirect('auth/login');
    }

    // etc...
}
As the code is the same between my local installation and the server, I reckon it must be some server setting that is messing with me.
FYI: All the rest of the code works because I have a temporary backdoor available that allows me to use the application (view pages of tables, etc) without being logged in.
Any ideas?
I solved the problem (DUH!).
The answer was that the cookie.php config file had $config['domain'] = 'localhost'. Setting this to the actual domain the app is installed on magically made my life happy again!
Thanks everyone for your help and interest.