I am trying to set up replication between Cloudant and PouchDB.
I see that others have had issues. Are there any best practices for achieving this without a proxy?
Yeah, it should work out of the box. If it doesn't, the Cloudant folks are really helpful on the #cloudant freenode IRC channel.
If you suspect something is fundamentally wrong in the replication between PouchDB and Cloudant, you can sync from PouchDB to CouchDB and then from CouchDB to Cloudant. CouchDB is sort of the lingua franca.
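If you do go the CouchDB-relay route, the second hop (CouchDB to Cloudant) can be kicked off with a plain POST to CouchDB's `_replicate` endpoint. A minimal sketch in Python; the hostnames and credentials are placeholders, not real endpoints:

```python
import json
from urllib import request

def replicate_body(source, target, continuous=True):
    """Build the JSON body for CouchDB's POST /_replicate endpoint."""
    return {"source": source, "target": target, "continuous": continuous}

def start_replication(couch_url, source, target):
    """POST a replication request to the relay CouchDB (hostnames are made up)."""
    body = json.dumps(replicate_body(source, target)).encode("utf-8")
    req = request.Request(
        couch_url + "/_replicate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# Example (not run here): relay a local database up to Cloudant.
# start_replication(
#     "http://localhost:5984",
#     "mydb",
#     "https://user:pass@myaccount.cloudant.com/mydb",
# )
```

With `continuous=True` the relay keeps pushing changes as they arrive, so the PouchDB-to-CouchDB and CouchDB-to-Cloudant hops stay in sync without polling.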
I was wondering how deepstream decides whether to store information in the cache vs. the database if both of them are configured. Can this be decided by the clients?
Also, when using Redis, will it provide both cache and database functionality? I would be using Amazon ElastiCache with a Redis backend for this.
It stores the data in both: first in the cache, synchronously (blocking, on the critical path), and then in the database asynchronously (non-blocking, outside the critical path).
Here's an animation illustrating this.
You can also find more information here: https://deepstream.io/tutorials/core/storing-data/
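The pattern described above is essentially a write-through cache: the cache write happens synchronously before the update is acknowledged, while the database write is handed off to a background worker. A rough illustration in Python, with plain dicts standing in for Redis and the database (this is a sketch of the idea, not deepstream's actual code):

```python
import threading
import queue

class WriteThroughStore:
    """Toy write-through store: blocking cache write, async database write."""

    def __init__(self):
        self.cache = {}      # stands in for Redis
        self.database = {}   # stands in for the configured database
        self._queue = queue.Queue()
        worker = threading.Thread(target=self._db_writer, daemon=True)
        worker.start()

    def set(self, key, value):
        self.cache[key] = value        # blocking: on the critical path
        self._queue.put((key, value))  # non-blocking: handed to the worker

    def get(self, key):
        # Reads hit the cache first, falling back to the database.
        return self.cache.get(key, self.database.get(key))

    def _db_writer(self):
        while True:
            key, value = self._queue.get()
            self.database[key] = value  # slow write, off the critical path
            self._queue.task_done()

store = WriteThroughStore()
store.set("user/123", {"name": "Ada"})
store._queue.join()  # wait for the background write (for demonstration only)
```

The `set` call returns as soon as the cache is updated; the durable write completes in the background, which is why the cache write is the only one that affects latency.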
I am looking for the best way to achieve high availability for my organization's applications. Since they contain sensitive information, the applications must reside inside my organization's data centers.
I was thinking of using Google load balancing to direct requests to my servers, but I don't think they can be pointed at external servers, just Google VMs. Does anyone know if that's true?
My other thought was that I could use Google load balancing to point to Google VMs running Nginx and have that load balance between my data centers. Does anyone know if that is feasible? Under this scenario, can I terminate SSL on my servers, or does it have to terminate at the Google VM?
Unfortunately, you are correct: You cannot use Google Cloud's Network load-balancing with external servers.
You could do your second option, but I'd strongly suggest you reconsider the approach: too many moving parts, and for what benefit? If a server goes down you lose session state anyway, so maybe it'd be better for you to use DNS load balancing instead.
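On the SSL question from the second option: if nginx on the Google VM proxies at layer 7 (HTTP `proxy_pass`), TLS has to terminate on that VM; if it proxies at layer 4 with the stream module, the TLS session passes through untouched and terminates on your own servers. A minimal sketch, assuming a reasonably recent nginx built with the stream module and made-up hostnames:

```nginx
# Layer-4 (TCP) pass-through: TLS terminates in your data centers,
# not on the Google VM. Hostnames below are placeholders.
stream {
    upstream datacenters {
        server dc1.example.com:443;
        server dc2.example.com:443 backup;
    }
    server {
        listen 443;
        proxy_pass datacenters;
    }
}
```

The trade-off is that layer-4 proxying can't inspect or route on HTTP headers, so you lose things like path-based routing on the Google VM.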
FYI, I use Google Load Balancing and Autoscaling; it works pretty well, but not perfectly (frequent 502 burps), which is probably why it's still in "Beta".
I have to configure a (transactional) replication with one distributor (which is also the publisher) and 6 more servers that will be the subscribers.
The problem is: I've never configured anything like that, and after a whole week looking for a tutorial on how to do this, I decided to ask here, because it's a harder task than I imagined.
I already tried the MSDN tutorials, but without success. I know how to create a publication and subscribe to it, but I don't know how to synchronize it with IIS (Internet Information Services).
The intention is to sync it through IIS so that the subscribers update their databases over the web.
Note: I already tried ALL the MSDN tutorials. None of them worked.
So, I'm asking for any other way to do this.
Thanks in advance.
(I'm using Sql Server 2005 on Windows Server 2003) :)
It seems that web synchronization in SQL Server is available only for merge replication, not for transactional replication.
From MSDN:
Use the Web Synchronization Wizard to configure a Microsoft Internet Information Services (IIS) server for Web synchronization, which allows you to synchronize subscriptions to merge publications over an Internet or intranet connection. For more information, see Web Synchronization for Merge Replication and How to: Configure IIS for Web Synchronization.
Much later edit, after reading comments.
Web synchronization does indeed support only merge replication.
In cases where changes at the subscriber shouldn't propagate back to the publisher (a design decision, whatever the reason), merge replication can be configured to mark published articles as read-only, meaning changes are not pushed back, similar to transactional replication.
More to the point, merge replication supports a feature called Read-Only Articles (tables, etc), described here.
It says that:
Specify whether changes at the Subscriber are uploaded to the Publisher. For applications in which some or all data should be read-only at the Subscriber, download-only articles provide a performance benefit.
I know LDAP is a Protocol but is there a way to monitor it?
I am using WhatsUp Gold monitoring and have been asked to look into LDAP monitors.
How can I set up monitoring for LDAP?
There is no standard for monitoring LDAP directory services, but most products expose monitoring information via LDAP itself, under the "cn=monitor" suffix.
Servers such as OpenDJ (continuation of the OpenDS project, replacement of Sun DSEE) also have support for monitoring through SNMP and JMX.
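To feed something like WhatsUp Gold, you can usually start from a plain search against that suffix and alert on the values it returns. For example (hostname and credentials are placeholders; some servers require an admin bind to read cn=monitor):

```shell
# Query the monitoring backend over LDAP itself; adjust -D/-w for your server.
ldapsearch -H ldap://ldap.example.com:389 \
    -D "cn=Directory Manager" -w 'secret' \
    -b "cn=monitor" -s sub "(objectClass=*)"
```

The attribute names under cn=monitor vary by server (OpenDJ, 389 DS, OpenLDAP all differ), so check your server's docs for which entries carry connection and operation counters.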
Regards,
Ludovic.
I have been using cnmonitor (http://cnmonitor.sourceforge.net/) for some years with excellent results, although it's not perfect and has some bugs. You get lots of statistics almost without doing anything: number of requests, searches, adds, modifications, deletes, index status, replication, schema, etc. It is also compatible with many different LDAP servers (although I have only used it with 389 Directory Server).
How much memory and/or other resources does Apache web server use?
How much more are lightweight servers efficient?
Say, Apache vs. Mongoose Web Server.
Neil Butterworth, you out there?
Thanks.
Yes, lightweight servers are more efficient with memory and resources, as the term 'lightweight' would indicate. nginx is a popular one.
Apache's memory and resource usage depends a lot on what you're doing with it - which modules are loaded, what your PHP etc. scripts are doing. There's no single answer.
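Since it depends so heavily on configuration, the most reliable answer comes from measuring on your own box. A quick way to see resident memory per Apache worker (the process name varies by distro: `httpd` vs `apache2`):

```shell
# Resident set size (KB) per worker; adjust -C for your distro's process name.
ps -C apache2 -o rss=,cmd= | sort -n
# Rough total; note RSS counts shared pages in every worker, so this overstates.
ps -C apache2 -o rss= | awk '{sum += $1} END {print sum " KB total"}'
```

Run the same commands against the lightweight server you're comparing, under similar load, and you'll have numbers for your workload rather than someone else's.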
You have to take into account your specific task, and also the fact that almost every web server has some sort of specialization (a niche).
Apache is configurable and stable.
nginx is extremely fast, but on its own serves only static content (dynamic content has to go through FastCGI or proxying).
lighttpd is small and fast, and handles both static and dynamic content.
Mongoose is embeddable, small and easy to use.
There are many more web servers; I won't go through the whole list here. You need to decide which features you require for your task and make a choice accordingly.
Apache Httpd is great if you need lots of flexibility that is provided via various mods. If you're looking for straight-up file serving or proxying, then some lightweight options might be better. I manage the Maven Central repo that gets millions of hits a day and I have some experience with Nginx.