What happens if we disable the lease check in the Jackrabbit Oak implementation - jcr

We are using Jackrabbit Oak in our application. We are getting the error below while processing requests.
*"This oak instance failed to update the lease in time and can therefore no longer access this DocumentNodeStore."*
Is there any impact on the application if we disable the lease check in the Jackrabbit Oak implementation?
What is the significance of the lease check?

Related

Roll back Gcloud Redis upgrade

I'd like to upgrade the Redis Memorystore instance in our GCloud project because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect anything unexpected during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.
To upgrade the Redis Memorystore instance, follow the best practices mentioned in the public documentation:
We recommend exporting your instance data before running a version upgrade operation.
Note that upgrading an instance is irreversible. You cannot downgrade the Redis version of a Memorystore for Redis instance.
For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
The documentation also recommends enabling RDB snapshots.
Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.
However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.
In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
For more information, you can refer to the documentation related to version upgrade behavior.
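If it helps, here is a minimal sketch of that export-then-upgrade sequence using the Node.js @google-cloud/redis client; the project, region, instance and bucket names are placeholders, and you should double-check the request shapes against the client's reference docs before relying on this:

import {CloudRedisClient} from '@google-cloud/redis';

const client = new CloudRedisClient();
// Placeholder resource name: projects/PROJECT/locations/REGION/instances/INSTANCE
const name = 'projects/my-project/locations/us-central1/instances/my-instance';

async function exportThenUpgrade() {
  // 1. Export an RDB snapshot to Cloud Storage first - the upgrade itself cannot be rolled back.
  const [exportOp] = await client.exportInstance({
    name,
    outputConfig: {gcsDestination: {uri: 'gs://my-bucket/pre-upgrade-backup.rdb'}},
  });
  await exportOp.promise();

  // 2. Run the version upgrade (a long-running operation).
  const [upgradeOp] = await client.upgradeInstance({name, redisVersion: 'REDIS_6_X'});
  await upgradeOp.promise();
}

exportThenUpgrade().catch(console.error);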

What does loopback health really check?

I added the @loopback/health component to my LoopBack 4 server, but I don't understand what it checks to decide that my server is up. I searched https://loopback.io/doc/en/lb4/Health.html#add-custom-live-and-ready-checks and Google, but I can't find any information about how it works.
Thanks for any insight!
Without configuring any additional custom checks, @loopback/health only configures a Startup Check that keeps track of when the REST server (which is a LifeCycleObserver) is started and shut down. This is useful for infrastructure with existing tooling that consumes health endpoints (e.g. Kubernetes, Cloud Foundry), or if the LoopBack 4 project does more beyond a REST server.
It is still an experimental package, and there are intentions to expand its scope to encompass other LifeCycleObservers of the LoopBack 4 app, such as DataSources.
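For reference, a minimal sketch of registering the component in a LoopBack 4 application (the application class name is illustrative); once registered it exposes the default /health, /live and /ready endpoints:

import {RestApplication} from '@loopback/rest';
import {HealthComponent} from '@loopback/health';

export class MyApplication extends RestApplication {
  constructor() {
    super();
    // With no custom checks bound, only the start/stop state of the REST server is reported.
    this.component(HealthComponent);
  }
}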

Failed Deployment in App Engine Google Cloud

I am deploying my Node.js application to Google Cloud App Engine, but it gives this error:
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time.
This request may thus take longer and use more CPU than a typical request for your application. -- when making a request.
I have also seen some Stack Overflow answers, but they didn't work for me.
My app.yaml has this config:
runtime: nodejs10
Can anyone help me out?
You could add the following to your app.yaml:
inbound_services:
- warmup
And then implement a handler that will catch all warmup requests, so that your application doesn't get the full load. The full explanation is given here. Another detailed post about this topic can be found here.
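A minimal warmup handler sketch, assuming an Express-based Node.js app (App Engine calls GET /_ah/warmup on new instances once inbound_services: warmup is enabled; everything else here is illustrative):

import express from 'express';

const app = express();

app.get('/_ah/warmup', (_req, res) => {
  // Do the expensive initialization here (open DB connections, prime caches, ...)
  // so the first real user request doesn't pay that cost.
  res.status(200).send('ok');
});

app.listen(Number(process.env.PORT) || 8080);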
Additionally, you can add automatic scaling options. You can play a bit with them to find the optimum for your application; the latency-related variables are especially important. Note that they can be set in a standard GAE environment.
automatic_scaling:
min_idle_instances: automatic
max_idle_instances: automatic
min_pending_latency: automatic
max_pending_latency: automatic
More scaling options can be found here.
The "request caused a new process to be started" notification usually occurred when there is no warm up request present in your application.
Can you try to implement a health check handler that only returns a ready status when the application is warmed up. This will allow your service to not receive traffic until it is ready.
Warning: Legacy health checks using the /_ah/health path are now
deprecated, and you should migrate to use split health checks.
Here you can find Split health checks for Nodejs
Liveness checks
Liveness checks confirm that the VM and the Docker container are
running. Instances that are deemed unhealthy are restarted.
path: "/liveness_check"
check_interval_sec: 30
timeout_sec: 4
failure_threshold: 2
success_threshold: 2
Readiness checks
Readiness checks confirm that an instance can accept incoming
requests. Instances that don't pass the readiness check are not added
to the pool of available instances.
path: "/readiness_check"
check_interval_sec: 5
timeout_sec: 4
failure_threshold: 2
success_threshold: 2
app_start_timeout_sec: 300
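As a hedged sketch of handlers for the /liveness_check and /readiness_check paths above (Express-based; the warmedUp flag and the init work are illustrative):

import express from 'express';

const app = express();
let warmedUp = false;

// Liveness: the process is running at all; failing this gets the instance restarted.
app.get('/liveness_check', (_req, res) => res.status(200).send('ok'));

// Readiness: only report ready once initialization has finished, so the
// instance is not added to the serving pool before it can handle traffic.
app.get('/readiness_check', (_req, res) => {
  if (warmedUp) {
    res.status(200).send('ready');
  } else {
    res.status(503).send('warming up');
  }
});

async function init() {
  // Expensive startup work (DB connections, caches, ...) goes here.
  warmedUp = true;
}

app.listen(Number(process.env.PORT) || 8080, () => void init());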
Edit
For App Engine Standard, which doesn't afford you that flexibility, hardware and software failures that cause early termination or frequent restarts can occur without prior warning (see the Instance Uptime link below).
App Engine attempts to keep manual and basic scaling instances running indefinitely. However, at this time there is no guaranteed uptime for manual and basic scaling instances. Hardware and software failures that cause early termination or frequent restarts can occur without prior warning and can take considerable time to resolve; thus, you should construct your application in a way that tolerates these failures.
Here are some good strategies for avoiding downtime due to instance restarts:
Reduce the amount of time it takes for your instances to restart or for new ones to start.
For long-running computations, periodically create checkpoints so that you can resume from that state.
Your app should be "stateless" so that nothing is stored on the instance.
Use queues for performing asynchronous task execution.
If you configure your instances for manual scaling: use load balancing across multiple instances, configure more instances than required to handle normal traffic, and write fall-back logic that uses cached results when a manual scaling instance is unavailable.
Instance Uptime

How to use redis with kong api gateway

We are using Kong API Gateway as a single gateway for all APIs. We are facing a latency issue with a few of our APIs (1500-2000 ms). When we checked, the latency was being introduced by the "rate limiting" plugin; when we disable the plugin, latency improves and the response time is the same as what we get directly from the IP (close to 300 ms).
I'm trying to set up a Redis node to cache database queries, but I'm not sure how to configure Kong to read from Redis itself, or how to cache the database queries on the Redis node.
We are using PostgreSQL for Kong.
I think you may be trying to do a couple of different things at once.
First, rate-limiting: what is the value for your config.policy parameter? The Kong documentation has three values: "local (counters will be stored locally in-memory on the node), cluster (counters are stored in the datastore and shared across the nodes) and redis (counters are stored on a Redis server and will be shared across the nodes)."
If you are seeing high latency, and your config.policy is set to cluster or redis, it might be due to latency between Kong and postgres/redis (depending on what policy you're using). If you are using rate-limiting just to prevent abuse, using the 'local' policy is faster. (There's more about this at the Kong documentation.)
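For example, switching an existing rate-limiting plugin to the local policy can be done through Kong's Admin API; a hedged sketch follows (the Admin API address and the plugin ID are placeholders you would look up first, e.g. via GET /plugins):

// Assumes Node 18+ (global fetch) and the Kong Admin API listening on localhost:8001.
const pluginId = 'REPLACE-WITH-YOUR-RATE-LIMITING-PLUGIN-ID';

const res = await fetch(`http://localhost:8001/plugins/${pluginId}`, {
  method: 'PATCH',
  headers: {'Content-Type': 'application/json'},
  // Keep rate-limit counters in memory on each node instead of in Postgres/Redis.
  body: JSON.stringify({config: {policy: 'local'}}),
});
console.log(res.status); // 200 on success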
The other question is about caching: Kong Enterprise has a built-in caching plugin, but for Kong Community, since it's built on top of Nginx, you can do caching with Nginx. This link might help you.
There is a community custom plugin that enables caching with Redis without the need for Kong Enterprise: https://github.com/globocom/kong-plugin-proxy-cache
Maybe you could combine that with rate limiting to achieve the desired latency, or use this plugin as inspiration.

Is the RavenDB subscription storage a central point of failure for NServiceBus?

I am evaluating using NServiceBus as a SOA mechanism in our product. I'm looking into using the publish/subscribe pattern and my understanding is that the subscription service will store all subscriptions.
Does that mean that if my RavenDB server goes down, then my publishers lose the ability to send to subscribers? Or is there a way for the publishers to cache the subscribers they have, so that if RavenDB were to go down they would still deliver to their known subscribers?
You can run the RavenDB server as a replicated node, to avoid this being a single point of failure.
The general pattern is for an endpoint to have a master node that acts as worker and distributor, and then the master node uses a Raven installation on that same server to store its subscriptions and saga storage.
So, it is a point of failure for that one endpoint, but other endpoints in the distributed system will use the Raven installs on their own servers. Thus, the system is kept distributed and the entire system does not have a single point of failure. RavenDB enables this because it is fairly easy to install it on any server.
Contrast this to SQL Server, which is frequently centralized, scaled up to the max, and even clustered in order to provide high availability. (Read: expensive!)
You can also run RavenDB in a Windows failover cluster where the nodes use a shared SAN for the RavenDB data files. If the active node dies, another takes over. Since the data is stored on the SAN, you shouldn't even notice it except the time it takes to start up the RavenDB windows service on the new node. Check out http://ravendb.net/docs/server/administration/fmc_configuration
This is also the recommended setup for High Availability when running with Distributors. http://docs.particular.net/nservicebus/scalability-and-ha/distributor/