Changing Express Gateway config at runtime

I want to know if we can change a remote Express Gateway's config from some other service (which might or might not be behind the gateway). Is there an API exposed for admins that enables changing the config without having to change the Express Gateway Docker image?
Our use case: we have a tenant-based infrastructure and want to change the config at runtime without container restarts or image changes. The documentation says config changes will be hot reloaded.
If the above is not possible, can you suggest the best alternative for changing files in a remote Docker container from another service?
Thanks in advance.

Yes, the Express Gateway Admin API has endpoints to add, remove, list, or change the following entities:
Policies
Service Endpoints
API Endpoints
Pipelines
I have not used them, but the documentation suggests that they update the gateway.config.yaml configuration file.
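As a sketch of what such a call could look like: the Admin API listens on port 9876 by default, and the endpoint name, host, and payload shape below are hypothetical placeholders, so check them against the Admin API documentation before relying on them.

```javascript
// Build a PUT request for the (hypothetical) "tenant-api" endpoint
// definition. Kept as a pure function so the request can be inspected
// before sending it with fetch (global in Node 18+).
function buildEndpointUpdate(adminUrl, name, definition) {
  return {
    url: `${adminUrl}/api-endpoints/${encodeURIComponent(name)}`,
    options: {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(definition),
    },
  };
}

// Usage:
// const { url, options } = buildEndpointUpdate(
//   'http://localhost:9876', 'tenant-api',
//   { host: 'tenant1.example.com', paths: ['/api/v1/*'] });
// await fetch(url, options);
```

Remember to keep the Admin API off the public network, or at least behind authentication, since it can rewrite your whole gateway configuration.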

Related

ExpressJS detect connections from kubernetes services

I have a kubernetes cluster with a NodeJS micro-service. The micro-service has an endpoint that is accessible from inside the cluster as well as outside of the cluster through an nginx reverse proxy.
I would like to be able to detect inside this endpoint whether the incoming request comes from inside or outside the cluster.
Since I would like the deployment of this NodeJS app to be painless no matter which kubernetes platform I deploy it to, I don't want to use a static IP check. I have a few ideas but don't know which approach is best or how to implement it.
Detect the local address and check if it's in the same IP prefix (not sure if nginx will return its own IP, so that might not work)
Detect if it comes from nginx - not sure if there is a default header that can't be removed by the client, or anything else that can be detected from ExpressJS
Detect somehow if it comes from a service inside the cluster (maybe there is a header or something that would indicate it's part of the same cluster)
I would like to know which idea (maybe one not in this list) to pick and how to implement it in ExpressJS.
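One way to realize the second idea is to have nginx stamp every proxied request with a custom header that internal cluster traffic never carries. The header name X-From-Ingress below is a made-up convention, not an nginx default: you would add `proxy_set_header X-From-Ingress "1";` in the nginx config, and nginx's proxy_set_header overwrites any client-supplied value of that header so it can't be spoofed from outside.

```javascript
// Returns true when the request arrived through the nginx reverse
// proxy (i.e. from outside the cluster), based on the custom header
// the proxy is configured to set. Node lower-cases header names.
function isExternalRequest(req) {
  return req.headers['x-from-ingress'] === '1';
}

// In an Express route:
// app.get('/endpoint', (req, res) => {
//   res.json({ external: isExternalRequest(req) });
// });
```

The check only works if no other path into the pod lets clients set that header, so make sure the service port is not exposed directly outside the cluster.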

Deploy a dynamic Service Gateway for Lagom in production

I have developed a set of Lagom microservices. The development environment provides a default Service Gateway and Service Locator.
In a production environment I would like my services to:
register to a service registry
be available to a web app through a service locator that uses this registry
What should I use as Service Registry / Service Locator / Service Gateway ?
A simple NGINX would be a reasonable service gateway but it implies a very static configuration based on redirect rules (no actual registration).
I cannot find any code sample on this subject, and the documentation is very poor (it describes the development tools well but doesn't help when it comes to actual production).
The documentation in that area is vague on purpose, because the ecosystem is vast and changes fast.
You could, for example, use Consul or ZooKeeper to keep track of the instances that are running for each service and where they are running (where meaning IP:PORT). Then you would need to use a Consul-based or a ZooKeeper-based Service Locator instance. The preferred target deployment environment these days is Kubernetes (in any of its flavors), where service location is based on DNS-SRV lookups against the DNS server provided by k8s. The registration step happens automatically in a k8s setup for each pod, so you won't need to worry about that.
Then, the reverse proxy on the edge capable of directing each request to the appropriate process is a plain-old HTTP proxy that can check your service location (or cache the service location information). These days the recommendation is configuring an Ingress/Route (for k8s or OpenShift) edge proxy for each of your lagom services.
See the guide on Deploying a Lagom application to OpenShift for a thorough explanation.

Jelastic configure firewall

I'm using Jelastic for my application and I just installed Apache for it. The problem is that I need to set up a firewall for it, like iptables or similar; after all, it is a web application and needs security.
How can I do that?
The hosting provider told me that the only way is to use a VDS: I should configure a VDS myself, install Apache and FTP, and transfer my application there.
But I can't believe there is no way to protect Apache.
Thank you in advance.
The available options vary depending on your hosting provider. For example, the Jelastic platform gives hosting providers and private cloud customers the ability to define a set of default firewall rules for each newly provisioned node.
Additionally, since Jelastic 4.1, there is an option for the provider to define additional custom firewall rules for any specific container. At the moment this functionality is only accessible from the provider's side, so it means you need to work with your provider's support team.
If you don't want to do that, or your chosen Jelastic provider does not offer good support, you can either:
Use an unmanaged node type in your Jelastic environments, such as the Elastic VPS or Docker nodes. Here you have full root access to define whatever firewall rules you desire.
Use application server rules to restrict access by IP, e.g. inside your httpd.conf (which you already have full access to customise).
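For example, an httpd.conf rule like the following restricts a path to a single network. This is Apache 2.4 `Require` syntax, and the path and network range are placeholders for your own values:

```apache
# Only allow the internal 10.0.0.0/8 range to reach /admin;
# everyone else gets 403 Forbidden.
<Location "/admin">
    Require ip 10.0.0.0/8
</Location>
```

Note this happens at the application layer, so it protects Apache-served content but does not replace a packet-level firewall for other services on the node.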
In a recent release, Jelastic introduced the ability to manage inbound and outbound firewall rules at the container level right through the interface. Detailed instructions are here.

Spring Cloud Config: differentiate configs for service instances

Spring Cloud Config serves configuration based on app name, profile and label.
I'm wondering how to differentiate the configuration of two instances of the same service in the same environment. They should basically have the same configuration, but, for example, in a testing environment I would like to allow running them on the same host, so I need different ports.
How do you solve this? Using fake profiles (dev-1, dev-2)? Or is there a better way?
There is no facility for individual instance configuration as you noted. You could do a fake profile thing. But for port, why not just set server.port?
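For the port case specifically, a common Spring Boot pattern is to leave the port out of the centralized config and let each instance supply it at launch. A sketch (the environment variable name is a choice, not a Spring convention):

```
# application.properties sketch: each instance takes its port from the
# SERVER_PORT environment variable; with no variable set, server.port=0
# makes Spring Boot bind a random free port, so two instances can share
# a host without clashing.
server.port=${SERVER_PORT:0}
```

All other configuration still comes from Spring Cloud Config by app name, profile and label; only the per-instance detail is externalized.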

Hosting a continuously running console application

Azure VM, Cloud Service, or WebJob?
I have a configurable console application which runs continuously. Currently it runs on a VM and consumes a lot of memory (it is basically doing data mining).
The current requirement is to have multiple instances of this application with different sets of configuration which can be changed by specific users.
So where should I host this application such that the configuration can be modified through some front end that provides access management (like SharePoint) and the ability to stop/restart it (like a WCF service), without logging on to the VM?
I am open to any suggestions/ideas. Thanks
I don't think there's any solid answer to this question, since preferences vary, but for what it's worth: if it were up to me, I would deploy it to individual Azure VMs for each specific set of users. That way, if server resource usage goes up because of config changes a user group made, the impact is isolated to that group, and with Azure it will scale automatically to meet the resource demand. Then just build a little .NET web app to let users authenticate and change configuration settings.
You could expose an "admin" endpoint for your service (obviously you need authentication here!) that:
1. can return the current configuration
2. accept new configuration
3. restart the service (if needed). Stopping the service is harder, since that leaves the question of how to start it again.
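The three operations above could be sketched as plain functions, to be wired behind authenticated HTTP routes. The config keys and shape here are hypothetical, standing in for whatever the data-mining app actually needs:

```javascript
// In-memory state for the admin endpoint sketch.
const state = {
  config: { miningThreads: 4, batchSize: 1000 },
  restartRequested: false,
};

// 1. Return the current configuration (a copy, so callers
//    can't mutate it behind the endpoint's back).
function getConfig() {
  return { ...state.config };
}

// 2. Accept new configuration. Only keys that already exist are
//    accepted, so a typo'd setting name fails loudly.
function updateConfig(patch) {
  for (const key of Object.keys(patch)) {
    if (!(key in state.config)) throw new Error(`unknown setting: ${key}`);
  }
  Object.assign(state.config, patch);
  return getConfig();
}

// 3. Restart. A real implementation would re-exec the process or
//    signal a supervisor; here we only record the request.
function requestRestart() {
  state.restartRequested = true;
}
```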
Then you need to write your own application (or use a third-party one, like SharePoint or a CMS) that will handle your users and, under the hood, consume your "admin" endpoint.
Edit: the hosting part: if I understand you correctly, your app is just a console application today and you don't know how to host it? Well, there are many answers to that question. If you have an operations department, go talk to them; if you are on your own, play around and see what fits you and your environment best!
My tip: go for an HTTP/HTTPS protocol/interface, simply because there are many web hosts out there and you can easily find tools for that protocol. If you are on the .NET platform, check out Web API or OWIN self-hosting.
Azure now has Machine Learning for data-mining workloads; you should check whether it suits you.
Otherwise, you can use WebJobs:
They allow you to run multiple instances of your long-running job (WebJob scale-out).
App settings can be changed from the Azure Portal or via the Azure Management API.