How do you run a Docker containerized application behind an Application Gateway in Azure?

I've been searching the help forums, and the only documentation I've seen on how to do this says to create the gateway and then spin up VMs to run your application. We are using Docker containers and I'm not sure how to proceed. Additionally, is it possible to block off all access to applications behind a gateway, so that they are only reachable through the gateway? Thanks a lot.

Related

Is it Possible to Invoke Cloud Run with Gmail-Auth?

I want to ask a conceptual question and get advice about possible system designs.
The plan is basically to authenticate specific Gmail users so they can use my serverless backend application. I'm thinking about either forwarding users directly to my VPC, or authenticating them on my hosting provider's server and then forwarding them to the VPC (or directly to the Cloud Run service?).
I'd be really glad if someone experienced could guide me through the concepts and suggest design ideas.
As commented by @John Hanley, your question mixes concepts that do not exist.
To restrict invocation of your Cloud Run service to specific authenticated users, work through the following possible system designs:
1) Understand the IAM roles that are associated with Cloud Run, and list the permissions contained in each role.
2) Secure and configure Cloud Run to limit access to the service with Identity-Aware Proxy (IAP).
3) Create a Serverless VPC Access connector, and learn how to use IAP for TCP forwarding within a VPC Service Controls perimeter.
4) Implement, step by step, IAP-secured portal access without a Virtual Private Network (VPN). IAP simplifies implementing a zero-trust access model, takes less time to roll out than a VPN for remote workers, and gives you a single point of control for managing access to your apps, both on-premises and in cloud environments.
The solution to what I had in mind could be accomplished with Identity-Aware Proxy.
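For example, if you only want specific Google accounts to be able to invoke the service, one hedged sketch is an IAM binding on the service itself (the service name, region, and email below are placeholders):

    # Grant a single Google account permission to invoke the service
    gcloud run services add-iam-policy-binding my-service \
      --region=us-central1 \
      --member="user:alice@gmail.com" \
      --role="roles/run.invoker"

Callers then have to present an identity token for that account with each request; for browser users, fronting the service with IAP as described above is usually the more practical route.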

Apache web server and microservices with Docker

I have a few Spring Boot microservices running on Docker, and an Apache web server (also running on Docker) for all the static content. The microservices are consumed by the web browser. The problem is, I don't know how I should reference the microservices from HTML or JavaScript:
- the microservice runs on a different port
- it might also run on a different host
- the browser complains about the links
Googling the problem points me toward Netflix Eureka or Apache Camel, but I'm not sure these are the right solutions.
Let's first think about deployment. You mention that the Docker containers might run on different machines. I recommend using a container orchestrator like Docker Swarm or Kubernetes to manage the cluster and the communication between microservices (typically via DNS).
Generally, you want to hide all your microservices behind one API path. The outside world does not need to know that your server application consists of multiple microservices. You can use a simple reverse proxy for this. I personally like Traefik because you can configure the routing paths in the Docker ecosystem via labels.
You say you consume the microservice APIs with a browser, so is it a web client application? If so, I recommend serving it as a Docker container as well and embedding it into the routing by using relative paths, e.g. the UI is served at / and the microservices at /api/{service}/{path}. Then the UI application can use relative paths, because both are served by the same reverse proxy and thus under the same URL (=> no CORS issues). Additionally, you can deploy to any IP; the routing stays the same and does not have to be adjusted.
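To make the label-based routing concrete, here is a minimal, hedged docker-compose sketch (the image names and ports are placeholders, and Traefik v2 label syntax is assumed):

    version: "3.8"
    services:
      traefik:
        image: traefik:v2.10
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      ui:
        # placeholder: your static/Apache container
        image: my-ui-image
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.ui.rule=PathPrefix(`/`)"
          - "traefik.http.services.ui.loadbalancer.server.port=80"
      users-service:
        # placeholder: one of the Spring Boot microservices
        image: my-users-service
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.users.rule=PathPrefix(`/api/users`)"
          - "traefik.http.services.users.loadbalancer.server.port=8080"

The UI can then call /api/users/... with relative URLs, because both containers are reached through the same Traefik entry point (Traefik gives the longer path rule higher priority by default).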

Will there be support to establish a private connection to Azure AKS

My client is currently evaluating AKS, which seems really promising. Our current platform is based on Azure VMs we provision ourselves. We would like to set up private communication between our existing platform and the managed AKS cluster, but so far that does not seem to be supported.
Some example use cases for us are:
- Proxying incoming HTTP traffic via our main entry point, a Varnish server, to the new AKS environment, so we don't have to change URLs
- Accessing non-publicly exposed APIs from the AKS environment
Right now the AKS cluster is in a different subscription and resource group than the other parts of our platform. The main reason we can't connect, though, seems to be that it's not possible to specify which private IP range should be used when creating an AKS cluster.
Is there support planned for this or is there a reliable workaround?
Thanks for the inquiry. There's a workaround for the stated case: using ACS Engine. "ACS Engine, for Azure Container Service Engine, is a CLI tool that helps to generate Azure Resource Manager templates to deploy Docker enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes."
Using this solution will allow you to integrate an Azure Container Service cluster into an existing virtual network. More details and a step-by-step guide can be found here: https://blogs.msdn.microsoft.com/jcorioland/2017/01/10/how-to-integrate-a-new-azure-container-service-cluster-into-an-existing-virtual-network-using-acs-engine/
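As a rough sketch of what the blog post describes, the acs-engine cluster definition lets you point the master and agent pools at an existing subnet. The subscription, resource group, network names, and IPs below are all placeholders:

    {
      "apiVersion": "vlabs",
      "properties": {
        "orchestratorProfile": { "orchestratorType": "Kubernetes" },
        "masterProfile": {
          "count": 1,
          "dnsPrefix": "mycluster",
          "vmSize": "Standard_D2_v2",
          "vnetSubnetId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>",
          "firstConsecutiveStaticIP": "10.1.0.5"
        },
        "agentPoolProfiles": [
          {
            "name": "agentpool1",
            "count": 2,
            "vmSize": "Standard_D2_v2",
            "vnetSubnetId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
          }
        ]
      }
    }

The generated ARM templates then deploy the nodes into that subnet, so the cluster can reach your existing VMs over private IPs.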

How do you make an Express.js API accessible from the Internet?

I have an Express API server running on localhost on my own machine. How do I make it accessible from the Internet and not just my own machine?
Preferably, it would be deployed on AWS.
In AWS there are multiple ways of hosting your Express application, trading off flexibility against convenience.
AWS Elastic Beanstalk:
This gives you the most convenience: an autoscaling and load-balancing environment with version management and rollback support, all from one place in the AWS web console. It also provides IDE support for deployments and CLI commands for CI/CD.
AWS ECS:
If you plan to dockerize your application (which I highly recommend; see the sketch below), you can use AWS ECS to manage your Docker cluster, with container-level autoscaling and load-balancing support for more convenience. This also provides a CLI for CI/CD.
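If you go the ECS route, a minimal, hedged Dockerfile sketch for an Express app might look like this (the file names and port are assumptions):

    # Build a small production image for the Express app
    FROM node:18-alpine
    WORKDIR /app
    # Install only production dependencies
    COPY package*.json ./
    RUN npm ci --omit=dev
    # Copy the application source
    COPY . .
    EXPOSE 3000
    CMD ["node", "app.js"]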
AWS EC2:
If you need more flexibility, you can get a virtual server in AWS and configure autoscaling and load balancing manually. For a simple web app I'd rank this last, since you have to do most of the work yourself.
All these services will give you a publicly accessible URL if you configure them properly to grant access from outside. You need to set up networking and security groups correctly, exposing either the load balancer or the instance IP/DNS name to the outside.
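Whichever option you pick, the common pitfall is an app bound only to localhost or a hardcoded port. A minimal sketch (process.env.PORT is what Elastic Beanstalk's Node platform injects; other platforms differ):

    // app.js -- bind to the platform-provided port on all interfaces
    const express = require('express');
    const app = express();

    app.get('/health', (req, res) => res.send('ok'));

    // Elastic Beanstalk (and most PaaS) pass the port via process.env.PORT;
    // listening on 0.0.0.0 makes the server reachable from outside the VM/container.
    const port = process.env.PORT || 3000;
    app.listen(port, '0.0.0.0', () => console.log(`listening on ${port}`));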

Remote Docker Host Authentication

Hi, I'm currently working on a side project in which I'll have a central server that needs to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept only connections from the private networking interface. The problem is that this interface is accessible by all other servers in the same datacenter.
My second thought was to allow only requests from the central server using the DOCKER_HOST config. The problem is that, if I understand correctly, once the private IP of the central server is known, that IP can be spoofed.
My third thought was to enable TLS (https://docs.docker.com/articles/https/), but I've never dealt with this before, the tutorial is unclear to me, and I lack knowledge of the terminology it uses heavily.
So basically the problem is that I have a central client and multiple remote Docker hosts. What is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication, by running nginx as a proxy in front of the Docker daemon.
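For anyone landing here, a hedged sketch of that nginx setup (the port, paths, and htpasswd file are my own choices, not necessarily the asker's):

    # Proxy the local Docker socket behind HTTP basic auth
    upstream docker {
        server unix:/var/run/docker.sock;
    }

    server {
        listen 2376;                                  # arbitrary external port
        auth_basic           "Docker API";
        auth_basic_user_file /etc/nginx/.htpasswd;    # created with htpasswd

        location / {
            proxy_pass http://docker;
        }
    }

Note that basic auth over plain HTTP still sends credentials in the clear, so this is only reasonable combined with TLS or a trusted network.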
My understanding is that you are trying to build a Docker cluster that can manage all nodes from one single central server.
This sounds very much like Docker's Swarm project. From their docs, they give a simple idea of how this works:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm (see the sketch below)
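For the third point, the hedged shape of a TLS-enabled daemon and client looks roughly like this (the certificate paths and hostname are examples):

    # On each remote host: start the daemon with TLS verification enabled
    dockerd --tlsverify \
      --tlscacert=/etc/docker/ca.pem \
      --tlscert=/etc/docker/server-cert.pem \
      --tlskey=/etc/docker/server-key.pem \
      -H tcp://0.0.0.0:2376

    # From the central server: connect with a client certificate signed by the same CA
    docker --tlsverify \
      --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H tcp://node1.example.com:2376 info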
Sorry, this should be posted as a comment, but I do not have enough rep to do that.