.NET Core 3.1 gRPC server on Azure Container Instance only listening on port 80 - azure-container-instances

I'm testing gRPC with Visual Studio 2019 16.7.2 on Windows 10 64-bit, creating the gRPC server with .NET Core 3.1 from the 3.1.8 template's Greeting service sample.
I also created the client with .NET Core 3.1, using Google.Protobuf 3.13.0, Grpc.Net.Client 2.31.0 and Grpc.Tools 2.31.0.
It runs fine in a Linux container under Docker Desktop for Windows 10 64-bit.
I then pushed the container image to Azure Container Registry and created an Azure Container Instance from the same Linux image, adding ports 443 and 5001 as the screenshots show, but it only listens for calls on port 80.
Screenshots: container Properties, container Log
The Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
EXPOSE 5001
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["GrpcService/GrpcService.csproj", "GrpcService/"]
RUN dotnet restore "GrpcService/GrpcService.csproj"
COPY . .
WORKDIR "/src/GrpcService"
RUN dotnet build "GrpcService.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "GrpcService.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "GrpcService.dll"]
Screenshot: the Azure Container Instance status

As I have read, Visual Studio uses a developer certificate to enable SSL on localhost, which is why the ports are open under Docker Desktop. But when the Docker image is built for deployment, the ASP.NET base image is combined with the published gRPC server and, since there is no certificate, only port 80 is opened.
The solution is to use a reverse proxy sidecar: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-group-ssl
Based on nginx's gRPC support (https://www.nginx.com/blog/nginx-1-13-10-grpc/), a few tweaks are needed in the nginx.conf file:
server {
    listen [::]:5001 ssl http2;
    listen 5001 ssl http2;
    server_name localhost;
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 24h;
    keepalive_timeout 300;
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
    ssl_certificate /etc/nginx/ssl.crt;
    ssl_certificate_key /etc/nginx/ssl.key;
    location / {
        grpc_pass grpc://localhost:80;
    }
}
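To deploy the sidecar, the whole container group can be described in a deploy YAML, roughly following the layout of the Microsoft doc linked above. The sketch below is only illustrative, not a drop-in file: the gRPC image name, resource sizes and the base64-encoded secret values are placeholders. nginx terminates TLS on 5001 and forwards plain HTTP/2 to the gRPC container on port 80, with nginx.conf, ssl.crt and ssl.key supplied through a secret volume mounted at /etc/nginx.
# Illustrative container group sketch (values in <...> are placeholders)
api-version: 2019-12-01
location: westeurope
name: grpc-with-ssl
properties:
  containers:
  - name: nginx-ssl-proxy              # TLS-terminating sidecar
    properties:
      image: nginx
      ports:
      - port: 5001
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - name: nginx-config
        mountPath: /etc/nginx          # provides nginx.conf, ssl.crt, ssl.key
  - name: grpcservice                  # the .NET Core gRPC server, plain HTTP/2 on 80
    properties:
      image: <registry>.azurecr.io/grpcservice:latest
      ports:
      - port: 80
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  volumes:
  - name: nginx-config
    secret:
      nginx.conf: <base64 of nginx.conf>
      ssl.crt: <base64 of certificate>
      ssl.key: <base64 of private key>
  ipAddress:
    type: Public
    ports:
    - port: 5001
      protocol: TCP
  osType: Linux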
Also, adding a DNS name label in the deploy YAML is useful, since the IP address may change when the container image is updated; clients can then use the stable FQDN (of the form mydnslabel.<region>.azurecontainer.io) instead:
ipAddress:
  dnsNameLabel: mydnslabel
  ports:
  - port: 5001
    protocol: TCP
  type: Public

First, all the images should be tested on your local machine and work as you expect before you deploy them to Azure.
Second, the logs only show port 80 because, by default, port 80 is what the browser hits, and the log messages are whatever your application writes; it does not mean the application cannot listen on other ports. You need to check the listening ports inside the container instance.
Third, Container Instances do not support port mapping, so you need to expose the ports declared in the Dockerfile, and those ports must actually be listened on by your application.
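As an illustration of that last point (a sketch only, not taken from the question's deployment): if the application, or its nginx sidecar, really listens on 80 and 5001 inside the container, then both ports have to appear on the container and on the public IP address, because there is no host-to-container mapping step in between.
# Illustrative fragment of a container group spec: no port mapping exists,
# so every public port must be a port the container actually listens on.
containers:
- name: grpcservice
  properties:
    ports:
    - port: 80       # plain HTTP/2 endpoint
    - port: 5001     # TLS endpoint (only useful if something listens here)
ipAddress:
  type: Public
  ports:
  - port: 80
    protocol: TCP
  - port: 5001
    protocol: TCP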

Related

Install SSL on an Nginx server in an Azure VM

I have issued an SSL certificate, and I tried to SSH in and edit the config file to install the issued certificate, but it's showing an error. Can you please tell me how I can install my SSL certificate through portal.azure.com? I have an NGINX server.
As far as I know, you can't install an SSL certificate on an Azure VM via the portal, but you can use cloud-init to install packages and write files, or to configure users and security.
When you create a VM, certificates and keys are stored in the protected /var/lib/waagent/ directory. To automate adding the certificate to the VM and configuring the web server, use cloud-init. In this example, you install and configure the NGINX web server. You can use the same process to install and configure Apache.
Create a file named cloud-init-web-server.txt and paste the following configuration:
#cloud-config
package_upgrade: true
packages:
  - nginx
write_files:
  - owner: www-data:www-data
  - path: /etc/nginx/sites-available/default
    content: |
      server {
          listen 443 ssl;
          ssl_certificate /etc/nginx/ssl/mycert.cert;
          ssl_certificate_key /etc/nginx/ssl/mycert.prv;
      }
runcmd:
  - secretsname=$(find /var/lib/waagent/ -name "*.prv" | cut -c -57)
  - mkdir /etc/nginx/ssl
  - cp $secretsname.crt /etc/nginx/ssl/mycert.cert
  - cp $secretsname.prv /etc/nginx/ssl/mycert.prv
  - service nginx restart
Ref: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-secure-web-server

Phoenix in Production on EC2 not rendering in HTTPS with AWS Load Balancer

I have followed this tutorial to set up my phoenix app on EC2, and later I added the load balancer for SSL.
I used ACM (AWS Certificate Manager) to get a public certificate and applied it to the Application Load Balancer (ALB).
I'm still a bit fuzzy on the port mapping, so I suppose it might be the cause.
# config/prod.exs
host = System.get_env("HOST") || "example.com"

config :app_web, AppWeb.Endpoint,
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  load_from_system_env: true,
  http: [port: 80],
  url: [host: host, port: 80],
  url: [host: host, port: 443, scheme: "https"],
  server: true,
  secret_key_base: System.get_env("SECRET_KEY_BASE")
# docker-compose.yml
version: '2'
services:
  kroo:
    image: [image url]
    environment:
      - HOST=0.0.0.0
    ports:
      - '443:443'
      - '80:80'
$ docker ps
PORTS
0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
$ docker logs
01:56:30.177 [info] Running AppWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:80 (http)
01:56:30.177 [info] Access AppWeb.Endpoint at https://example.com
Running Release tasks
[]
01:56:31.316 [info] Already up
01:56:33.085 [info] Plug.SSL is redirecting GET / to https://example.com with status 301
When I don't include force_ssl: [rewrite_on: [:x_forwarded_proto]], the page displays fine over http; when I do include force_ssl, the redirect to https works, but then I get an "unable to connect" error.
My confusion is that, since the load balancer is taking care of the SSL, I don't have the key and the certificate for SSL, which is why I don't have https: [] option in prod.exs.
Could someone point out what I'm doing wrong here?
Thanks
UPDATE: I finally got it working; below are my working configs in case anyone finds them helpful.
# config/prod.exs
# https config is not needed since ALB is handling the SSL
# Phoenix app serving in http is fine
config :app_web, AppWeb.Endpoint,
  load_from_system_env: true,
  http: [port: 8080],
  url: [host: "example.com"],
  server: true,
  secret_key_base: System.get_env("SECRET_KEY_BASE")
# docker-compose.yml
# map phoenix port 8080 to docker 8080
ports:
  - '8080:8080'
Since I'm not providing SSL certificates but still want to force SSL, I followed @jamesvl's suggestion in the answer below: use the load balancer to redirect http traffic to https.
If you need help setting up SSL on the ALB, I followed this guide.
If your app still isn't showing up under your domain, make sure you have an A record with an alias pointing to the DNS name of your load balancer.
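If the DNS zone is hosted in Route 53 and managed with CloudFormation, that alias record can be sketched roughly as below; the resource names are illustrative, not taken from the question.
# Illustrative sketch: an A record aliased to the load balancer's DNS name.
AppAliasRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: example.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt MyLoadBalancer.DNSName                 # MyLoadBalancer is a placeholder ALB resource
      HostedZoneId: !GetAtt MyLoadBalancer.CanonicalHostedZoneID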
I would suggest setting the listen port of your docker container to something other than 80, and not listening on 443 at all.
Rationale
I think the issue may lie in the fact that your http: configuration is listening on port 80.
With force_ssl: enabled, you're indicating that you want http connections to go to port 443, but when something arrives on 443 (via the load balancer), you send it to your (listening) port 80... which redirects it back to 443?
Fix
Let Phoenix listen on an arbitrary port (say... 4010) for http only connections. (Since the load balancer does your SSL termination, all your communication with the load balancer will be over http.) This involves changing your Docker container to forward connections to that port as well - you don't want to listen on 80 or 443 at all in your container.
Your url: configuration would then be looking only at headers, redirecting http requests to https as needed.
By the way, Amazon's ALB can also do the 80 -> 443 redirection for you if you set up the rules; this saves Phoenix from even needing a url: config for port 80 at all.
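For reference, that ALB-level redirect is just an HTTP listener whose only default action is a redirect to HTTPS. A CloudFormation-style sketch, with placeholder resource names:
# Illustrative sketch: the ALB answers on port 80 and 301-redirects to HTTPS,
# so the backend never needs to serve or redirect plain-http traffic itself.
HttpRedirectListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyLoadBalancer        # placeholder ALB resource
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect
        RedirectConfig:
          Protocol: HTTPS
          Port: "443"
          StatusCode: HTTP_301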

Running an apache container on a port > 1024

I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, apache needs to be configured to bind to a port > 1024 instead of the default 80. As far as I can tell, this means changing Listen 80 in httpd.conf to Listen {some port > 1024}.
When I run the docker image I've built normally (i.e. on the default port 80), I have the following port settings:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 80
service
spec.ports[0].targetPort: 80
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings, the service is accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000) and add the securityContext section to the deployment, I change the various manifests accordingly:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 8000
service
spec.ports[0].targetPort: 8000
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if pod is Running
kubectl get pods
kubectl logs pod_name
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf.
Check with the service name
kubectl exec -it <ingress pod_name> -- bash
$ curl http://svc:8080
You can check ingress logs too.
In order to get this container to run properly as non-root apache needs to be configured to bind to a port > 1024, as opposed to the default 80
You got it, that's the hard requirement for running the apache container as non-root, so the change needs to be made at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. The only thing left in your case is to build a custom httpd image that listens on a port > 1024. The same approach applies to NGINX Docker containers.
One key piece of information about the 'containerPort' field in the Pod spec, which you are trying to adjust manually, is not so apparent: it is there primarily for informational purposes and does not cause a port to be opened at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
I hope this will help you to move on

How to specify ssl connection with Nginx stream?

I'm trying to define a reverse proxy with nginx.
I have a server which listens on port 943 (TCP with SSL). I use the tekn0ir/nginx-stream docker image. I have the following definitions in the myotherservice.conf file:
upstream backend {
    hash $remote_addr consistent;
    server myserverip:943;
}

server {
    listen localhost:943;
    proxy_connect_timeout 300s;
    proxy_timeout 300s;
    proxy_pass backend;
}
When I try to connect to localhost:943, the connection is refused. I suspect it's related to my SSL definitions. How should I define it?
When working with Docker, you must bind the port on all container interfaces in order to be able to expose it:
...
server {
listen *:943;
...
Doc

AWS Beanstalk and Docker ports = what manner of tomfoolery is this?

So I have a docker application that runs on port 9000, and I'd like it to be accessed only via https rather than http, but I can't make any sense of how Amazon handles ports. In short, I'd like to expose only port 443 and not 80 (at both the load balancer layer and the instance layer), but I haven't been able to do this.
So my Dockerfile has:
EXPOSE 9000
and my Dockerrun.aws.json has:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
Yet I cannot seem to access anything via port 9000, only via port 80.
When I ssh into the instance where the docker container is running and look at the ports with netstat, I see ports 80 and 22 and some UDP ports, but no port 9000. How on earth does Amazon manage this? More importantly, how does a user get the expected behaviour?
Attempting this with SSL and https yields the same result. Certificates are set and mapped to port 443, and I have even added a rule in the .ebextensions config file to open port 443 on the instance, and still no SSL:
sslSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupName: {Ref : AWSEBSecurityGroup}
    IpProtocol: tcp
    ToPort: 443
    FromPort: 443
    CidrIp: 0.0.0.0/0
The only way I can get SSL to work is to have the load balancer use port 443 (SSL) and forward to instance port 80 (non-https), but this is ridiculous. How on earth do I open the SSL port on the instance and get docker to use the given port? Has anyone done this successfully?
I'd appreciate any help on this - I've combed through the docs and got this far, but it just plain puzzles me. Again, I'd like to expose only port 443 and not 80 (at both the load balancer and instance layers), but haven't been able to do this.
Have a great day
Cheers
It's a known problem; from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html:
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
So, if you need multiple ports, AWS Elastic Beanstalk is probably not the best choice, at least with the Docker option.
Regarding SSL - we solved it by using a dedicated nginx instance and proxy_pass'ing to the Elastic Beanstalk environment URL.