asp.net core application hosted in docker swarm: client IP - asp.net-core

I want to log the IP address of my clients' requests, and I have an ASP.NET Core docker service running on Linux.
Right now we always see the docker network's IP address instead!
How can I get my clients' real IP address?

You can get the clients' real IP address if you change the port publishing mode to host. Below is an example of how to do this:
traefikedge:
  image: traefik:1.4.3-alpine
  ports:
    - target: 80
      published: 80 # for redirect to HTTPS
      protocol: tcp
      mode: host # to bypass ingress mesh, to preserve client IP
    - target: 443
      published: 443
      protocol: tcp
      mode: host # to bypass ingress mesh, to preserve client IP
There is an open issue here about this.
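As a sketch of how the same host-mode publishing could be applied to the ASP.NET Core service itself (the service and image names below are placeholders, and it assumes the app listens on port 80 inside the container), so that HttpContext.Connection.RemoteIpAddress sees the real client address:

webapp:
  image: myregistry/aspnetcore-app:latest # placeholder image
  ports:
    - target: 80 # port the app listens on inside the container
      published: 80
      protocol: tcp
      mode: host # bypass the ingress mesh to preserve the client IP
  deploy:
    mode: global # host-mode published ports can bind only once per node

With mode: host the port is bound directly on each node, so running more than one task per node would conflict; deploying the service in global mode is one way to avoid that.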

Related

Traefik entrypoint redirect to scheme and port

I'm running traefik in docker-compose with network_mode: host to get an accurate remote_ip. Ports 80 and 443 on my docker host are occupied, so traefik uses 5080 and 5443 for the web and websecure entry points. I've forwarded 5080/5443 to my router's 80/443, so my.domain.me routes to traefik. https://my.domain.me works correctly, but http://my.domain.me redirects to port 5443. How can I configure traefik to redirect to port 443?
version: '3.3'
services:
  traefik:
    image: traefik:v2.4
    # use host network for accurate remote_ip
    network_mode: host
    command: # CLI arguments
      - --providers.docker=true
      # ports 80 and 443 are used by another process.
      - --entryPoints.web.address=:5080
      - --entryPoints.websecure.address=:5443
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
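If the goal is for the public redirect to land on port 443 (which the router then forwards to 5443), one option, sketched here rather than verified against this exact setup, is to redirect to an explicit port instead of the websecure entry point name:

      - --entrypoints.web.http.redirections.entrypoint.to=:443
      - --entrypoints.web.http.redirections.entrypoint.scheme=https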

Phoenix in Production on EC2 not rendering in HTTPS with AWS Load Balancer

I have followed this tutorial to set up my Phoenix app on EC2, and later I added a load balancer for SSL.
I used ACM (AWS Certificate Manager) to get a public certificate and applied it to the Application Load Balancer (ALB).
I'm still a bit fuzzy on the port mapping, so I suspect that might be the cause.
# config/prod.exs
host = System.get_env("HOST") || "example.com"

config :app_web, AppWeb.Endpoint,
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  load_from_system_env: true,
  http: [port: 80],
  url: [host: host, port: 80],
  url: [host: host, port: 443, scheme: "https"],
  server: true,
  secret_key_base: System.get_env("SECRET_KEY_BASE")
# docker-compose.yml
version: '2'
services:
  kroo:
    image: [image url]
    environment:
      - HOST=0.0.0.0
    ports:
      - '443:443'
      - '80:80'
$ docker ps
PORTS
0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
$ docker logs
01:56:30.177 [info] Running AppWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:80 (http)
01:56:30.177 [info] Access AppWeb.Endpoint at https://example.com
Running Release tasks
[]
01:56:31.316 [info] Already up
01:56:33.085 [info] Plug.SSL is redirecting GET / to https://example.com with status 301
When I don't include force_ssl: [rewrite_on: [:x_forwarded_proto]], the page is displayed fine over http. When I do include force_ssl, the redirect to https works, but then I get an "unable to connect" error.
My confusion is that, since the load balancer is taking care of SSL, I don't have the key and certificate, which is why there is no https: [] option in prod.exs.
Could someone point out what I'm doing wrong here?
Thanks
UPDATE: I finally got it working; below are my working configs in case anyone finds them helpful.
# config/prod.exs
# https config is not needed since ALB is handling the SSL
# Phoenix app serving in http is fine
config :app_web, AppWeb.Endpoint,
  load_from_system_env: true,
  http: [port: 8080],
  url: [host: "example.com"],
  server: true,
  secret_key_base: System.get_env("SECRET_KEY_BASE")

# docker-compose.yml
# map phoenix port 8080 to docker 8080
ports:
  - '8080:8080'
Since I'm not providing SSL certificates but still want to force SSL, do as @jamesvl suggested in the answer below: use your load balancer to redirect http traffic to https.
If you need help setting up SSL on the ALB, I followed this guide.
If your app is still not showing up under your domain, make sure you have an A record with an alias pointing to the DNS name of your load balancer.
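For the A record with the alias, a hedged CloudFormation sketch (MyLoadBalancer and example.com are placeholders for your ALB resource and hosted zone) might look like:

AppAliasRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com. # note the trailing dot
    Name: example.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt MyLoadBalancer.DNSName
      HostedZoneId: !GetAtt MyLoadBalancer.CanonicalHostedZoneID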
I would suggest setting the listen port of your docker container to something other than 80, and don't listen on 443 at all.
Rationale
I think the issue may lie in the fact that your http: configuration is listening on port 80.
With force_ssl: enabled, you're indicating that you want http connections to go to port 443, but when something arrives on 443 (via the load balancer), you send it to your (listening) port 80... which redirects it back to 443?
Fix
Let Phoenix listen on an arbitrary port (say... 4010) for http only connections. (Since the load balancer does your SSL termination, all your communication with the load balancer will be over http.) This involves changing your Docker container to forward connections to that port as well - you don't want to listen on 80 or 443 at all in your container.
Your url: configuration would then be looking only at headers, redirecting http requests to https as needed.
By the way, Amazon's ALB can also do the 80 -> 443 redirection for you if you set up the rules; this saves Phoenix from even needing a url: config for port 80 at all.
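For that ALB-side 80 -> 443 redirection, a hedged CloudFormation sketch (MyLoadBalancer is a placeholder for your ALB resource) could be:

HttpRedirectListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyLoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect
        RedirectConfig:
          Protocol: HTTPS
          Port: '443'
          StatusCode: HTTP_301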

Traefik - Unable to expose redis docker containers with the same port for different domains

I'm trying to set up Redis with docker-compose for different environments.
Therefore I need to expose two domains with traefik on the same port:
domain.com:6379
domain-dev.com:6379
I can't publish that port from both containers, because they are running on the same server.
My docker-compose file (for domain-dev) looks like this:
version: '2'
services:
  redis:
    container_name: redis-signalr-dev
    image: redis
    volumes:
      - ./redis-signalr-data:/data
    restart: always
    labels:
      - traefik.enable=true
      - traefik.backend=redis-signalr-dev
      - traefik.frontend.rule=Host:domain-dev.com
      - traefik.port=6379
      - traefik.docker.network=traefik_default
      - traefik.frontend.entryPoints=redis
    networks:
      - traefik_default

volumes:
  redis-signalr-data:

networks:
  traefik_default:
    external: true
I also tried configuring traefik to use the following entrypoint:
--entrypoints='Name:redis Address::6379'
When connecting to "domain-dev.com:6379", a connection cannot be established.
Does anyone know a solution to this problem?
Traefik is a reverse proxy for http, not a tcp load balancer. So traefik itself (usually) opens ports 80 and 443 for ingress and forwards incoming http requests to the given http-capable backends. The port you specify in your compose service labels is the port of the container that the traffic should be passed to.
So if you run a nodejs (http) server on port 3000, you would connect to http://yourdomain:80 and traefik would forward the requests to your nodejs container on port 3000. This means that by specifying a port on a compose service, you will not open this port on your host.
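As an illustration of that nodejs case, using the same v1-style labels as the compose file above (the image name and hostname are placeholders):

node-app:
  image: node-app:latest # placeholder image
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:yourdomain.com
    - traefik.port=3000 # container port traefik forwards requests to
  networks:
    - traefik_default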
In your example, running redis with its custom protocol, traefik is not a solution, as traefik only does http proxying. To expose redis on your host (if you really want to do that), just use regular docker port mappings and point your domains at your docker host. Doing this, there is no way to use the same port with different domains; just specify two different ports for your two instances. For http this works because traefik inspects the http requests and routes based on the Host header.
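A minimal sketch of that plain port-mapping approach (the service names and the second host port are arbitrary choices):

redis-signalr:
  image: redis
  ports:
    - '6379:6379' # reachable at domain.com:6379
redis-signalr-dev:
  image: redis
  ports:
    - '6380:6379' # reachable at domain-dev.com:6380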
Traefik 2.0 will have TCP support: https://github.com/containous/traefik/pull/4587
Until then you'd have to use NGINX or similar.

Unable to set up externalIP port forwarding in OpenShift Origin pods

I have a use case where I am running services on my local machine, which is behind a router behind a NAT, so I can't port-forward to my public IP. The only way to do it is via SSH tunneling or a VPN; for both I would need an exposed port to connect to from my local machine.
So I tried setting up my YAML config with an external IP, and it looks like this:
spec:
  ports:
    - name: 8022-tcp
      protocol: TCP
      port: 8022
      targetPort: 8022
      nodePort: 31043
    - name: 8080-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30826
  selector:
    deploymentconfig: remote-forward
  clusterIP: 172.30.83.16
  type: LoadBalancer
  externalIPs:
    - 10.130.79.198
  deprecatedPublicIPs:
    - 10.130.79.198
  sessionAffinity: None
If I understand correctly, here 10.130.79.198 is the IP I will need to connect to from my local SSH client on port 31043, which then forwards to the service port 8022, which then forwards to container port 8022 where the SSH server is running.
The problem is that I am not able to connect to this external IP.
ssh logs:
"debug1: connect to address 10.130.79.198 port 31043: Connection timed out"
I got this external IP from the pod -> dashboard page -> external IP. Does this external IP need to be configured anywhere, or does my config above have an issue in the setup?
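For reference, a minimal NodePort-only version of the SSH part of that service (selector and ports copied from the spec above, the name is a placeholder) is sketched below. With NodePort the connection target is a node's routable IP on port 31043; an externalIPs entry only works if traffic to that address actually reaches a cluster node, since Kubernetes/OpenShift does not provision that IP for you.

apiVersion: v1
kind: Service
metadata:
  name: remote-forward-ssh # placeholder name
spec:
  type: NodePort
  selector:
    deploymentconfig: remote-forward
  ports:
    - name: 8022-tcp
      protocol: TCP
      port: 8022
      targetPort: 8022
      nodePort: 31043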

AWS Beanstalk and Docker ports = what manner of tomfoolery is this?

So I have a docker application that runs on port 9000, and I'd like to have it accessed only via https rather than http; however, I can't make any sense of how Amazon handles ports. In short, I'd like to expose only port 443 and not 80 (at both the load balancer layer and the instance layer), but haven't been able to do this.
So my Dockerfile has:
EXPOSE 9000
and my Dockerrun.aws.json has:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
and I cannot seem to access things on port 9000, only on port 80.
When I ssh into the instance that the docker container is running on and look for the ports with netstat, I get ports 80 and 22 and some other udp ports, but no port 9000. How on earth does Amazon manage this? More importantly, how does a user get the expected behaviour?
Attempting this with SSL and https also yields the same thing. Certificates are set and mapped to port 443; I have even added an entry in the .ebextensions config file to open port 443 on the instance, and still no SSL:
sslSecurityGroupIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupName: {Ref: AWSEBSecurityGroup}
    IpProtocol: tcp
    ToPort: 443
    FromPort: 443
    CidrIp: 0.0.0.0/0
The only way that I can get SSL to work is to have the load balancer use port 443 (SSL) and forward to the instance port 80 (non-https), but this is ridiculous. How on earth do I open the SSL port on the instance and set docker to use the given port? Has anyone ever done this successfully?
I'd appreciate any help on this - I've combed through the docs and got this far with it, but this just plain puzzles me. To restate: I'd like to expose only port 443 and not 80 (at both the load balancer layer and the instance layer), but haven't been able to do this.
Have a great day
Cheers
It's a known problem; from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html:
You can specify multiple container ports, but Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
So, if you need multiple ports, AWS Elastic Beanstalk is probably not the best choice, at least not the Docker option.
Regarding SSL - we solved it by using a dedicated nginx instance and proxy_pass'ing to the Elastic Beanstalk environment URL.