How to retrieve client IP within a Docker container running Apache on AWS Elastic Container Service? - apache

I have a Docker container running Apache 2.4.25 (Debian) with PHP 7.3.5.
This container is hosted on Amazon Elastic Container Service.
The underlying EC2 instances sit behind an AWS Application Load Balancer.
I want to be able to obtain, in PHP, the client's IP address.
My presumption, based on my limited knowledge, is that this IP address needs to be handed from the ALB to the EC2 instance, then to the Docker container, and finally picked up by Apache.
I tried to shorten the stack by attempting to obtain the IP within a Docker container running on my local machine, but I still wasn't able to find a way for Docker to fetch my IP and pass it through to Apache.
I know that typically you'd have the X-Forwarded-For header from the ALB, but I have not been able to work out how Docker can take this and pass it through to Apache.
I expected to find the client IP in $_SERVER['REMOTE_ADDR'] or $_SERVER['X_FORWARDED'].
Within the AWS-hosted Docker containers:
$_SERVER['REMOTE_ADDR'] contains an IP within the VPC subnet
$_SERVER['X_FORWARDED'] does not exist
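The ALB passes the original client IP in the X-Forwarded-For request header, and Docker's port mapping forwards that header to Apache unchanged; what's missing is telling Apache to trust it. A minimal sketch using Apache's mod_remoteip (the CIDR is a placeholder — substitute your VPC subnet):

```apache
# Enable mod_remoteip first (on Debian: a2enmod remoteip).
# Take the client IP from the header the ALB sets:
RemoteIPHeader X-Forwarded-For
# Only trust this header when the request comes from the ALB's subnet
# (placeholder CIDR - replace with your VPC/ALB subnet):
RemoteIPInternalProxy 10.0.0.0/16
```

With this in place, $_SERVER['REMOTE_ADDR'] in PHP should contain the real client IP. Without mod_remoteip, the raw header is available as $_SERVER['HTTP_X_FORWARDED_FOR'], but note it can be a comma-separated chain and is client-spoofable unless you validate the sending proxy.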

Related

Cannot access the application via node ip and node port

I have to deploy an application via Helm by supplying a VM IP address and node port. It's a bare-metal Kubernetes cluster with an ingress controller installed (as a NodePort; this value is supplied in the helm command). The problem is that I receive a 404 Not Found error if I access the application as:
curl http://{NODE_IP}:{nodeport}/path
There is no firewall, and I have an "allow all ingress traffic" policy, but I'm not sure what is wrong. I have tried everything possible but cannot find the root cause.
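A common cause of a 404 from an ingress controller (rather than from the app) is that the Ingress rule is bound to a specific host, so a request addressed by raw node IP matches no rule. A hypothetical manifest illustrating the shape (all names and the host are assumptions, not taken from the question):

```yaml
# Hypothetical Ingress; check your own manifest's host/path against your curl.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com   # if set, curl by NODE_IP alone returns 404
      http:
        paths:
          - path: /path
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
```

If a host is set, you can test with `curl -H "Host: my-app.example.com" http://{NODE_IP}:{nodeport}/path`; if that succeeds, the 404 was a host-matching issue, not a connectivity one.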

Custom DNS record and SSL certificate in docker container

I am facing an issue with a self-signed certificate and a DNS record in the hosts file inside a Docker container. We have multiple Linux servers running Docker Swarm. There is a Docker service into which I need to copy the self-signed certificate and create a DNS record manually with docker exec every time the service restarts. The service has a mapped volume. How can I map the container's DNS file (/etc/hosts) and /usr/local/share/ca-certificates so that these live in a mapped place and there are no issues when the container restarts?
Use docker configs.
Something like:
docker config create my_public-certificate-v1 public.crt
docker service create --config src=my_public-certificate-v1,target=/usr/local/share/ca-certificates/example.com.crt ...

How to generate a certificate for AWS EC2 instance part of AWS ECS ( Docker)

We have a domain, and our web app uses AWS ECS Docker containers; we have 3 such EC2 instances hosting the containers. Since the web app is served over HTTPS, the socket requests we make to the Docker containers also have to be HTTPS to avoid mixed-content errors.
We already have a certificate from Let's Encrypt for our web app - how do I go about certificates for the individual EC2 instances that are part of the AWS ECS cluster?
Edit 1: Our web app is hosted on AWS, and the Docker containers launch a Node HTTPS server.

No response from running Tomcat: does not start, does nothing

I'm using Ansible to spin up a new Amazon EC2 instance, then install Java and Tomcat (via the yum module). After placing the WAR for the sample project from the Apache website in the webapps directory, I run the command (below) and nothing happens: it returns with no output and no error. I've checked both the IP and port 8080, and Tomcat is not running.
[centos@sonar-test webapps]$ sudo systemctl start tomcat
[centos@sonar-test webapps]$ sudo systemctl start tomcat
[centos@sonar-test webapps]$
For reference, I was following this tutorial as well:
https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-7-on-centos-7-via-yum
From your comment on my question, after running curl in your EC2 instance:
"When I curl I get a large html document with various apache-esque things on it"
This means Tomcat is installed and running.
If you can't access it from outside, it's because of your security group rules.
In your EC2 console, select the Security Groups option. Edit the rules associated with your EC2 instance (the one running Tomcat) to permit inbound connections on port 8080 (so you can make requests to your Tomcat server) and on port 80 if you're running Apache (or nginx/another web server). If you're not sure about security, you can restrict inbound traffic to your own IP only, so you can test but no one else can make requests.
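The same inbound rule can be expressed in infrastructure-as-code instead of the console; a hypothetical CloudFormation fragment (the group ID and CIDR are placeholders, not values from the question):

```yaml
# Hypothetical fragment: open port 8080 on the Tomcat instance's security group.
TomcatIngressRule:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-0123456789abcdef0   # placeholder - the instance's security group
    IpProtocol: tcp
    FromPort: 8080
    ToPort: 8080
    CidrIp: 203.0.113.10/32         # placeholder - your own IP, for testing only
```

Widening CidrIp to 0.0.0.0/0 opens the port to everyone, which is why restricting to your own IP while testing is the safer default.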

Overriding the way JMX works in a Docker WLS container

I have a WebLogic Docker container. The WLS admin port is configured as 7001. When I run the container, I use --hostname=[host's hostname] and expose port 7001 at a different host port, e.g. -p 8001:7001. The reason I do the port mapping is that I want to run multiple WLS containers on the same host.
I have some applications that I deploy on this WebLogic. These applications use an external SDK (which I don't control) to get the application URL using JMX (the getURL operation of RuntimeServiceMBean).
This is where it goes wrong. The URL comes out as http://[container's IP]:7001. I want it to be http://[host's hostname]:8001, i.e. the hostname I used to start the container and the host port to which 7001 is mapped (8001).
Is there a way this could be done?
When the container is started, you should start WebLogic only after adjusting the External Listen Address of your AdminServer. You can use WLST offline for that from within a shell script, passing parameters with docker run -e KEY=VALUE and reading them from inside the WLST script. Modify your AdminServer's External Listen Address, exit(), and then start the AdminServer.
Here's an example of how to create the extra network channel with the proper External Listen Address.
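A minimal WLST-offline sketch of such a channel (run with wlst.sh before starting the server; the domain path, channel name, and environment variable names are assumptions for illustration):

```
# Hypothetical WLST-offline script (Jython syntax); paths and names are placeholders.
import os

domain_home    = os.environ.get('DOMAIN_HOME', '/u01/oracle/user_projects/domains/base_domain')
public_address = os.environ['PUBLIC_HOSTNAME']            # passed via: docker run -e PUBLIC_HOSTNAME=...
public_port    = int(os.environ.get('PUBLIC_PORT', '8001'))  # the host port from -p 8001:7001

readDomain(domain_home)
cd('/Servers/AdminServer')
# Create a network channel whose public (external) address/port match the
# docker -p mapping, so JMX clients are handed host:mapped-port:
create('PublicChannel', 'NetworkAccessPoint')
cd('NetworkAccessPoint/PublicChannel')
set('ListenPort', 7001)
set('PublicAddress', public_address)
set('PublicPort', public_port)
updateDomain()
closeDomain()
```

With this channel in place, the URL reported over JMX should reflect the public address/port rather than the container's internal ones.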