I have one server, one IP, and multiple websites.
What I am trying to do is isolate all the websites, each website in its own Docker container.
What I need is NGINX as a proxy, and one Docker container for each website.
I tried something like this (as root on the server):
/etc/hosts :
127.0.0.1 example.com
NGINX config:
http {
    upstream app-a {
        server 127.0.0.1:3000;
    }

    server {
        listen 80;
        server_name example.com www.example.com;

        location / {
            proxy_pass http://app-a;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Docker:
docker run -d -p 3000:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
curl example.com :
<title>Welcome to nginx!</title>
Why do I get the NGINX response and not the Apache server response?
If I connect to the Docker container, the Apache server is running:
sudo docker exec -i -t ID /bin/bash
curl 127.0.0.1 :
<html><body><h1>It works!</h1>
Also, on a remote computer (my PC), I have example.com pointed at my server's public IP. If I access example.com, I get the same NGINX response.
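For reference, one way to narrow this down is to hit the published container port directly on the host, bypassing nginx entirely (a quick check, assuming the -p 3000:80 mapping above):

curl -H "Host: example.com" http://127.0.0.1:3000/

If this returns the Apache page, the container and port mapping are fine and the problem is in the nginx configuration.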
And one more question: is this a good approach for isolating and running live websites in Docker containers? Or should I look at another VM solution?
Thanks in advance.
Edited: the problem was inside my NGINX config file. The strange thing is that I got no error from the NGINX server.
Changed from
location \ {
to
location / {
It works fine now. Thanks @Oliver Charlesworth.
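As a side note, nginx ships a built-in config check that is worth running after every change like this (though, as the edit above notes, this particular typo apparently slipped through it without an error):

sudo nginx -t && sudo nginx -s reload

The first command validates the configuration files; the second reloads the workers without dropping connections.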
Related
I'm having a problem with an nginx configuration that I use as a reverse proxy for different containerized applications.
Basically, nginx is listening on port 80 and redirecting every request to https. On the different subdomains I then proxy-pass to the port of each application.
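The port-80 redirect itself is not shown in the question; a minimal sketch of what such a block typically looks like (the server_name is purely illustrative):

server {
    listen 80;
    server_name gitlab.foo.de www.gitlab.foo.de;
    return 301 https://$host$request_uri;
}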
For example, my GitLab config:
server {
    listen 443 ssl; # managed by Certbot
    server_name gitlab.foo.de www.gitlab.foo.de;

    location / {
        proxy_pass http://localhost:1080;
    }
}
I'm proxying to GitLab's http (not https) port. The system's nginx takes care of SSL; I don't care whether the traffic behind it is encrypted or not.
This has been working for every app so far.
I'd like to test https://github.com/bitnami/bitnami-docker-osclass for a volunteer association. Same config as above, but it is not working as intended.
Resources are downloaded via https, while the main page gets a redirect to http.
Example: https://osclass.foo.de --> redirect --> http://osclass.foo.de:1234/ (yes, with the port in the domain, which is very strange)
I don't get why. So I changed the config a little:
server {
    listen 443 ssl; # managed by Certbot
    server_name osclass.foo.de www.osclass.foo.de;

    location / {
        proxy_pass http://localhost:1234;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Now the main page is loaded via https and I don't have the port in my domain anymore. But the whole page is broken, because no resources are loaded due to a mixed-content warning:
SEC7111: [Mixed-Content] Origin "https://osclass.foo.de" [...] "http://osclass.foo.de/oc-includes/osclass/assets/js/fineuploader/fineuploader.css"
Do I have a conflict with the integrated Apache in the Docker image, or what am I doing wrong?
Any hints are appreciated!
Kind regards from Berlin!
I found a solution that fixes the mixed-content problem. I just edited the following line in /opt/bitnami/osclass/config.php:
# define('WEB_PATH', 'http://osclass.foo.de/');
define('WEB_PATH', 'https://osclass.foo.de/'); # with https
I know about starting a containerized private registry with TLS enabled, and copying domain.crt to the other Docker hosts so they can access the registry.
I have a private container registry server already running (not as a container, but in the office), and I can log in using a username and password. How can I use it with an SSL certificate?
I know I can generate a CA certificate. But how do I upload the key to the registry? Something like SSH, where we upload the public key to GitLab and keep the private key on the host machine?
Or how can I download the domain.crt file from the registry to a Docker host?
What am I missing?
Thanks and regards
I played with this a couple of years ago and got it working with nginx, with a configuration along these lines:
upstream private-docker-registry {
    server docker_registry:5000;
}

server {
    listen 443 ssl;
    listen 80;
    server_name mydockerregistry.com;

    ssl_certificate /etc/nginx/certs/mydockerregistry.com.crt;
    ssl_certificate_key /etc/nginx/certs/mydockerregistry.com.key;

    proxy_set_header Host $http_host;        # required for the Docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on the real client IP

    client_max_body_size 0; # disable size limits to avoid HTTP 413 on large image uploads

    # required to avoid HTTP 411: see issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        # let nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/authentication/docker-registry.htpasswd;
        proxy_pass http://private-docker-registry;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }
}
Create an htpasswd file for authentication; in this example I called it docker-registry.htpasswd.
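If you don't have such a file yet, the htpasswd tool (from the apache2-utils or httpd-tools package) can create it; a minimal sketch, with myuser as a placeholder username:

# -c creates the file, -B selects bcrypt hashing
htpasswd -Bc /path/to/docker-registry.htpasswd myuser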
Then run an nginx image linked to the Docker registry container, which is called docker_registry in this example and listens on port 5000, matching the upstream above. Running the nginx container will look something like this:
sudo docker run -d \
--name="nginx_docker_registry" \
-p 443:443 \
-p 80:80 \
--link my_docker_registry_container_name:docker_registry \
-v "path_to_htpasswd_file_in_host:/etc/nginx/authentication:ro" \
-v "path_to_certificates_in_host:/etc/nginx/certs:ro" \
-v "path_to_nginx_conf_in_host:/etc/nginx/nginx.conf:ro" \
nginx
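Once the proxy is up, clients can log in and push through it as usual; a hedged example, assuming mydockerregistry.com resolves to this host and its certificate is trusted by the Docker daemon:

docker login mydockerregistry.com
docker tag myimage mydockerregistry.com/myimage
docker push mydockerregistry.com/myimage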
I recently installed this web application on my Ubuntu server, which runs Apache (SSL disabled).
No matter how much I try, I can't get the application to use http. I tried the -p flag; then it exposes port 443 and binds something else. I hate browser warnings about SSL. I just want to use http with port 8080.
The application uses nginx, which only listens on 443. I want my application URL to look like http://localhost:8080. This application uses Google OAuth for logins; I'm assuming it will work over http.
How do I get it to work over http?
You must edit nginx.conf in order to use plain http (nginx will never speak http on an https port, except to report certain errors).
Change:
listen 443;
server_name localhost;
access_log /dev/stdout;
error_log /dev/stderr;
ssl on;
ssl_certificate /src/openseedbox/conf/host.cert;
ssl_certificate_key /src/openseedbox/conf/host.key;
To:
listen 8080;
server_name localhost;
access_log /dev/stdout;
error_log /dev/stderr;
Then after docker build, run with:
docker run -p 8080:8080 .......
Alternatively, you can set up Apache as an HTTP virtual host that reverse-proxies to the HTTPS nginx. But I think it is easier to modify the nginx config.
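For completeness, such an Apache virtual host might look roughly like this (a sketch, untested against this app; it assumes mod_proxy, mod_proxy_http, and mod_ssl are enabled and that the container publishes HTTPS on local port 443):

<VirtualHost *:8080>
    ServerName localhost
    SSLProxyEngine on
    # the container's self-signed certificate will not verify, so relax the checks:
    SSLProxyVerify none
    SSLProxyCheckPeerName off
    ProxyPass / https://127.0.0.1:443/
    ProxyPassReverse / https://127.0.0.1:443/
</VirtualHost>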
Approach #2
You can add another nginx container to act as a reverse proxy. I'm not sure whether the application behind it will break, but it acts as an http "plainer":
docker-compose.yml
# Add this:
plain_nginx:
  image: nginx
  volumes:
    - ./plain_nginx.conf:/etc/nginx/conf.d/default.conf
  ports:
    - 8080:80
  links:
    - openseedbox
plain_nginx.conf
server {
    listen 80;
    server_name _;

    access_log /dev/stdout;
    error_log /dev/stderr;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_pass https://openseedbox;
    }
}
Then, from the ./docker/ directory in that repo, run:
docker-compose up
Then you have http://localhost:8080 acting as a reverse proxy for the SSL stuff.
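A quick check from the host (assuming the compose stack is up):

curl -I http://localhost:8080/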
I have all my website files for example.com on my EC2 server (Ubuntu and Apache), with SSL, on EC2 instance 1. I want example.com/blog to go to another EC2 instance, EC2 instance 2. How can I do that with SSL?
I'm using Ubuntu, Apache, and Route 53. Thanks!
One easy way to do this is with CloudFront, described in this answer at Server Fault, where you can use path patterns to determine which URLs will be handed off to which server.
Another is an Application Load Balancer (ELB/2.0), which allows the instance to be selected based on path rules.
Both of these solutions support free SSL certificates from AWS Certificate Manager.
Or, you can use ProxyPass in the Apache config on the main example.com web server to relay all requests matching specific paths over to a different instance.
You cannot accomplish this with Route 53 alone, because DNS does not work at the path level. This is not a limitation of Route 53; it's a fundamental part of how DNS works.
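A minimal sketch of that ProxyPass approach on instance 1 (10.0.0.2 is a placeholder for instance 2's private IP, and mod_proxy plus mod_proxy_http must be enabled):

<VirtualHost *:443>
    ServerName example.com
    # ... existing SSL directives ...
    ProxyPreserveHost On
    ProxyPass        /blog http://10.0.0.2/blog
    ProxyPassReverse /blog http://10.0.0.2/blog
</VirtualHost>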
You can quickly and easily achieve this with an nginx reverse proxy. Your SSL will still be managed and offloaded at the ELB level, that is, listener 443 => 80.
1) Install nginx:
yum install nginx
2) Add to the nginx config:
upstream server1 {
    server 127.0.0.1:8080;
}

upstream server2 {
    server server2_IP_address_here:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://server1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /blog {
        # /blog goes to the second instance, not server1
        proxy_pass http://server2;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I have an Amazon EC2 Ubuntu instance. I have installed a LAMP server and Tomcat 7, and I have an application running in Tomcat.
Now, my URL for Apache is http://ec2-54-xx-xx-xx.us-west-2.compute.amazonaws.com/
My URL for the Tomcat application is http://ec2-54-xx-xx-xx.us-west-2.compute.amazonaws.com:8080
Instead of writing the :8080 part, I would like to reach the application directly via the URL http://ec2-54-xx-xx-xx.us-west-2.compute.amazonaws.com/.
I went through a lot of tutorials, but all are invalid, outdated, or missing details. I am using apache2, so the relevant files live inside the apache2 directory.
How can I do this "properly"? I will purchase a domain name this weekend and will soon replace the long Amazon URL with it as well.
What you need is a reverse proxy. You should set up an nginx or httpd server instance to proxy requests from port 80 (http) to your local port 8080 (Tomcat).
Here's a sample configuration for nginx:
upstream tomcat {
    server 127.0.0.1:8080; # your tomcat app address
}

server {
    listen 80;
    root /path/to/your/app/directory;
    index index.html index.htm;
    server_name your.app.domain;

    location / {
        # serve static files if present, otherwise fall back to tomcat
        try_files $uri $uri/index.html $uri.html @tomcat;
    }

    location @tomcat {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://tomcat;
    }
}
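Since you already have Apache from the LAMP stack, it could do the same job instead of nginx; a sketch, assuming mod_proxy and mod_proxy_http are enabled (sudo a2enmod proxy proxy_http):

<VirtualHost *:80>
    ServerName your.app.domain
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>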