Is there any specific configuration needed to run docker images from a private registry within a crio runtime? - nginx-reverse-proxy

I'm deploying an OKE Kubernetes cluster which uses CRI-O as the container runtime. I have already configured an nginx reverse proxy for a private registry (Nexus 3, outside the cluster), and locally I can pull, push and run containers successfully with nothing more than docker login <registry.com>.
However, when I try to pull those Docker images from within the cluster, it fails with the following message:
Exec commands:
crioctl pull <domain.com.br>/imageName:tag --creds user:pass
error creating build container: Error initializing source : error pinging docker registry : invalid status code from registry 404 (Not Found)
Here is the nginx.conf:
worker_processes 2;
events {
    worker_connections 1024;
}
http {
    client_max_body_size 0;
    error_log /var/log/nginx/error.log warn;
    access_log /dev/null;
    proxy_intercept_errors off;
    proxy_send_timeout 120;
    proxy_read_timeout 300;
    upstream nexus {
        server nexus-registry:8081;
    }
    upstream registry {
        server nexus-registry:5000;
    }
    server {
        listen 80;
        server_name <my-domain>;
        keepalive_timeout 5 5;
        proxy_buffering off;
        location ~ /.well-known/acme-challenge {
            allow all;
            root /usr/share/nginx/html/letsencrypt;
        }
        location / {
            return 301 https://<my-domain>$request_uri;
        }
    }
    server {
        listen 443 ssl;
        server_name <my-domain>;
        ssl on;
        server_tokens off;
        ssl_certificate /etc/nginx/ssl/live/<my-domain>/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/live/<my-domain>/privkey.pem;
        ssl_dhparam /etc/nginx/dhparam/dhparam-2048.pem;
        ssl_buffer_size 8k;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        location / {
            # redirect to docker registry
            if ($http_user_agent ~ docker ) {
                proxy_pass http://registry;
            }
            proxy_pass http://nexus;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

The problem was on the client side: the user-agent check in the nginx config only matched docker, so pulls coming from CRI-O were routed to the Nexus UI upstream instead of the Docker registry endpoint, which returned the 404. I had to add cri-o alongside docker in the match, and then it worked:
if ($http_user_agent ~ (docker|cri-o) )
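With that change, the relevant location block in the HTTPS server above becomes the following (a sketch of the config already shown; only the user-agent regex changes):

location / {
    # route registry clients (docker and cri-o) to the registry upstream (port 5000),
    # everything else to the Nexus UI upstream (port 8081)
    if ($http_user_agent ~ (docker|cri-o) ) {
        proxy_pass http://registry;
    }
    proxy_pass http://nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}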

Related

upstream timed out (110: Connection timed out) while reading response header from upstream nginx in nodejs project

This server was working fine before I connected it to a domain and set up SSL; as soon as I implemented SSL it started throwing this error.
Here is my server block config from sites-available:
server {
    server_name demo.example.com;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3002; # I have used just proxy_pass in one config and still had the problem, so added this set header
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade"; # I have tried with just "" but it didn't work
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/demo.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/demo.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = demo.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name demo.example.com;
    return 404; # managed by Certbot
}

How to serve devpi with https?

I have an out-of-the-box devpi-server running on http://
I need to get it to work on https:// instead.
I already have the certificates for the domain.
I followed the documentation for nginx-site-config, and created the /etc/nginx/conf.d/domain.conf file that has the server{} block that points to my certificates (excerpt below).
However, my devpi-server --start --init is totally ignoring any/all nginx configurations.
How do I point the devpi-server to use the nginx configuration? Is it even possible, or am I totally missing the point?
/etc/nginx/conf.d/domain.conf file contents:
server {
    server_name localhost $hostname "";
    listen 8081 ssl default_server;
    listen [::]:8081 ssl default_server;
    server_name domain;
    ssl_certificate /root/certs/domain/domain.crt;
    ssl_certificate_key /root/certs/domain/domain.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    gzip on;
    gzip_min_length 2000;
    gzip_proxied any;
    gzip_types application/json;
    proxy_read_timeout 60s;
    client_max_body_size 64M;
    # set to where your devpi-server state is on the filesystem
    root /root/.devpi/server;
    # try serving static files directly
    location ~ /\+f/ {
        # workaround to pass non-GET/HEAD requests through to the named location below
        error_page 418 = @proxy_to_app;
        if ($request_method !~ (GET)|(HEAD)) {
            return 418;
        }
        expires max;
        try_files /+files$uri @proxy_to_app;
    }
    # try serving docs directly
    location ~ /\+doc/ {
        try_files $uri @proxy_to_app;
    }
    location / {
        # workaround to pass all requests to / through to the named location below
        error_page 418 = @proxy_to_app;
        return 418;
    }
    location @proxy_to_app {
        proxy_pass https://localhost:8081;
        proxy_set_header X-outside-url $scheme://$host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
This is the answer I gave to the same question on superuser.
Devpi doesn't know anything about Nginx, it will just serve HTTP traffic. When we want to interact with a web-app via HTTPS instead, we as the client need to talk to a front-end which can handle it (Nginx) which will in turn communicate with our web-app. This application of Nginx is known as a reverse proxy. As a reverse proxy we can also benefit from Nginx's ability to serve static files more efficiently than getting our web-app to do it itself (hence the "try serving..." location blocks).
Here is a complete working Nginx config that I use for devpi. Note that this is an /etc/nginx/nginx.conf file rather than a domain config like yours, because I'm running Nginx and devpi in Docker with Compose, but you should be able to pull out what you need:
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    # Define the location for devpi
    upstream pypi-backend {
        server localhost:8080;
    }
    # Redirect HTTP to HTTPS
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl;
        server_name example.co.uk; # This is the accessing address eg. https://example.co.uk
        root /devpi/server; # This is where your devpi server directory is
        gzip on;
        gzip_min_length 2000;
        gzip_proxied any;
        proxy_read_timeout 60s;
        client_max_body_size 64M;
        ssl_certificate /etc/nginx/certs/cert.crt; # Path to certificate
        ssl_certificate_key /etc/nginx/certs/cert.key; # Path to certificate key
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        access_log /var/log/nginx/pypi.access.log;
        # try serving static files directly
        location ~ /\+f/ {
            error_page 418 = @pypi_backend;
            if ($request_method !~ (GET)|(HEAD)) {
                return 418;
            }
            expires max;
            try_files /+files$uri @pypi_backend;
        }
        # try serving docs directly
        location ~ /\+doc/ {
            try_files $uri @pypi_backend;
        }
        location / {
            error_page 418 = @pypi_backend;
            return 418;
        }
        location @pypi_backend {
            proxy_pass http://pypi-backend; # Using the upstream definition
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-outside-url $scheme://$host:$server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
With Nginx using this configuration and devpi running on http://localhost:8080, you should be able to access https://localhost or, with appropriate DNS for your machine, https://example.co.uk. A request will be:
client (HTTPS) > Nginx (HTTP) > devpi (HTTP) > Nginx (HTTPS) > client
This also means that you will need to make sure that Nginx is running yourself, as devpi start won't know any better. You should at the very least see an Nginx welcome page.

Is it possible to use SSL in Odoo with Nginx while avoiding the standard ports (80 and 443)?

Following this tutorial I configured my Nginx like this:
upstream odoo8 {
    server 127.0.0.1:8069 weight=1 fail_timeout=0;
}
upstream odoo8-im {
    server 127.0.0.1:8072 weight=1 fail_timeout=0;
}
server {
    # server port and name (instead of 443 port)
    listen 22443;
    server_name _;
    # Specifies the maximum accepted body size of a client request,
    # as indicated by the request header Content-Length.
    client_max_body_size 2000m;
    # add ssl specific settings
    keepalive_timeout 60;
    ssl on;
    ssl_certificate /etc/ssl/nginx/server.crt;
    ssl_certificate_key /etc/ssl/nginx/server.key;
    error_page 497 https://$host:22443$request_uri;
    # limit ciphers
    ssl_ciphers HIGH:!ADH:!MD5;
    ssl_protocols SSLv3 TLSv1;
    ssl_prefer_server_ciphers on;
    # increase proxy buffer to handle some Odoo web requests
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;
    # general proxy settings
    # force timeouts if the backend dies
    proxy_connect_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_read_timeout 3600s;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    # set headers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
    # Let the Odoo web service know that we’re using HTTPS, otherwise
    # it will generate URL using http:// and not https://
    proxy_set_header X-Forwarded-Proto https;
    # by default, do not forward anything
    proxy_redirect off;
    proxy_buffering off;
    location / {
        proxy_pass http://odoo8;
    }
    location /longpolling {
        proxy_pass http://odoo8-im;
    }
    # cache some static data in memory for 60mins.
    # under heavy load this should relieve stress on the Odoo web interface a bit.
    location /web/static/ {
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://odoo8;
    }
}
And I have these ports in my Odoo configuration:
longpolling_port = 8072
xmlrpc_port = 8069
xmlrpcs_port = 22443
proxy_mode = True
When I load https://my_domain:22443/web/database/selector in the browser it loads fine. But when I choose a database or perform any action, the address loses the https and the port, so it is loaded through port 80. I would then need to add the following to the Nginx configuration, and port 80 would have to be open:
## http redirects to https ##
server {
    listen 80;
    server_name _;
    # Strict Transport Security
    add_header Strict-Transport-Security max-age=2592000;
    rewrite ^/.*$ https://$host:22443$request_uri? permanent;
}
Is there a way to avoid this redirection? That way I could keep port 80 closed in order to avoid spoofing.
Update
I can open the login screen at https://my_domain:22443/web/login?db=database_name and work fine inside, but if I log out in order to choose another database from the drop-down list, it loses the port and the SSL again.
Please, try to use this construction:
## http redirects to https ##
server
{
    listen 80;
    server_name _;
    if ($http_x_forwarded_proto = 'http')
    {
        return 301 https://my_domain.com$request_uri;
    }
}

ssl and https in nginx using meteor

I have this nginx configuration
server {
    listen 80;
    server_name app.com www.app.com;
    rewrite ^ https://$server_name$request_uri? permanent;
}
server {
    listen 443;
    server_name app.com www.app.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/app.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location = /favicon.ico {
        root /opt/myapp/app/programs/web.browser/app;
        access_log off;
        expires 1w;
    }
    location ~* "^/[a-z0-9]{40}\.(css|js)$" {
        root /opt/myapp/app/programs/web.browser;
        access_log off;
        expires max;
    }
    location ~ "^/packages" {
        root /opt/myapp/app/programs/web.browser;
        access_log off;
    }
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
It is deployed to EC2 using mup with normal settings, and I can access the site at app.com.
But https://app.com is not working, even though in the config file all requests are rewritten to https.
What is happening here?
I can access the site when I enter app.com, which would mean it is forwarding app.com to https://app.com.
I cannot access https://app.com, which would mean nginx is not working.
Which of the above two scenarios is true?
I'm out of options. I checked with SSL checkers and they report that the SSL certificate is not installed.
Then why does my app work when I enter app.com?
Now Meteor Up has built-in SSL support, so no more hard work.
Just add the SSL certificates and the key and do mup setup.
We use stud to terminate SSL.
I am not very knowledgeable about Nginx, but looking at my working production configs I see a number of parameters you have not included in yours.
In particular you may need the following at the top in order to proxy websocket connections:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
My 443 server also includes the following in addition to what you already have:
server {
    ssl_stapling on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    add_header Strict-Transport-Security "max-age=31536000;";
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Nginx-Proxy true;
        proxy_redirect off;
    }
}
Finally, I would try commenting out your location directives for debugging. The issue should not be your SSL certificate itself; a self-signed or misconfigured certificate should still let you visit the site (with a warning). Hope this helps.
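For completeness, here is a sketch of how that map is typically wired into the proxied location so websocket upgrades are forwarded. It is based on the location / block from the question, with $connection_upgrade coming from the map above instead of a hard-coded "upgrade" value:

location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    # forward the websocket handshake; $connection_upgrade resolves to
    # "upgrade" when the client requested an upgrade and "close" otherwise
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
}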

Nginx Enable HTTPS/SSL when forwarding to other URL

Currently, I'm working with an AWS Ubuntu EC2 instance, running a Node.js app on port 3000 behind an Nginx reverse proxy. I have been trying to enable HTTPS and add an SSL certificate, and I've been successful in that I don't get any errors in the nginx.conf file. However, I am redirecting my main website, "example.com", to the public DNS of the AWS server, and when I try to load the "http://example.com" or "https://example.com" page, I get an "Unable to Connect" error from Firefox, which is my testing browser. Also, when I run sudo nginx -t there are no syntax errors in the configuration file, and when I check the /var/log/nginx/error.log file it is empty. Below is my current nginx.conf file.
Update: I changed server_name from example.com to the public DNS of my server, let's call it amazonaws.com. Now, when I type in https://amazonaws.com the page loads and the SSL certificate shows up when running the website through ssllabs.com. However, when I type in amazonaws.com or http://amazonaws.com I get a blank page like before.
user root;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    # max_clients = worker_processes * worker_connections / 4
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;
    # backend applications
    upstream nodes {
        server 127.0.0.1:3000;
        keepalive 64;
    }
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/example_com.crt;
        ssl_certificate_key /etc/nginx/ssl/example_com.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!aNULL:!MD5;
        server_name example.com;
        # everything else goes to backend node apps
        location / {
            proxy_pass http://nodes;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
        }
    }
}
You should give this server definition
server {
    listen 80;
    return 301 https://$host$request_uri;
}
a server_name (e.g. amazonaws.com) as well.
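Applied to the configuration above, that port-80 server would look something like this (a sketch; amazonaws.com stands in for whatever hostname the server is actually reached at):

server {
    listen 80;
    server_name amazonaws.com;  # same name the HTTPS server answers to
    return 301 https://$host$request_uri;
}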