Multiple subdomains on CloudFlare

Is it possible to set up DNS records using CloudFlare that would allow me to have subdomains pointing to two different ports on my local machine?
For example, one application running on port 80 and another on port 8880? According to this link, both ports should be supported:
https://blog.cloudflare.com/cloudflare-now-supporting-more-ports/
I'd like to have:
sub1.domain.com -> 1.2.3.4:80
sub2.domain.com -> 1.2.3.4:8880
I've looked at SRV records, but they don't seem to allow IP addresses as targets.

You can use a reverse proxy such as nginx alongside Cloudflare for this.
Check this link to learn how to install and configure nginx as a reverse proxy:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04
An example configuration looks like this:
server {
    listen 80;
    server_name subdomain.example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://local_ip:8081;
    }
}
server {
    listen 80;
    server_name subdomain2.example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://local_ip:port;
    }
}
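With this in place, the DNS side stays simple: both subdomains can be ordinary proxied A records pointing at the same IP, and nginx picks the backend port from the Host header, so the non-standard port never has to be exposed through Cloudflare at all. Roughly, using the IP from the question:

sub1.domain.com    A    1.2.3.4
sub2.domain.com    A    1.2.3.4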

Related

Harbor 2.5.0 behind Apache reverse proxy

I installed Harbor on a server inside the company farm and I can use it without problems through https://my-internal-server.com/harbor.
I tried to add reverse proxy rules to Apache to access it through the public server for the harbor, v2, chartrepo, and service endpoints, like https://my-public-server.com/harbor, but this doesn't work.
For example:
ProxyPass /harbor https://eslregistry.eng.it/harbor
ProxyPassReverse /harbor https://eslregistry.eng.it/harbor
I also set this in harbor.yaml:
external_url: https://my-public-server.com
When I try to access https://my-public-server.com/harbor with the browser, I see a Loading... page and 404 errors for static resources, because the browser requests them with this GET:
https://my-public-server.com/scripts.a459d5a2820e9a99.js
How can I configure it to work?
You should proxy the whole domain, not only a path under it. Take a look at the official Nginx config to get an idea of how this might look:
upstream harbor {
    server harbor_proxy_ip:8080;
}
server {
    listen 443 ssl;
    server_name harbor.mycomp.com;
    ssl_certificate /etc/nginx/conf.d/mycomp.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/mycomp.com.key;
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_pass http://harbor/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}
Note that you should disable proxy buffering and request buffering, as in the last two directives above.
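If Harbor itself still needs to know its public address (as in the question's harbor.yaml), external_url should match the hostname the proxy serves. Assuming the hostname used in the nginx example above, that would be something like:

external_url: https://harbor.mycomp.com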

GoLang HTTPS API

I have a Windows server using Plesk.
I have a domain (example.com), and I bought an SSL certificate for this domain.
I successfully installed and configured the domain and SSL on my server, so now I can reach my site at https://www.example.com, and example.com redirects to https://....
Up to this point everything works fine.
But now I have developed an API in GoLang which can't start listening on port 443 for some reason (I thought maybe the port is already in use?). So I changed it to port 8081. Now when I want to make a request to my API I have to use, for example, https://www.example.com:8081/api/v1/users.
The problem is that some applications show me a "Certificate invalid" error, which I think is because the port is not 443. Is there any way I can run Go on 443?
The Go code is this (the crt and key are the ones provided by GoDaddy, where I bought the SSL certificate):
func main() {
    router := NewRouter()
    handler := cors.AllowAll().Handler(router)
    log.Fatal(http.ListenAndServeTLS(":8081", "tls.crt", "tls.key", handler))
}
Run the whole Golang application behind nginx (reverse proxy):
Create a Virtual Host Server Block in Nginx using your domain.
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04
Set up your SSL certs
Point that domain to your Golang App
server {
    # listen on both ports so the HTTP-to-HTTPS redirect below can work
    listen 80;
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /path/to/chainfile/example.com/abcd.pem;
    ssl_certificate_key /path/to/privatekeyfile/example.com/abcd.pem;

    location / {
        proxy_pass http://localhost:8081;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass_request_headers on;
        proxy_read_timeout 150;
    }

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }
}
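With nginx terminating TLS like this, the Go app itself no longer needs the certificate. A minimal sketch of the app side, with a plain ServeMux standing in for the question's NewRouter(), would just serve HTTP on the local port nginx proxies to:

package main

import (
    "log"
    "net/http"

    "github.com/rs/cors"
)

func main() {
    // Stand-in for the question's NewRouter(); any http.Handler works here.
    router := http.NewServeMux()
    router.HandleFunc("/api/v1/users", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })

    handler := cors.AllowAll().Handler(router)

    // nginx listens on 443 with the GoDaddy cert and forwards to this port,
    // so the app serves plain HTTP and never touches tls.crt/tls.key.
    log.Fatal(http.ListenAndServe("127.0.0.1:8081", handler))
}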

Nginx proxy over https to server - different hostname on cert than Host header

I want to receive traffic at https://example.com on server 1, and then proxy that traffic over HTTPS to server 2. Server 2 has Nginx set up with the exact same TLS certificate and key as server 1, so it should theoretically be able to serve the requests. However, when Nginx on server 1 tries to proxy a request to server 2, it sends it to server2.example.com, which differs from the common name on the cert, which is just example.com.
Is there a way to configure nginx to expect the name on the TLS certificate offered by the upstream host (during the TLS handshake) to be different from the address of the host it is proxying to?
Example config on server 1:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /srv/tls/example.com.crt;
    ssl_certificate_key /srv/tls/example.com.key;

    location / {
        proxy_pass https://server2.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Example config on server 2:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /srv/tls/example.com.crt;
    ssl_certificate_key /srv/tls/example.com.key;

    location / {
        proxy_pass http://localhost:12345;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Example curl from server 1:
$ curl https://server2.example.com/chat -H "Host: example.com"
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
If need be, I could generate a new self-signed cert and use that on server 2. However, I assumed it would be faster to just change the Nginx configuration. If the config change is not possible, I'll create a new cert.
You can use the proxy_ssl_name directive to specify the server name that nginx should expect on the proxied host's certificate.
For example:
location / {
    proxy_pass https://server2.example.com;
    proxy_set_header Host $host;
    proxy_ssl_name $host;
    ...
}
See this document for details.
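If you also want nginx to verify the certificate against that name, there are a couple of related directives (not part of the answer above, so adjust to your setup): proxy_ssl_server_name sends the name via SNI during the handshake, and proxy_ssl_verify enables certificate verification:

location / {
    proxy_pass https://server2.example.com;
    proxy_set_header Host $host;
    proxy_ssl_name $host;             # name to expect on server 2's certificate
    proxy_ssl_server_name on;         # also send that name via SNI
    proxy_ssl_verify on;              # verify the upstream certificate
    proxy_ssl_trusted_certificate /srv/tls/ca.crt;  # CA bundle (path is an example)
    ...
}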

How to install GitLab separately on CentOS 7?

I wish to install GitLab on my CentOS 7 server, but I need to keep the GitLab and Apache folders separate. That is, when I type localhost I should get the index page from the HTML folder, and when I type git.example.com I should get the GitLab page. Is there any way to do this? Please help me, anyone.
It might not be the best solution, but what I did was set up a "front" NGINX to proxy my three services: Apache (at www), Redmine (at issues), and GitLab (at git).
Then I configured Apache to listen on another port (say 808) and GitLab to listen on its own port (say 809), as sketched below.
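A sketch of those back-end port changes (file paths assume a default Apache install on CentOS 7 and an Omnibus GitLab package, so adjust to your environment):

# /etc/httpd/conf/httpd.conf -- move Apache off port 80
Listen 808

# /etc/gitlab/gitlab.rb -- keep the public URL, but make GitLab's bundled
# nginx listen on its own port, then run `gitlab-ctl reconfigure`
external_url 'http://git.example.com'
nginx['listen_port'] = 809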
Then I added a server configuration in NGINX with a proxy_pass, something like this:
server {
    listen 80;
    server_name www.example.com;

    location / {
        access_log off;
        proxy_pass http://localhost:808;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
and one for GitLab:
server {
    listen 80;
    server_name git.example.com;

    location / {
        access_log off;
        proxy_pass http://localhost:809;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 502 /502.html;
    location = /502.html {
        root /opt/gitlab/error_pages;
    }
}

Access Tomcat running on port 8080 without appending port in URL

I have two websites running in two different Tomcat servers on one machine. One Tomcat is listening on port 80 and the other on 8080.
The requirement is that I want to access both of these websites without appending the port. For example:
Site A
http://www.siteA.com (Tomcat 1: Port 80)
Site B
http://www.siteB.com (Tomcat 2: Port 8080)
Currently Site B is accessible via http://www.siteB.com:8080. What are the possible options so I can access site B without appending port 8080 (i.e. http://www.siteB.com) and without domain forwarding and masking? I am considering the following:
Proxy Server
Router
Please share some pointers that could be helpful. Thank you.
Kamran
I believe this is a nice use case for a reverse proxy.
This answer is a good starting point to help you set this up with NGINX: https://stackoverflow.com/a/13241047/967410
Applied to your case, it would be something like this:
server {
    listen 80;
    server_name www.siteA.com;

    # siteA reverse proxy settings follow
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $scheme://x.x.x.x:80;
    }
}
server {
    listen 80;
    server_name www.siteB.com;

    # siteB reverse proxy settings follow
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://x.x.x.x:8080;
    }
}
Why not run httpd on port 80 and use mod_rewrite to forward the request?
It seems in this case all you really care about is the hostname, anyway.
Obviously, you would have the two Tomcat instances on ports other than 80 in this case.
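A rough sketch of that approach for site B (hostname and Tomcat port taken from the question; mod_proxy must be loaded for the [P] flag to work, and a second VirtualHost for www.siteA.com would point at the other Tomcat's new port):

<VirtualHost *:80>
    ServerName www.siteB.com
    RewriteEngine On
    # proxy everything to the Tomcat instance serving site B
    RewriteRule ^/(.*)$ http://localhost:8080/$1 [P,L]
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>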