I have Traefik 1.6.5 installed and all my frontend and backend servers are running fine, except one backend which is behind an OpenVPN tunnel. With this setup I always get "Bad Gateway".
If I connect to the same service without the VPN, the backend works fine.
Any idea what I can do about that?
The following is my frontend / backend config:
[backends.c00614]
  [backends.c00614.servers.server1]
    url = "http://172.18.20.41:81"

[frontends.c00614]
  backend = "c00614"
  passHostHeader = true
  [frontends.c00614.routes.route1]
    rule = "Host:c00614.test.xxxxxx.xx"
  [frontends.c00614.headers.customrequestheaders]
    X-Forwarded-Proto = "https"
If you are using docker, you can try running the alpine image (traefik:1.6.5-alpine). This will allow you to open a shell and ping/curl your backend URL. If you cannot access the URL over the vpn from the docker shell, then the issue is with your networking, not the Traefik app.
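For example, something along these lines - the container name traefik is just a guess (substitute your own), and the IP/port are taken from your config above:
# open a shell inside the running Traefik container
docker exec -it traefik sh
# inside the container, check whether the backend is reachable over the VPN
ping -c 3 172.18.20.41
wget -qO- http://172.18.20.41:81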
Thanks!
I have a local computer and a remote server. The remote server is isolated and is only accessible from this computer. I want to reach a site from the server, say https://example.com/site.
I tried to make a tunnel via ssh -R 6761:example.com:80 remote-server. But when I try wget http://localhost:6761/site on the remote server, it doesn't work and returns a 404, whilst wget http://example.com/site works fine on the local computer.
What am I doing wrong?
You cannot tunnel HTTP that way.
The name of the server you are trying to reach is included in the request (the Host header), and the server on the other end will most likely only answer for example.com, not localhost.
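You can confirm that the Host header is the issue by forcing it over the existing tunnel - just a quick test, not a proper fix, and whether you get the page or only a redirect depends on the site:
wget --header="Host: example.com" -O- http://localhost:6761/site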
For a proper fix, you will need to set up an HTTP proxy (forward proxy) on your local machine and tell your HTTP client(s) to use it. (How depends on the client.)
I'm trying to access my web application, served by the webpack DevServer, from a virtual machine, but I'm able to connect over HTTPS only to the main URL - all sub-URLs fail with ERR_SSL_PROTOCOL_ERROR.
Here is my setup:
I'm running webpack DevServer on a host machine with macOS. My virtual machine is running Windows 10 (VMware Fusion in bridged network mode). Webpack DevServer uses custom self-signed SSL certificates (generated using the mkcert tool).
Here is my DevServer configuration (@angular-builders/custom-webpack:dev-server):
"builder": "#angular-builders/custom-webpack:dev-server",
"options": {
"browserTarget": "admin:build",
"allowedHosts": [
"localhost",
"admin.local.slido-staging.com"
],
"host": "0.0.0.0",
"port": 443,
"servePath": "/",
"ssl": true,
"sslCert": "ssl/server.crt",
"sslKey": "ssl/server.key"
},
(local.slido-staging.com is just a "DNS alias" for localhost due to internal requirements, so the development certificate is also generated for *.local.slido-staging.com).
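(For reference, a certificate like this is typically generated with something along these lines - the exact mkcert invocation and file paths are an assumption based on the config above:)
mkcert -install
mkcert -cert-file ssl/server.crt -key-file ssl/server.key "*.local.slido-staging.com" localhost 127.0.0.1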
To make the web app accessible over HTTPS from the virtual machine as well, I exported the root certificate authority (generated by mkcert) from the host machine, imported it into the Trusted Root Certification Authorities store on the Windows VM, and added 192.168.2.90 admin.local.slido-staging.com to the Windows hosts file (192.168.2.90 is the IP address of my host machine).
The problem:
The web app is perfectly accessible from the host machine - HTTPS works for the main URL admin.local.slido-staging.com and also for sub-URLs (e.g. admin.local.slido-staging.com/main.js).
But when I try to access it from the VM, only the main URL (admin.local.slido-staging.com) loads over HTTPS; all other sub-URLs/resources end up with ERR_SSL_PROTOCOL_ERROR.
Here is another strange thing - accessing any sub-URL from the VM by entering the IP address of the host machine instead of the hostname works (an HTTPS connection is initiated, although the certificate doesn't match that name/IP address, as expected), but accessing it through the hostname fails (at one point I also tried serving the app from port 4443, which is unrelated to the issue).
What could be the problem? I spent a few hours debugging it without success (I also tried the --disable-host-check param for the DevServer; it didn't help).
Update:
I tried to serve the app using HTTP instead of HTTPS and it also doesn't work in the web browser - just the error message changed from ERR_SSL_PROTOCOL_ERROR to ERR_INVALID_HTTP_RESPONSE. But Wireshark shows that some data was fetched.
The issue was caused by the latest version of Cisco AnyConnect Secure Mobility Client (4.10) installed on the host computer. After downgrading Cisco AnyConnect software to version 4.9 everything works as expected.
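For anyone debugging something similar: a generic way to watch what happens to the TLS handshake from the VM is openssl s_client (hostname taken from the question):
openssl s_client -connect admin.local.slido-staging.com:443 -servername admin.local.slido-staging.com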
I am new to Node.js and am trying to get the hang of actually using it. I am very familiar with JavaScript, so the language itself is self-explanatory, but the use of Node.js is quite different from the browser implementation.
I have my own remote virtual server and have installed Node and npm, and everything works as expected. I am not exactly a server extraordinaire and have limited experience with the terminal and Apache configurations.
I can run my server using:
nodejs index.js
Which gives me: listening on *:3300 as expected.
I can then access my localhost from the terminal using: curl http://localhost:3300/ which gives me the response I expect.
Given that the website that will use my server is https://example.com, how do I make it able to reach http://localhost:3300/ so that I can actually use my Node server in production? For example, http://localhost:3300/ runs a Socket.IO server that I would like to use on https://example.com/chat.html with the JavaScript:
var socket = io.connect('http://localhost:3300/', {transports: ['websocket'], upgrade: false});
OK, this question has nothing to do with Node.js.
localhost is a hostname that means "this computer". It's equivalent to 127.0.0.1 or whatever IP address refers to your own machine.
After the colon (:) you enter the port number.
So if you want to make an HTTP call to a web server running on your server, you have to know the IP address (or domain name) of that server, and then you call it with the port number the server is listening on.
For instance, you would use http://example.com:3300/ to reach the server running on example.com on port 3300 (https:// would only work if that server itself is configured with a certificate).
Keep in mind that you have to make sure, in your firewall configuration, that the specific port is open for incoming HTTP requests.
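Applied to the snippet from the question, that would look roughly like this (note that a page served over https cannot open a plain http/ws connection - browsers block it as mixed content - so for real production use you would also want TLS in front of the Socket.IO server):
// hypothetical adjustment of the question's snippet:
// connect to the server's public hostname and port instead of localhost
var socket = io.connect('http://example.com:3300/', {transports: ['websocket'], upgrade: false});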
I'm having some trouble finding any way to make my situation workable. I have 2 applications:
1: External service web application running on sub1.domain.com. If I run this application behind traefik with ACME (Let's Encrypt) it works fine. I have a few more backend services (api/auth) that all run with a valid Let's Encrypt certificate and get their http traffic redirected to https by traefik:
[entryPoints.http.redirect]
entryPoint = "https"
I have to have some form of http to https forwarding for this service.
2: Internal service web application running on sub2.domain.com. I have a self-signed trusted certificate (internal CA) which works fine behind traefik if I set it as a default certificate, or if I use it in the application itself (inside Tomcat). Since it is an internal service I could live without SSL for it if that solves my problem - but that does not work with traefik's http to https forwarding.
I have been trying to get these 2 services to run behind the same traefik instance, but all the possible scenarios I could think of do not work, because they are either still a work in progress or just plain broken.
Scenarios
1: No http to https redirect; don't bother with https for the internal service and just use http. Then, inside the backend for the external web service, redirect to https.
Problems:
- Unable to have two ports which traefik forwards to
- Unable to forward a single port to another protocol (since the backend is always either the http or the https port)
- Use ACME over the default cert
2: Use ACME over default certificate
Someone else thought this was a good idea, but it's just not working yet.
3: Re-use the backend's SSL certificate and have traefik just pass traffic through without "SSL termination". I'm not sure if this is the same thing, but there is an option called passTLSCert. However, it seems this is only possible with frontends defined in the .toml file, which does not work for me (probably because I use Docker for the backends).
4: Use a DNS-01 challenge to create an SSL certificate for my internal service.
Sounds like this could work, so I'm now using Cloudflare and have an API key. However, it does not seem to work for subdomains, and there is no reply on my issue report: https://github.com/containous/traefik/issues/1953
EDIT: I might be able to fix the issue described in 4 and get this to work. It seems the internal DNS was conflicting with traefik.
Someone had decided that in our internal DNS a separate zone would be added per subdomain, meaning that an SOA query returned the subdomain itself as the zone name. This does not play nice with Cloudflare, since the internal DNS zone is then not the same as the Cloudflare DNS zone.
Changing this to a single main zone with A records for the subdomains fixed the issue (in combination with the delayDontCheckDNS option).
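For reference, a rough sketch of what such an acme section can look like with the keys mentioned above - email and delay value are placeholders, and the Cloudflare provider also expects CLOUDFLARE_EMAIL / CLOUDFLARE_API_KEY in traefik's environment (newer 1.x releases replace dnsProvider / delayDontCheckDNS with [acme.dnsChallenge] provider / delayBeforeCheck):
[acme]
email = "you@domain.com"
storage = "acme.json"
entryPoint = "https"
dnsProvider = "cloudflare"
delayDontCheckDNS = 60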
I want to enable SSL on an EC2 instance. I know how to install a third-party SSL certificate, and I have also enabled SSL (HTTPS) in the security group.
I just want to use a url like this: ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com with https.
I couldn't find the steps anywhere.
It would be great if someone can direct me to some document or something.
Edit:
I have an instance on EC2 on which I have installed a LAMP stack. I have also enabled HTTP, HTTPS and SSH in the security group policy.
When I open the public DNS URL in a browser, I can see the web server running perfectly.
But when I add https to the URL, nothing happens.
Is there something I am missing? I really don't want to use a custom domain for this instance because I will terminate it after a month.
For development, demos, and internal testing (which is a common case for me), you can achieve demo-grade HTTPS on EC2 with tunneling tools. Within a few minutes, using ngrok for example, you would have HTTPS (demo-grade: the traffic goes through the tunnel).
Tool 1: https://ngrok.com
Steps:
Download ngrok to your EC2 instance: wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip (link current at the time of writing; you will see it on the ngrok home page once you log in).
Enable ports 8080, 4443, 443, 22 and 80 in your AWS security group.
Register and log in to ngrok and copy the command that activates it with your token: ./ngrok authtoken shjfkjsfkjshdfs (you will see it on their home page once you log in).
Run your plain-HTTP (non-HTTPS) server (Node.js, Python, whatever) on the EC2 instance.
Run ngrok: ./ngrok http 80 (or a different port if your HTTP server runs on a different port).
You will get an https link to your server.
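Put together, the commands from the steps above look roughly like this (download link and token are the placeholders from the steps; the zip needs to be unpacked first):
wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
unzip ngrok-stable-linux-amd64.zip
./ngrok authtoken shjfkjsfkjshdfs    # use your real token from the ngrok dashboard
./ngrok http 80                      # prints a public https URL that tunnels to local port 80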
Tool 2: Cloudflare Warp
Alternatively, I think you can use an alternative to ngrok from Cloudflare called Warp, but I haven't tried that.
Tool 3: localtunnel
A third alternative could be https://localtunnel.github.io, which, as opposed to ngrok, can give you a subdomain for free. It's not permanent, but you can ask for a specific subdomain rather than a random string.
--subdomain request a named subdomain on the localtunnel server (default is random characters)
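For example (the subdomain name is just an illustration):
npm install -g localtunnel
lt --port 80 --subdomain mydemo
# prints an https URL on the localtunnel domain that forwards to local port 80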
Tool 4: https://serveo.net/
Turns out that Amazon does not provide SSL certificates for EC2 instances out of the box. I had skipped over the fact that they are a virtual server provider.
To install an SSL certificate, even a basic one, you need to get it from somewhere and install it manually on your server.
I used startssl.com - they provide free basic SSL certificates.
Create a self-signed SSL certificate using openssl. Check this link for more information (a short sketch of both steps follows after these steps).
Install that certificate on your web server. As you have mentioned LAMP, I guess it is Apache, so check this link for installing SSL on Apache.
In case you reboot your instance, you will get a different public DNS name, so be aware of this - or attach an Elastic IP address to your instance.
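A minimal sketch of steps 1 and 2, assuming Apache with mod_ssl and placeholder file paths - adjust the CN to your instance's public DNS name:
# 1) generate a self-signed key and certificate, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/ssl/private/server.key -out /etc/ssl/certs/server.crt \
  -subj "/CN=ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com"
# 2) point Apache (mod_ssl) at the files in your SSL vhost:
#      SSLEngine on
#      SSLCertificateFile /etc/ssl/certs/server.crt
#      SSLCertificateKeyFile /etc/ssl/private/server.key
#    then enable and restart (Debian/Ubuntu layout): sudo a2enmod ssl && sudo systemctl restart apache2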
"But when I add https to the URL, nothing happens."
Correct - your web server needs to have an SSL certificate and private key installed to serve traffic over https. Once that is done, you should be good to go. Also, if you use a self-signed cert, your web browser will complain about a non-trusted certificate; you can ignore that warning and proceed to the web page.
You can enable SSL on an EC2 instance without a custom domain using a combination of Caddy and nip.io.
nip.io allows you to map any IP address to a hostname without the need to edit a hosts file or create rules in DNS management.
Caddy is a powerful open source web server with automatic HTTPS.
Install Caddy on your server
Create a Caddyfile and add your config (this config will forward all requests to port 8000)
<EC2 Public IP>.nip.io {
reverse_proxy localhost:8000
}
Start Caddy using the command caddy start
You should now be able to access your server over https://<IP>.nip.io
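A quick way to verify from another machine (same placeholder as above):
curl -I https://<EC2 Public IP>.nip.io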
I wrote an in-depth article on the setup here: Configure HTTPS on AWS EC2 without a Custom Domain