I am developing a website using Vue.js, and I have set up HTTPS for my webpage.
Right now the website is basically static, with no communication between it and a backend server.
Suppose I want to add features, for instance login, via a backend server that is on the same machine as the frontend server.
Do I need to get another SSL certificate to make the communication between the frontend and the backend go through HTTPS?
Is there any way to make the SSL certificate work for the whole domain?
You have a few options here:
Option 1: Proxy API requests to your backend service
This means that your HTTP server (e.g. NGINX) uses your built Vue app (e.g. the dist folder contents) as its document root and creates a reverse proxy for requests to your API service. For example, say you have a backend service running on port 3000:
location /api {
    proxy_pass http://localhost:3000/;
}
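To make the whole picture concrete, here is a rough sketch of what such a server block could look like; the domain, certificate paths and dist location are placeholders, not values from the question:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;

    # serve the built Vue app
    root /var/www/my-app/dist;
    index index.html;

    # history-mode SPA fallback
    location / {
        try_files $uri $uri/ /index.html;
    }

    # proxy API calls to the backend on port 3000
    location /api {
        proxy_pass http://localhost:3000/;
    }
}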
Then your Vue app can make requests to /api/whatever.
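For example, from a component (using the placeholder /api/whatever endpoint from above):
fetch("/api/whatever")
  .then((res) => res.json())
  .then((data) => console.log(data));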
During development, you can mirror this using the devServer.proxy setting in vue.config.js
module.exports = {
  devServer: {
    proxy: {
      "^/api": "http://localhost:3000/"
    }
  }
}
Option 2: Use a wildcard certificate and run your backend service at a sub-domain, e.g.
Frontend - https://example.com
Backend - https://api.example.com
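If you go this route, a wildcard certificate can be obtained from Let's Encrypt with certbot via a DNS-01 challenge (wildcards require DNS validation); example.com here is just a placeholder:
certbot certonly --manual --preferred-challenges dns \
  -d "example.com" -d "*.example.com"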
Option 3: Just get two SSL certificates for your two domains. They're free, after all.
Keep in mind that options #2 and #3 will require you to enable CORS support on your backend, whereas that is not required for option #1.
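For options #2 and #3, here is a minimal sketch of what enabling CORS could look like, assuming (purely as an example) a Node/Express backend using the cors package:
const express = require("express");
const cors = require("cors");

const app = express();

// allow the frontend origin (served from a different domain/sub-domain) to call this API
app.use(cors({ origin: "https://example.com", credentials: true }));

app.get("/whatever", (req, res) => res.json({ ok: true }));

app.listen(3000);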
Related
Is it possible to use Caddy for local development, where you have https://mysite.loc and use a Caddyfile as a reverse proxy to your services running on localhost?
My hosts file entry, so I have a local mysite.loc domain:
127.0.0.1 mysite.loc
mysite.loc {
    reverse_proxy /api localhost:5000
    reverse_proxy /admin localhost:6000
    reverse_proxy /graphql localhost:7000
    reverse_proxy localhost:4000
    tls ???
}
And that's about how far I got. I think I need to somehow point mysite.loc to the running Caddy daemon so it can intercept the requests, provide generated certs (which I would then trust locally), and also act as a proxy redirecting to my locally running services.
I also think I don't need to generate any certificates myself; Caddy should do that, right?
I would also like to avoid having to use any port in the URL for mysite.loc, like https://mysite.loc:4000; just https://mysite.loc, and let Caddy handle the rest. I would also like to avoid using Docker.
I haven't tested this but my gut reaction is: No, you can't.
My reason is that Caddy secures HTTPS via Let's Encrypt (LE), and LE works by authenticating the site: Caddy places a beacon (a challenge response) internally on the server, and LE then queries that beacon to check it has the correct contents. So LE's query will fail if the site is simply on localhost and not open to the WAN. LE needs access.
You could try opening your site to the WAN, doing the LE auth, then closing it to the WAN again, but I'm not sold that would work.
That being said, if all you want is HTTPS locally for dev, use a self-signed cert. Keep in mind that HTTPS is silly for local dev, because the whole point of HTTPS is to encrypt data in transit, and there is no transit for localhost.
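For reference, a self-signed cert for the local name can be generated with something like this (a sketch only; -addext requires OpenSSL 1.1.1+, and you would still have to trust the cert manually in your OS/browser):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout mysite.loc-key.pem -out mysite.loc.pem \
  -subj "/CN=mysite.loc" \
  -addext "subjectAltName=DNS:mysite.loc"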
It seems that using .localhost instead of .loc is enough to get HTTPS. For anyone looking to get started, here's one of my recent Caddyfiles.
Caution: I was kind of hesitant to post this as an answer, because browsers get their updates automatically all the time, so what works today might not work the next time you open your browser.
{
    email foo@gmail.com
    log {
        format console
    }
}

www.{$DOMAIN} {
    redir https://{$DOMAIN}{uri}
}

{$DOMAIN} {
    @websockets {
        header Connection *Upgrade*
        header Upgrade websocket
    }

    reverse_proxy /graphiql {$API_SERVICE}
    reverse_proxy /voyager {$API_SERVICE}
    reverse_proxy /graphql {$API_SERVICE}
    reverse_proxy /f/* {$API_SERVICE}

    reverse_proxy @websockets {$CLIENT_SERVICE}
    reverse_proxy {$CLIENT_SERVICE}
}
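In case it's not obvious, the {$DOMAIN}, {$API_SERVICE} and {$CLIENT_SERVICE} placeholders are environment variables, so you could start Caddy with something like the following (the values are just hypothetical examples for local services):
DOMAIN=mysite.localhost \
API_SERVICE=localhost:5000 \
CLIENT_SERVICE=localhost:4000 \
caddy run --config Caddyfile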
It's possible to get SSL locally; however, the auto-SSL feature in Caddy will not work, since that relies on Let's Encrypt.
I suggest trying mkcert. After you have successfully installed mkcert, run mkcert mysite.loc to generate a certificate, and it should return something like:
Created a new certificate valid for the following names 📜
- "mysite.loc"
The certificate is at "./mysite.loc.pem" and the key at "./mysite.loc-key.pem" ✅
It will expire on 6 March 2025
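If you haven't already, you will most likely also want to install mkcert's local CA into your system and browser trust stores, so that the generated certificate is actually trusted:
mkcert -install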
Then, inside your Caddyfile, add the tls directive:
mysite.loc {
    reverse_proxy /api localhost:5000
    reverse_proxy /admin localhost:6000
    reverse_proxy /graphql localhost:7000
    reverse_proxy localhost:4000
    tls mysite.loc.pem mysite.loc-key.pem
}
Then run it and it should just work!
Currently we have a C# web API running on 2 IIS servers. We are using NetScaler to load balance between the IIS1 and IIS2 servers.
We have containerized our API and deployed it to OpenShift; as part of our testing, we would initially like to add OpenShift as a third node.
That means NetScaler should forward requests to the OpenShift route as well.
How can this be achieved in NetScaler?
My OpenShift route name is different, so we tried specifying a URL transformation rule to redirect incoming IIS requests to the exposed OpenShift route, but we are getting a 503 Service Unavailable error.
What is the right way of configuring NetScaler so that my API requests are distributed between IIS1, IIS2 and OpenShift?
I don't think URL transformation is necessary in most cases. In a Route you can specify any host that you would like, so you can use your old DNS name. When a request with that HTTP Host header arrives at the OpenShift cluster (specifically, at any Router Pod), it will be forwarded to your application.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route
  ...
spec:
  host: www.example.com
  ...
Your NetScaler load balancer needs to forward the traffic to the OpenShift load balancer (which is typically a separate IP), which in turn will forward it to the Router Pods.
I'm working with Vue.js using the webpack template, in dev mode.
How can I have part of my server using the HTTPS protocol and the other part using HTTP?
I know that to use HTTPS you just add https: true to the devServer variable in the file build/webpack.dev.conf.js. Example:
devServer: {
  https: true,
  // other variables...
}
But when I do that, only HTTPS requests are accepted; plain HTTP no longer works.
How can I work with both protocols? If that's not possible, is there a Vue.js way to redirect an HTTP request to HTTPS?
It doesn't look totally straightforward to configure multiple entry points on your webpack dev server. Your best bet is likely to reverse-proxy the HTTP requests using whatever other web server you have handy; IIS will do this for you, for example. Google "reverse proxy [name-of-your-webserver]" :-)
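For instance, a rough nginx sketch that accepts plain HTTP and forwards it to the HTTPS dev server (the port 8080 is only an assumption; adjust it to wherever your dev server actually listens):
server {
    listen 80;
    location / {
        # the dev server uses a self-signed cert, so don't verify it
        proxy_pass https://localhost:8080;
        proxy_ssl_verify off;
    }
}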
I am working with a Golang app and Caddy as the HTTP server. The Golang app rejects every HTTP connection; it can only be used over HTTPS. This app is a kind of API/service that is consumed by other apps. As it requires HTTPS, I installed Caddy so I can take advantage of the automatic SSL certificate and use a proxy to switch between the ports.
The application is running on port 9000, so consumers will only type mysite.com and Caddy should be in charge of redirecting those requests to port 9000 while maintaining HTTPS. The Caddy configuration for the site is:
mysite.com {
    proxy / :9000 {
        max_fails 1
    }
    log logfile
}
Nevertheless, it seems like HTTPS is lost when the request is proxied. I checked the logs of the application (not the logs of Caddy) and I get this:
http: TLS handshake error from xxx.xxx.xxx.xxx:xxxx: tls: oversized record received with length 21536
So, based on this error, it looks to me like the HTTP proxy made by Caddy is losing the HTTPS. What can I do?
From the Caddy docs:
to is the destination endpoint to proxy to. At least one is required, but multiple may be specified. If a scheme (http/https) is not specified, http is used. Unix sockets may also be used by prefixing "unix:".
So maybe it is sending http requests to the proxied https endpoint.
Does
mysite.com {
    proxy / https://localhost:9000 {
        max_fails 1
    }
    log logfile
}
fix it?
If that is the case, you may not strictly need your app on :9000 to listen for HTTPS. It may simplify your deployment and cert management to just have it listen for plain HTTP and have Caddy manage all the certs.
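For illustration, the plain-HTTP variant of the backend could be as simple as this sketch (the handler is hypothetical; Caddy terminates TLS in front of it and proxies to port 9000):
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from the backend")
    })
    // plain HTTP on :9000; Caddy handles HTTPS and proxies to this port
    log.Fatal(http.ListenAndServe(":9000", nil))
}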
I know how to set up HTTPS for, say, a Clojure web app with nginx. How do I do that for Phoenix?
In prod.exs I have this:
config :my_app, MyApp.Endpoint,
  url: [host: "my_website.com", port: 443],
  http: [port: 4000],
  # https: [port: 443,
  #         keyfile: System.get_env("SOME_APP_SSL_KEY_PATH"),
  #         certfile: System.get_env("SOME_APP_SSL_CERT_PATH")],
  cache_static_manifest: "priv/static/manifest.json"
For nginx I have this:
ssl_certificate /etc/letsencrypt/live/my_app.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/my_app.com/privkey.pem;
I want to use nginx with Phoenix as well.
1) Should I remove http: [port: 4000], completely from prod.exs?
2) Should I instead uncomment https: [port: 443, ...]? Or should I have them both? I don't want the website to be accessible over plain HTTP, or I'd let nginx take care of that by redirecting a user from HTTP to HTTPS.
3) Or should I remove both https and http and let nginx handle that?
4) What about the url key and its port?
If you are using nginx to terminate the SSL part of the connection, then you leave the app server configured for HTTP on any port you like (4000 is fine, as long as you configure nginx to forward to it). If your server is configured correctly (i.e. port 4000 is not reachable from the outside), it will not answer external HTTP requests on port 4000, so the SSL cannot be bypassed.
The SSL configuration you are referring to at the app-server level configures the app server itself to terminate the SSL connection (no nginx necessary). Phoenix apps are all "full featured" web servers thanks to Cowboy, so they can handle SSL termination as well as serving the application's dynamic and static assets.
The url configuration is there so your application knows its domain and can generate full URLs as well as paths.
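To make that concrete, a minimal sketch of what an nginx-terminated setup could look like in prod.exs (assuming nginx listens on 443 and forwards to the app on port 4000; the scheme/port under url are only there so generated links point at the public HTTPS address):
config :my_app, MyApp.Endpoint,
  url: [scheme: "https", host: "my_website.com", port: 443],
  http: [port: 4000],
  cache_static_manifest: "priv/static/manifest.json"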
If you're set on using nginx in front of your Phoenix app, then use nginx to terminate the SSL connection (your option 3). You still need to configure http in Phoenix, though, since nginx will proxy to your app over plain HTTP. Therefore:
config :my_app, MyApp.Endpoint,
  url: [host: "my_website.com", port: 4000],
  http: [port: 4000]
This assumes you will configure nginx to proxy to your app on port 4000. You will also want to adjust the host config key to the base URL of your site, since any URLs you generate will use this base name (as Jason mentioned).
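For completeness, the nginx side could then look roughly like this (a sketch only; the certificate paths are the ones from the question, and the HTTP-to-HTTPS redirect block is optional):
server {
    listen 80;
    server_name my_website.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name my_website.com;

    ssl_certificate     /etc/letsencrypt/live/my_app.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my_app.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}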