Removing Rogue SSL Certs on AWS - apache

I have a client site set up on AWS with multiple servers running HTTPS behind an Elastic Load Balancer. At some point, someone from the client's team attempted to update the SSL cert by installing a new one directly on one of the servers (instead of in the ELB).
I was able to upload a new cert to the ELB, but when traffic is directed towards the server with the improperly installed cert, it triggers a security warning.
No one can seem to answer who attempted this install, how they went about it, or where they installed it.
What's the best way to go about finding and removing it?
Thanks,
ty

If it's installed on the server, it has very little to do with AWS. I see you tagged the question with apache, so I assume the server is running the Apache web server. You will have to connect to that server and remove the SSL settings from the Apache configuration, just like you would with an Apache install anywhere else.
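If it helps, something along these lines will list every Apache config line that turns SSL on or points at a certificate, so you can see where the rogue cert was wired in. It's a rough sketch that assumes the default config locations on Debian/Ubuntu or RHEL/Amazon Linux:

```python
# Rough sketch: list every Apache config line that enables SSL or points at a certificate.
# Assumes the default config roots on Debian/Ubuntu (/etc/apache2) or RHEL/Amazon Linux (/etc/httpd).
import os
import re

CONFIG_ROOTS = ["/etc/apache2", "/etc/httpd"]
SSL_DIRECTIVE = re.compile(
    r"^\s*(SSLEngine|SSLCertificateFile|SSLCertificateKeyFile|SSLCertificateChainFile)\b",
    re.IGNORECASE,
)

for root in CONFIG_ROOTS:
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, start=1):
                        if SSL_DIRECTIVE.match(line):
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass  # unreadable or special file; skip it
```

Once you've found the offending virtual host, comment out that port-443 block (or point it at the correct certificate) and reload Apache. If SSL terminates at the ELB, the instances normally only need to listen on port 80.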

Related

how to enable https for my web application hosted on google cloud

I acquired an SSL certificate through a certificate authority and later installed it on Google Cloud.
Still, my application is not accessible through https:
www.eventic.in works but https://www.eventic.in doesn't work.
Can you please assist me in enabling https?
I want this site to be available only through https. Even if someone accesses it without https, they should be redirected to https.
From the image I see you're configuring your certificates in Google App Engine Custom domains. Please note that Compute Engine (where your VM is) and App Engine are different products. It is also possible that you're following this doc, which is intended for App Engine and not for a VM.
Since you want to set up your certificates on a VM, that configuration lives in the web server you're using (NGINX, Apache, etc.). Also, checking your URL https://www.eventic.in, port 443 does not appear to be serving anything, and that is the port generally used for HTTPS.
You may want to look at how to configure SSL for the solution you have running in your VM.
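As a quick check (a minimal sketch using only Python's standard library, with the hostname taken from the question), you can see whether port 443 answers at all and what certificate it presents:

```python
# Minimal check: does port 443 answer, and what certificate does it present?
# Hostname comes from the question; nothing here is specific to Google Cloud.
import socket
import ssl

HOST = "www.eventic.in"

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS handshake OK; certificate subject:", tls.getpeercert().get("subject"))
except OSError as exc:
    print("Port 443 is not serving HTTPS:", exc)
```

If the connection is refused, open tcp:443 in the VM's firewall rules and enable the SSL virtual host in the web server; the http-to-https redirect you want also lives in that same web server configuration.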

SSL Configuration in Clustered environment

We have an Oracle application (Agile PLM) which is deployed in a clustered environment. We have one admin node and two managed nodes supporting our application, where the admin node and one managed node are on the same server. We also have a load balancer which distributes traffic across the cluster.
We want to configure SSL in our application so that the application URL is accessible over https only. We have already configured SSL at the load balancer level (by installing the security certificates in the WebLogic admin server), but want to know whether we have to configure SSL on the managed servers as well, or whether bringing the load balancer onto https is sufficient.
All users access the application using the load balancer URL only, but since I am on the development team, I am also aware that we can connect to the application with the managed server URLs, which are still running on http. Is it a must to bring the managed servers onto https as well, or is it just a good practice but not necessary?
It's not necessary, though probably a good practice.
I believe I have read in Oracle's installation guide that the recommended way is HTTP on the managed servers and terminating SSL on the load balancer. This may have changed.
For what it's worth, I leave HTTP on the managed servers.
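If you want to convince yourself where TLS terminates, a small check like the following works. Hostnames and the managed-server port are placeholders, not values from the question:

```python
# Rough sketch: check where TLS terminates in the cluster.
# Hostnames and the managed-server port are placeholders, not values from the question.
import socket
import ssl

LOAD_BALANCER = "plm.example.com"       # the URL users actually hit, over HTTPS
MANAGED_NODE = "plm-node1.example.com"  # a managed server, reachable over plain HTTP
MANAGED_PORT = 7001                     # placeholder WebLogic listen port

# The load balancer should complete a TLS handshake and present the certificate...
with socket.create_connection((LOAD_BALANCER, 443), timeout=5) as sock:
    with ssl.create_default_context().wrap_socket(sock, server_hostname=LOAD_BALANCER) as tls:
        print("Load balancer terminates TLS; cert subject:", tls.getpeercert()["subject"])

# ...while a TLS handshake against the managed node's plain-HTTP port should fail.
try:
    with socket.create_connection((MANAGED_NODE, MANAGED_PORT), timeout=5) as sock:
        ssl.create_default_context().wrap_socket(sock, server_hostname=MANAGED_NODE)
    print("Managed node also speaks TLS on port", MANAGED_PORT)
except ssl.SSLError:
    print("Managed node is plain HTTP on port", MANAGED_PORT, "- SSL is offloaded at the load balancer.")
```

If the handshake only succeeds against the load balancer, you have the setup described above: SSL offloaded at the front, plain HTTP behind it.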

Multi-tenant SSL with Cloudflare and Heroku

I'm currently building an application that will reside at app.mydomain.com, running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. I want clients to be able to set up their own domain (www.clientdomain.com) and CNAME it to their entry point. I understand this is pretty straightforward up until now.
All my DNS is handled by Cloudflare, and I believe I can configure Cloudflare into Full (Strict) mode; all I need to do is install their Origin Cert onto my Heroku dyno. This will ensure that all direct connections to my domain are secure (going to app.mydomain.com/client1).
Question is, how does a client go about getting an SSL'ed connection for their domain? Do I need to get a multi-domain cert and start adding domains to it as I get clients, am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no go), is it supposed to live on Cloudflare somewhere, or are there additional options I'm not seeing (I hope there are!)?
I'm not wondering what to do for my own domains, but rather, how do clients set up an SSL connection with their domains that resolve onto my servers?
This is rather perplexing!
The flow would be (I think):
User Browser -> Client's DNS -> (CNAME to) My Cloudflare -> Heroku
Hmm, it looks like this might be a pretty solid solution to this issue...
https://blog.cloudflare.com/introducing-ssl-for-saas/
Edit - after clarification
I'm currently building an application that will reside at app.mydomain.com which is running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. Question is, how does a client go about getting an SSL'ed connection for their domain; do I need to get a multidomain cert and start adding domains to it as I get clients?
If you are going to use the same Heroku app for all of your clients (I think this is a bad idea, by the way, but you might be required to), then yes, you should get a multi-domain certificate and keep adding domains to it as your list of clients expands.
Original answer - which explains SSL + Load Balancing on Heroku.
I'm currently building an application that will reside at app.mydomain.com which is running on Heroku. I want clients to be able to set up their own domain www.clientdomain.com and CNAME it to mine.
You will need a wildcard certificate to cover your subdomain (app.mydomain.com). You'll have to use that cert in Heroku.
...all I need to do is install their Origin Cert onto my Heroku dyno.
You are correct - except it's not on your Heroku dyno, it's on your Heroku app endpoint. There's a good read here: https://serverfault.com/questions/68753/does-each-server-behind-a-load-balancer-need-their-own-ssl-certificate
If you do your load balancing on the TCP or IP layer (OSI layer 4/3, a.k.a. L4, L3), then yes, all HTTP servers will need to have the SSL certificate installed.
If you load balance on the HTTPS layer (L7), then you'd commonly install the certificate on the load balancer alone, and use plain un-encrypted HTTP over the local network between the load balancer and the webservers (for best performance on the web servers).
So you should install your SSL certificate to your Heroku endpoint and let Heroku handle the rest.
Question is, how does a client go about getting an SSL'ed connection; do I need to get a multidomain cert and start adding domains to it as I get clients, am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no go), or is it supposed to live on Cloudflare somewhere?
If you're referring to adding servers to your service on Heroku, all you need to do is increase the number of web dynos; Heroku will handle the load balancing between those dynos. Your SSL certificate is handled at the load balancer, so your dynos will be serving requests for the same endpoint. You shouldn't need another SSL certificate for the endpoint you've defined, as long as you're serving traffic from multiple dynos attached to it.
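For a concrete way to see why the per-client domains are the tricky part: once a client CNAMEs www.clientdomain.com to you, the certificate presented for their hostname has to cover that hostname, whether it comes from a multi-domain cert you keep extending or from Cloudflare's SSL for SaaS. A quick standard-library check (the hostname is a placeholder):

```python
# Sketch: verify that the certificate served for a client's custom domain actually covers it.
# The hostname is a placeholder for whichever client domain was CNAMEd to your app.
import socket
import ssl

CLIENT_HOST = "www.clientdomain.com"

ctx = ssl.create_default_context()
try:
    with socket.create_connection((CLIENT_HOST, 443), timeout=5) as sock:
        # server_hostname sets SNI, which is how the edge picks which certificate to serve
        with ctx.wrap_socket(sock, server_hostname=CLIENT_HOST) as tls:
            sans = [name for _, name in tls.getpeercert().get("subjectAltName", ())]
            print("Certificate covers:", sans)
except ssl.SSLCertVerificationError as exc:
    print("Presented certificate does not cover", CLIENT_HOST, "->", exc)
```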

forwarding HTTPS from Plesk to AWS EC2

I'm quite new to setting up and managing websites, domains and such.
I purchased a domain (let's say example.de) and registered it on my vServer running Parallels Plesk. As I need secure access, I requested and created an SSL certificate at startssl.com. The application (Spring Boot) runs on an EC2 instance at AWS. The product website runs on an Apache web server on an EC2 instance. I need to secure both the app (app.example.de) and the website (example.de) using SSL.
What I want to achieve is a redirect from the domain https://example.de to the EC2 instance. I have already tried several things; some I remember from the trial-and-error marathon:
Configuring Plesk frame-forwarding of the traffic on https://example.de to the EC2 IP.
Obviously the browser warns me that the certificate is issued for example.de and not for ..., and classifies the traffic as insecure. Same as when accessing it via https://...
I also uploaded the certificate in Plesk, also without success.
Is there a solution for my setup? Or do I need (or is it recommended) to use Amazon Route 53 for that task? It would be nice if someone could guide me and provide some tips, as I am pretty new to these topics.
Thanks
It seems there is no way around AWS Route 53.
I figured out that there is an extension for Plesk designed to route traffic using Route 53, and there is even a nice manual article on the Plesk homepage about using any external DNS as well as the Route 53 extension. As this extension requires a newer version of Plesk than the one I am using, I wasn't able to install it. I am pretty much bound to this version, so an update was out of the question. I cannot tell for sure whether this extension solves my initial problem, but it seems to be a potential solution.
The simplest solution (at least for me):
I ended up moving my domain to AWS: I created a hosted zone, added a record set with the IP of the EC2 instance, and switched the domain to the DNS servers provided by the hosted zone. Everything is now working like a charm.
Some more background: the product website and app frontend run inside Apache, where I installed mod_ssl and configured SSL access. The application backend runs as a Spring Boot app in Tomcat, where I also configured SSL using a TomcatConnectorCustomizer.
This setup works for my scenario
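For reference, the Route 53 record-set step described above can also be scripted. A minimal boto3 sketch, where the hosted zone ID and IP address are placeholders and AWS credentials are assumed to be configured:

```python
# Sketch: create/update the Route 53 record that points example.de at the EC2 instance.
# The hosted zone ID and IP are placeholders; requires boto3 and configured AWS credentials.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder: the hosted zone created for example.de
    ChangeBatch={
        "Comment": "Point the apex domain at the EC2 instance",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.de.",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder EC2 public IP
                },
            }
        ],
    },
)
```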

Azure SSL site no longer working after reboot - connections reset

I'm working on a site that exposes endpoints over HTTP and HTTPS through Azure Web roles.
We've done some updates to the database, and to force the sites to re-populate their cache we've rebooted the roles. Now, however, the SSL endpoints no longer work - connections to the server are broken.
We've tried reimaging the instances, which also didn't help.
Answering my own question to serve as reference for the future:
We'd removed the certificate used for Remote Desktop access to the site.
Interestingly, this caused the SSL side of the site to fail, but only on a reboot.
Our fix was to replace the certificate, then do a full redeploy to staging and a VIP swap.
It does appear that the load balancer is very sensitive to its certificate configuration. I've found before that if there are any issues with any of the certificates in a package, the load balancer just stops working with no error in the log.
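When debugging something like this, it helps to confirm exactly which certificate the endpoint is presenting before and after the redeploy. A small standard-library sketch (the hostname is a placeholder); the SHA-1 thumbprint it prints can be compared against the thumbprints referenced in your service configuration:

```python
# Sketch: print the SHA-1 thumbprint of the certificate an HTTPS endpoint presents.
# The hostname is a placeholder for the cloud service's SSL endpoint.
import hashlib
import socket
import ssl

HOST = "myservice.cloudapp.net"

ctx = ssl.create_default_context()
ctx.check_hostname = False       # we only want to inspect the certificate,
ctx.verify_mode = ssl.CERT_NONE  # not validate the chain

with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)
        print("SHA-1 thumbprint:", hashlib.sha1(der).hexdigest().upper())
```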