I have several WebApp clients that will be used to serve authenticated users via TLS. On the backend, these WebApps communicate with a CloudService via NetTCP using WCF on cloudapp.net endpoints. Both sets of resources are in the same Azure subscription, resource group and geographic location.
My question regards both performance and security:
1. Will traffic between the WebApp and CloudService resources stay within the Azure infrastructure to maintain the best performance? If not, is there a way to ensure the fastest possible communication between the WebApps and CloudServices?
2. If the traffic does remain internal, are there any concerns about securing the traffic when these calls are made, or can I safely keep the calls as http:// requests since they will not be made over any public channels?
3. Are there any other security or performance concerns regarding this scenario?
No, not unless you implement VNet integration and integrate your WebApp into the CloudService's VNet.
Well, it depends on how paranoid you are. If you absolutely trust Azure, you can use HTTP, without the S. I'm not sure how secure that really is, but I can hardly imagine anyone saying: "Use HTTP, Azure (or whatever) is secure enough".
If everything is in one region, it should be fine, theoretically.
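If you decide to encrypt the NetTCP traffic anyway, WCF's netTcpBinding supports transport security directly in the binding. A minimal sketch, assuming certificate-based credentials (the binding name is made up, and the certificate setup in the service behaviors is omitted):

    <!-- app.config / web.config, on both client and service -->
    <system.serviceModel>
      <bindings>
        <netTcpBinding>
          <!-- "secureNetTcp" is a hypothetical name; reference it from your endpoints -->
          <binding name="secureNetTcp">
            <security mode="Transport">
              <!-- Web Apps are not domain-joined, so certificates are more
                   practical here than the default Windows credentials -->
              <transport clientCredentialType="Certificate" />
            </security>
          </binding>
        </netTcpBinding>
      </bindings>
    </system.serviceModel>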
I have been reading the Openshift documentation for secured (SSL) routes.
Since I use a free plan, I can only have an "Edge Termination" route, meaning the SSL is ended when external requests reach the router, with contents being transmitted from the router to the internal service via HTTP.
Is this secure? I mean, part of the transmission is done over HTTP in the end.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but the same people could also access your application container. Any administrator of the system could access your application container even if you were using a passthrough secure connection terminated within your application. So it is the same situation as with most managed hosting, where you rely on the administrators of the service to do the right thing.
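For reference, an edge-terminated route is expressed roughly like this in current OpenShift versions (names and host are placeholders; on the free plan you would normally configure this through the console or CLI rather than writing the object yourself):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myapp
    spec:
      host: myapp.example.com
      to:
        kind: Service
        name: myapp
      tls:
        termination: edge                        # TLS ends at the router
        insecureEdgeTerminationPolicy: Redirect  # plain-HTTP requests get redirected
    # from the router to the pod the traffic then travels as plain HTTP,
    # but only over the cluster's internal software-defined network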
I'm building a serverless web application. My HTML, CSS and JavaScript are in a public storage location which my domain example.com points towards.
When my users navigate to my domain using their browser, their browser will GET these files from that location and then there is no further communication with example.com. The JavaScript application runs in the browser and communicates with a separate backend via HTTPS (in my case AWS, but could be e.g. Azure, Kinvey, BlueMix or others).
It therefore seems to me that there is no reason to encrypt the communication between my users' web browsers and example.com, i.e. I don't need to provide https://example.com, and doing so would provide no security benefit.
Am I correct?
The reason I ask is that I found at least two static hosting services which offer SSL support:
https://www.netlify.com/features#security
https://surge.sh/help/using-https-by-default
I am aware of the reasons for wanting HTTPS (described in the second link above and also at https://levels.io/default-to-https/ ...) but none of this seems to apply to my situation.
I believe this is a serious question because more applications will be built in this manner (the folks at http://serverlessconf.io/ certainly think so), and as long as the channel to the actual backend is secured there is no reason to secure the channel to what is essentially a read-only hard disk.
If you don't secure communication with example.com, then a man-in-the-middle attacker (e.g. a rogue Wi-Fi hotspot) could modify the HTML and JavaScript loaded by users.
One way to use this would be to change the JavaScript so that subsequent API requests are sent to attacker-controlled servers instead of yours, compromising any credentials or information transferred.
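To make that concrete, here is a hypothetical fragment of the kind of page described in the question (the file name and API host are made up). If the page itself is fetched over plain HTTP, nothing stops an on-path attacker from rewriting it before it reaches the browser:

    <!-- index.html, served from the static host -->
    <script>
      // The backend that the page talks to over HTTPS.
      // If this page was loaded over plain http://, a man in the middle can
      // rewrite this constant (or the surrounding code) so that the request
      // below goes to a server they control instead.
      var API_BASE = "https://api.example.com";

      fetch(API_BASE + "/login", {
        method: "POST",
        body: JSON.stringify({ user: "alice", password: "correct horse" })
      });
    </script>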
I have 2 web roles in a cloud service: my API and my web client. I'm trying to set up SSL for both. My question is: do I need two SSL certificates? Do I need 2 domain names?
The endpoint for my API is my.ip.add.ress. The endpoint for my web client is my.ip.add.ress:8080.
I'm not sure how to add the DNS entries for this, as there is nowhere for me to input the port number (which I have learned is because it's outside the scope of the DNS system).
What am I not understanding? This seems to be a pretty standard scenario with Azure Cloud Services (it is set up this way in the example project in this tutorial: http://msdn.microsoft.com/en-us/library/dn735914.aspx), but I can't find anything that explains explicitly how to handle it.
First, you are right that DNS doesn't handle port numbers. For your case, you can simply use one SSL certificate for both endpoints and give the two endpoints the same domain name. Based on which port the user's request targets, the request will be routed to the correct endpoint (API vs. web client). Like you said, this is a relatively common scenario. There is no need to complicate things.
Let's assume you have one domain, www.dm.com, pointing to the IP address. To access your web API, your users hit https://www.dm.com, with no port number, which defaults to 443. To access your web client, they hit https://www.dm.com:8080. If you want users to use the default port 443 for both the web API and the web client, you need to create two cloud services instead of one, with the web API on one cloud service and the web client on the other. Billing-wise, you will be charged the same as for one cloud service.
Is there any reason you want to use 2 different domains and, in turn, 2 SSL certificates? If so, it is still possible. Based on your requirements, you may have to add extra logic to block requests coming from the other domain.
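As a sketch, the single-certificate, two-port setup described above looks roughly like this in ServiceDefinition.csdef (role and certificate names are hypothetical; the certificate thumbprint goes in ServiceConfiguration.cscfg, and the Sites sections are omitted for brevity):

    <ServiceDefinition name="MyCloudService"
                       xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="ApiRole">
        <Endpoints>
          <!-- reached as https://www.dm.com (default port 443) -->
          <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="MySslCert" />
        </Endpoints>
        <Certificates>
          <Certificate name="MySslCert" storeLocation="LocalMachine" storeName="My" />
        </Certificates>
      </WebRole>
      <WebRole name="WebClientRole">
        <Endpoints>
          <!-- reached as https://www.dm.com:8080 -->
          <InputEndpoint name="HttpsIn8080" protocol="https" port="8080" certificate="MySslCert" />
        </Endpoints>
        <Certificates>
          <Certificate name="MySslCert" storeLocation="LocalMachine" storeName="My" />
        </Certificates>
      </WebRole>
    </ServiceDefinition>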
IPsec VPN provides security at the network layer, with the following facilities:
Authentication
Data Integrity
Confidentiality
Anti-Replay
But this setup is more costly than using SSL at the application layer.
For example: HTTP uses SSL (HTTPS) to talk to a web server.
So why do people use VPNs?
You pretty much gave the answer yourself: encryption on the network layer provides security for all traffic that goes through it, instead of each application having to implement its own security model.
VPNs are also used in completely different scenarios than your typical HTTP request. You most typically use a VPN to join an intranet from outside and use all of the network's internal services. Doing this via a VPN means you only need to expose one network entry point to the outside world. Otherwise you'd have to expose every single service to the outside world and implement an individual security model for each.
I am working on a HIPAA cloud project and am implementing a Key Store as a central repository for all of the key pairs used for PHI (Protected Health Information) encryption... I am not worried about the actual data, because it will be encrypted at rest and in transit.
However, when a worker or web role needs to work with the data, it needs to decrypt and re-encrypt it (if it does updates). That's where the Key Store comes into play. However, I don't want this internal service exposed, and I also need it to be SSLed, because sending keys in the clear, even inside a virtual network of roles, wouldn't pass a security audit.
So any suggestions on how I can get a web or worker role to use SSL with an internal endpoint?
Thanks
I don't think you can. Internal endpoints are on a closed network branch, so HTTPS would normally be redundant (although I understand your compliance issues). I found this answer (to my question) very useful in figuring out the security of internal endpoints: How secure are Windows Azure internal endpoints? - see the more detailed post that Brent links to.