Here is the situation I am facing. I work for a company that is designing a product in which, due to legal constraints, certain pieces of data need to reside on physical machines in specific geopolitical jurisdictions. For example, some of our data must reside on machines within the borders of the "Vulgarian Federation".
We are using Kubernetes to host the system, and will probably settle on either GKE or AWS as the cloud provider.
The solution I have come up with creates a pod to host a locale-specific MongoDB instance (say, Vulgaria-MongoDB), which then transparently stores the data on physical drives in that locale. My plan is to export the storage from the Vulgarian machine to our Kubernetes cluster using NFS.
The problem I am facing is that I cannot find a secure means of achieving this NFS export. I know that NFSv4 supports Kerberos, but I do not believe NFS was ever intended to be used over the open web, even with Kerberos. Another option would be to create a VPN server in the cluster and add the remote machine to the VPN. I have also considered SSHFS, but I think it would be too unstable for this particular use case. What would be an efficient and secure way to accomplish this task?
As mentioned in the comment, running the database far away from its storage is likely to result in all kinds of weirdness. Modern DB engines tolerate some storage latency, but generally not tens of seconds. If you must do it, though, the VPN approach is the right one: some kind of protected network bridge. I don't know of any remote storage protocol I would trust over the open internet.
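If you want to quantify the problem before committing to the design, timing round trips to the remote endpoint gives a rough lower bound on the latency every storage operation would pay. Here is a minimal sketch; the hostname and port are placeholders, and a plain TCP connect is only a crude stand-in for real NFS round trips:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class StorageLatencyProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: the NFS export (or VPN gateway) in the remote locale.
        String host = args.length > 0 ? args[0] : "vulgaria-nfs.example.com";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 2049; // NFS

        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            try (Socket socket = new Socket()) {
                // Time a bare TCP connect with a 5-second timeout.
                socket.connect(new InetSocketAddress(host, port), 5000);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("TCP connect " + (i + 1) + ": " + elapsedMs + " ms");
        }
    }
}
```

If bare connects already take tens of milliseconds, a chatty storage protocol layered on top will be far worse; that is usually the point at which remote storage gets replaced by running the database in-locale and replicating at the application level.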
I have set up a LAMP server on a Raspberry Pi on my home network. I would like to expose the Pi to the internet by opening up ports 22 and 80 on my router, so I can SSH into it as well as use any web services I set up on it.
This is a little pet project I'm using to learn more about networking, server setup and Linux in general, with only a cheap RPi which I can wipe and start again easily if anything goes wrong. I do plan to put it on a separate subnet to the other devices on the home network, just in case.
(Yes, I know this is a little much for a Raspberry Pi; this is just a learning exercise and a proof of concept before I throw money at this to build a rig for it.)
My understanding is that SSH is already secure, so I don't have to worry about my username and password being seen across the web when I want a terminal session.
My concern is that if I send anything to a web service (such as a WordPress or phpMyAdmin password) it'll be clear to see on the web. How can I stop this?
My plan was SSL, but from what I've read, most issuers require a domain name before they'll issue a certificate, while all I'll be doing is pointing the devices I'll be using at a static IP from my ISP.
The other use I have planned for it is as a MySQL server for my Kodi boxes to use for their library data, so my devices can share it (the videos live on another server running Windows). So other devices on the local network need to be allowed access to MySQL easily, without the silly level of security the internet side requires. I assume this will be easily possible alongside my other use cases, as I'd not be opening the port for it on the router, and the only things that would access MySQL are local network devices and services on the MySQL host itself.
Are any of my assumptions or conclusions wrong?
Are there any better ways to achieve what I'm after than what I'm describing?
Is there a preferred way to interact with the Pi if I just want it to set off a specific script? (Say, send a wake-on-LAN packet to a specific computer.)
Is there a way for me to have the web server only communicate with specific devices that have the appropriate keys/certificates loaded onto them, so that I can be certain that I'm the only one with access?
Are any of my assumptions or conclusions wrong?
Using a username/password combo for SSH is probably secure enough, but it's generally more secure to use a public/private SSH key pair.
Your assumptions about MySQL seem sound. Just make sure to have some authentication on the server just in case you have a nosey houseguest on your WiFi. :)
Are there any better ways to achieve what I'm after than what I'm describing?
A couple options that come to mind:
You could generate a self-signed certificate for the web server and then manually copy it onto your client devices. I think this would allow you to get around the requirement for a domain name (there's a client-side sketch after this list).
You could set up a secure VPN into your home network. This way you wouldn't have to expose your web/SSH servers to the world.
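To make the self-signed option concrete, here is a hedged sketch of what the client side can look like in Java, assuming the certificate file has been copied over from the Pi and was generated with the Pi's address in its subject alternative names; the file name and IP below are placeholders:

```java
import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SelfSignedClient {
    public static void main(String[] args) throws Exception {
        // Load the self-signed certificate copied from the Pi (placeholder path).
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream("pi-selfsigned.crt")) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        // Build a trust store containing only that certificate.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        trustStore.setCertificateEntry("pi", cert);

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        // Connect to the Pi over HTTPS; the IP is a placeholder. Hostname
        // verification requires the cert to list this IP as a subject
        // alternative name.
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://203.0.113.10/").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```

Browsers do the analogous thing when you import the certificate into their trust store; the Java version is just easier to show in one file.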
Is there a way for me to have the web server only communicate with specific devices that have the appropriate keys/certificates loaded onto them, so that I can be certain that I'm the only one with access?
The VPN option mentioned earlier would allow you to do this.
You could restrict access to the Apache server to only devices with specific client certificates: https://stackoverflow.com/a/24543642/2384183
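For the client-certificate route, the client side (again sketched in Java, with placeholder file name, password, and address) presents a keystore holding its certificate and private key; Apache, configured to require client certificates as in the linked answer, rejects connections that don't present an acceptable one:

```java
import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

public class ClientCertExample {
    public static void main(String[] args) throws Exception {
        // PKCS#12 keystore holding the client certificate and private key.
        // File name and password are placeholders.
        char[] password = "changeit".toCharArray();
        KeyStore clientStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client.p12")) {
            clientStore.load(in, password);
        }

        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientStore, password);

        // Null trust managers = JVM defaults; trusting the server's certificate
        // (e.g. the self-signed one from the earlier sketch) is assumed to be
        // configured separately.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null);

        // Placeholder address for the Pi.
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://203.0.113.10/").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```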
I'm now thinking about moving to S3, but I'm still concerned about restriction policies in my country in the future, so I'm wondering if I can use some DNS service or some other way to solve this problem.
It's doubtful that DNS can help with this issue.
In general, it's quite difficult to bypass such restrictions if there is an entity that has complete control over the borders of its network. It could be anything - from a government blocking opposition sites for political reasons to a company blocking access to insecure web mail providers for security reasons.
If an entity wants to block a specific service provider, it's easier, more effective and far more efficient to simply block all IP address blocks that belong to that provider. DNS is at a higher level and will not help with this issue.
What would help is an unblocked proxy (relay) or VPN service. You connect to that service and tunnel every connection to your intended service provider through it. It could be one of the following (a small application-level sketch follows the list):
A proxy server abroad that is not blocked. There are commercial proxy/anonymizer services that may be able to help here, although the most known ones are bound to be blocked too.
A VPN connection to an unblocked network e.g. a business partner.
An application such as Tor. This option usually implies a very significant performance drop and should not be used for high data rates (anything above a few KB/sec).
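As an illustration of the relay idea at the application level, here is a hedged Java sketch that routes a single connection through a SOCKS proxy; the proxy address is a placeholder, and in practice you would more likely tunnel everything at the VPN layer rather than per connection:

```java
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.net.URLConnection;

public class ProxyTunnelExample {
    public static void main(String[] args) throws Exception {
        // Address of an unblocked SOCKS proxy abroad (placeholder values).
        Proxy proxy = new Proxy(Proxy.Type.SOCKS,
                new InetSocketAddress("proxy.example.net", 1080));

        // All traffic for this connection goes through the proxy, so the
        // local network only ever sees the proxy's address, not S3's.
        URLConnection conn = new URL("https://s3.amazonaws.com/").openConnection(proxy);
        try (InputStream in = conn.getInputStream()) {
            System.out.println("Connected via proxy, first byte: " + in.read());
        }
    }
}
```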
If you use a remote proxy or VPN server you should contact the owners and find out their policy for something like this.
For a cloud instance that runs Apache, I'm guessing the instance has an IP address.
One of the benefits of using a cloud is scaling, but I'm not sure how that scaling happens. I thought new instances are created automatically to accommodate a rise in traffic. If that's correct (correct me if I'm wrong), does that mean each new instance would have its own IP? Because if that's the case, it would complicate matters a lot when pointing a domain to the cloud.
The instances sit behind a load balancer, which redirects traffic to the different spawned Apache servers. In that sense you can grow and shrink to any number of servers based on how much traffic you are receiving, while your domain keeps pointing at the load balancer's single address.
How can I work with Novell eDirectory services in J2SE? Will JNDI work with eDirectory? What are some resources I can use to learn about whatever library or libraries you suggest?
I just want to play around with retrieving information via LDAP for right now, and if I get things working the way I want, I will probably need to be able to modify objects later on.
Thanks!
JNDI should work with eDirectory.
Try http://developer.novell.com/wiki/index.php/Jldap and http://developer.novell.com/wiki/index.php/Novell_LDAP_Extended_Library.
I've used it successfully with OpenLDAP, and it should suffice for eDirectory as well.
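For the "play around with retrieving information" part, a minimal JNDI search looks roughly like this; the server address, credentials, base DN, and filter below are placeholders you'd replace with values from your own tree:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class EDirectorySearch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Host, credentials, and base DN are placeholders for your tree.
        env.put(Context.PROVIDER_URL, "ldap://edir.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=acme");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);
        try {
            // Search the subtree for user objects and print their DNs.
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            NamingEnumeration<SearchResult> results =
                    ctx.search("o=acme", "(objectClass=inetOrgPerson)", controls);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
        } finally {
            ctx.close();
        }
    }
}
```

The modifications you mention wanting later would go through the same DirContext (modifyAttributes and friends).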
Any LDAP interface you want to use should work fine against eDirectory.
Be aware that the LDAP server's configuration may not allow cleartext passwords, so you may need to bind to port 636 via SSL (with the certificate already imported into your keystore) or via TLS (retrieving the tree CA's public key on the fly).
If you have administrative access to the eDirectory server, you can easily change that, but it is still best to confirm that you can get it working over SSL/TLS (aka LDAPS).
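In JNDI terms, the SSL bind is only a couple of environment entries different from a cleartext one. This sketch (placeholder host and credentials again) assumes the tree CA's certificate has already been imported into the JVM's trust store, e.g. with keytool:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class EDirectorySslBind {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // ldaps:// on port 636; assumes the tree CA's certificate is already
        // in the JVM trust store, so the handshake can be verified.
        env.put(Context.PROVIDER_URL, "ldaps://edir.example.com:636");
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=acme");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);
        System.out.println("Bound securely to " + env.get(Context.PROVIDER_URL));
        ctx.close();
    }
}
```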
If you really need it, you can ask the admins for a server with only a replica of some test partition (and thus no real user data in its view) and test via cleartext against that.
It is very easy in eDirectory to add a new replica of a partition, carve off or merge a partition, and all can be done live.
It is similarly very easy to host replicas of many partitions on one server. (Officially there is no limit on the number of partitions in a tree or the number of replicas on a server, but it used to be 256 in older versions, before 8.x.)
If you are allowed access to the eDirectory server, you will want to ask for access to Dstrace (there are several versions of this; see Many Faces of Dstrace). There is a web interface (server:8008 on NetWare, 8010 on Windows, 8028 on Unix/Linux, usually) as well as other interfaces. If you enable the LDAP trace option (and turn off all the others), you can fairly completely debug what is going on at the server side: see the errors, the communication or lack thereof, and so on.
From what I understand, if you have multiple web servers, then you need some kind of load balancer that will split the traffic amongst your web servers.
Does this mean that the load balancer is the main connecting point on the network? ie. the load balancer has the IP address of the domain name?
If this is the case, it makes it really easy to add new hardware, since you don't have to wait for any DNS propagation, right?
There are several solutions to this "problem".
You could round-robin at the DNS level, i.e. have www.yourdomain.com point to several IP addresses (all of your servers).
This doesn't give you any intelligence in the load balancing; the load will be more or less randomly distributed. Nor does it make you resilient to hardware failures, since removing a dead server would still require a DNS change.
On the other hand, you could use a proxy or a load-balancing proxy that has a single IP but distributes the traffic to several back-end boxes. This gives you a single point of failure (the proxy; you could of course run several proxies to mitigate that), and the added bonus of being able to use some metric to divide the load more evenly and intelligently than with plain round-robin DNS.
This setup can also handle hardware failure in the back-end pretty seamlessly. The end user never sees the back-end, just the front-end.
There are other issues to think about as well: if your page uses sessions or other smart logic, you can run into synchronisation problems when your user (potentially) hits a different server on every access.
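To make the round-robin idea concrete, here's a hedged sketch of the selection logic such a proxy runs for every incoming request (the back-end addresses are placeholders); real balancers layer health checks and smarter metrics on top of exactly this kind of picker:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinPicker {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinPicker(List<String> backends) {
        this.backends = backends;
    }

    // Pick the next back end in strict rotation; thread-safe, and
    // floorMod keeps the index valid even after the counter overflows.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        // Back-end addresses are placeholders.
        RoundRobinPicker picker = new RoundRobinPicker(
                List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + picker.pick());
        }
    }
}
```

Sticky sessions replace the rotation with, say, a hash of the session ID, so the same user keeps landing on the same back end.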
It does (in general). It depends on what server OS and software you are using, but in general you'll hit the load balancer for each request, and the load balancer will then farm out the work according to the scheme you have in place (round robin, least busy, session controlled, application controlled, etc.).
andy has part of the answer, but for true load balancing and high availability you would want to use a pair of hardware load balancers, like F5 BIG-IPs, in an active-passive configuration.
Yes, your domain's IP would be hosted on these devices, and traffic would connect first to them. BIG-IPs offer a lot of added functionality, including multiple ways of load balancing, great URL rewriting, SSL acceleration, etc. They also allow you to run your web servers on a separate non-routable address scheme, and even run multiple sites on different ports, with the F5s handling the translation.
Once you introduce load balancing, you may have some other considerations to take into account for your application(s), like sticky sessions and session state, but that is a different subject.