Connecting NFS storage to Proxmox cluster - nfs

When I'm trying to add NFS storage from TrueNAS to my cluster, it gives me this error. Any ideas how to fix it?

Has your share been created with the no_root_squash option? Check your NFS share options on TrueNAS, and also check the user mapping (root, anonymous, guest, etc.).
In my experience, the fault is often a wrong root-squash option.
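For reference, this is a sketch of what the export needs to allow; the path, subnet, and server address below are placeholders, and on TrueNAS you would set "Maproot User" to root in the share's settings rather than editing /etc/exports directly:

```shell
# Equivalent export line on a plain Linux NFS server (placeholders):
echo '/mnt/tank/proxmox 192.0.2.0/24(rw,sync,no_root_squash)' > /tmp/exports.example
grep -o 'no_root_squash' /tmp/exports.example   # the option Proxmox's root user needs

# From the Proxmox node, verify the export and test-mount it by hand
# before adding it as storage (192.0.2.10 is a placeholder TrueNAS address):
#   showmount -e 192.0.2.10
#   mount -t nfs 192.0.2.10:/mnt/tank/proxmox /mnt/test
```

Without no_root_squash (or the TrueNAS equivalent), Proxmox's root user gets mapped to nobody and cannot create its directory layout on the share.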

Related

Connecting remote filesystem securely to Kubernetes Cluster

Here is the situation I am facing. I work for a company that is designing a product in which, due to legal constraints, certain pieces of data need to reside on physical machines in specific geopolitical jurisdictions. For example, some of our data must reside on machines within the borders of the "Vulgarian Federation".
We are using Kubernetes to host the system, and will probably settle on either GKE or AWS as the cloud provider.
A solution I have devised creates a pod to host a locale-specific MongoDB instance (say, Vulgaria-MongoDB), which then seamlessly stores the data on physical drives in that locale. My plan is to export the storage from the Vulgarian machine to our Kubernetes cluster using NFS.
The problem that I am facing is that I cannot find a secure means of achieving this NFS export. I know that NFSv4 supports Kerberos, but I do not believe that NFS was ever intended to be used over the open web, even with Kerberos. Another option would be creating a VPN server in the cluster and adding the remote machine to the VPN. I have also considered SSHFS, but I think it would be too unstable for this particular use case. What would be an efficient & secure way to accomplish this task?
As mentioned in the comments, running the database far away from its storage is likely to result in all kinds of weirdness. Modern DB engines tolerate some storage latency, but generally not tens of seconds. If you must, though, the VPN approach is the correct one: some kind of protected network bridge. I don't know of any remote storage protocol I would trust over the open internet.
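The VPN approach can be sketched with something like WireGuard; every name, key, address, and path below is a placeholder I have invented for illustration, not something from your setup:

```shell
# /etc/wireguard/wg0.conf on the cluster-side gateway (all values are placeholders):
# [Interface]
# Address    = 10.8.0.1/24
# PrivateKey = <gateway-private-key>
# ListenPort = 51820
#
# [Peer]                              # the Vulgarian storage host
# PublicKey  = <storage-host-public-key>
# AllowedIPs = 10.8.0.2/32
#
# Bring the tunnel up, then mount NFS over the private addresses only:
#   wg-quick up wg0
#   mount -t nfs4 10.8.0.2:/export/mongo /mnt/mongo
#
# On the storage host, export the share to the tunnel subnet only, never
# to a public interface:
#   /export/mongo 10.8.0.0/24(rw,sync)
```

The key design point is that NFS traffic never touches the open internet; it only exists inside the encrypted tunnel, and the export list refuses anything outside the tunnel subnet.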

Change remote directory ownership without ssh

First, I feel very silly.
For fun/slight profit, I rent a VPS which hosts an email and web server and which I use largely as a study aid. Recently, I was in the middle of working on something and managed to lose my connection to the box directly after accidentally changing the ownership of my home folder to an arbitrary non-root, incorrect user. As SSH denies root login and anything but public-key authentication, I'm in a bad way: though the machine is up, I can't access it!
Assuming this is the only issue, a single chown should fix the problem, but I haven't been able to convince my provider's support team to do this.
So my question is this: have I officially goofed, or is there some novel way I can fix my setup?
I have all the passwords and reasonable knowledge of how all the following public facing services are configured:
Roundcube mail
Dovecot and Postfix running IMAPS, SMTPS and SMTP
Apache (but my websites are all located in that same home folder, and so aren't accessible; at least I now get why this was a very bad idea...)
Baikal calendar, set up in a very basic fashion
phpMyAdmin, but with MySQL's file creation locked to a folder which Apache isn't serving
I've investigated some very simple ways to 'abuse' some of the other services in a way that might allow me either shell access, or some kind of chown primitive, but this isn't really my area.
Thanks!!
None of these will help you; of the services you listed, none has the ability to restore the permissions.
All the VPS providers I've used give "console" access through the web interface. This is equivalent to sitting down at the machine, including the ability to login or reboot in recovery mode. Your hosting provider probably offers some similar functionality (for situations just like this, or for installing the operating system, etc), and it is going to be your easiest and most effective means of recovery. Log in there as root and restore your user's permissions.
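The recovery step itself is a single command once the web console gives you a root shell; a minimal sketch, with "alice" as a placeholder username and a throwaway directory standing in for the real home folder:

```shell
# The real fix, run as root from the provider's web console
# ("alice" is a placeholder username):
#   chown -R alice:alice /home/alice
#
# Safe demonstration of the same command against a throwaway directory:
d=$(mktemp -d)
touch "$d/f"
chown -R "$(id -un):$(id -gn)" "$d"   # restore ownership recursively
stat -c '%U' "$d/f"                   # prints the owning user
```

The -R flag matters: sshd checks the permissions of the home directory, ~/.ssh, and ~/.ssh/authorized_keys, so all of them need to end up owned by the right user before public-key login works again.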
One thing struck me as odd,
I haven't been able to convince my provider's support team to do this.
Is that because they don't want to do anything on your server which you aren't paying them to manage, or because they don't understand what you're asking? The latter would be quite odd to me, but the former scenario would be very typical of an unmanaged VPS setup (you have root, console access, and anything more than that is your problem).

Google cloud instance doesn't allow me to ssh with error: due to external disks detached?

I've had a Google Cloud instance for some time, and I used to SSH into it without any problem. At some point I had to remove the additional disk, on which I just had some files. Now it doesn't allow me to SSH into it anymore. Could the two things be linked? The firewall is set to default and it has the rule to allow SSH from anywhere.
Any advice?
You can try rebooting your cloud instance. What error do you get?
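One common way a detached disk blocks SSH is a stale /etc/fstab entry: if the entry for the removed disk lacks the nofail option, boot can hang waiting for it and sshd never comes up. The serial console output will show this. A sketch using the gcloud CLI, where the instance name and zone are placeholders:

```shell
# Read the boot log to see where startup stalls
# (instance name and zone are placeholders):
gcloud compute instances get-serial-port-output my-instance --zone=us-central1-a

# Hard-reboot the instance:
gcloud compute instances reset my-instance --zone=us-central1-a

# If /etc/fstab still references the removed disk, delete that line or
# give it the nofail option so boot can proceed without the disk, e.g.:
#   UUID=xxxx-xxxx /mnt/data ext4 defaults,nofail 0 2
```

If the serial log shows the instance sitting in an emergency shell or waiting on a device, the fstab entry is almost certainly the culprit.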

Google compute engine - getting blocked after accessing SSH a few times

I have a google compute engine VM, running ubuntu, and utilising Laravel Forge.
I seem to get blocked by the VM after accessing SSH a few times (2-4), even if I'm logging in correctly. Restarting the VM unblocks me.
I first noticed the issue when I was having trouble logging in over SSH; after a few attempts it would become unreachable. My website hosted on it also wouldn't resolve. After restarting the VM, I could try to log in over SSH again and my website worked. This happened a couple of times before I figured out how to correctly log in with SSH.
Next, trying to log in to the database with HeidiSQL, which uses plink, I log in fine. But it seems to keep reconnecting via SSH every time I do something, and after 2-4 of these reconnects, I get the same problem with the VM being unreachable by SSH and my website hosted on it being down.
Using SQLyog, which seems to maintain the one SSH connection, rather than constantly reconnecting like HeidiSQL, I have no problems.
When my website is down, I use those "down for everyone or just me" websites to see if it is down, and apparently it's just down for me, so I must be getting blocked.
So I guess my questions are:
1. Is this normal?
2. Can I unblock myself without restarting the VM?
3. Can I make blocking occur in a less strict way?
4. Why does HeidiSQL keep reconnecting via SSH rather than maintaining the one connection like SQLyog seems to?
You have encountered sshguard, which is enabled by default on the GCE Ubuntu images (at least on the 14.10 image, where I encountered it myself). There is a whitelist file at /etc/sshguard/whitelist.
The sshguard default configuration on my VM has a "dangerousness" threshold of 40. Most "attacks" that sshguard detects incur dangerousness of 10, so getting blocked after 4 reconnects sounds about right.
The attack signatures are listed here: http://www.sshguard.net/docs/reference/attack-signatures/
I would bet that you are connecting from an IP that has an invalid reverse DNS configuration (I was). Four connects like that and the default config blocks you for 20 minutes.
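Based on the above, the practical fix is to whitelist your client address and clear any existing block; the IP below is a placeholder, and the whitelist path and iptables chain name are from my own sshguard setup, so verify yours:

```shell
# Check whether your client IP has valid reverse DNS
# (a common trigger for sshguard's "attack" score):
#   dig -x 203.0.113.7 +short
#
# On the real VM, as root, whitelist your IP and restart the service:
#   echo "203.0.113.7" >> /etc/sshguard/whitelist
#   service sshguard restart
#   iptables -F sshguard    # clears existing blocks without a reboot
#
# Simulated here with a temp file standing in for the whitelist:
wl=$(mktemp)
echo "203.0.113.7" >> "$wl"
grep -c . "$wl"              # one whitelisted address
```

Whitelisted addresses never accumulate dangerousness, so HeidiSQL's reconnect-per-query habit stops mattering.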

Can multiple people SSH into an Amazon Server instance simultaneously?

The startup I'm working for is going to be hosting our site and accompanying database on Amazon Cloud Servers. I was wondering if it's possible to have multiple people SSH'd into the instance simultaneously, like if I want to fool around with the databases while my coworker edits some PHP scripts. Can this be done?
Yes, it's possible. :)
Just one piece of advice: use SSH keys; it makes it easier to detect/log who is logged in.
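A small sketch of the key-per-person setup; the key material and names are truncated placeholders, and a temp file stands in for the real ~/.ssh/authorized_keys:

```shell
# Give each person their own key; sshd's auth log then records which key
# authenticated each session. Simulated with a temp file
# (keys below are truncated placeholders):
ak=$(mktemp)
echo 'ssh-ed25519 AAAAC3...alice alice@laptop' >> "$ak"
echo 'ssh-ed25519 AAAAC3...bob bob@desktop'    >> "$ak"
wc -l < "$ak"    # 2 keys -> 2 people can log in concurrently
# On the server itself, see who is connected right now:
#   who
```

Concurrent sessions are just multiple sshd child processes, so nothing special is needed beyond each person having credentials; separate keys (or separate user accounts) keep the sessions attributable.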