How to set a quota or limits on an NFS share on the client? - nfs

I am running a Debian GNU/Linux 7 VM with the following mount.nfs version:
mount.nfs: (linux nfs-utils 1.2.6)
I want to set a quota on an NFS mount. The NFS server doesn't have quotas set. I installed quota and quotatool as per this wiki
and enabled quotas using the command below:
quotaon -avug
Then I tried the NFS mount with the quota options, and it failed with the error below:
mount -t nfs -o usrquota,grpquota nfs-server:/export/home/storage /mnt/storage
mount.nfs: an incorrect mount option was specified
I also tried running quotaon:
quotaon /mnt/storage/
quotaon: Mountpoint (or device) /mnt/storage not found or has no quota enabled.
Neither of them seems to work.
Is it possible to set a quota for an NFS share on the client side?

As far as I know, quotas must be set on the NFS server; that is why mount.nfs does not recognize the usrquota and grpquota options.
See also: https://serverfault.com/questions/644749/can-nfs-server-limit-the-amount-of-disk-space-that-the-nfs-client-can-use
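If you do control the server, here is a minimal sketch of enforcing limits there instead (assuming the export lives on an ext4 filesystem mounted at /export, that the quota tools are installed on the server, and using 'alice' as a placeholder user name):
# on the NFS server: enable quota mount options and build the quota files
mount -o remount,usrquota,grpquota /export
quotacheck -cugm /export
quotaon -v /export
# give user 'alice' a 1 GiB soft / 1.2 GiB hard block limit (values are 1 KiB blocks)
setquota -u alice 1048576 1258291 0 0 /export
The server then enforces these limits on the underlying filesystem, so they also apply to files the NFS client writes as that user.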

Related

SSH not working in MacBook Pro 2019 Catalina

I have spent a few hours hunting for the problem here.
I have been working with Macs for years and have never had this problem, and I have SSHed into EC2 instances thousands of times.
I recently received a new MacBook Pro at work.
SSH is installed and runs as a service, so there is no error about the command not being found.
But no matter what server or EC2 instance I try to SSH into, as I have done a million times before, I get a timeout.
Before you ask: I have looked all over for this problem. I have also looked for the usual ~/.ssh directory, which seems to be missing, so I cannot find any config file.
The following is the Mac info:
Catalina 10.15.2
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.3 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 16 GB
Boot ROM Version: 1037.60.58.0.0 (iBridge: 17.16.12551.0.0,0)
Serial Number (system): C02ZNMV5MD6N
Hardware UUID: 27B1EDF5-B1D2-5F86-BD12-D646F36D9D2D
Activation Lock Status: Enabled
ETA: Yes, from a Windows machine I can access the EC2 network. Yes, I have the correct PEM file. And yes, I have made sure the security groups in AWS are correct. For some reason the usual ssh -i command, picked up directly from the EC2 instance's Connect page in AWS, always times out.
Crazy question: does ssh in Catalina require some additional command or parameter besides -i?
(I do not seem to be able to ping, telnet, etc. either, so something seems to be preventing the OS from going out on SSH port 22.)
Does anyone know of or has had this problem and a fix for it? I am fairly sure it is some type of configuration in ssh or in the Network configurations.
It is driving me crazy. Any and all help would be greatly appreciated!
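A quick way to tell a blocked outbound port from an SSH misconfiguration is to test the TCP connection directly; the hostname below is only a placeholder for the EC2 instance's public DNS name:
nc -vz -w 5 ec2-xx-xx-xx-xx.compute-1.amazonaws.com 22
If that also times out, the problem is at the network level (local firewall, VPN/proxy, or security group) rather than in ssh itself.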
New MacBook Pro owner/user here. Same issue, despite all configs being identical to my Windows 10 PC and Ubuntu 20 laptop.
For some reason, this doesn't work for me on my MacBook, but does work on Windows and Ubuntu:
ssh -i path-to-keyfile.pem user@ipaddress
But creating an SSH config file and adding my AWS keyfile to my keychain works:
open ~/.ssh/config if the config file exists, or touch ~/.ssh/config if not.
Edit this config file as follows:
Host *
    AddKeysToAgent yes
    UseKeychain yes
    IdentityFile ~/.ssh/id_rsa
Note: I don't know for sure, but I imagine only the 'AddKeysToAgent' and 'UseKeychain' parts are what's important. I'm using the 'IdentityFile' part for connecting to my git repos.
Save the config file and exit. Next, make sure your keyfile isn't too open, otherwise you won't be able to add it to your keychain:
chmod 600 path-to-keyfile.pem
Finally, add the keyfile to your keychain:
ssh-add -K path-to-keyfile.pem
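To confirm the key was actually loaded into the agent, you can list its identities:
ssh-add -l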
Now on Mac, I'm able to ssh into my AWS instance without the -i flag:
ssh aws-username@aws-ipaddress
Hope this helps. I found the solution here: https://www.cloudsavvyit.com/1795/how-to-add-your-ec2-pem-file-to-your-ssh-keychain/
PS: I'm also unable to SFTP into AWS using FileZilla on Mac, so I'm looking into this as well.
Update on FileZilla: a bit bizarre, and I haven't figured out how to save my settings, but for now this answer works: https://superuser.com/questions/280808/filezilla-on-mac-sftp-with-passwordless-authentication
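As a side note, once the key is in the agent, command-line SFTP should also work without -i, since sftp authenticates the same way as ssh (user and address are placeholders):
sftp aws-username@aws-ipaddress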

Configuration of Running Redis Instance in Swisscom CloudFoundry

I am trying to read the configuration of the running Redis instance. I want to better understand how Redis is configured, especially with regard to the persistence settings.
I have successfully connected to the running Redis instance (through an SSH tunnel) and tried to execute the following commands:
CONFIG GET *
CONFIG GET appendonly
However, I get the message
ERR unknown command 'CONFIG'
If I invoke the command CONFIG GET without any parameters, I get the message:
Invalid input argument for command: 'CONFIG GET', passed 0 arguments, must be in range 1 - 1
So the command is known. It seems to be a permission issue!? Is there a way to get the configuration?
The current Redis offering (March 2019) has the following settings for persistence:
appendonly yes
appendfsync everysec
It runs with 2 replicas.
Please note that this applies to the current service offering of Swisscom and might change in the future.
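If you want to verify such settings yourself while CONFIG is blocked, the INFO command is usually still permitted; its persistence section shows, among other things, whether AOF is enabled. A sketch, assuming the SSH tunnel forwards the instance to local port 6379 and that you have the service credentials:
redis-cli -h 127.0.0.1 -p 6379 -a 'your-password' INFO persistence
# look for aof_enabled:1, which corresponds to appendonly yes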

How to disable NFS client caching?

I am having trouble with NFS client file caching. The client reads files that were removed from the server many minutes earlier.
My two servers are both CentOS 6.5 (kernel: 2.6.32-431.el6.x86_64)
I'm using server A as the NFS server; /etc/exports contains:
/path/folder 192.168.1.20(rw,no_root_squash,no_all_squash,sync)
And server B is used as the client; the mount options are:
nfsstat -m
/mnt/folder from 192.168.1.7:/path/folder
Flags: rw,sync,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,nosharecache,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.20,minorversion=0,lookupcache=none,local_lock=none,addr=192.168.1.7
As you can see, the lookupcache=none and noac options are already used to disable caching, but they don't seem to work...
I did the following steps:
Create a simple text file on server A.
Print the file from server B with cat; it's there.
Remove the file on server A.
Wait a couple of minutes and print the file from server B again: it's still there!
But if I run ls on server B at that time, the file is not in the output. The inconsistent state may last a few minutes.
I think I've checked all the NFS mount options... but I can't find the solution.
Is there any other options I missed? Or maybe the issue is not about NFS?
Any ideas would be appreciated :)
I have tested the same steps you described with the parameters below, and it works perfectly. I added one more parameter, fg, on the client-side mount.
sudo mount -t nfs -o fg,noac,lookupcache=none XXX.XX.XX.XX:/var/shared/ /mnt/nfs/fuse-shared/
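If stale reads persist even with noac and lookupcache=none, the old contents may be coming from the client's page cache rather than from the NFS attribute or lookup caches. A quick way to test that hypothesis on server B (requires root, and it is only a diagnostic, not a fix; 'testfile' is a placeholder for the file from the steps above):
sync; echo 3 > /proc/sys/vm/drop_caches
cat /mnt/folder/testfile   # should now fail with 'No such file or directory' if the server already removed it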

docker-machine create --driver generic kills ssh on google compute engine

Hi, I am still learning Docker's wonderful, magical world. I use Docker on Linux with docker-machine. I have already added two existing Linux servers with docker-machine create and successfully run my containers on them. Now I am trying to do the same with an existing Google Compute Engine machine, which also runs Linux. I use the command:
docker-machine create --driver generic --generic-ip-address ipaddress --generic-ssh-key path_To_Key --generic-ssh-user user_Name machine_Name
And I get an error:
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "X.X.X.X:2376": dial tcp X.X.X.X:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
After that, docker-machine does not know the machine's IP, but I still seem to be able to give it commands through docker-machine ssh.
However, I am not able to log in with ssh from anywhere else, and I must stop/remove the created machine and restart it.
Has anyone had a similar problem?
According to the generic driver's page in the Docker docs, try writing --generic-ip-address=ip_address with an equals sign.
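A sketch of the corrected invocation, together with a GCE firewall rule for the Docker Machine port, since an i/o timeout on 2376 often just means that port is closed in the GCE firewall (rule name, IP, key path, user and machine name are placeholders):
gcloud compute firewall-rules create docker-machine-2376 --allow tcp:2376 --source-ranges your.ip.address.here/32
docker-machine create --driver generic --generic-ip-address=X.X.X.X --generic-ssh-key=/path/to/key --generic-ssh-user=user_name machine_name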

How to create a directory in HDFS on Google Cloud Platform via Java API

I am running a Hadoop cluster on Google Cloud Platform, using Google Cloud Storage as the backend for persistent data. I am able to SSH to the master node from a remote machine and run hadoop fs commands. However, when I try to execute the following code I get a timeout error.
Code
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

FileSystem hdfs = FileSystem.get(new URI("hdfs://mymasternodeip:8020"), new Configuration());
Path homeDir = hdfs.getHomeDirectory();
// Print the home directory
System.out.println("Home folder: " + homeDir);
// Create a directory
Path workingDir = hdfs.getWorkingDirectory();
Path newFolderPath = new Path("/DemoFolder");
newFolderPath = Path.mergePaths(workingDir, newFolderPath);
if (hdfs.exists(newFolderPath)) {
    hdfs.delete(newFolderPath, true); // Delete the existing directory
}
// Create the new directory
hdfs.mkdirs(newFolderPath);
When executing the hdfs.exists() command I get a timeout error.
Error
org.apache.hadoop.net.ConnectTimeoutException: Call From gl051-win7/192.xxx.1.xxx to 111.222.333.444.bc.googleusercontent.com:8020 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=111.222.333.444.bc.googleusercontent.com/111.222.333.444:8020]
Are you aware of any limitation in using the Java Hadoop APIs against Hadoop on Google Cloud Platform?
Thanks!
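A quick way to confirm whether this is a connectivity/firewall problem rather than an API limitation is to test the port directly from the machine that runs the Java code, using the hostname from the error message:
nc -vz -w 5 111.222.333.444.bc.googleusercontent.com 8020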
It looks like you're running that code on your local machine and trying to connect to the Google Compute Engine VM; by default, GCE has strict firewall settings to avoid exposing your external IP addresses to arbitrary inbound connections. If you're using the defaults, your Hadoop cluster should be on the "default" GCE network. You'll need to follow the instructions for adding a firewall rule to allow incoming TCP connections on port 8020, and possibly on other Hadoop ports as well, from your local IP address for this to work. It'll look something like this:
gcloud compute firewall-rules create allow-http \
--description "Inbound HDFS." \
--allow tcp:8020 \
--format json \
--source-ranges your.ip.address.here/32
Note that you really want to avoid opening a 0.0.0.0/0 source range, since Hadoop isn't doing authentication or authorization on those incoming requests. Restrict it as much as possible to only the inbound IP addresses from which you plan to dial in. You may need to open a couple of other ports as well, depending on what functionality you use when connecting to Hadoop.
The more general recommendation is that, wherever possible, you should try to run your code on the Hadoop cluster itself; in that case, you use the master hostname itself as the HDFS authority rather than the external IP:
hdfs://<master hostname>/foo/bar
That way, you can limit the port exposure to just the SSH port 22, where incoming traffic is properly gated by the SSH daemon, and then your code doesn't have to worry about what ports are open or even about dealing with IP addresses at all.
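For example, a sketch of running the same job from the master node itself, where the cluster's own configuration resolves the HDFS authority (the instance name and jar name are placeholders):
gcloud compute ssh hadoop-master --command "hadoop jar my-hdfs-app.jar"
With that approach only SSH traffic ever leaves your machine, and the Java code can be built around hdfs://<master hostname>:8020 or the cluster's default filesystem configuration.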