NFS server: I added a path in /etc/exports:
[local-path] [client-ip](rw,sync,no_root_squash,no_subtree_check)
run $ exportfs -a
Succeeds!
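For reference, here is a concrete version of that export line (the path and address are hypothetical). Note there is no space between the client and the option list; with a space, the options would apply to all hosts rather than just that client:
/srv/share 192.168.1.50(rw,sync,no_root_squash,no_subtree_check)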
NFS client:
Now run: $ mount [server-ip]:[path] [local-path]
The mount command just gets stuck there: no error, no CLI output.
How do I even find out what the error is?
This problem got sorted out by restarting the computer. I think restarting the NFS daemon should also work.
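A sketch of that restart, plus two ways to see where the mount is getting stuck, assuming a systemd-based distro:
$ sudo systemctl restart nfs-server    # on the server; the unit is nfs-kernel-server on Debian/Ubuntu
$ sudo mount -v [server-ip]:[path] [local-path]    # -v prints each step of the mount negotiation
$ rpcinfo -p [server-ip]    # lists the RPC services the server is actually offering
A silent hang usually means an RPC request is being dropped (often by a firewall), so rpcinfo tends to point at the culprit.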
Looking at several resources, I have tried a variety of unsuccessful mount commands while trying to chown on a mounted SMB/CIFS share. My current iteration looks like this:
sudo mount -t cifs //192.168.0.1/g /mnt/network -o user=user,uid=1001,gid=1001,vers=1.0,file_mode=0777,dir_mode=0777
followed by
sudo chown 1001:1001 /mnt/network/storage
This produces the following error that I can't seem to get around
chown: changing ownership of '/mnt/network/storage': Permission denied
I've tried gid=0, uid=0, and user=root.
Any insight into what I'm doing wrong or how to make this work would be much appreciated; so far nothing I've found online works. Thanks in advance!
I am running Ubuntu 21.04 and some Raspberry Pis also running Ubuntu (all trying to access the same mounted directory, which is on an external SSD).
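One likely explanation (my assumption, not something stated in the question): on a CIFS mount without the SMB1 Unix extensions, the ownership you see is synthesized on the client from the uid=/gid= mount options, so chown has nothing to change on the server and gets refused. You can check that every file reports the mounted-in owner:
$ stat -c '%u:%g %n' /mnt/network/storage    # shows the uid:gid injected by the mount options
If so, the practical fix is to pick the desired owner at mount time via uid=/gid= rather than chowning afterwards.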
I ran into a very strange issue this morning. When I rebooted my machine and tried to run vagrant up, I got this error:
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,rw,tcp,nolock,noacl,async 10.0.1.1:/Users/me/code /vagrant
Stdout from the command:
Stderr from the command:
mount.nfs: requested NFS version or transport protocol is not supported
I didn't change any configuration settings or update my machine. As far as I know, nothing has changed. What gives? Does anyone have any idea what the issue is and what I can do to fix it?
For anyone who runs into this same issue and finds that none of the other solutions seem to work: my issue was 127.0.0.1 localhost missing from my /etc/hosts file. Not sure how or why it went missing, but adding it back fixed the issue.
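A quick way to check for, and restore, that entry (assuming a standard /etc/hosts):
$ grep -w localhost /etc/hosts    # should show a 127.0.0.1 line
$ echo '127.0.0.1 localhost' | sudo tee -a /etc/hosts    # append it if it is missing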
I installed Git instead of OpenSSL to use Remote-SSH in VS Code. However, after I completed the config file and tried to connect to the remote host, I failed. The error info is shown below.
error info:
[11:27:12.631] remote-ssh#0.48.0
[11:27:12.632] win32 x64
[11:27:12.656] SSH Resolver called for "ssh-remote+23321", attempt 1
[11:27:12.659] SSH Resolver called for host: 23321
[11:27:12.659] Setting up SSH remote "23321"
[11:27:12.790] Using commit id "26076a4de974ead31f97692a0d32f90d735645c0" and quality "stable" for server
[11:27:12.798] Testing ssh with ssh -V
[11:27:13.099] ssh exited with code: 0
[11:27:13.100] Got stderr from ssh: OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019
[11:27:13.128] Running script with connection command: ssh -T -D 49485 23321 bash
[11:27:13.132] Install and start server if needed
[11:27:13.151] Terminal shell path: C:\Windows\System32\cmd.exe
[11:27:30.151] Resolver error: Connecting with SSH timed out
[11:27:30.178] ------
I had the same problem, and the above solutions didn't work with my setup,
but the following setting did work:
"remote.SSH.useLocalServer": false
I got this solution from the reported issues and fixes on GitHub.
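A minimal sketch of where that setting goes, in the user settings.json (on Linux that file is ~/.config/Code/User/settings.json):
{
  "remote.SSH.useLocalServer": false
}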
In my case, the problem was caused by an overly long authentication process on the server side.
Solved it by extending the Connect Timeout from 15 to 30 seconds.
Instructions:
Open your VS Code Command Palette (via the keyboard shortcut or from the View menu).
Search for Remote-SSH: Settings.
Scroll till you find Connect Timeout.
Change it to a duration longer than 15 seconds.
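The same change can be made directly in settings.json; I believe the Connect Timeout entry in the Settings UI maps to the setting ID below (treat the exact ID as an assumption):
{
  "remote.SSH.connectTimeout": 30
}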
Press F1, open Remote-SSH: Settings, and change Connect Timeout from 15 seconds to 60 seconds. That solved my connection issue.
You can try the following approaches:
SSH to your remote server. Then run the following commands to clean the data and bin folders under the .vscode-server folder on the server:
cd ~/.vscode-server
rm -rf data/*
rm -rf bin/*
If step 1 does not work, SSH to your remote server and delete the entire .vscode-server folder with the following command:
rm -rf ~/.vscode-server
Please note that this will also remove the extensions that you installed on the server.
Downgrade the Remote-SSH extension in VS Code. Look up the extension in the VS Code interface, right-click it, and you will find the option "Install Another Version ...". Install the version just before the current one. If that does not work, keep downgrading.
I had the same problem before. I solved it by deleting "terminal.integrated.inheritEnv": false from ~/.config/Code/User/settings.json.
I found the solution here in this thread from user oreilm49:
https://github.com/microsoft/vscode-remote-release/issues/1137
In the VS Code settings, search for conpty and uncheck it.
I had the same issue; my problem was solved after changing settings in the JSON file:
I removed "terminal.integrated.inheritEnv": false from ~/.config/Code/User/settings.json
I added "remote.SSH.useLocalServer": true to ~/.config/Code/User/settings.json
It worked for me after many different attempts.
This might be a very foolish solution, but it actually worked for me, so I will write it down in case other people run into the same problem.
I made modifications to the SSH config file, and then every connection attempt ran into the error 'Connecting with SSH timed out'. I tried many possible solutions, but none of them solved my problem.
Then I just closed VS Code and restarted it, and everything worked.
I had a case of this. My client (local computer) is a Mac, and I was connecting to a Linux host. I just went to the Remote Platform setting under the Remote.SSH settings and explicitly told it that I am connecting to a Linux remote. After this, it started to work.
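As far as I know, that UI setting corresponds to a settings.json map like the following (the host alias is hypothetical; use the name from your SSH config):
{
  "remote.SSH.remotePlatform": {
    "my-remote-host": "linux"
  }
}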
I had this issue because of a version mismatch between client and server. After updating both to the same version, it worked for me.
The issue for me was a timeout at first. I tried increasing the timeout in the settings, but later found that the issue was with tar.
The vscode-server.tar.gz (probably with a slight variation in the file name) could not be installed because tar was not present on my host.
So I installed tar on the host with yum install tar,
then tried reconnecting to the server, and it worked.
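A one-liner to check for tar and, assuming a yum-based host like the one above, install it before connecting:
$ command -v tar || sudo yum install -y tar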
I use Ubuntu 18 on WSL and everything was running well. Today I ran Apache and started the application. When the app tried to perform chmod() on a file that was submitted through a form inside the project folder (I use Laravel), I received the following error:
chmod(): Operation not permitted
I have noticed that this error happens when I run chmod() from the web server (the www-data user). In the CLI I don't have problems.
From other posts around the net, I understand that Windows made some changes regarding WSL permissions and drive mounts, but I didn't find an answer or manage to resolve the issue.
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata
Reference: https://github.com/Microsoft/WSL/issues/3172#issuecomment-389157376
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata,uid=1000,gid=1000,umask=22,fmask=111
did the trick for me.
Ref: https://devblogs.microsoft.com/commandline/chmod-chown-wsl-improvements/
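To make the remount stick across WSL restarts, the same options can go into /etc/wsl.conf (this is the mechanism described in the devblogs post above; the option string mirrors the command):
[automount]
options = "metadata,uid=1000,gid=1000,umask=22,fmask=111"
Then restart the WSL instance (e.g. wsl --shutdown from Windows) so the automount options are re-applied.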
Here I am creating a test machine (dev) using docker-machine.
$ docker-machine create -d virtualbox dev
Creating CA: C:\Users\xxx\.docker\machine\certs\ca.pem
Creating client certificate: C:\Users\xxx\.docker\machine\certs\cert.pem
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
The VM gets created and runs without flaws.
And here is the error when I run the following command:
$ docker-machine env dev
open C:\Users\xxx\.docker\machine\machines\dev\ca.pem: The system cannot find the file specified.
I have no idea how to deal with this problem. I tried restarting boot2docker.
You should try running docker-machine regenerate-certs dev. The problem, I think, is that somehow your .pem file got deleted or was never created. I had the same issue, and regenerating the certs fixed the problem (a reboot did not help, by the way).
I guess you are getting the docker-machine ca.pem-not-found error even when you run docker info or any other docker command.
Try this command: docker-machine env -u
output will be similar to:
unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
# Run this command to configure your shell:
# eval $(docker-machine env -u)
Now enter eval $(docker-machine env -u).
This should do the job. Finally, run docker info to be sure.
I was getting the exact same error. It turned out to be the Cisco AnyConnect client affecting my networking settings. It's not enough to quit AnyConnect; you have to reboot your machine to restore your settings.
If someone knows more about how AnyConnect is affecting things and if there are solutions better than rebooting, I'd love to hear about it!
Copy certificates from "C:\Users\xxx\.docker\machine\certs"
Paste certificates to "C:\Users\xxx\.docker\machine\machines\dev"
NOTE: This error was on Windows 10 Docker
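Both steps as a single cmd.exe command, assuming the same paths as above (xxx and dev come from the question):
> copy C:\Users\xxx\.docker\machine\certs\*.pem C:\Users\xxx\.docker\machine\machines\dev\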
Here was my error:
#user ➜ git-repo git(users/user/dev) ✗ docker
unable to resolve docker endpoint: open C:\Users\user\.docker\ca.pem: The system cannot find the file specified.
Here is the link to the shell file I used to recreate the certificates; I named it generate_docker_cert.sh:
https://gist.github.com/bradrydzewski/a6090115b3fecfc25280
So I went to the directory from the error output:
cd C:\Users\user\.docker\
Created that file:
notepad generate_docker_cert.sh
Copied the contents from the link into it and saved.
Then ran that .sh file:
.\generate_docker_cert.sh
Then the docker command worked:
#user ➜ git-repo git(users/user/dev) ✗ docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
...