How to access a folder via SMB protocol from ASP.NET Core [duplicate]

I am trying to set up a script that will:
Connect to a Windows share
Using LOAD DATA LOCAL INFILE, upload the two files into their appropriate DB tables
Unmount the share
Situation:
I can currently vpnc into this remote machine
Problem:
I cannot
mount -t cifs //ip.address/share /mnt/point -o username=u,password=p,port=445
mount error(110) Connection timed out
I am attempting to do this manually first
Remote server is open to port 445
Questions:
Do I even need to vpnc in first?
Do I need to do route add for the remote ip/mask/gw after vpnc?
Thank you!
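A quick way to narrow this down (a diagnostic sketch, not part of the original question; nc is netcat, and ip.address stands for your server's address) is to probe port 445 once the VPN is up:
# is the host reachable, and is port 445 open? (run after vpnc connects)
ping -c 3 ip.address
nc -zv ip.address 445
If nc also times out, the problem is routing/VPN rather than the mount options; if it connects, the mount command itself is the place to look.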

The mount.cifs file is provided by the samba-client package. This can be installed from the standard CentOS yum repository by running the following command:
yum install samba samba-client cifs-utils
Once installed, you can mount a Windows SMB share on your CentOS server by running the following command:
Syntax:
mount.cifs //SERVER_ADDRESS/SHARE_NAME MOUNT_POINT -o user=USERNAME
SERVER_ADDRESS: Windows system’s IP address or hostname
SHARE_NAME: The name of the shared folder configured on the Windows system
USERNAME: Windows user that has access to this share
MOUNT_POINT: The local mount point on your CentOS server
I am mounting to a share from \\10.11.10.26\snaps
Make a directory under /mnt for your mount point:
mkdir /mnt/mymount
Now I am mounting the snaps folder from indiafps02. The user name is the domain credential, i.e. a Mydomain account in this case:
mount.cifs //10.11.10.26/snaps /mnt/mymount -o user=Girish.KG
Now you can see the content by typing
ls /mnt/mymount
After performing your task, just fire the umount command:
umount /mnt/mymount
That's it. You are done.
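One note from experience, not part of the answer above: if the Windows side has SMB1 disabled (common on newer servers), mount.cifs may need the protocol version pinned explicitly. domain= and vers= are standard mount.cifs options; the values here are illustrative:
mount.cifs //10.11.10.26/snaps /mnt/mymount -o user=Girish.KG,domain=Mydomain,vers=3.0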

There is no need to install samba and samba-client, only cifs-utils, using the command
yum install cifs-utils
After that, on Windows, share the folder you would like to mount in CentOS, if you haven't done so already ("c:\inetpub\wwwroot" in my case).
Make sure you share it with a specific user whose password you know ("netops" in my case).
Create a directory in CentOS into which you would like to mount the Windows share ("/mnt/cm" in my case).
After that, run this simple command as root:
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o user=netops
CentOS will prompt you for the Windows user's password.
You are done.
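If the mount should survive reboots without an interactive password prompt, a common pattern is a root-only credentials file referenced from /etc/fstab. This is a sketch under assumptions (the path /root/.smbcreds is illustrative; credentials= and _netdev are standard cifs mount options):
# /root/.smbcreds  (protect it: chmod 600 /root/.smbcreds)
username=netops
password=YOUR_PASSWORD
# /etc/fstab line
//10.16.0.160/wwwroot /mnt/cm cifs credentials=/root/.smbcreds,_netdev 0 0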

Related

Wrong entry in limits.conf, unable to ssh to host

We have a VirtualBox (Vagrant) environment. By mistake I made an entry in /etc/security/limits.conf [without having a root shell open :( ] and now I am unable to ssh (the connection drops immediately).
Previously we had one such scenario (limits edited by someone else); I was able to fix it using the vboxmanage guestcontrol copyto CLI to overwrite limits.conf, after which ssh was allowed. This time around the vboxmanage CLI also hangs.
I tried opening the VM in the GUI, went to the console and tried a few options, but could not get to single-user mode.
Since you already tried the vbox CLI commands and they hang, it means even VirtualBox cannot access the system or get a shell to open.
In this case you will have to bring up an Ubuntu VM and use the qemu-nbd module to fix this. The steps are given below.
Bring up a very simple Ubuntu VM using HashiCorp's bionic64 box on the same host machine by executing the following steps.
mkdir bionic
cd bionic
vagrant box add hashicorp/bionic64
vagrant init
Open the Vagrantfile and change the config.vm.box = "base" to config.vm.box = "hashicorp/bionic64"
Also mount the host folder where the .vdi file for the broken VM is located by adding the following line to the Vagrantfile (replace the file path with the correct one for your system; /nbd2 will be created on the Ubuntu machine and will contain the files, including the .vdi):
config.vm.synced_folder "/home/topcat/VirtualBox\ VMs/your_vm", "/nbd2"
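Putting both edits together, the relevant part of the Vagrantfile would look roughly like this (a sketch; the synced-folder path is the example path from above):
Vagrant.configure("2") do |config|
  # use the helper box instead of "base"
  config.vm.box = "hashicorp/bionic64"
  # expose the host folder that contains the .vdi so the guest can reach it
  config.vm.synced_folder "/home/topcat/VirtualBox VMs/your_vm", "/nbd2"
end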
Now do vagrant up
Once the machine boots up
vagrant ssh #to ssh as vagrant
sudo su #to become root
apt-get update #This will refresh the apt cache
apt-get install qemu
modprobe nbd (loads the nbd module; it exits without any output if it succeeds)
qemu-nbd -c /dev/nbd1 "/nbd2/box-disk001.vdi" (change the path to match whatever you gave in the config.vm.synced_folder property)
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot
cd /mnt/vdi-boot/etc/security (this folder will have all the files as they were in your VM)
touch limits.conf (if the file is already there, delete it first so you end up with an empty one)
chmod 644 limits.conf
chown root:root limits.conf
open the /mnt/vdi-boot/etc/nsswitch.conf file and check that the following three lines are present
passwd: files
shadow: files
group: files
umount /mnt/vdi-boot (unmounts the mounted path)
qemu-nbd -d /dev/nbd1 (disconnects from qemu-nbd)
Exit the helper VM and start the original (broken) VM.
Open another shell and try to ssh. It should go through fine this time.
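For reference, the repair itself compresses into this short sequence (the same steps as above, run as root inside the helper VM; the .vdi path is the synced-folder path from your Vagrantfile):
apt-get update && apt-get install -y qemu
modprobe nbd
qemu-nbd -c /dev/nbd1 /nbd2/box-disk001.vdi
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot
# replace the broken limits.conf with an empty, correctly-owned one
: > /mnt/vdi-boot/etc/security/limits.conf
chmod 644 /mnt/vdi-boot/etc/security/limits.conf
chown root:root /mnt/vdi-boot/etc/security/limits.conf
umount /mnt/vdi-boot
qemu-nbd -d /dev/nbd1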

Problem while bootstrapping Ubuntu Chef node from ChefDK on Windows workstation

I'm new to Chef and I am stuck on a problem. I'm using AWS Chef Automate Server and an EC2 Ubuntu instance as the Chef node. My workstation is my local Windows machine, on which I have installed ChefDK. I have successfully configured the Chef server with ChefDK.
When I bootstrap the node using the knife bootstrap command, it bootstraps the Ubuntu node but shows this error at the end: cannot create /etc/chef/trusted_certs/opsworks-cm-ca-2016-root.pem: Directory nonexistent
The command I used here is knife bootstrap myEC2PublicIPHere -N UmaidNode1 -x ubuntu --sudo --run-list "recipe[nginx]" -i .chef/my_key.pem.
After that I added some other cookbooks on the server and ran the knife ssh command from my Windows workstation to run chef-client on the node, but this command is not working. I have tried it with different attributes, but always get the same issue: FATAL: 1 node found, but does not have the required attribute to establish the connection. Try setting another attribute to open the connection using --attribute.
The command I tried here is knife ssh 'name:*' --attribute myEC2PublicIpHere -x ubuntu -i .chef/my_key.pem 'sudo chef-client'.
Further, upon running knife node show UmaidNode1, it shows the node's data with the IP blank. I don't know why it is not getting the IP here. The output: Node Name: UmaidNode1 Environment: _default FQDN: IP: Run List: recipe[nginx], recipe[apache] Roles: Recipes: Platform: Tags:
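An aside on the knife ssh error, independent of the fix below: --attribute expects the name of a node attribute to use as the connection address (e.g. ipaddress or fqdn), not the address itself. A sketch of the corrected form (it would still fail here, because the node's IP attribute was never saved, as the blank output above shows):
knife ssh 'name:UmaidNode1' 'sudo chef-client' -x ubuntu -i .chef/my_key.pem -a ipaddress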
The issue is finally resolved. I don't know why, but the problem was with the ChefDK version. I was using the latest version, 4.8.23. It always creates the directory /etcchef, but Chef searches for all its files in /etc/chef, so it was unable to find files like client.rb.
NOTE: I even created the required /etc/chef directory myself, but it didn't work.
I installed an older version of ChefDK and now it's working fine.

Using "Remote SSH" in VSCode on a target machine that only allows inbound SSH connections

Is there a way to use the VSCode Remote SSH extension to interact with a remote host that does not allow outbound internet connections?
Is it possible to download the vscode-server files from another system and copy to host?
I read this, but I can't connect the server to the internet.
When you connect to a host it executes a bash script that wgets or curls a tarball and extracts it in a directory in your home directory. Here's an offline workaround.
Attempt to connect, let it fail
On server, get the commit id
$ ls ~/.vscode-server/bin
553cfb2c2205db5f15f3ee8395bbd5cf066d357d
Download the tarball, replacing $COMMIT_ID with the commit number from the previous step:
For Stable Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable
For Insider Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/insider
Move tarball to ~/.vscode-server/bin/$COMMIT_ID/vscode-server-linux-x64.tar.gz
Extract tarball in this directory
$ cd ~/.vscode-server/bin/$COMMIT_ID
$ tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1
Connect again
You'll still need to install any extensions manually. There's a download button next to all the extensions in the marketplace. Once you have the .vsix file you can install them through the GUI with the Install from VSIX option in the extensions manager.
This is kind of a pain and hopefully they improve this process, but if you have a network-based home directory, you only have to do this once.
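The whole dance, condensed (a sketch of the steps above; run the download on a machine with internet access, the rest on the offline host):
# on a machine with internet access (set COMMIT_ID to the commit from step 2)
COMMIT_ID=553cfb2c2205db5f15f3ee8395bbd5cf066d357d   # example value from above
curl -L -o vscode-server-linux-x64.tar.gz \
  "https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable"
# copy the tarball to the offline host (scp, USB, etc.), then on that host:
mkdir -p ~/.vscode-server/bin/$COMMIT_ID
mv vscode-server-linux-x64.tar.gz ~/.vscode-server/bin/$COMMIT_ID/
cd ~/.vscode-server/bin/$COMMIT_ID
tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1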
To find your commit id, open VS Code -> About:
Version: 1.46.1
Commit: cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
Date: 2020-06-17T21:17:14.222Z
Electron: 7.3.1
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 17.7.0
$COMMIT_ID = cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
A new feature is being added to support offline install
However, you can now solve this issue with a new user setting in the Remote - SSH extension. If you enable the setting remote.SSH.allowLocalServerDownload, the extension will install the VS Code Server on the client first and then copy it over to the server via SCP.
Note: This is currently an experimental feature but will be turned on by default in the next release
https://code.visualstudio.com/blogs/2019/10/03/remote-ssh-tips-and-tricks
As a workaround I have done the following:
Desktop ~/.ssh/config
...
Host *
RemoteForward 54321
...
Remote: ~/bin/wget (where ~/bin is added to PATH via .bashrc)
#!/bin/bash
export LD_LIBRARY_PATH=$HOME/opt/lib/tsocks/
export TSOCKS_CONF_FILE=$HOME/opt/tsocks/tsocks.conf
$HOME/bin/tsocks /usr/bin/wget "$@"
Remote: ~/opt/tsocks/tsocks.conf
server = 127.0.0.1
server_port = 54321
server_type = 5
Note: the tsocks binary has been scp'd to ~/bin/tsocks, and ~/opt/lib/tsocks/ has been created containing libtsocks.so, which is normally stored in /usr/lib64/libtsocks.so.
This workaround gives me wget functionality without messing with anything outside my profile (e.g. no root required ... even though I have it).
Current Version of VS Code: 1.48.2
I just kill the wget process on the server end and let the client download the archive and transfer it to the server. That's quite easy, as below.
Make sure that you set this in settings.json:
"remote.SSH.allowLocalServerDownload": true,
Then execute the shell commands below.
# to find the <pid>
ps aux | grep wget | grep vscode-server
# kill the process
kill -9 <pid>
# then wait for the client downloading and transferring
# optional: If you want to know the progress, just
cd ~/.vscode-server/bin/<commit-id>/
watch -n 1 -d ls -rthl

NFS client under WSL - mount.nfs: No such device

I am getting the following error trying to mount an NFS export.
sudo mount 192.168.1.175:/mnt/nas /mnt/c/nas
mount.nfs: No such device
Any ideas on how to fix this?
As of October 2020: you can mount NFS with WSL2, but WSL2 itself requires hardware virtualization to be available. See here: https://github.com/microsoft/WSL/issues/5838
If, like me, you are stuck on WSL1, you can work around this issue by mapping the drive in Windows. Use the Map Network Drive feature and create a drive letter for your NFS mount, e.g. G:
Now in WSL you can mount that drive letter:
sudo mkdir /mnt/g
sudo mount -t drvfs G: /mnt/g
from: How to Mount Windows Network Drives in WSL
I have not tested the access speed to a drive mapped through to WSL like this but I would expect it to be slow!
The error indicates the NFS kernel modules are not loaded correctly. Also verify whether the exported path "/mnt/nas" actually exists on the server "192.168.1.175".
First of all, understand that NFS is a TCP/IP protocol, so it needs one client and one server. Our goal here is to share a directory from Windows or WSL to another Linux machine, which means Windows (or WSL) is the server. You are all right about WSL NFS: it doesn't work if we run the NFS server inside WSL. Instead, we can run an NFS server on Windows and configure the shared dirs to paths that are also visible from WSL, e.g. /mnt/d/WORK/tftpserverDir. After that we can mount successfully. Here are my tips:
Make an NFS server on Windows.
I downloaded it from:
https://www.hanewin.net/nfs-e.htm
Configure the shared dir in the exports file:
D:\WORK\tftpserverDir -name:nfsroot -umask:000 -public -mapall:0
Mount the shared dir on your destination Linux machine:
mount -t nfs -o nolock -o tcp -o rsize=32768,wsize=32768 172.10.10.80:/nfsroot /sdcard/mnt
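To confirm the export is visible before or after mounting, the standard NFS client tools work (a quick check, not part of the answer above):
# list the exports offered by the Windows NFS server
showmount -e 172.10.10.80
# confirm the mount went through
mount | grep nfsroot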

Linux mount NFS with specific user

I was searching hours on the Internet, but for this specific problem I could not find any solution.
1: I have Xubuntu Linux on my PC. I use it in an average way: browse the internet, watch videos, etc. It also gives a home to my PHPStorm app, but not the project files. This is the HOST. It has a host-only network: 192.168.56.1
2: I have a VirtualBox Debian Linux (no GUI) system. This is meant to represent a development version of my real web server. It has all the project files. This VM is on an external drive, so I can take it everywhere (e.g. to the office). 192.168.56.101. This is the GUEST.
3: on the HOST I use dnsmasq to force every *.dev domain to be redirected to the GUEST. So I can test my projects easily.
4: on the GUEST I exported the /var/www folder in the /etc/exports:
/var/www 192.168.56.1(rw,sync,no_root_squash,no_subtree_check)
The problem: I want to use PHPStorm on the HOST to edit the files on the GUEST "locally". But I cannot mount the GUEST's /var/www folder into the HOST's /home/gabor/Projects folder with full permissions. I tried the following:
$> sudo mount 192.168.56.101:/var/www /home/gabor/Projects
This looks okay at first, but the folder is mounted as nobody:nogroup and I have no permission to edit.
I want /home/gabor/Projects to have the owner gabor:gabor, and everything I create in this folder must have the owner www-data:www-data on the Debian side. But for NFS mounting I cannot specify the user.
$> sudo mount -o umask=0022,gid=1000,uid=1000 192.168.56.101:/var/www /home/gabor/Projects
mount.nfs: an incorrect mount option was specified
I also failed to mount --bind /var/www with a different user (it should be nobody:nogroup) on the Debian side, so that I could export that one instead...
How can I solve this problem?
Please help me.
Thank you.
NFS v2 and v3 do not support uid/gid mapping; see man nfs on Ubuntu.
Adding this answer for posterity, as I ended up here with the same question.
Try this in /etc/exports:
/var/www 192.168.56.1(rw,root_squash)
Then on the client, put this in /etc/fstab:
192.168.56.101:/var/www /home/gabor/Projects nfs defaults,user,noauto,relatime,rw 0 0
The user option will allow a non-root user to mount the volume. Adjust other options as needed.
Then on the client again, become the user you want to mount the volume as, and then mount the volume you added to /etc/fstab:
$ id
uid=1000(gabor) gid=1000(gabor) groups=1000(gabor)
$ mount /home/gabor/Projects
$
Make sure that the uid and/or gid are the same on the server. I'm not sure if the usernames can be different or not. Also make sure that the directory being exported on the server is writable by the user or group. See this blog post for additional info about setting up NFS in a similar manner.
Caution: This is an insecure configuration without authentication. Use NFS v4 with Kerberos for strong authentication.
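For completeness, there is also a server-side way with NFSv3 to force everything to one account: squash options in /etc/exports. A sketch under assumptions (uid/gid 33 is www-data on Debian; all_squash, anonuid and anongid are standard exports options):
/var/www 192.168.56.1(rw,sync,all_squash,anonuid=33,anongid=33,no_subtree_check)
With this, every file created over the mount lands as www-data:www-data on the GUEST, which matches the requirement above, though HOST-side listings will still show the squashed IDs rather than gabor:gabor.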
OK, I found a solution that does exactly what I want.
First, install sshfs:
$> sudo apt-get install sshfs
Then mount the remote /var/www (uid/gid 33 is www-data on the Debian side):
$> sshfs -o uid=33,gid=33 root@192.168.56.101:/var/www /home/gabor/Projects
And that is it!
$> ls -la /home/gabor | grep Projects
drwxr-xr-x 1 www-data www-data 4096 Okt 14 21:10 Projects
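To detach the share later, sshfs mounts are unmounted like any FUSE filesystem (a standard detail, not part of the answer above):
$> fusermount -u /home/gabor/Projects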