Mount a network share to QNAP using SSH

I would like to pull files from a network share to my QNAP device.
In Windows I would type net use \\MyDevice\MyShare /User:... and then copy \\MyDevice\MyShare\FileFilter Localpath.
How do I mount the network share on the QNAP using SSH?
Where are the volumes on the QNAP? I did not find them.

In the local filesystem of your QNAP there is a /share directory. It contains symlinks to all shared folders that have been set up; even external storage such as USB hard drives is symlinked there.
It is also the mountpoint for the QNAP volumes.
You can check this with the readlink command.
[/] # readlink -f /share/Music
/share/CACHEDEV1_DATA/Music
[/] #
A network share can be mounted on the QNAP via various protocols (e.g. NFS, CIFS). If you are still on QTS 4.2 and have not updated to QTS 4.3 yet, you could try this third-party app (QPKG) to add sshfs support.
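For example, pulling files from a CIFS/SMB share over SSH might look roughly like this. The share name, credentials and target directory are placeholders, and this assumes the CIFS support that ships with QTS; treat it as a sketch rather than a verified recipe:

[/] # mkdir -p /share/Public/remote
[/] # mount -t cifs //MyDevice/MyShare /share/Public/remote -o username=myuser,password=mypass
[/] # cp /share/Public/remote/FileFilter /share/Music/
[/] # umount /share/Public/remote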


Proposal to Migrate OpenNebula Datastore from Local FS to NFS

I have an instance of OpenNebula with two nodes running KVM and local file storage. This means no live migration, as VM images are scp'd to each node, so there is also no option of failover.
I would like to implement NFS shared storage and move the VMs from the local FS datastore to the NFS shared storage datastore. OpenNebula supports migrating VMs between datastores, but only datastores of the same type, i.e. 'ssh' to 'ssh' but not 'ssh' to 'shared'.
I am working on a method of achieving this, and would love some feedback as to why this is a good or a bad idea.
Thanks
OpenNebula doesn't currently support migrating VMs from one type of datastore to a different type. I have been working on a method that works and want to document it here to get some feedback and opinions on it.
A datastore type is identified primarily by the Transfer Manager driver ('TM_MAD') setting. This setting cannot be changed, either through Sunstone or through the CLI, so we need another way to change it. This is what I did. I started with a fresh install of OpenNebula 5.4.13 in one VM, and two VM nodes, all running Debian 9 within VMware virtual machines (don't forget to enable virtualisation in the VM CPU options).
NOTE: This is an experimental process, so make sure you back up everything first!
Steps
To migrate to a different store, there are a few steps we need to do. They are as follows:
Set up the NFS share exports,
Move the VM images to the NFS share and mount the datastore,
Change the datastore types,
Configure the nodes for the NFS share.
Setup NFS Server
The first thing we want to do is set up the NFS shares that we want to use. I'm using a single share for the base datastore folder, but you could use separate shares for each datastore ID from different NFS servers.
On the NFS Server create the datastore folder i.e. mkdir /share/one_datastore,
Add the datastore path to /etc/exports and export the new share with exportfs -rav (see the example entry after this list),
Confirm the share is available showmount -e localhost
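A minimal /etc/exports entry on the NFS server might look like the following; the subnet and export options are assumptions for a Linux NFS server, so adjust them to your network and security needs:

/share/one_datastore 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

After editing the file, exportfs -rav publishes the change and showmount -e localhost should list the new export.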
Prepare to Migrate
Before we modify the datastores there are a few things to do first:
Shut down any running VMs and undeploy them. This saves the machine state and copies the images back to the image datastore,
Stop the Sunstone and OpenNebula services: systemctl stop opennebula && systemctl stop opennebula-sunstone (see the sketch after this list).
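As a rough sketch of that preparation from the CLI (the VM IDs here are made up; onevm list shows your own):

onevm list
onevm undeploy 3
onevm undeploy 4
systemctl stop opennebula opennebula-sunstone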
Migrate Data
With shared storage, all the nodes access the same VM disk images, so copy the VM data to the NFS share ready for mounting.
From the Sunstone frontend server confirm the NFS shares showmount -e [nfs-server],
Create a temp folder to mount the share in mkdir /mnt/datastore,
Temporarily mount the NFS folder mount [nfs-server]:/share/one_datastore /mnt/datastore,
Move the datastore folders to the share mv /var/lib/one/datastores/* /mnt/datastore/
OpenNebula datastore folders now live on the NFS server: ls /mnt/datastore should list folders 0, 1 and 2,
Mount the NFS share to replace the OpenNebula datastore folder mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the folders are available ls /var/lib/one/datastores should list our 3 folders 0, 1 and 2,
Add the mount to /etc/fstab to persist it across reboots (see the example entry after this list).
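The corresponding /etc/fstab line could look something like this; the server name is a placeholder and the mount options are just a reasonable default:

[nfs-server]:/share/one_datastore  /var/lib/one/datastores  nfs  defaults,_netdev  0  0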
The OpenNebula frontend is now configured to access the datastore folders from the NFS share. Next we want to change the datastore types from ssh to shared.
Change Datastore Types
The datastore configuration is stored in the OpenNebula database /var/lib/one/one.db. We can change the driver type by editing the datastore configuration data, which then tells OpenNebula which drivers to use and how to handle the datastore data. By default OpenNebula uses an SQLite database, with MySQL as an option. I'm using SQLite, but the same works for MySQL.
Open the OpenNebula database sqlite3 /var/lib/one/one.db,
View all tables with .tables. datastore_pool is the table we want to modify,
List all the records in the table: select * from datastore_pool; will result in a screen-full of configuration data. Each record has an identifier oid which matches the datastore ID, like this (the first 0 is the datastore ID for the default SYSTEM datastore):
0|system|<DATASTORE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>system</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><DS_MAD><![CDATA[-]]></DS_MAD><TM_MAD><![CDATA[ssh]]></TM_MAD><BASE_PATH><![CDATA[/var/lib/one//datastores/0]]></BASE_PATH><TYPE>1</TYPE><DISK_TYPE>0</DISK_TYPE><STATE>0</STATE><CLUSTERS><ID>0</ID></CLUSTERS><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><IMAGES></IMAGES><TEMPLATE><ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS><DISK_TYPE><![CDATA[FILE]]></DISK_TYPE><DS_MIGRATE><![CDATA[YES]]></DS_MIGRATE><RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS><SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS><SHARED><![CDATA[NO]]></SHARED><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[SYSTEM_DS]]></TYPE></TEMPLATE></DATASTORE>|0|0|1|1|0
Now to change the datastore type. Grab the data from the third column, body (you can run select body from datastore_pool where oid=0;), and copy it into your favourite text editor (that's the chunk starting with <DATASTORE> and ending with </DATASTORE>). Find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[shared]]></TM_MAD>
Find: <SHARED><![CDATA[NO]]></SHARED>
Replace with: <SHARED><![CDATA[YES]]></SHARED>
Now update the SYSTEM datastore record. Run the following command on the database, replacing [datastore-config] with the text block you just modified: update datastore_pool set body='[datastore-config]' where oid=0;,
Updating the IMAGE datastore is a little different. There is no SHARED option, but we want to use either the shared or qcow2 driver. I used qcow2. So: select body from datastore_pool where oid=1;:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace: <TM_MAD><![CDATA[qcow2]]></TM_MAD>
Update the record: update datastore_pool set body='[datastore-config]' where oid=1;,
Update the FILES datastore (oid=2) by replacing <TM_MAD><![CDATA[ssh]]></TM_MAD> with <TM_MAD><![CDATA[shared]]></TM_MAD> and update it using the method above (the full sqlite3 session is sketched after this list).
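Pulled together, the whole sqlite3 session looks roughly like this; the body values are abbreviated here, so paste your full edited <DATASTORE>...</DATASTORE> block between the quotes:

sqlite3 /var/lib/one/one.db
sqlite> select body from datastore_pool where oid=0;
sqlite> update datastore_pool set body='<DATASTORE>...edited SYSTEM config...</DATASTORE>' where oid=0;
sqlite> update datastore_pool set body='<DATASTORE>...edited IMAGE config...</DATASTORE>' where oid=1;
sqlite> update datastore_pool set body='<DATASTORE>...edited FILES config...</DATASTORE>' where oid=2;
sqlite> .quit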
Now that the datastores have been updated to use the shared driver, let's start Sunstone and check that the datastores show up.
systemctl start opennebula && systemctl start opennebula-sunstone
Jump into the Sunstone web UI and go to Datastores. Open each datastore to check that SHARED is enabled and that the correct drivers show, i.e. shared or qcow2.
DON'T DO ANYTHING YET! We still need to configure the nodes!
Configure the Nodes
Because we stopped and undeployed the VMs, there shouldn't be any data in the node datastores, so we can just mount the NFS share over the datastores folder. Confirm the folders are empty first and make sure to take backups! This is an experimental process, so be warned! Right, let's get on to it:
Check the contents of /var/lib/one/datastores. If you are mounting each datastore ID folder to its own NFS share, you can do that instead of mounting the entire datastores folder; in that case empty the 0, 1 and 2 folders. Otherwise remove all folders from the datastores folder,
If not already installed: apt-get install nfs-common,
Check for NFS shares: showmount -e [nfs-server],
Mount the nfs share to the datastore folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the mount i.e. df,
Edit /etc/fstab, adding the mount so it is mounted on the next boot.
Restart the node to confirm the datastore NFS mount persists, and to give it a fresh start!
Repeat on all host nodes (a condensed version of these commands is sketched below).
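Condensed, the per-node setup is roughly the following; the server name is the same placeholder as above and the fstab line mirrors the frontend entry:

apt-get install nfs-common
showmount -e [nfs-server]
mount [nfs-server]:/share/one_datastore /var/lib/one/datastores
df -h /var/lib/one/datastores
echo '[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults,_netdev 0 0' >> /etc/fstab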
Test it Out
In Sunstone go to the Hosts tab and check they are up and running. Next, grab a VM and deploy it. It should deploy without any issues and start booting.
Once it is up and running I like to continuously ping the VM while testing live migration. So start a ping (ping [vm-ip] -t on Windows) and then in Sunstone open the VM and do a 'Live Migrate' to another node. Watch the ping and check the logs to make sure it succeeded. I found I had to refresh the display and go to the Hosts tab to check the VM had migrated; after that it showed correctly, but I think it's a caching issue in my browser. After the live migration you should still see the ping rolling along, with maybe one failed ping in the results.
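If you prefer the command line, the same test can be kicked off with the onevm CLI; the VM ID and host name here are examples:

onehost list
onevm migrate --live 3 node2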
Conclusion
So that's the process I used to migrate from ssh local storage to shared storage. I've tested it and it is working without any issues. However, if you do have any issues or an opinion on this process, please let me know. If there are any pitfalls I have overlooked, please also let me know.
OK, have fun with it. I'm off to try moving the shared storage over to some kind of storage cluster like Ceph or GlusterFS!

Why is localhost (DocumentRoot) blocked from running on GoogleDrive, Dropbox or Tresorit?

I am attempting to relocate my DocumentRoot (i.e. localhost) to a synchronised folder (such as Google Drive, Dropbox or Tresorit), but the attempt fails with a 403 error.
On Windows machines I can configure localhost to run from the D:/GoogleDrive/SitesG folder; the local site runs perfectly.
On a Mac, however, localhost won't work when running out of a cloud-based storage folder such as Google Drive, Dropbox, Tresorit, etc.
Everything is fine when localhost is at Users/myname/Sites.
However, when I reconfigure the Mac to run from Users/myname/GoogleDrive/SitesG - e.g. by editing the httpd.conf, etc, files - localhost is blocked.
Clearly the problem is to do with permissions on the parent folder (e.g. the Google Drive or Dropbox or Tresorit folder). I can see that the permissions on the various folders are as follows.
drwxr-xr-x 32 myname staff 1024 30 Apr 02:23 Sites
drwxr-xr-x 22 myname staff 704 30 May 21:01 SitesG
drwx------# 61 myname staff 1952 30 May 17:47 GoogleDrive
So my question is: on a Mac (running High Sierra), is it possible to relocate the DocumentRoot to Google Drive? Or is there something intrinsic to Google Drive that prohibits localhost from being run from a Google Drive folder?
Locating an Apache virtual host in a cloud-synchronised storage folder will create many file/folder permission problems.
Instead of relocating your DocumentRoot and changing a lot of settings and permissions, it is easier, for each cloud-stored project, to create a symlink in your /Users/myname/Sites folder pointing to your Google Drive/Dropbox website folder.
Imagine you have a "websiteA" folder inside your Dropbox folder:
1) Go to your /Users/myname/Sites folder and create the symlink:
cd ~/Sites
ln -s ~/Dropbox/websiteA websiteA
As you can check by opening your ~/Sites folder in the Finder, you have created a folder with an arrow on it, pointing to the "websiteA" cloud-based folder.
2) Now, you just have to create a virtual host pointing to ~/Sites/websiteA.
You could, instead, globally change your ~/Sites folder to a symlink pointing to your cloud-based folder, but the project-by-project approach is more flexible, as it allows you to manage both local and cloud-based projects.
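For completeness, the matching virtual host definition could look something like this; the ServerName and paths are illustrative placeholders, not from the original answer, and Apache 2.4 syntax is assumed:

<VirtualHost *:80>
    ServerName websitea.test
    DocumentRoot "/Users/myname/Sites/websiteA"
    <Directory "/Users/myname/Sites/websiteA">
        # FollowSymLinks lets Apache traverse the symlink into the Dropbox folder
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>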
Many thanks to @DrFred for the solution above, which I'm confident would work, though I have not had the chance to test it.
Here's the solution I devised before receiving any answers. It's very similar to Dr Fred's above, in that both solve the problem with symlinks. I add mine for completeness and extra detail.
As above, I develop on multiple devices (several Macs and Windows PCs, side by side), so my aim was to have a single localhost development folder that would synch almost instantly between different devices without the need to check files into/out of git and without running into the file permissions problems created when using Google Drive to synch code files.
The steps I used to achieve this aim were as follows.
Create a folder called /Users/myname/SitesNew on a Mac.
Create a symlink from that folder to an identically named folder in Dropbox on the same Mac. You will then have two identical folders on the Mac:
/Users/myname/SitesNew <-- Real folder on Mac
/Users/myname/Dropbox/SitesNew <-- Symlinked folder on Mac
Synchronise Dropbox on all devices (making sure to add the SitesNew folder if you are using selective synch on any device). The symlink folder will now appear as a real folder on Dropbox in the cloud and on the Windows PCs. In my case the new Windows PC folder was at:
D:/Dropbox/SitesNew <-- Real folder on Windows
Update the Apache httpd.conf files on the Mac to recognise localhost at /Users/myname/SitesNew.
Update the Apache httpd.conf on the Windows PC to recognise localhost at D:/Dropbox/SitesNew.
From now on, any localhost development work (edit, add, delete) on one device will synch with the localhost on the other, even across different operating systems.
Note 1: This solution works only with Dropbox but not with Google Drive, as Google Drive has problems with symlinks and also messes with permissions in a different way, especially on a Mac.
Note 2: If any files have previously been saved on Google Drive (e.g. originally my Windows sites folder was at D:\GoogleDrive\SitesOld), use a chmod calculator (e.g. see https://chmod-calculator.com) to determine the right permission values, and then chmod to convert the folders and files to those values.
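As a rough illustration of that clean-up (755 for directories and 644 for files is a common choice for web content; the path is the example folder from this answer, so adjust to your own):

cd /Users/myname/SitesNew
find . -type d -exec chmod 755 {} +
find . -type f -exec chmod 644 {} +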

Accessing external hard drive after logging into a remote machine using ssh command

I am doing an intensive computing project with a super old C program. The program requires a library called the Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I am running the program by logging onto a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save intermediate results, and the space on my local Mac is quickly filled (50 GB per user, prescribed by the administrator). These results are necessary for the next stage of computing and I cannot delete any of them before it finally produces the output data. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about this), the following command (run from your Mac) will do what you want: dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will make your local external drive available at the MOUNTPOINT directory on the server. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions at How to make bidirectional pipe between two programs?. Also, if your SFTP server binary is somewhere else, substitute its path.
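One such equivalent, if socat is installed on the Mac (e.g. via Homebrew), would be something along these lines; this is an untested sketch, and the sftp-server path should be substituted as noted above:

socat EXEC:'/usr/lib/openssh/sftp-server' EXEC:'ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave'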
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and write its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh, for instance (with your_program standing in for the application on the Solaris host):
ssh user@host your_program </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data

Unable to mount nfs share using autofs on solaris10

I am trying to mount an NFS share on a Solaris 10 machine at boot, without any luck so far.
The NFS share is accessible and mounts without any problems if I do so manually from the command line (mount -F nfs server_hostname:/exported_dir_path/ /mnt/tmpdir), but I don't know how to tell autofs to mount it at boot.
We have another machine on the same network (also Solaris) that has this working, but I can't figure out how it is configured differently from the non-working one.
I googled the problem and found that the /etc/vfstab, /etc/auto_master, and /etc/hosts files need to have the proper entries to make this work. I compared these files from the non-working machine with the ones on the working machine but did not notice any differences.
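From what I found online, a direct-map configuration would look roughly like this, using the paths from my manual mount (I'm not sure whether this is correct or what else is needed):

/etc/auto_master:
/-    auto_direct

/etc/auto_direct:
/mnt/tmpdir    -rw    server_hostname:/exported_dir_path

followed by automount -v to reread the maps.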
Could someone please guide me on how to properly configure autofs to mount NFS shares on a Solaris 10 machine?
Thanks,
Aashish.

TrueCrypt mounting drive on network

My question is about a TrueCrypt drive created on a server. I want to mount this drive on a few computers on the network with write access. In order to do so, I installed TrueCrypt on a network computer and mounted the drive.
Problem
It mounts the drive after asking for the password, but writes trigger an error. In other words, it is read-only.
What I have tried so far
I have looked in the documentation at truecrypt.com and it shows there are two methods of mounting
TrueCrypt Mounted Drive (Mounts drive on a local computer with read only access)
Unmounted Drive (Drive is mounted on the server and shared across the network)
What I want
Option 2 seems to solve the problem, except that it doesn't ask for a password. It is the same as any other shared folder on the network, which makes it less secure. So is it possible to mount the drive on the network with write access, but only after authenticating with TrueCrypt credentials?
Any help will be greatly appreciated.
Based on what I have read (I haven't tried it myself), when you download the TrueCrypt file to your local machine, you should be able to mount it there and would be prompted for the password. Once mounted, you should be able to write or modify to your heart's content and then save to the encrypted volume on your local machine. You will not be able to save the changes into the original server-based volume, as that file is 'read only'. However, you should be able to save your modified volume to the server under a different file name.
What I did:
Mounted the TrueCrypt drive and a TrueCrypt container with VeraCrypt,
Created a Windows (Samba) and Mac (AFP) share of the drive and container, with a password set in the share settings (whatever software you use).
Mounting the container prevented it from being overwritten by someone else opening the container file directly.