Original question: Let's say I have a remote folder with .R, .py and .pkl files. How to avoid syncing .pkl files or how to sync .R files only? How to avoid syncing specific sub-folders?
Sorry, I was wrong in my understanding. sshfs is not a sync utility; it is software that mounts a remote system, accessible via SSH, onto a local folder.
I am not deleting this question so that it helps the future readers.
Now, my next question would be: how do I selectively block sub-folders in the remote folder from being shown in the local folder? And how do I selectively block sub-folders in the local folder from being updated in the remote folder?
You can bind-mount the directories you want to keep local with mount --bind; the bind mounts will shadow those paths inside the sshfs mount.
For example, if you have mounted the dir at /var/www/site via
sshfs user@host:/var/www/site /var/www/site
then you can bind-mount a local directory over the cache dir
mount --bind /tmp/site1cache /var/www/site/cache
And the cache will be local-only :)
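Putting it together, a minimal end-to-end sketch (user@host and the cache paths are placeholders):
mkdir -p /tmp/site1cache
sshfs user@host:/var/www/site /var/www/site
sudo mount --bind /tmp/site1cache /var/www/site/cache
To undo it, unmount in reverse order:
sudo umount /var/www/site/cache
fusermount -u /var/www/site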
Fed up with struggling with library install and dependency problems, I'm starting to work with Singularity.
Though, I'm not sure I understand precisely how it works regarding file management in sandbox mode (not data; programs).
For example, I designed a very simple definition file that is just a "naked" Debian:
Bootstrap: library
From: debian
%post
apt-get update
I create a sandbox with this to add stuff:
sudo singularity build --sandbox Test/ naked_Debian.def
And I try to install a program. But what I don't understand is this: I managed to do it, then removed the sandbox directory, but I think there are still files that were created during the sandbox's life (in /dev, /run, /root, etc.). For example, the program that I cloned from git is now in /root on my local machine (independently of any container).
From what I understood, everything was in the container and should disappear if I remove the sandbox directory. Otherwise, am I going to leave a mess behind with all my tests? And then I can't port the container from one system to another.
If I create any new sandbox directory, the program is already there.
Cheers,
Mathieu
By default, singularity mounts $HOME to the container and uses that path as the working directory for singularity shell / exec. Since you're running the sandbox with sudo, /root is being mounted in and that's where any repos you cloned would end up if you didn't cd to a different directory. /tmp is also automatically mounted in, though that is unlikely to cause an issue since it's just temp files.
You have a few options to avoid files ending up in places you don't expect.
Disable automount of home: singularity shell --no-home ...
The default working directory is now / instead of $HOME, and files are created directly in the sandbox (as opposed to in a mounted-in directory)
If you want to get files out of the sandbox, you'll need to copy them to /tmp inside the container, and then, on the host OS, copy them from /tmp to the desired location
Set a different location to use as home: singularity shell --home $PWD ...
This mounts in and uses the current directory as $HOME instead of the user's $HOME on the host OS
Simpler for moving files between the host OS and the container, but it still creates files on the host OS
Don't mount system directories at all: singularity shell --contain --workdir /some/dir ...
Directories for /tmp and /var/tmp are created inside /some/dir instead of using /tmp and /var/tmp on the host. $HOME has the same path as on the host and is used as the working directory, but it is empty and separate from the host OS
Complete separation from host OS, while still allowing some access between container and OS
Additional details on these options can be found in the documentation.
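For instance, a minimal sketch of the third option using the Test/ sandbox from the question (the /some/dir scratch path is just an example):
mkdir -p /some/dir
sudo singularity shell --contain --workdir /some/dir Test/
Inside that shell, anything written to /tmp, /var/tmp, or $HOME ends up under /some/dir rather than on the host.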
I'm running Ubuntu and have a remote CentOS system which stores (and has access to) various files and network locations. I have SSH access to the CentOS machine and want to be able to work locally on Ubuntu.
I'm trying to mirror a remote directory structure. The remote directory is structured:
/my_data/user/*
And I want to replicate this structure locally (a lot of scripts rely on absolute paths).
However, for reasons of speed, I want a certain subfolder, for example:
/my_data/user/sourcelibs/
To be stored locally on disk. I know the sourcelibs subfolder doesn't change much (but the rest might). So I can comfortably rsync it:
mkdir -p /my_data/user/sourcelibs/
rsync -r remote_user@remote_host:/my_data/user/sourcelibs/ /my_data/user/sourcelibs/
My question is, if I use sshfs to mount /my_data/user:
sudo sshfs -o allow_other,default_permissions remote_user@remote_host:/my_data/user /my_data/user
Will it overwrite my existing files? Is there a way to have sshfs mount but exclude certain subfolders?
sshfs will mount over your existing files and hide them while the mount is active (they aren't deleted; they reappear after you unmount). I have almost the same use case and just tested this myself. BTW, you'll need to add -o nonempty to your sshfs command since the destination dir /my_data/user already exists.
What I found to work is to make a copy of the remote directory excluding the large sub dirs. I don't know if keeping two copies in sync on the remote machine is feasible for your use case, but if you'll mostly be updating on your local machine and rarely making changes remotely, that could work.
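If you do keep a trimmed copy on the remote machine, rsync's --exclude is one way to maintain it; a rough sketch (the trimmed-copy path is hypothetical, the other paths are the ones from the question):
rsync -a --delete --exclude 'sourcelibs/' /my_data/user/ /my_data/user_trimmed/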
I need to upload all the wordpress 4.9.6 files to a VM running Ubuntu on Google cloud.
So far, I've been able to upload individual files via SSH and move them within directories on the server, but when it comes to uploading a folder and then moving it, I just can't.
Can someone please be lovely and help me?
You can remote copy a whole folder with scp.
scp -r user@your.server.example.com:/path/to/foo /home/user/Desktop/
From man scp
-r Recursively copy entire directories
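Since you're uploading to the VM, the same command works in the other direction too; a sketch, where the local wordpress folder and the remote path are assumptions:
scp -r ./wordpress user@your.server.example.com:/var/www/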
If you are using a version control system such as git, you can clone the repository to Google Cloud. See this useful link.
git clone https://github.com/yourgitaccount/wordpress-project.git
My file watchers are set up and working properly for all my projects with locally downloaded files, but when editing files opened with the 'Browse Remote Hosts' file explorer, the file watchers are not executed.
I have tried all scope settings for the file watcher, but it doesn't seem to be possible to get this working for individual remote files.
Remote file watchers are not supported; please follow WEB-9724 for updates.
Friends, I tried to deploy my Yii production application from the Cloud9 IDE to OpenShift. While doing so, I got this error message:
CException
Application runtime path "/var/lib/openshift/51dd48794382ecfd530001e8/app-root/runtime/repo/php/protected/runtime" is not valid. Please make sure it is a directory writable by the Web server process.
Even when I changed the folder permissions to 775 (chmod -R 775 directory) on Cloud9 IDE and deployed again, I got the same error.
It's an old question, but I just bumped into the same issue very recently.
When you extracted the "yii" package, several folders were empty; "framework/protected/runtime" was one of them.
To deploy to OpenShift you need to commit the yii package to git and then push the commit to OpenShift. But git won't commit empty folders, so they are not created in your deployment. You need to create some file inside those folders and add those files to your git repo before committing/pushing. The usual procedure is to add a ".gitkeep" file to each such folder (it's just an empty dummy file, so git will see the folder).
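A minimal sketch of that procedure, run from the repository root (the runtime path is the one from the error message):
touch php/protected/runtime/.gitkeep
git add php/protected/runtime/.gitkeep
git commit -m "keep empty runtime dir"
git push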
That would fix this particular error.
It may be due to the ownership of the folder.
Check which user and group the web server runs as and whether the directory is writable by them; also keep in mind that changing platforms can affect what the web server expects.
Hope my suggestion is useful.
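For example, a rough sketch of those checks (the apache user/group name is an assumption; adjust for your stack):
ps aux | grep -E 'httpd|apache2'   # which user does the web server run as?
ls -ld php/protected/runtime   # who owns the runtime directory, and is it writable?
chown -R apache:apache php/protected/runtime   # hand it to the web server user if needed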
For Yii applications, the assets and protected/runtime folders are special. First, both folders must exist and be writable by the server (httpd) process. Second, these two folders contain temporary files and should be ignored by git. If these temporary files got committed, deployments on plain servers (not OpenShift servers) would cause git merge conflicts. So I put these two folders in .gitignore:
php/assets/
php/protected/runtime/
In my deployment, I add a shell script to be called by OpenShift that creates both folders under $OPENSHIFT_DATA_DIR and creates symbolic links to them in the application's folders. This is the content of the shell script (.openshift/action_hooks/deploy), which I adapted from here:
#!/bin/bash
if [ ! -d $OPENSHIFT_DATA_DIR/runtime ]; then
    mkdir $OPENSHIFT_DATA_DIR/runtime
fi
# remove the symlink if it already exists; fixes a problem with gears > 1 and nodes > 1
rm -f $OPENSHIFT_REPO_DIR/php/protected/runtime
ln -sf $OPENSHIFT_DATA_DIR/runtime $OPENSHIFT_REPO_DIR/php/protected/runtime
if [ ! -d $OPENSHIFT_DATA_DIR/assets ]; then
    mkdir $OPENSHIFT_DATA_DIR/assets
fi
rm -f $OPENSHIFT_REPO_DIR/php/assets
ln -sf $OPENSHIFT_DATA_DIR/assets $OPENSHIFT_REPO_DIR/php/assets
The shell script ensures the temporary folders are created on each gear after an OpenShift deployment. By default, a new directory's permissions are u+rwx, and it is writable by the httpd process because the gear runs httpd as the gear user (not apache or something else).