For our development purposes, we have a base template installation of ExpressionEngine which is duplicated whenever we start a new project. This process involves copying the directory contents and changing ownership via SSH, then synchronizing the databases via phpMyAdmin.
I was wondering if it might be possible to compile a batch script (or equivalent) to perform all these operations via a single terminal command, and if so, I would appreciate some basic foundation on which to build the script.
You mean something like this?
#!/bin/sh
# Duplicate the template install into the new project directory
cp -R /foo /bar
# Fix up permissions (swap in chown here if it's ownership you need to change)
chmod 644 /bar/*
# Load the default dump (assumes it carries its own CREATE DATABASE/USE statements)
mysql < default.sql
Note that I'm just importing a default file to MySQL, not synchronizing two different sets of data, but it doesn't sound like that's what you meant anyway.
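If you want a foundation to build on, here's a slightly more parameterized sketch; every path, owner and database name below is a placeholder for your own setup:
#!/bin/sh
# usage: ./newproject.sh <projectname>
NAME="$1"
# Duplicate the template install (placeholder paths)
cp -R /var/www/template "/var/www/$NAME"
# Set ownership the way your manual SSH step normally does (placeholder owner)
chown -R www-data:www-data "/var/www/$NAME"
# Create the project database and load the template dump
mysqladmin create "$NAME"
mysql "$NAME" < default.sql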
I'm trying to run my static web app using Windows Subsystem for Linux (2), but I can't figure out where on my computer I should store the git repository to be able to run it decently quickly. I have tried storing it under /mnt/c/{workfolder}, but it takes several minutes to start up (using npm run start), and I have to rerun it to see any changes. This is useless when I'm trying to work...
I have also tried storing it in /mnt/wsl/{workfolder}, and in that case it starts up quickly and I can see my changes without rerunning the app. However, it seems to disappear when I restart my computer.
Where should I store the git repository to be able to run the app quickly and see changes without rerunning? I'm assuming there's something I'm not understanding; please help me get it if you know.
You'll want it somewhere on the ext4 partition of the WSL distribution. Typically, the best place is going to be under your WSL /home/<username> folder.
I would recommend:
mkdir ~/src
# or
mkdir ~/projects
# or something similar
Then create subdirectories for each project in that directory.
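For example, to move an existing repo off the Windows drive (the paths here are hypothetical):
mkdir -p ~/src
mv /mnt/c/workfolder/my-app ~/src/my-app
cd ~/src/my-app
npm install    # rebuild node_modules on the Linux side
npm run start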
Why the others don't work:
/mnt/c is the Windows C: drive. That drive is mounted into WSL2 using the 9P network file system, and yes, it (a) is slow and (b) does not support inotify, so apps cannot register for notifications of changes to files.
/mnt/wsl is a tmpfs mount. It's really there for holding things that need to be shared between all running WSL instances. The auto-generated resolv.conf that you see there is one of those things. You can also use it for copying a file from one WSL distribution to another -- Simply copy the file to /mnt/wsl, start another WSL distribution, and copy or move the file out.
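A quick sketch of that cross-distribution copy:
# In the first distro:
cp ./some-file /mnt/wsl/
# Then, in the second distro:
mv /mnt/wsl/some-file ~/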
But yes, all tmpfs mounts are ephemeral and will be cleared when the last WSL2 distribution/instance terminates.
I'm running Ubuntu and have a remote CentOS system which stores (and has access to) various files and network locations. I have SSH access to the CentOS machine and want to be able to work locally on Ubuntu.
I'm trying to mirror a remote directory structure. The remote directory is structured:
/my_data/user/*
And I want to replicate this structure locally (a lot of scripts rely on absolute paths).
However, for reasons of speed, I want a certain subfolder, for example:
/my_data/user/sourcelibs/
To be stored locally on disk. I know the sourcelibs subfolder doesn't change much (but the rest might). So I can comfortably rsync it:
mkdir -p /my_data/user/sourcelibs/
rsync -r remote_user@remote_host:/my_data/user/sourcelibs/ /my_data/user/sourcelibs/
My question is, if I use sshfs to mount /my_data/user:
sudo sshfs -o allow_other,default_permissions remote_user@remote_host:/my_data/user /my_data/user
Will it overwrite my existing files? Is there a way to have sshfs mount but exclude certain subfolders?
Yes, in effect: mounting sshfs over /my_data/user hides your existing local files while the mount is active (they're shadowed rather than deleted, and reappear after you unmount, but you can't use them in the meantime). I have almost the same use case and just tested this myself. BTW, you'll need to add -o nonempty to your sshfs command since the destination dir /my_data/user already exists.
What I found to work is making a copy of the remote directory excluding the large subdirs, as sketched below. I don't know if keeping two copies in sync on the remote machine is feasible for your use case, but if you'll mostly be updating on your local machine and rarely making changes remotely, that could work.
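A minimal sketch of that remote-side copy, assuming sourcelibs is the large folder and using a hypothetical /my_data/user_nolibs path; you'd then point sshfs at the stripped copy instead:
# On the remote machine: keep a second tree in sync, minus the big subfolder
rsync -a --delete --exclude='sourcelibs/' /my_data/user/ /my_data/user_nolibs/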
It's been 14 years since I last worked with svn and apparently I have forgotten everything...
I have an existing web project, consisting of a bunch of php, html, js and other files in a directory tree on a V-Server. Now I want to put these folders under version control and create a copy on my local machine using svn. So, using the already-present apache2, I installed Subversion according to these instructions: https://www.linuxcloudvps.com/blog/how-to-install-svn-server-on-debian-9/
But now I've kinda hit a roadblock. If I try svnadmin create on the existing folder, it tells me that it is not empty and does nothing really. All the questions and answers I find here and elsewhere either
a) focus on an already existing folder on the local machine, or
b) assume more prior knowledge than I have right now, i.e. I don't understand them.
Is there a step-by-step guide for dummies anywhere on how to do this? Or can anyone tell me in layman's terms how to do this?
I can't believe this case never comes up or that it is really very complicated.
At the risk of failing to understand your exact needs, I think you can proceed as follows. I'll use these terms:
Code: it's the unversioned directory on the V-Server where you currently have the bunch of php, html, js and other files
Repository: it's the first "special" directory you need to create in order to store your Subversion history and potentially share it with others. There must be one and there can only be one.
Working copy: it's the second "special" directory you need to create in order to work with your php, html, js... files once they are versioned and it'll be linked to a given path and revision of your repository. At a given time there can be zero, one or many of them.
Your code can become a working copy or not, that's up to you, but it can never become a repository:
$ svnadmin create /path/to/code
svnadmin: E200011: Repository creation failed
svnadmin: E200011: Could not create top-level directory
svnadmin: E200011: '/path/to/code' exists and is non-empty
Your repository requires an empty folder, but it can be located anywhere you like, as long as you have access to it from the machine you're going to use in your daily work. Access means it's located on your PC (thus you use the file: protocol) or it's reachable through a server you've installed and configured (svn:, http: or https:).
$ svnadmin create /path/to/repo
$ # (no output means the repository was created)
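Since you've installed the server bits on top of the already-present apache2, you could also reach the repository over http:; a hypothetical example (server name and repo path are placeholders):
$ svn checkout http://your-vserver.example.com/svn/repo /path/to/working-copy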
Your working copies can be created wherever you need to work with your IDE. It can be an empty directory (the usual scenario) or a non-empty one. The checkout command retrieves your files from the repo and puts them in the working copy so, at a later stage, you're able to run a commit command to submit your new and changed files to the repository.
As you can figure out, it isn't a good idea to create a working copy in random directories, because incoming files would mix with existing files. There's however a special situation where that can make sense: when the repository location is new and still empty. In that case you can choose between two approaches:
If you want code to become a working copy, you can check out right into it and then make an initial commit to upload all files:
$ svn checkout file:///path/to/repo /path/to/code
Checked out revision 0.
$ svn add /path/to/code --force
A code/index.php
$ svn commit /path/to/code -m "Import existing codebase"
Adding /path/to/code/index.php
Transmitting file data .done
Committing transaction...
Committed revision 1.
If you don't care about code once it's stored in the repository or you want your working copy elsewhere, you can import your files from code and create a working copy in a fresh directory:
$ svn import /path/to/code file:///path/to/repo -m "Import existing codebase"
Adding code/index.php
Committing transaction...
Committed revision 1.
$ svn checkout file:///path/to/repo fresh
A fresh/index.php
Checked out revision 1.
My goal is to make an Electron application which synchronizes a client's folder with the server. To explain it more clearly:
If the client doesn't have the files present on the host server, the application downloads all of the files from the server to the client.
If the client has the files, but some files have been updated on the server, the application deletes ONLY the outdated files (leaving the unmodified ones) and downloads the updated files.
If a file has been removed from the host server but is present in the client's folder, the application deletes the file.
Simply, the application has to make sure that the client has an EXACT copy of the host server's folder.
So far I did this via wget -m; however, wget frequently did not recognize that some files had changed and left clients with outdated files.
Recently I've heard of zsync-windows and the webtorrent npm package, but I am not sure which approach is right and how to actually accomplish my goal. Thanks for any help.
rsync is a good approach but you will need to access it via node.js
An npm package like this may help you:
https://github.com/mattijs/node-rsync
But things will get slightly more difficult on Windows systems:
How to get rsync command on windows?
If you have SSH access to the server, an approach could be to use rsync through a Node.js package.
There's a good article here on how to implement this.
You can use rsync which is widely used for backups and mirroring and as an improved copy command for everyday use. It offers a large number of options that control every aspect of its behaviour and permit very flexible specification of the set of files to be copied.
It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.
For your use case:
If the client doesn't have the files present on the host server, the application downloads all of the files from the server to the client. This can be achieved by a plain rsync.
If the client has the files, but some files have been updated on the server, the application deletes ONLY the outdated files (leaving the unmodified ones) and downloads the updated files. Use --remove-source-files or --delete, depending on whether you want to delete the outdated files from the source or the destination.
If a file has been removed from the host server but is present in the client's folder, the application deletes the file. Use the --delete option of rsync.
rsync -a --delete source destination
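For instance, a minimal pull-style mirror over SSH might look like this (host and paths are placeholders):
# Copy new/changed files and delete anything the server no longer has
rsync -az --delete user@server.example.com:/srv/app-files/ /path/to/client-folder/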
Given it's a folder list (and therefore contains simple filenames without spaces, etc.), you can pick out the filenames with the code below:
# Get the last field (the filename) from each line of FILELIST
awk '{print $NF}' FILELIST | sort >weblist
# Generate a list of your local files, stripping the leading "./" so it matches
find . -type f | sed 's|^\./||' | sort >mylist
# Lines only in mylist are local files that no longer exist on the server
comm -23 mylist weblist >diffs
# Remove the stale files (drop the echo once you've checked the output)
xargs -r echo rm -fv <diffs
You'll need to remove the echo to let rm actually run.
Next time you want to update your mirror, you can modify the comm line (by swapping the two file arguments) to find the set of files you don't have, and feed those to wget.
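A sketch of that reversed comparison, reusing the file names from above (the base URL is a placeholder):
# Lines only in weblist are files you're missing locally
comm -23 weblist mylist >missing
# Fetch each one (assumes the list holds plain filenames relative to the base URL)
while read -r f; do
  wget "https://mirror.abcd.org/xyz/xyz-folder/$f"
done <missing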
or, if the mirror also exposes an rsync endpoint (rsync itself can't fetch over HTTP):
rsync -av --delete rsync://mirror.abcd.org/xyz/xyz-folder/ my-client-xyz-directory/
I have a bash script that I want to be executed before apache starts or restarts.
I want my bash script to be executed when apache starts during the bootup process and when I manually run "/etc/init.d/apache2 restart/start".
There is an init.d script "/etc/init.d/apache2", but I'd rather not touch that file.
Google is not very helpful :)
Because of the way /etc/init.d/apache2 is written, you can't hijack it by putting your script ahead of apache2ctl in the PATH, and modifying or renaming /usr/sbin/apache2ctl would be more likely to get undone during an update. So that leaves you with a choice of modifying /etc/init.d/apache2 or magic.
It may be that the magic comes in the form of creating a symlink to your script in the appropriate /etc/rc?.d directories with an appropriate prefix that would cause it to run before Apache. On my system, the name might be S88scriptname, for example. You can make these links individually for each runlevel and manage them manually or, on systems such as Debian and Ubuntu that support it, you can model your script after /etc/init.d/skeleton and set the options in the LSB header appropriately (particularly the X-Start-Before keyword, perhaps) and use update-rc.d to manage the rc?.d symlinks for you.
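A minimal sketch of that LSB header, using a hypothetical script name and modeled on /etc/init.d/skeleton:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          myscript
# Required-Start:    $local_fs
# Required-Stop:
# X-Start-Before:    apache2
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs site prep before Apache starts
### END INIT INFO
# ... your script body goes here ...
After placing it in /etc/init.d/myscript, register the rc?.d symlinks with update-rc.d myscript defaults.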