How do I back up and restore my files/permissions when preparing for a remove/replace of WSL? - windows-subsystem-for-linux

With the creators update out, I'd like to upgrade my Ubuntu instance to 16.04.
The recommended approach to upgrade (and I agree) is to remove the instance and replace it with a clean installation. However, I have some files and configurations I would like to keep and transfer to the new install. The suggestion is to copy the files over to a Windows folder as a backup and restore them afterward. However, putting the files there mangles the permissions on everything.
I had already done the remove/replace on one of my machines, and I found that trying to restore the permissions on all the files was just not worth it, so I did another clean install and will be copying the contents of the files over instead. This is an equally tedious way to restore these files, but it has to be done.
Is there an easier way to back up and restore my files and their permissions when doing this upgrade?
I have two more machines I would like to upgrade but do not want to go through this process again if it can be helped.

Just use the Linux way to back up your files with their permissions intact, such as getfacl/setfacl or tar -p.
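For example, a minimal sketch using tar (paths are placeholders; the permissions travel inside the archive, so it doesn't matter that the tarball sits on the Windows side):
# from the old instance: archive the home directory, preserving ownership and permissions
tar -czpf /mnt/c/Users/you/wsl-backup.tar.gz -C ~ .
# from the new instance: restore, running as root so ownership is kept
sudo tar -xzpf /mnt/c/Users/you/wsl-backup.tar.gz -C ~
Alternatively, getfacl/setfacl can snapshot permissions separately from the file contents:
# record permissions recursively, then reapply them after copying the files back
cd ~ && getfacl -R . > /mnt/c/Users/you/permissions.acl
cd ~ && setfacl --restore=/mnt/c/Users/you/permissions.acl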

Related

Methods to deploy an npm project to a remote server

I'm trying to find a good cross-platform way to deploy an npm project to a remote server over ssh (or another method). I'm specifically looking for something that copies over the files while respecting the .gitignore (not copying files that are in .gitignore, and preserving files that are in .gitignore on the remote server), while pruning extraneous files.
Notably, as a consequence of this requirement, it should neither copy node_modules nor clobber the remote node_modules.
The idea is to get the source code to the server this way, and then execute commands over ssh to build it on the server, copy the dist into the appropriate location on the server, and run any other deploy steps.
I already have something that works fairly well. I set up a git repo on my server that I have a remote to locally, and I push my local changes to that remote. A post-receive hook then takes effect and copies the source to where I need it, similar to what this describes.
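For reference, the hook in that kind of setup is usually only a couple of lines. A minimal sketch, assuming a bare repository on the server, with the deploy path and branch name as placeholders:
#!/bin/sh
# hooks/post-receive in the bare repo: check the pushed tree out into the deploy directory
GIT_WORK_TREE=/var/www/myapp git checkout -f main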
This works pretty nicely, but it kind of falls apart when I want to deploy without fully committing everything, and it also feels somewhat fragile. I use a fairly complex local script to check out a new branch, commit all working changes, and push it, but it fails in certain cases, such as when there are untracked files.
Pardon the lengthy context. tl;dr: I'm looking for other options to do this sort of deploy. It seems like rsync would be a natural candidate, and I've looked into the npm rsync package, but its Windows support doesn't seem great, requiring Cygwin. I've also considered copying manually with scp and leveraging a library to parse the .gitignore, but I'd like to preserve node_modules on the server (so it doesn't have to redownload everything), so I can't just overwrite the directory.
Any ideas?
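For what it's worth, the rsync route being considered might look roughly like this (a sketch, not a tested recipe; host and paths are placeholders). Note that --exclude-from treats .gitignore as a plain exclude list, which only approximates git's semantics, and excluded paths such as node_modules survive on the server because --delete skips excluded files unless --delete-excluded is given:
rsync -az --delete \
    --exclude-from=.gitignore \
    --exclude=.git --exclude=node_modules \
    ./ user@example.com:/srv/myapp/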

Where to store git repo to run swa efficiently using wsl2?

I'm trying to run my static web app using Windows Subsystem for Linux (2), but I can't figure out where on my computer I should store the git repository to be able to run it decently quickly. I have tried storing it under /mnt/c/{workfolder}, but it takes several minutes to start up (using npm run start), and I have to rerun it to see any changes. This is useless when I'm trying to work...
I have also tried storing it in /mnt/wsl/{workfolder}, and in that case it starts up quickly and I can see my changes without rerunning the app. However, it seems to disappear when I restart my computer.
Where should I store the git repository to be able to run the app quickly and see changes without rerunning? I'm assuming there's something I'm not understanding; please help me out if you know what it is.
You'll want it somewhere on the ext4 partition of the WSL distribution. Typically, the best place is going to be under your WSL /home/<username> folder.
I would recommend:
mkdir ~/src
# or
mkdir ~/projects
# or something similar
Then create subdirectories for each project in that directory.
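For instance, assuming the checkout currently lives under /mnt/c/{workfolder} as in the question (names are placeholders), moving it across is enough to pick up both the speed and the file-watching benefits:
mkdir -p ~/projects
# move the checkout off the Windows drive onto the ext4 filesystem
mv /mnt/c/workfolder/my-swa ~/projects/
cd ~/projects/my-swa
npm run start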
Why the others don't work:
/mnt/c is the Windows C: drive. That drive is mounted into WSL2 using the 9P network file system, and yes, it (a) is slow, and (b) does not support inotify, so apps cannot register for notifications of changes to files.
/mnt/wsl is a tmpfs mount. It's really there for holding things that need to be shared between all running WSL instances. The auto-generated resolv.conf that you see there is one of those things. You can also use it for copying a file from one WSL distribution to another: simply copy the file to /mnt/wsl, start another WSL distribution, and copy or move the file out.
But yes, all tmpfs mounts are ephemeral; their contents are gone when the last WSL2 distribution/instance terminates.
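To illustrate that cross-distribution copy (file names are examples):
# in the first distribution
cp ~/notes.txt /mnt/wsl/
# in the second distribution, started while the first is still running
mv /mnt/wsl/notes.txt ~/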

Backing up source files managed by source control software: TortoiseSVN

I am new to source control and I am confused by something I read on a webpage yesterday (I don't have the link). I have followed these instructions: "create folder structure", then "Start Repo-browser", then copy the source files into the trunk folder (screenshot omitted).
However, when I navigate to the folder using Windows Explorer, I do not see this folder structure; what I see looks entirely different (screenshot omitted).
Therefore I am wondering: where are the files physically stored? The reason I ask is that I want to ensure that NetBackup (our corporate backup tool) backs up the correct directories.
To make sense of the repository structure you need to read the SVN documentation, but the preferred way to back up an SVN repository is with the command
svnadmin dump your_svn_repository_path > destination_filename_backup.svn
You could put this command in a scheduled task that runs sometime before your corporate tool executes the full backup of your data, and include destination_filename_backup.svn in your backup job.
If you ever need to restore the backup (after recreating the repository) you could use the command
svnadmin load your_svn_repository_path < destination_filename_backup.svn
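Putting the two together, a full cycle looks roughly like this (paths are placeholders; svnadmin create makes the empty repository that svnadmin load then fills):
# taken regularly, e.g. by the scheduled task
svnadmin dump /var/svn/myrepo > /backups/myrepo_backup.svn
# later, recreate the repository and restore into it
svnadmin create /var/svn/myrepo
svnadmin load /var/svn/myrepo < /backups/myrepo_backup.svn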

Restoring Apache Tomcat after an accidental delete

I have a server running Apache Tomcat. The path to the server is the following:
root@serverb:/usr/tomcat/apache-tomcat-7.0.23# pwd
/usr/tomcat/apache-tomcat-7.0.23
root@serverb:/usr/tomcat/apache-tomcat-7.0.23# ls
LICENSE NOTICE RELEASE-NOTES RUNNING.txt bin conf lib logs temp webapps work ws.war
From time to time, I have to go to the logs/ folder and run the following command:
find . -mtime +2 -exec rm {} \;
However, I accidentally ran this command in /usr/tomcat/apache-tomcat-7.0.23; as a result, my ws.war file and other files within the bin/ folder got deleted.
I have a backup of ws.war but not of the Tomcat folder. Is there any way I can reinstall Tomcat and restore my server?
Most likely you're now asking how to create a backup after you need it (not before), right?
Of course, you can get Tomcat at http://tomcat.apache.org, but since you don't have your configuration and changed settings (e.g. memory settings, host setup, etc.), you'll have to redo them from memory, or keep adjusting until nobody complains any more.
Congratulations, you've learnt about the importance of backups. When you're done with the new installation, consider having a proper backup from now on. Keep in mind: IMHO you're only allowed to call something a backup if you have demonstrated that you can use it to restore to a new environment in the time that you specify as acceptable downtime.
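As a concrete starting point, a scheduled archive of the installation directory would have covered this case (a sketch; the Tomcat path comes from the question, the backup location is a placeholder):
# run nightly, e.g. from cron: archive Tomcat with permissions preserved
tar -czpf /backups/tomcat-$(date +%F).tar.gz /usr/tomcat/apache-tomcat-7.0.23
And for the original cleanup job, giving find an absolute path and a -type f filter makes the accident much harder to repeat:
find /usr/tomcat/apache-tomcat-7.0.23/logs -type f -mtime +2 -exec rm {} \;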

FTP Concurrency issues using Ipswitch WS-FTP Pro

I think we have a problem in our FTP scripts that pull files from a remote server to a local machine. I couldn't find an answer in Ipswitch's knowledge base or in the scripting documentation.
We are doing an MGET *.* and then an MDELETE *.* immediately after it. I think what is happening is that, while we are copying files from the server, additional files arrive in the same directory, and then the delete command deletes everything from the server. So we end up deleting files we never copied down.
Is there a straightforward way to delete only the files that were copied, or is it going to be some sort of hack job where we generate a dynamic delete script based on what we actually copied down?
Answers that are product specific would be much appreciated!
Here are the options I came up with and what I ended up doing.
Rename the extension on the server, copy the renamed files, and then delete the renamed files. This could not work because there is no FTP rename command that works with wildcards (the Windows rename command does, by the way).
Move the files to a subdirectory on the server, copy the files from that location, and then delete from the remote location. This could not work because there is no FTP command to move the files on the remote server.
Copy the files down in one script and SHELL a batch file on the local side that dynamically builds a script to connect to the server and delete the files that were copied down. This is the solution I ended up using to solve this problem.
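For illustration, the dynamic-delete idea looks roughly like this (shown in Unix shell with a plain command-line ftp client for brevity; the poster used a Windows batch file and WS-FTP, and the host, credentials, and paths here are placeholders):
cd /local/inbox
ls -1 > /tmp/copied.txt                 # record exactly what was pulled down
{
  echo "user myuser mypassword"         # ftp -n skips auto-login, so log in explicitly
  sed 's/^/delete /' /tmp/copied.txt    # one delete command per copied file
  echo "bye"
} | ftp -n ftp.example.com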