Restoring Apache Tomcat after an accidental delete

I have a server running Apache Tomcat. The path to the server is the following:
root@serverb:/usr/tomcat/apache-tomcat-7.0.23# pwd
/usr/tomcat/apache-tomcat-7.0.23
root@serverb:/usr/tomcat/apache-tomcat-7.0.23# ls
LICENSE NOTICE RELEASE-NOTES RUNNING.txt bin conf lib logs temp webapps work ws.war
From time to time, I have to go into the logs/ folder and run the following command:
find . -mtime +2 -exec rm {} \;
However, I accidentally ran this command in /usr/tomcat/apache-tomcat-7.0.23; as a result, my ws.war file and other files within the bin/ folder got deleted.
I have a backup of ws.war but not of the Tomcat folder. Is there any way I can reinstall Tomcat and restore my server?

Presumably you're not asking how to create a backup after you need it (rather than before), right?
Of course, you can get Tomcat at http://tomcat.apache.org, but if you don't have a copy of your configuration and changed settings (e.g. memory settings, host setup, etc.), you'll have to redo them from memory, or keep tweaking until nobody complains any more.
Congratulations, you've learnt about the importance of backups. When you're done with the new installation, consider keeping a proper backup from now on. Keep in mind: IMHO you're only allowed to call something a backup once you've demonstrated that you can use it to restore to a new environment within the time you specify as acceptable downtime.
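For example, a minimal sketch of such a backup, assuming the Tomcat path from the question and an existing /backup directory (adjust both to your environment):
tar -czpf /backup/tomcat-$(date +%F).tar.gz /usr/tomcat/apache-tomcat-7.0.23
# Prove the backup is usable: restore into a scratch directory and inspect it
mkdir -p /tmp/restore-test
tar -xzpf /backup/tomcat-$(date +%F).tar.gz -C /tmp/restore-test
ls /tmp/restore-test/usr/tomcat/apache-tomcat-7.0.23
To make the original cleanup less dangerous in the future, you could also anchor find to the logs directory and match only regular files:
find /usr/tomcat/apache-tomcat-7.0.23/logs -type f -mtime +2 -exec rm {} \;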

Related

Where to store git repo to run swa efficiently using wsl2?

I'm trying to run my static web app using Windows Subsystem for Linux (2), but I can't figure out where on my computer I should store the git repository to be able to run it decently quickly. I have tried storing it under /mnt/c/{workfolder}, but it takes several minutes to start up (using npm run start), and I have to rerun it to see any changes. This is useless when I'm trying to work.
I have also tried storing it in /mnt/wsl/{workfolder}; in that case it starts up quickly and I can see my changes without rerunning the app. However, it seems to disappear when I restart my computer.
Where should I store the git repository to be able to run the app quickly and see changes without rerunning? I'm assuming there's something I'm not understanding; please help me out if you know what it is.
You'll want it somewhere on the ext4 partition of the WSL distribution. Typically, the best place is going to be under your WSL /home/<username> folder.
I would recommend:
mkdir ~/src
# or
mkdir ~/projects
# or something similar
Then create subdirectories for each project in that directory.
Why the others don't work:
/mnt/c is the Windows C: drive. That drive is mounted into WSL2 using the 9P network file system, and yes, it's (a) slow, and (b) does not support inotify, so apps cannot register for notifications of changes to files.
/mnt/wsl is a tmpfs mount. It's really there for holding things that need to be shared between all running WSL instances. The auto-generated resolv.conf that you see there is one of those things. You can also use it for copying a file from one WSL distribution to another -- simply copy the file to /mnt/wsl, start another WSL distribution, and copy or move the file out.
But yes, all tmpfs mounts are ephemeral and will be cleared when the last WSL2 distribution/instance terminates.
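One possible way to apply this (the repository URL and folder names below are placeholders): clone the project onto the ext4 side and run it from there.
mkdir -p ~/src
cd ~/src
git clone https://github.com/your-org/your-swa.git   # placeholder URL
cd your-swa
npm install     # rebuild node_modules on the Linux filesystem
npm run start   # inotify-based change detection now works here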

sshfs: will a mount overwrite existing files? Can I tell it to exclude a certain subfolder?

I'm running Ubuntu and have a remote CentOS system which stores (and has access to) various files and network locations. I have SSH access to the CentOS machine and want to be able to work locally on Ubuntu.
I'm trying to mirror a remote directory structure. The remote directory is structured:
/my_data/user/*
And I want to replicate this structure locally (a lot of scripts rely on absolute paths).
However, for reasons of speed, I want a certain subfolder, for example:
/my_data/user/sourcelibs/
to be stored locally on disk. I know the sourcelibs subfolder doesn't change much (but the rest might), so I can comfortably rsync it:
mkdir -p /my_data/user/sourcelibs/
rsync -r remote_user@remote_host:/my_data/user/sourcelibs/ /my_data/user/sourcelibs/
My question is, if I use sshfs to mount /my_data/user:
sudo sshfs -o allow_other,default_permissions remote_user@remote_host:/my_data/user /my_data/user
Will it overwrite my existing files? Is there a way to have sshfs mount but exclude certain subfolders?
Strictly speaking, sshfs will not overwrite your existing files: mounting over a non-empty directory hides them while the mount is active, and they reappear once you unmount. I have almost the same use case and just tested this myself. BTW, you'll need to add -o nonempty to your sshfs command, since the destination dir /my_data/user already exists and is not empty.
What I found to work is making a copy of the remote directory that excludes the large subdirectories. I don't know whether keeping two copies in sync on the remote machine is feasible for your use case, but if you'll mostly be updating on your local machine and rarely making changes remotely, that could work.
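If keeping a second copy on the remote isn't practical, another sketch (all paths here are assumptions, and this is not a tested recipe): rsync the subfolder to a local location outside the mountpoint, mount the full remote tree, and bind-mount the local copy back on top of sourcelibs/ so that reads of it stay on local disk.
# Keep the local copy outside the mountpoint, e.g. under /srv
mkdir -p /srv/local/sourcelibs
rsync -r remote_user@remote_host:/my_data/user/sourcelibs/ /srv/local/sourcelibs/
# Mount the remote tree (this shadows anything already under /my_data/user)
sudo sshfs -o allow_other,default_permissions remote_user@remote_host:/my_data/user /my_data/user
# Overlay the local copy on top of the remote sourcelibs/ subfolder
sudo mount --bind /srv/local/sourcelibs /my_data/user/sourcelibs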

Sync clients' files with server - Electron/node.js

My goal is to make an Electron application which synchronizes a client's folder with the server. To explain it more clearly:
If the client doesn't have the files present on the host server, the application downloads all of the files from the server to the client.
If the client has the files, but some files have been updated on the server, the application deletes ONLY the outdated files (leaving the unmodified ones) and downloads the updated files.
If a file has been removed from the host server but is present in the client's folder, the application deletes the file.
Simply put, the application has to make sure that the client has an EXACT copy of the host server's folder.
So far I have done this via wget -m; however, wget frequently did not recognize that some files had changed, and left clients with outdated files.
Recently I've heard of zsync-windows and the webtorrent npm package, but I'm not sure which approach is right or how to actually accomplish my goal. Thanks for any help.
rsync is a good approach, but you will need to access it via node.js.
An npm package like this may help you:
https://github.com/mattijs/node-rsync
But things will get slightly more difficult on windows systems:
How to get rsync command on windows?
If you have ssh access to the server, an approach could be to use rsync through a Node.js package.
There's a good article here on how to implement this.
You can use rsync which is widely used for backups and mirroring and as an improved copy command for everyday use. It offers a large number of options that control every aspect of its behaviour and permit very flexible specification of the set of files to be copied.
It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.
For your use case:
If the client doesn't have the files present on the host server, the application downloads all of the files from the server to the client. This can be achieved by a simple rsync.
If the client has the files, but some files have been updated on the server, the application deletes ONLY the outdated files (leaving the unmodified ones) and downloads the updated files. Use --remove-source-files or --delete depending on whether you want to delete the outdated files from the source or the destination.
If a file has been removed from the host server but is present in the client's folder, the application deletes the file. Use the --delete option of rsync.
rsync -a --delete source destination
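As a concrete (hedged) example for this use case, with the hostname and paths as placeholders, a client-side pull that mirrors the server folder exactly could look like:
# -a preserves attributes, -z compresses in transit, --delete removes local
# files that no longer exist on the server
rsync -az --delete user@server.example.com:/srv/app-files/ /var/local/app-files/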
Given it's a folder list (and therefore has simple filenames without spaces, etc.), you can pick the filenames out with the code below:
# Get last item from each line of FILELIST
awk '{print $NF}' FILELIST | sort >weblist
# Generate a list of your files
find . -type f -print | sort >mylist
# Compare results
comm -23 mylist weblist >diffs
# Remove old files
xargs -r echo rm -fv <diffs
You'll need to remove the echo to let rm actually do its work; as written, the command only prints what would be removed.
Next time you want to update your mirror, you can modify the comm line (by swapping the two file arguments) to find the set of files you don't have, and feed those to wget.
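For instance (a sketch; the mirror URL is a placeholder, and it assumes both lists contain the same style of relative paths):
# Files the server has but we don't: note the swapped comm arguments
comm -13 mylist weblist >missing
# Fetch each missing file from the mirror, stripping any leading "./"
while read -r f; do
    wget "https://mirror.example.org/xyz/${f#./}"
done <missing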
or
rsync -av --delete rsync://mirror.abcd.org/xyz/xyz-folder/ my-client-xyz-directory/

How do I backup and restore my files/permissions when preparing for a remove/replace of WSL?

With the creators update out, I'd like to upgrade my Ubuntu instance to 16.04.
The recommended approach to upgrading (and I agree) is to remove the instance and replace it with a clean installation. However, I have some files and configurations I would like to keep and transfer to the new install. The suggestion is to copy the files over to a Windows folder to back them up and restore them afterward. However, putting the files there messes up all of their permissions.
I had already done the remove/replace on one of my machines, and I found that trying to restore all the permissions on all the files was just not worth it, so I did another clean install and will be copying the contents of the files over instead. This is an equally tedious way to restore these files, but it has to be done.
Is there an easier way to backup and restore my files and their permissions when doing this upgrade?
I have two more machines I would like to upgrade but do not want to go through this process again if it can be helped.
Just use the Linux way to back up your files with permissions intact, such as getfacl/setfacl or tar -p.
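For example, a minimal sketch of the tar route (the archive path under /mnt/c is a placeholder):
# Create the archive inside WSL; tar records owners and permissions in the archive
tar -czf /mnt/c/Users/you/wsl-home.tar.gz -C ~ .
# In the fresh instance, restore with permissions preserved (-p)
tar -xzpf /mnt/c/Users/you/wsl-home.tar.gz -C ~
# Alternatively, snapshot and restore only the permissions/ACLs
cd ~ && getfacl -R . > /mnt/c/Users/you/permissions.acl
cd ~ && setfacl --restore=/mnt/c/Users/you/permissions.acl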

CException Error while deploying yii application on OpenShift?

Friends, I tried to deploy my Yii production application from the Cloud9 IDE to OpenShift. While doing so, I got this error message:
CException
Application runtime path "/var/lib/openshift/51dd48794382ecfd530001e8/app-root/runtime/repo/php/protected/runtime" is not valid. Please make sure it is a directory writable by the Web server process.
Even when I changed the folder permissions to 775 (chmod -R 775 directory) on Cloud9 IDE and deployed again, I got the same error.
It's an old question, but I just bumped into the same issue very recently.
When you extracted the "yii" package, several folders were empty; "framework/protected/runtime" was one of them.
To deploy to OpenShift you need to commit the Yii package to git and then push the commit to OpenShift. But git won't commit empty folders, so they are not created in your deployment. You need to create some file inside those folders and add those files to your git repo before committing/pushing. The usual procedure is to add a ".gitkeep" file to those folders (it's just an empty dummy file, so git will see the folders).
That would fix this particular error.
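A minimal sketch of that fix, using the php/protected/runtime path from the error message above:
touch php/protected/runtime/.gitkeep   # dummy file so git tracks the folder
git add php/protected/runtime/.gitkeep
git commit -m "Keep empty runtime folder"
git push   # OpenShift redeploys on push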
It may be due to the ownership of the folder.
Check which user and group the web server runs as and whether the directory is writable by that user, and also consider what changes for the web server when you switch platforms.
Hope my suggestion is useful.
For Yii applications, the assets and protected/runtime folders are special. First, both folders must exist and be writable by the server (httpd) process. Second, these two folders contain temporary files and should be ignored by git. If these temporary files got committed, deployment on plain servers (not OpenShift servers) would cause git merge conflicts. So I put these two folders in .gitignore:
php/assets/
php/protected/runtime/
In my deployment, I add a shell script to be called by OpenShift that creates both folders under $OPENSHIFT_DATA_DIR and creates symbolic links to them in the application's folders. This is the content of the shell script (.openshift/action_hooks/deploy), which I adapted from here:
#!/bin/bash
# Create the persistent runtime folder on first deployment
if [ ! -d "$OPENSHIFT_DATA_DIR/runtime" ]; then
    mkdir "$OPENSHIFT_DATA_DIR/runtime"
fi
# Remove the symlink if it already exists; fixes a problem with gears > 1 and nodes > 1
rm -f "$OPENSHIFT_REPO_DIR/php/protected/runtime"
ln -sf "$OPENSHIFT_DATA_DIR/runtime" "$OPENSHIFT_REPO_DIR/php/protected/runtime"

# Same again for the assets folder
if [ ! -d "$OPENSHIFT_DATA_DIR/assets" ]; then
    mkdir "$OPENSHIFT_DATA_DIR/assets"
fi
rm -f "$OPENSHIFT_REPO_DIR/php/assets"
ln -sf "$OPENSHIFT_DATA_DIR/assets" "$OPENSHIFT_REPO_DIR/php/assets"
The shell script ensures the temporary folders are created on each gear after an OpenShift deployment. By default a new directory's permissions are u+rwx, and it is writable by the httpd process because the gear runs httpd as the gear user (not as apache or some other account).