Get Apache Vhosts from Currently Running Apache - apache

So... I accidentally deleted the vhosts files in my sites-available folder.
I would like to get my vhosts back. Is there any way to get it from the currently running apache config? I have not restarted yet.
This person says no, but this was a few years ago.
Apache : Recover "sites-enabled" config files

Remote disk recovery on a VPS -- here we go!
First, try:
lsof | grep /etc/apache2
If you see something like:
apache 1224 www-data 22r REG 8,5 1282410 1294349 /etc/apache2/sites-available/foo
you're in luck! From extundelete's website:
If you think the file may be still open by some program (for example,
if it is a movie file currently being played by a movie player), and
you know the filename, then first follow this procedure:
lsof | grep "/path/to/file"
progname 5559 user 22r REG 8,5 1282410 1294349 /path/to/file
Notice the number in the second column is 5559 and the
number in the fourth column is 22. The command to restore that file
is:
cp /proc/5559/fd/22 restored.file
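Applied to the example lsof output above (PID 1224, fd 22; the destination filename is up to you), that would be:
cp /proc/1224/fd/22 foo.restored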
If this doesn't work, well... lots of people believe you are screwed, but I think there is hope!
Note that I rate this as a <50% chance of working, just to set expectations.
Shut off your system ASAP.
Make a full bit-for-bit backup of your disk image over the network.
On a second Linux machine, do apt-get install extundelete.
Run extundelete on that disk image and see what you can get back.
If you can't back up your disk over the network (not enough space, no access to another Linux box) you can try booting into Linode recovery mode and attempting extundelete on the disk directly. This risks data corruption, so don't do it if you really value the disk -- or, again, back it up first.
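For steps 2-4, a rough sketch of the commands involved (the partition /dev/sda1, the hostname, and the paths are assumptions; run the dd from the VPS's rescue environment, not the live system):
# stream a bit-for-bit image of the VPS root partition to another machine
ssh root@vps-rescue 'dd if=/dev/sda1 bs=1M' > vps.img
# on the second Linux machine
sudo apt-get install extundelete
# the path is relative to the root of the recovered filesystem
extundelete vps.img --restore-directory etc/apache2/sites-available
# anything recovered is written under ./RECOVERED_FILES/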
Of course -- nagging time -- the best solution is to have backups turned on in the first place.


redis snapshot location not as specified in config

After running fine for a while, I am getting a write error on my redis instance:
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
In the log I see:
9948:C 22 Mar 20:49:32.241 # Failed opening the RDB file root (in server root dir /var/spool/cron) for saving: Read-only file system
However, my redis config file is /etc/redis/redis.conf as confirmed by:
redis-cli -p 6379 info | grep 'config_file'
config_file:/etc/redis/redis.conf
And there I have:
dir /mnt/data/redis
And indeed, there is a snapshot there.
But despite the above, redis now thinks my data directory is:
redis-cli -p 6379 CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
which corresponds to the error quoted above.
Can anyone tell me why/how my data directory is changing after redis starts, such that it is no longer what is specified in the config file?
So the answer is that the redis server was hacked and its configuration changed at runtime, which turns out to be very easy to do: anyone who can reach the port can issue CONFIG SET dir and CONFIG SET dbfilename and point redis's dump file wherever they like, and a dir of /var/spool/cron, as in the log above, is the signature of a cron-injection attack. (I should point out that I had no reason to think it wasn't easy to do. I just assumed security by obscurity was sufficient in this case; I was wrong. No matter, this was just a playground, not any sort of production server.)
So don't open your redis port to the world. Use security groups if on AWS to limit access to machines that need it, or use AUTH (which is still not awesome because then all clients need to know the single password which also apparently gets sent in the clear), or have some middleware controlling access.
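For reference, a minimal hardening sketch for /etc/redis/redis.conf (these are standard redis directives; the password is a placeholder):
bind 127.0.0.1                          # listen only on localhost (or a private interface)
requirepass change-me-to-a-long-secret  # AUTH password; note it is sent in the clear
rename-command CONFIG ""                # disable runtime CONFIG GET/SET, the hook used in this attack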
Hacking redis is easy to do, can compromise your data, and can even enable unauthorized SSH access to your server (by writing an attacker's key into authorized_keys the same way). And that's why you shouldn't leave it exposed to the internet.

How to disable NFS client caching?

I'm having trouble with NFS client file caching. The client reads a file that was removed from the server many minutes earlier.
My two servers are both CentOS 6.5 (kernel: 2.6.32-431.el6.x86_64)
I'm using server A as the NFS server, /etc/exports is written as:
/path/folder 192.168.1.20(rw,no_root_squash,no_all_squash,sync)
And server B is used as the client, the mount options are:
nfsstat -m
/mnt/folder from 192.168.1.7:/path/folder
Flags: rw,sync,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,nosharecache,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.20,minorversion=0,lookupcache=none,local_lock=none,addr=192.168.1.7
As you can see, the "lookupcache=none,noac" options are already used to disable caching, but they don't seem to work...
I did the following steps:
Create a simple text file on server A
Print the file from server B with "cat": it's there
Remove the file on server A
Wait a couple of minutes and print the file from server B: it's still there!
But if I run "ls" on server B at that time, the file is not in the output. The inconsistent state may last a few minutes.
I think I've checked all the NFS mount options...but can't find the solution.
Is there any other options I missed? Or maybe the issue is not about NFS?
Any ideas would be appreciated :)
I have tested the same steps you gave with the parameters below, and it works perfectly. I added one more parameter, "fg", to the client-side mount.
sudo mount -t nfs -o fg,noac,lookupcache=none XXX.XX.XX.XX:/var/shared/ /mnt/nfs/fuse-shared/
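To reproduce the check from the question with that mount in place, something like this (paths from the question; the file name is made up):
# on server A (the NFS server)
echo hello > /path/folder/testfile
# on server B (the client)
cat /mnt/folder/testfile    # prints "hello"
# on server A
rm /path/folder/testfile
# on server B, right afterwards
cat /mnt/folder/testfile    # should now fail with "No such file or directory"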

Running EC2 instance suddenly refuses SSH connection

I set up this EC2 instance a couple of days ago, and as recently as last night I was able to SSH to it with no problems. This morning, I can't ssh to it. Port 22 is already open in the security group and I haven't changed anything since last night.
Error:
ssh: connect to host [ip address] port 22: Connection refused
I had a similar issue recently and couldn't figure out why it was happening, so I had to create a new instance, set it up again, and connect and configure all the EBS volumes on the new one. It took me a couple of hours... and now it's happening again. On the previous instance I had installed denyhosts, which might have blocked me, but on the current one only apache2 and mysql are running.
The current instance has been up for 16 hours now, so I don't think it's because it didn't finish booting... Also, port 22 is open to all sources (0.0.0.0/0) using the tcp protocol.
Any ideas?
Thanks.
With the help of #abhi.gupta200297, we were able to resolve it.
The issue was an error in /etc/fstab; sshd is supposed to start after fstab is processed successfully, but it wasn't, so sshd never started, and that's why the connection was refused. The solution was to create a temporary instance, mount the root EBS volume from the original instance, and comment out the offending lines in fstab, and voila, it lets me connect again. For the future, I stopped using fstab for these volumes: I put a bunch of shell commands that mount the EBS volumes to directories in an /etc/init.d/ebs-init-mount file, then ran update-rc.d ebs-init-mount defaults to register the script, and I'm no longer having issues with locked-out ssh.
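A minimal sketch of what such an /etc/init.d/ebs-init-mount script might look like (device names and mount points are assumptions):
#!/bin/sh
### BEGIN INIT INFO
# Provides:          ebs-init-mount
# Required-Start:    $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Mount EBS volumes after boot
### END INIT INFO
# Unlike an fstab entry, a failure here cannot block the boot or keep sshd down.
mount /dev/xvdf /mnt/data || logger "ebs-init-mount: mounting /dev/xvdf failed"
mount /dev/xvdg /mnt/logs || logger "ebs-init-mount: mounting /dev/xvdg failed"
Make it executable (chmod +x) and register it with sudo update-rc.d ebs-init-mount defaults.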
UPDATE 4/23/2015
Amazon team created a video tutorial of similar issue and show how to debug using this method: https://www.youtube.com/watch?v=_P29ZHu_feU
Looks like sshd might have stopped for some reason. Is the instance EBS backed? If that's the case, try shutting it down and starting it back up. That should solve the problem.
Also, are you able to ssh from AWS web console? They have a java plugin there to ssh into the instance.
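If you prefer the command line, the stop/start equivalent is (the instance ID is a placeholder):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0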
For those of you who came across this post because you are unable to SSH into your EC2 instance after a reboot, this is cross-posted to a similar question at serverfault:
From the AWS Developer Forum post on this topic:
Try stopping the broken instance, detaching the EBS volume, and
attaching it as a secondary volume to another instance. Once you've
mounted the broken volume somewhere on the other instance, check the
/etc/sshd_config file (near the bottom). I had a few RHEL instances
where Yum scrogged the sshd_config inserting duplicate lines at the
bottom that caused sshd to fail on startup because of syntax errors.
Once you've fixed it, just unmount the volume, detach, reattach to
your other instance and fire it back up again.
Let's break this down, with links to the AWS documentation:
Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped.
Start a new instance in the same region and of the same OS as the broken instance then attach the original EBS root volume as a secondary volume to your new instance. The commands in step 4 below assume you mount the volume to a folder called "data".
Once you've mounted the broken volume somewhere on the other instance,
check the volume's sshd_config file for the duplicate entries by issuing these commands:
cd /data/etc/ssh
sudo nano sshd_config
ctrl-v a bunch of times to get to the bottom of the file
ctrl-k all the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no"
ctrl-x and Y to save and exit the edited file
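If you'd rather not do the nano dance, a non-interactive equivalent (assuming the broken volume is mounted at /data as above; note this deletes every matching line, including any legitimate one):
sudo sed -i '/^PermitRootLogin without-password/d; /^UseDNS no/d' /data/etc/ssh/sshd_config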
#Telegard points out (in his comment) that we've only fixed the symptom. We can fix the cause by removing the three related lines from the volume's "/etc/rc.local" file. So:
cd /data/etc
sudo nano rc.local
look for the "PermitRootLogin..." lines and delete them
ctrl-x and Y to save and exit the edited file
Once you've fixed it, just unmount the volume,
detach by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped,
reattach to your other instance and
fire it back up again.
This happened to me on a Red Hat EC2 instance because these two lines were being automatically appended to the end of the /etc/ssh/sshd_config file every time I launched my instance:
PermitRootLogin without-password
UseDNS no
One of these append operations was done without a line break, so the tail of the sshd_config file looked like:
PermitRootLogin without-password
UseDNS noPermitRootLogin without-password
UseDNS no
That caused sshd to fail to start on the next launch. I think this was caused by the bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=956531 The solution was to remove all the duplicate entries at the bottom of the sshd_config file, and add extra line breaks at the end.
Go to your AWS management console > select instance > right click and select "Get System Logs"
This will list what went wrong.
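The same logs are also available from the command line (the instance ID is a placeholder):
aws ec2 get-console-output --instance-id i-0123456789abcdef0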
Had the same issue, but sys logs had this:
Starting sshd: /var/empty/sshd must be owned by root and not group or world-writable.
[FAILED]
Used the same steps described above to detach the volume and attach it to a connectable instance. Then used:
sudo chmod 755 /var/empty/sshd
sudo chown root:root /var/empty/sshd
(https://support.microsoft.com/en-us/help/4092816/ssh-fails-because-var-empty-sshd-is-not-owned-by-root-and-is-not-group)
Then I detached it, reattached it to the original EC2 instance, and could access it via ssh again.
I got locked out of ssh in a similar way: I detached an EBS volume but forgot to update /etc/fstab.
If your Ubuntu instance uses systemd, you can edit /lib/systemd/system/local-fs.target and comment out the last two lines:
#OnFailure=emergency.target
#OnFailureJobMode=replace-irreversibly
I haven't tested this extensively and don't know if there are any risks or side effects involved, but so far it works like a charm. It mounts the root volume and all other volumes (except those that are misconfigured, obviously), then it continues the boot process until SSH is up, so you can connect to the instance and fix the incorrect fstab entries.
In my case, the volume had run out of space and a service was failing to start. I used the AWS tutorial (from Sherzod's post) to mount it on a good EC2 instance, clean it up, and remove the service from startup, then remounted it and verified that things worked.
For me it was that my IP had changed: navigate to the security groups and update the "My IP" entry in the inbound rules. Hope this helps someone.
I had the same issue: unable to connect to the AWS instance, with a "Permission denied" error.
I got on a screen-share call with the AWS support team, and they guided me to fix the folder permissions on the instance using the following user-data script.
Steps:
Stop the instance
Actions > Instance settings > Edit user data
Enter the script below and save
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
chown root:root /home
chmod 755 /home
chmod 700 /home/ubuntu
chmod 700 /home/ubuntu/.ssh
chmod 600 /home/ubuntu/.ssh/authorized_keys
ls -ld /home /home/ubuntu /home/ubuntu/.ssh /home/ubuntu/.ssh/authorized_keys
chown ubuntu:ubuntu /home/ubuntu -R
--//
Save, start the instance, and connect to it with the correct pem key.
This resolved my problem.
(Change ubuntu to your instance's username.)

Cygwin Openssh can't see /etc/sshd_config

I can't get the openssh server to work on Windows Server 2008. I have it working on two other servers, but this one just won't work.
I run ssh-host-config and choose privilege separation. Two users are created: sshd and sshd_server.
Then I run net start sshd, and I see this:
The CYGWIN sshd service is starting.
The CYGWIN sshd service could not be started.
The service did not report an error.
Then I run cat /var/log/sshd.log and I see this output:
/etc/sshd_config: No such file or directory
I then check permissions on /etc/sshd_config:
-rw-r--r-- 1 sshd_server root 3344 Sep 7 09:15 /etc/sshd_config
So now it seems sshd cannot see a file which is there and has the right permissions. Even on Windows, that file is owned by sshd_server.
I had this happen too.
A Procmon session revealed that the sshd service was trying to locate /etc in the root directory, c:\etc, instead of c:\cygwin\etc.
Further investigation showed that sshd was loading an incorrect cygwin1.dll that lived in my system PATH environment variable.
The solution was either to remove the bad cygwin1.dll, or to remove the "bad" path from the system variables and assign it to user-specific environment variables instead.
Afterwards, running the sshd daemon under a dedicated user who did not have this "bad" path worked as it should.
Thanks Mark
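A quick way to check which copy of cygwin1.dll wins is Windows' where command (run from cmd.exe), which lists matches in PATH order:
where cygwin1.dll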

How can I play a wav sound on the server side using cgi?

How can I run a command from a (bash) CGI script to play a wav sound on the server side?
You can execute your command-line audio player as described by nak, but this may not work due to the permissions of the user running Apache. By default Apache runs as www-data:www-data (or apache:apache or www:www on some distros). As a quick fix/test you can set Apache to run as a user that has permission to access the audio device on the machine, by modifying your /etc/apache2/apache2.conf (or /etc/httpd/httpd.conf) file to have:
User USER_THAT_CAN_PLAY_AUDIO
Group USER_THAT_CAN_PLAY_AUDIO
Warning: this is not secure and is not intended to be a permanent solution!
This is how I would do it:
#!/bin/sh
# CGI response headers; the empty echo prints the blank line that ends them
echo "Content-type: text/plain"
echo ""
echo "Server is playing sine.wav!"
# -q suppresses aplay's status output so it doesn't leak into the response
aplay -q sine.wav
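For this to work, the script must be executable and live in a CGI-enabled directory; for example (the path is a Debian-style assumption, and playsound.sh is a made-up name):
sudo cp playsound.sh /usr/lib/cgi-bin/ && sudo chmod +x /usr/lib/cgi-bin/playsound.sh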
I stumbled over this old question while looking for a way to solve the same problem: to have my personal Apache webserver warn me when someone makes a specific request (in my case, a call for chat, without the need to have any IM running).
The solution below is what I use on Slackware 14.1: according to your distro YMMV.
Launch visudo
Add the line TheUserRunningApache ALL=(ALL) NOPASSWD: /usr/bin/play (TheUserRunningApache is the user name your Apache runs as)
In the PHP page where you want to play a sound, add this line: system("sudo /usr/bin/play SOUND.WAV");
If you don't want to give Apache access to the /usr/bin folder, even if limited just to play, you can copy the sox executable (the program behind /usr/bin/play) elsewhere, but you'll have to modify the last two instructions above accordingly.
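Putting it together as a bash CGI instead of PHP would look something like this (SOUND.WAV and the script location are assumptions; /usr/bin/play must match the visudo line above):
#!/bin/bash
echo "Content-type: text/plain"
echo ""
echo "Playing SOUND.WAV on the server"
# allowed without a password by the visudo rule above
sudo /usr/bin/play SOUND.WAV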