How to get the env variables set inside an Apache server process

I am using an Apache server and have written a few modules which run when the Apache service is started with apachectl.
I have set certain environment variables in the envvars file at
/usr/local/apache2/bin/envvars
Now I start the httpd process with
/usr/local/apache2/bin/apachectl -k start
This Apache process will spawn child Apache processes. I would like to know which environment variables are set in both of these processes. How can I see that?

OK sorry, I found it:
cat /proc/<pid>/environ
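For anyone else looking: the entries in /proc/<pid>/environ are NUL-separated, so they are easier to read after piping through tr. A quick sketch (it assumes the processes are named httpd; run it as root or as the user owning the processes):
# Dump the environment of every running httpd process, parent and children
for pid in $(pgrep httpd); do
  echo "== PID $pid =="
  tr '\0' '\n' < "/proc/$pid/environ"
done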

Related

Unable to run parallel apache httpd instances

I have created two conf files, httpd.conf and httpd1.conf, on my Apache server in order to run two instances of it.
When I try to start both instances with
httpd -f /etc/httpd/conf/httpd1.conf -k start and
httpd -f /etc/httpd/conf/httpd.conf -k start
only one instance starts. I can run either the first instance or the second, but not both in parallel.
The error I am getting is httpd (pid 51415) already running.
A server should not have any problem running multiple instances of an application.
OK, I found the problem. Every httpd instance stores its process id in a separate file, so each instance needs its own pid file to run. The location of the pid file is defined in httpd.conf (the PidFile directive).
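For example, giving the second config its own PidFile and Listen directive lets both instances run side by side (a sketch; the paths and port here are illustrative, and you may want separate log files too):
# In /etc/httpd/conf/httpd1.conf, override at least:
#   PidFile /var/run/httpd1.pid
#   Listen 8080
httpd -f /etc/httpd/conf/httpd.conf -k start
httpd -f /etc/httpd/conf/httpd1.conf -k start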

How to make changes to httpd.conf of apache running inside DOCKER container and restart apache

I am new to Docker. In our Docker environment, Apache has been installed and is up and running.
Now I need to get into the container, modify httpd.conf, save it, and then restart Apache.
Can you please let me know what needs to be done? I am confused about the 'exec' and 'attach' commands.
No need to attach or exec (which is really a debug feature anyway).
You can use docker cp to copy a local version of your httpd.conf into the container. (That way, you can modify the file from the comfort of your local environment.)
docker cp httpd.conf <yourcontainer_name>:/path/to/httpd.conf
Once that is done, you can send a USR1 signal to ask for a graceful restart (see the docker kill syntax):
docker kill --signal="USR1" <yourcontainer_name>
Replace <yourcontainer_name> with the id or name of the container running Apache.
That will only work if the main process launched by your container is
CMD ["apachectl", "-DFOREGROUND"]
See more at "Docker: How to restart a service running in Docker Container"
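Before relying on the signal approach, you can check what the container's main process actually is (replace the container name as above):
docker inspect --format '{{.Config.Cmd}}' <yourcontainer_name>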
To update the Apache config you need to:
1. Replace the Apache config.
If you have the config folder mapped from outside the container, update the configs outside the container. If your Apache configs are stored inside the container, run something like this:
docker cp httpd.conf YOUR_CONTAINER_NAME:/path/to/httpd.conf
2. Do a graceful Apache restart:
sudo docker exec -it YOUR_CONTAINER_NAME apachectl graceful
Enter the container by opening a bash shell:
docker exec -it containerName bash
I suggest you just reload the Apache config rather than restarting Apache.
But I wouldn't go this route at all; I would instead modify the Dockerfile, then rebuild and rerun the image.
See: https://docs.docker.com/engine/reference/commandline/exec/
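For completeness, the rebuild route looks roughly like this (a sketch; the image and container names are illustrative, and it assumes your Dockerfile COPYs the updated httpd.conf):
docker build -t my-apache .
docker stop containerName && docker rm containerName
docker run -d --name containerName -p 80:80 my-apache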

Execute command on VZ container start and stop

Is there any way of executing a console command on the host machine when a specific VZ container is started (and/or stopped)? Something like /etc/network/interfaces, where you can specify an 'on up' and 'on down' command.
Yes, you can set up a cron job on your VPS to start the network service. You will have to use cron's @reboot directive on the VPS.
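For example, a crontab entry inside the container using the @reboot directive runs a command every time that container starts (the script path here is hypothetical):
# Added via 'crontab -e' inside the VPS:
@reboot /usr/local/bin/on-container-start.sh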

fabric appears to start apache2 but doesn't

I'm using Fabric to remotely start a micro AWS server, install git and a git repository, adjust the Apache config, and then restart the server.
If at any point, from the fabfile I issue either
sudo('service apache2 restart') or run('sudo service apache2 restart'), or a stop and then a start, the command apparently runs and I get the response indicating Apache has started, for example:
[ec2-184-73-1-113.compute-1.amazonaws.com] sudo: service apache2 start
[ec2-184-73-1-113.compute-1.amazonaws.com] out: * Starting web server apache2
[ec2-184-73-1-113.compute-1.amazonaws.com] out: ...done.
[ec2-184-73-1-113.compute-1.amazonaws.com] out:
However, if I try to connect, the connection is refused, and if I ssh into the server and run
sudo service apache2 status, it says that "Apache is NOT running".
While sshed in, if I run
sudo service apache2 start, the server starts and I can connect. Has anyone else experienced this? Does anyone have any tips as to where I could look, in log files etc., to work out what has happened? There is nothing in apache2/error.log, syslog, or auth.log.
It's not that big a deal, and I can work around it. I just don't like such silent failures.
Which version of Fabric are you running?
Have you tried changing the pty argument? (Try changing shell too, but it should not influence things.)
http://docs.fabfile.org/en/1.0.1/api/core/operations.html#fabric.operations.run
You can set the pty argument like this:
sudo('service apache2 restart', pty=False)
Try this:
sudo('service apache2 restart', pty=False)
This worked for me after running into the same problem. I'm not sure why this happens.
This is an instance of this issue, and there is an entry in the FAQ that has the pty answer. Unfortunately, CentOS 6 doesn't support pty-less sudo commands, and I didn't like the nohup solution since it killed output.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, so background processes are put in their own process group; as a result, they are not terminated when the command ends.
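In other words, what the remote shell ends up executing is roughly this (the service name is illustrative):
set -m                 # enable job control: children get their own process group
service apache2 start  # the daemon is no longer in the session's process group,
                       # so it survives when the SSH command ends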
When connecting to your remotes as a user with sufficient privileges (such as root), you can manage system services as shown below:
from fabtools import service
service.restart('apache2')
https://fabtools.readthedocs.org/en/0.13.0/api/service.html
P.S. This requires the installation of fabtools:
pip install fabtools
A couple more ways to fix the problem:
You can run the fab target with the --no-pty option:
fab --no-pty <task>
Inside the fabfile, set the global environment variable always_use_pty to False before your target code executes:
env.always_use_pty = False
Using pty=False still didn't solve it for me. The solution that ended up working for me was a double nohup, like so:
run.sh
#!/usr/bin/env bash
# nohup detaches the app from the terminal; stderr is merged into stdout,
# and with no explicit redirect the output goes to nohup.out
nohup java -jar myapp.jar 2>&1 &
fabfile.py
...
sudo("nohup ./run.sh &> nohup.out", user=env.user, warn_only=True)
...

Why does running "apachectl -k start" not work, but "sudo apachectl -k start" does?

I'm working on OS X with the default installation of Apache. For some reason, when I run the apachectl command without sudo I get "no listening sockets available / unable to open logs." I'm guessing this is a permissions thing, so can someone help me out? I'm using Apache 2.2.
Also, a side question: where is the Apache binary that the OS actually executes? I'm trying to integrate my server with Aptana Studio, and it requires the path to the Apache install. I know in Windows this would be "C:\path\to\httpd.exe", but I don't know how this works in Linux.
Is your server listening on port 80? Usually only root is allowed to open ports below 1024, hence the need for sudo.
Lots of people wonder how to get around this. One possible solution is to set up port forwarding on your router (I'm assuming here that you are behind a router). Incoming connections on port 80 can then be forwarded to e.g. port 8080, so that locally you only need to bind to port 8080. (There may be more elegant solutions; somebody else will post them.)
I think generally (on both OS X and Linux; I'm not sure which one you're referring to) the httpd binary is located at /usr/sbin/httpd.
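You can confirm the location on your own machine, for example:
whereis httpd        # lists where the binary (and man pages) live
/usr/sbin/httpd -v   # prints the Apache version if that is indeed the path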
If you need to be able to restart Apache and you can't do so as root (for whatever reason), then you may have to settle for a non-'well-known' port.
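A sketch of that workaround (the port number is illustrative): point the Listen directive at an unprivileged port, after which sudo is no longer needed for the socket:
# In httpd.conf:
#   Listen 8080
apachectl -k start   # works without sudo for ports >= 1024,
                     # provided the log and pid file paths are writable by your user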
Try this (with PHP):
$a = shell_exec("sudo -u root -S /etc/init.d/apache2 restart < /home/$user/passfile");
The password should be stored in passfile. (Note the double quotes, so that PHP interpolates $user; this assumes $user holds the username.)