fabric appears to start apache2 but doesn't - ssh

I'm using Fabric to remotely start a micro AWS server, install git, clone a git repository, adjust the Apache config, and then restart the server.
If at any point in the fabfile I issue either
sudo('service apache2 restart') or run('sudo service apache2 restart'), or a stop and then a start, the command apparently runs and I get the response indicating Apache has started, for example:
[ec2-184-73-1-113.compute-1.amazonaws.com] sudo: service apache2 start
[ec2-184-73-1-113.compute-1.amazonaws.com] out: * Starting web server apache2
[ec2-184-73-1-113.compute-1.amazonaws.com] out: ...done.
[ec2-184-73-1-113.compute-1.amazonaws.com] out:
However, if I try to connect, the connection is refused, and if I ssh into the server and run
sudo service apache2 status it says that "Apache is NOT running".
Whilst sshed in, if I run
sudo service apache2 start, the server starts and I can connect. Has anyone else experienced this? Or does anyone have any tips as to where I could look, in log files etc., to work out what has happened? There is nothing in apache2/error.log, syslog, or auth.log.
It's not that big a deal, I can work round it. I just don't like such silent failures.

Which version of Fabric are you running?
Have you tried changing the pty argument? (Try changing shell too, but it should not influence things.)
http://docs.fabfile.org/en/1.0.1/api/core/operations.html#fabric.operations.run
You can set the pty argument like this:
sudo('service apache2 restart', pty=False)

Try this:
sudo('service apache2 restart', pty=False)
This worked for me after running into the same problem. I'm not sure why this happens.

This is an instance of this issue, and there is an entry in the FAQ that has the pty answer. Unfortunately, CentOS 6 doesn't support pty-less sudo commands, and I didn't like the nohup solution since it killed output.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, so background processes are put in their own process group. As a result, they are not terminated when the command ends.
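For reference, a minimal fabfile sketch of that workaround (Fabric 1.x; the task and service names are illustrative):
from fabric.api import sudo, task

@task
def restart_apache():
    # "set -m" turns on job control, so the daemon lands in its own
    # process group and survives the end of the SSH session.
    sudo('set -m; service apache2 restart')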

When connecting to your remotes as a user with sufficient privileges (such as root), you can manage system services as shown below:
from fabtools import service
service.restart('apache2')
https://fabtools.readthedocs.org/en/0.13.0/api/service.html
P.S. This requires installing fabtools:
pip install fabtools
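For example, a minimal task using it might look like this (a sketch; the task name restart_web is illustrative):
from fabric.api import task
from fabtools import service

@task
def restart_web():
    # fabtools issues the underlying service command as root for you
    service.restart('apache2')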

A couple more ways to fix the problem:
You could run the fab task with the --no-pty option:
fab --no-pty <task>
Inside the fabfile, set the global setting env.always_use_pty to False before your task code executes:
env.always_use_pty = False
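For example (a sketch; the task name is illustrative):
from fabric.api import env, sudo, task

env.always_use_pty = False

@task
def restart_apache():
    # With no pty allocated, the restarted daemon is not hung up
    # when the remote shell exits.
    sudo('service apache2 restart')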

Using pty=False still didn't solve it for me. The solution that ended up working for me was a double nohup, like so:
run.sh
#!/usr/bin/env bash
# Inner nohup: detach the java process so it survives the shell exit;
# the outer nohup in the fabfile captures all output in nohup.out.
nohup java -jar myapp.jar 2>&1 &
fabfile.py
...
sudo("nohup ./run.sh &> nohup.out", user=env.user, warn_only=True)
...

Related

Not able to open the deck UI for spinnaker

I installed Spinnaker using the command
bash <(curl --silent https://spinnaker.bintray.com/scripts/InstallSpinnaker.sh)
on a local Ubuntu machine.
After installation, I am not able to connect to the Deck UI of Spinnaker at the URL http://localhost:9000.
Check the logs in /var/log/apache2 for errors, and /etc/apache2/ports.conf to see if it is listening on 127.0.0.1:9000.
The install script should have made those changes for you, but maybe you had a permissions issue or some other kind of local system policy preventing the installation from working properly.

Apache script config with Loggly

I am trying to configure Loggly for Apache on my Ubuntu machine.
What I have done is:
curl -O https://www.loggly.com/install/configure-apache.sh
sudo bash configure-apache.sh -a XXXXXX -u XXXXXX
After entering the last line, it says:
ERROR: Apache logs did not make to Loggly in time. Please check network and firewall settings and retry.
Manual instructions to configure Apache2 is available at https://www.loggly.com/docs/sending-apache-logs/. Rsyslog troubleshooting instructions are available at https://www.loggly.com/docs/troubleshooting-rsyslog/
Any idea why it's showing this and how to solve it?
This is likely a network issue, a delay in sending the logs, or even an issue with the script. Check out the manual instructions at https://www.loggly.com/docs/sending-apache-logs/, which you can follow to verify that the script created the configuration files correctly.

Jenkins - j_acegi_security_check

I am trying to set up Jenkins, but I can't get the authentication to work. I am running Jenkins on Tomcat 6 on CentOS 6.2. I enabled logging in, and everything goes fine until I try to log in. After giving my credentials and pressing login, Tomcat gives me an error:
"HTTP Status 404 - The requested resource () is not available." on http://myserver:8080/jenkins/j_acegi_security_check
By googling I found this:
https://issues.jenkins-ci.org/browse/JENKINS-3761
Two suggested fixes I have found:
Run Jenkins on Tomcat instead of running the standalone version - I am already doing so.
Edit a file: WEB-INF/security/SecurityFilters.groovy - I tried to edit it, but I can't get it to change anything.
Is there something I could do to make this work?
I spent ages wrestling with this one: make sure a Security Realm is set when you are choosing your Authorization method in Jenkins.
That is, in Manage Jenkins → Configure Global Security, select an option in the Security Realm list.
For example, you may have forgotten to select a Security Realm, as specified here:
https://wiki.jenkins-ci.org/display/JENKINS/Standard+Security+Setup
In case you have locked yourself out, you can edit the Jenkins config.xml file to change the <useSecurity>true</useSecurity> node value to false, following the instructions here:
https://wiki.jenkins-ci.org/display/JENKINS/Disable+security
As mentioned on the bug page:
The error was caused by a proxy pass rule "/jenkins http://localhost:9080/jenkins/", which led to the incoming (Jenkins) request "/jenkins//j_acegi_security_check" (double //). So the login page was rejected with 404 (while all other pages were served).
Make sure your /jenkins ProxyPass does not end with a trailing slash in the destination URL.
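For example, assuming the back end from the quoted rule, the fix is to drop the trailing slash from the destination:
# Broken: /jenkins/j_acegi_security_check is rewritten to /jenkins//j_acegi_security_check
ProxyPass /jenkins http://localhost:9080/jenkins/
# Fixed:
ProxyPass /jenkins http://localhost:9080/jenkins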
I had the same problem with a 404 on the "/jenkins/j_acegi_security_check" page.
Using Jenkins with Tomcat, after a lot of tries to solve it, I came to the following solution. I'm using 18080 as the default port without SSL redirection.
It's related to the redirection, but in this case (using Tomcat) it has to be changed in the Tomcat server configuration:
Look in /conf/server.xml for the following entry:
<Connector port="18080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
Just removing the redirectPort attribute helped for me:
<Connector port="18080" protocol="HTTP/1.1"
connectionTimeout="20000" />
I had the same HTTP 404 on the "/jenkins/j_acegi_security_check" URI problem,
and the same issue as pointed out by pga above: Tomcat was being started as user root.
This was because I had set up Tomcat to restart automatically by creating a startup script in "/etc/init.d/".
Fixed the issue with:
su - tomcatuser -c /cde/pkgs/../tomcat/start.sh
I was facing the same issue on Ubuntu as well as on AIX, where I desperately needed it to work in production settings. I even tried the Tomcat and Apache web servers; still the same issue.
Finally, changing the class loader as follows made it work in stand-alone mode:
java -jar jenkins.war --httpPort=79802 --preferredClassLoader=java.net.URLClassLoader &
By the way, this is the default setting in the standard Jenkins distribution for Ubuntu, which is where I got the clue.
The issue is probably related to packaging, but for now this solution works. Check if it resolves similar issues.
My bookmarked Jenkins login URL was: https://jenkins.foo.com/login?from=%2F
If security is disabled and you hit that URL with any credentials, or blank ones, it brings up the j_acegi error.
Instead, use https://jenkins.foo.com/ and it will take you straight to the dashboard.
I had the same HTTP 404 on the "/jenkins/j_acegi_security_check" URI.
In my case, Jenkins was running on a Tomcat instance started by user 'root'. I stopped Tomcat and started it again as the proper separate application user. Problem solved.
Seeing the downvote, I did the steps again on a fresh server.
There were ** characters, which I removed.
There were missing $ signs for the Tomcat variables, like $TOMCAT_VERSION.
(Both corrected, and it is working; updated on 28.03.2016.)
Disable the security as given below:
http://markunsworth.com/2012/02/13/locked-yourself-out-of-jenkins/
This is for "Unable to log in to Jenkins, and can't disable the login option either",
or
being locked out of the login with Jenkins on Tomcat:
http://xx.xxx.xxx.xxx:8080/jenkins/login?from=/jenkins/, after filling in the user ID and password (which were never set up at all), will always take you to
http://xx.xxx.xxx.xxx:8080/jenkins/j_acegi_security_check
HTTP Status 404 - description: The requested resource is not available.
I had the .war file installed in Tomcat.
It took me a long time to fix this issue.
I had many times completely removed Tomcat and Jenkins (all folders, .jenkins, etc.) and reinstalled, and what not...
Remove both Tomcat and Jenkins completely once again.
The solution is proper use of user and group; let's see how to do it by running the following commands one by one.
You are logged in as a user (e.g. vimal) with sudo permission.
vimal@h123:~$ sudo apt-get update
vimal@h123:~$ BASE_USER=vimal
vimal@h123:~$ sudo chown -Rf $BASE_USER:$BASE_USER /opt/
vimal@h123:~$ USER=apache-tomcat
vimal@h123:~$ GROUP=myjenkins
vimal@h123:~$ TOMCAT_INSTALL_DIR=/opt
vimal@h123:~$ TOMCAT_VERSION=apache-tomcat-8.0.23
vimal@h123:~$ TOMCAT_URL=http://archive.apache.org/dist/tomcat/tomcat-8/v8.0.23/bin/apache-tomcat-8.0.23.zip
For TOMCAT_URL, copy the link you need from the archive (.zip) section of the Tomcat download site.
vimal@h123:~$ mkdir -p $TOMCAT_INSTALL_DIR
vimal@h123:~$ cd $TOMCAT_INSTALL_DIR
vimal@h123:~$ wget $TOMCAT_URL
vimal@h123:~$ unzip -q $TOMCAT_VERSION.zip
vimal@h123:~$ rm $TOMCAT_VERSION.zip
Before running the commands below, you need to have JAVA_HOME set (e.g. JAVA_HOME="/usr/lib/jvm/java-8-oracle/") by adding it to /etc/environment:
sudo nano /etc/environment
vimal@h123:~$ sudo chmod +x $TOMCAT_INSTALL_DIR/$TOMCAT_VERSION/bin/*.sh
vimal@h123:~$ $TOMCAT_INSTALL_DIR/$TOMCAT_VERSION/bin/catalina.sh start
vimal@h123:~$
vimal@h123:~$ cd $TOMCAT_INSTALL_DIR/$TOMCAT_VERSION/webapps/
vimal@h123:~$ wget http://mirrors.jenkins-ci.org/war-stable/latest/jenkins.war
Wait a couple of minutes till Jenkins is fully loaded; it needs 2 GB of memory.
Then go to http://xx.xxx.xxx.xxx:8080/jenkins/ in the browser and it will work...
Took me one day to find the solution.
Here is how I resolved this issue:
# service tomcat status
tomcat start/running, process 996
# service tomcat stop
tomcat stop/waiting
# service jenkins status
Jenkins Continuous Integration Server is not running
# service jenkins restart
* Restarting Jenkins Continuous Integration Server jenkins [ OK ]
# service tomcat start
tomcat start/running, process 3839
# service jenkins status
Jenkins Continuous Integration Server is running with the pid 3694
Refresh your browser and Jenkins should be up and running.
Hope this helps!

S3 Error: The difference between the request time and the current time is too large

I get the error "The difference between the request time and the current time is too large" when calling the method AmazonS3.ListObjects:
ListObjectsRequest request = new ListObjectsRequest() {
    BucketName = BucketName, Prefix = fullKey
};
using (ListObjectsResponse response = s3Client.ListObjects(request))
{
    bool result = response.S3Objects.Count > 0;
    return result;
}
What could it be?
The time on your local box is out of sync with the current time. Sync up your system clock and the problem will go away.
For those using Vagrant, a vagrant halt followed by vagrant up worked for me.
The clock is out of sync.
I followed the steps in this post to get it working again, but I also had to run the following commands:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
If at any time you get a message saying the NTP socket is still in use, stop it with sudo /etc/init.d/ntp stop and re-run your command.
I had the same error and I'm using Docker for Mac. Simply restarting Docker worked for me.
On WSL 2 or any Debian-based Linux (Ubuntu, Mint, ...):
Check date:
date
Now run:
sudo apt install ntpdate
sudo ntpdate time.nist.gov
Output example:
18 Feb 14:27:36 ntpdate[24008]: step time server 132.163.97.4 offset 1009.140848 sec
Check date again:
date
Alternatively, look for the correctClockSkew option in the AWS SDK config and set it to true.
For those using Docker on Windows, try restarting the Docker engine via Settings → Reset → Restart Docker.
In case anyone finds this using Laravel and Homestead, simply run
homestead halt
followed by
homestead up
and you're good to go again.
2021 answer:
AWS.config.update({
  accessKeyId: 'xxx',
  secretAccessKey: 'xxxx',
  correctClockSkew: true
});
As others have said, your local clock is out of sync with AWS. You can keep it synced to Amazon's servers directly using NTP, so you won't have to worry about clock drift now or in the future.
Note: The below instructions are for *nix users. I've added a comment with how you might do it in Windows, but as a non-Windows user I can't verify their accuracy.
To install NTP, simply choose one of the following, depending on your distribution:
apt-get install ntp
or
yum install ntp
etc.
Configure NTP to use Amazon servers, like so:
vim /etc/ntp.conf
And in it, comment out the default servers and add these:
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
Restart ntp service:
sudo service ntp restart
Source: https://allcloud.io/blog/how-to-fix-amazon-s3-requesttimetooskewed/
And a more general article on keeping your time synchronized with NTP:
https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-12-04
This can also be caused by using async/await with the construction of the request object outside the task and the actual call to AWS inside the task. If there are lots of tasks running and the task isn't scheduled in time, or there is some other operation delaying the actual call to AWS, this exception may be thrown. This is more common than you might guess because the default task scheduler does not process tasks in FIFO order, resulting in starvation for some tasks, especially under heavy load.
This reset my system clock correctly on OS X; S3 uploads using the JS SDK work for me now in local dev:
ntpdate us.pool.ntp.org
If this problem is on your localhost on Windows 10:
Set "Set time automatically" to ON and "Set time zone automatically" to ON.
This solved my problem.
If you get this error in Windows, follow these steps to solve your problem. Change your local time setting:
Step 1: Click on "Change date and time settings".
Step 2: In the Date and Time window that pops up, click on the Internet Time tab.
Step 3: Click on "Change settings".
Step 4: From the Server drop-down, select time.nist.gov, or check this website.
Step 5: Click OK.
Restart your console and check. It works.
For those facing the same problem on Microsoft WSL 2 Ubuntu, the only workarounds right now are:
sudo hwclock -s
or
wsl --shutdown
The clock offset occurs after waking Windows from sleep. Keep an eye on https://github.com/microsoft/WSL/issues/5324 for a fix from Microsoft.
If you're working with a VM, just restarting the VM worked for me.
If you are using VirtualBox, the time in the virtual machine is synced with the time of the real machine; just fixing the time in the virtual machine will not fix the problem.
I had this error because my local machine's time and timezone were set incorrectly. Changing them to the correct time and timezone worked for me.
I had the same problem on Windows 10 with Docker. You should run these commands step by step:
docker run --rm --privileged alpine hwclock -s
and again:
docker run --rm --privileged alpine hwclock -s
As the last command, run MinIO in Docker (don't forget to set your username, password, and timezone):
docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=yourUserName" -e "MINIO_SECRET_KEY=YourPassword" -e "TZ=Europe/Berlin" -v /etc/localtime:/etc/localtime:ro minio/minio server /data
It is a little crude, but this worked for me.
I did a curl to the S3 server:
curl s3.amazonaws.com -v
Then I got this:
* Trying 52.216.141.158...
* TCP_NODELAY set
* Connected to s3.amazonaws.com (52.216.141.158) port 80 (#0)
> GET / HTTP/1.1
> Host: s3.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: q2wUOf5ZC7iu2ymbRWUpZaM6GpPLLf/irrntuw/JNB7QYxDzQvcLHQbsbF2dp5zT8rBrGwqnOz0=
< x-amz-request-id: T4H1W4WKBE3F39RM
< Date: Sat, 09 Oct 2021 19:21:24 GMT
< Location: https://aws.amazon.com/s3/
< Server: AmazonS3
< Content-Length: 0
<
* Connection #0 to host s3.amazonaws.com left intact
* Closing connection 0
From it I got this date:
Sat, 09 Oct 2021 19:21:24 GMT
and set the date in Ubuntu:
sudo date --set "Sat, 09 Oct 2021 19:21:24 GMT"
My code stopped throwing exceptions.
Now I have a script that does this periodically every month.
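A sketch of what such a script might look like in Python (illustrative; it must run as root to set the clock, and it queries the same endpoint curled above):
import subprocess
import urllib.request

# Ask the S3 endpoint for its idea of the current time.
req = urllib.request.Request('http://s3.amazonaws.com', method='HEAD')
with urllib.request.urlopen(req) as resp:
    server_date = resp.headers['Date']  # e.g. 'Sat, 09 Oct 2021 19:21:24 GMT'

# Set the local clock to match, as with "date --set" above.
subprocess.run(['date', '--set', server_date], check=True)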
To get rid of this problem, you have to adjust the client's clock so that the timestamp difference is at most 15 minutes. Also set the standard time and zone for your system.
I had the exact same error message, but it's not the same cause as any of the others above.
In my case I have a React browser app doing something like this:
import { Storage } from '@aws-amplify/storage'
...
await Promise.all(files.map(file => Storage.put(...)))
I am uploading a lot of files over a slow network connection.
With this code, the promises are all started at once, so the request time for all the requests is the same, but because the browser (or amplify?) is throttling the number of concurrent connections, the later requests don't actually hit the server until more than 15 minutes after they were created.
The solution is to limit the concurrency of the promise creation - e.g. use something like bluebird's Promise.map with the concurrency option, as sketched below.
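A minimal sketch of that approach, assuming bluebird is installed (the key argument and the concurrency value are illustrative):
import Promise from 'bluebird'
import { Storage } from '@aws-amplify/storage'

async function uploadAll(files) {
  // At most 4 uploads in flight, so each request is signed shortly
  // before it is actually sent and its timestamp stays fresh.
  return Promise.map(files, file => Storage.put(file.name, file), { concurrency: 4 })
}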
Using ntp may not work on all versions of your Linux-based server (e.g. an out-of-date Ubuntu server version that is no longer supported, which will block you from downloading ntp if it is not already installed).
If this is your situation, you can set independent time zones for your Linux VM:
https://community.rackspace.com/products/f/25/t/650
After you do this you may need to reset the time/date. Instructions for doing this are in this article:
http://codeghar.wordpress.com/2007/12/06/manage-time-in-ubuntu-through-command-line
If you are in 2016 and in Istanbul, here is a weird situation: Turkey decided not to switch to winter time standards. Anyway, set your local timezone to Moscow, then restart your machine.
I ran into this issue running Jet (Codeship) and Terraform on MacOS using Docker for Mac Beta channel 1.13.1-beta42.
Failed to read state: Error reloading remote state: RequestTimeTooSkewed: The difference between the request time and the current time is too large.
status code: 403, request id: 9D32BA2A5360FC18
This was resolved by restarting Docker.
I've just started getting this error, and syncing my clock doesn't help. (I've spent 2 hours syncing it to every timeserver I can find, including the AWS servers, but nothing makes a difference.)
Exactly the same thing started happening a year ago on Dec 31 2017. In that case, rebooting my system, and rebuilding my server (that uses the aws java sdk) fixed it. I don't know why. I assumed that AWS had some end-of-year timezone peculiarity. It's also possible that while I was doing these things, AWS timeservers fixed themselves. I have no way to test that hypothesis.
Now, the same thing has suddenly started to happen on Dec 30, 2018. It's not right at year-end, but close enough to seem suspicious. (Never got this error except on these dates.) Rebooting and rebuilding isn't helping this time.
My dev environment on this box is Windows 10 under Parallels. Nothing else on my system has changed - as I've double-checked by rolling back to prior Parallels snapshots. The clocks on both my host MacOS and the virtual Windows 10 are correct.
I'm suspecting an AWS bug.
Rebooting my Windows server fixed it for me.
The time was identical to within ~1 second of the site time.in, so it wasn't off.
I was running into the same issue on my Mac. When I moved to a different timezone (PST to IST), somehow OS X was not picking up the timezone and time change automatically, so I had to set the two manually, and that caused a lag of some 15-20 seconds on my laptop. After setting the automatic sync, the time got synced and the S3 copy command started working.
You can use this tool (chrony) to keep your local system's time in sync with AWS.
To synchronize time:
sudo yum -y install chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
This issue generally occurs when the s3cmd client machine's time is not synced with the server.
Check the time on both machines.
Either sync the time between them using the date command:
Client# sudo date --set="string"
Client# sudo date --set="15 MAY 2011 1:40 PM"
or install chrony and restart its service on both machines:
Client# sudo apt-get install chrony
Client# vi /etc/chrony/chrony.conf
pool ntp-server iburst
Client# sudo systemctl restart chronyd

Why does running "apachectl -k start" not work, but "sudo apachectl -k start" does?

I'm working on my OS X with the default installation of Apache. For some reason, when I run the apachectl command without sudo, I get "no listening sockets available / unable to open logs". I'm guessing this is a permissions thing, so can someone help me out? I'm using Apache 2.2.
Also, a side question: where is the Apache script file that is basically the "exe" that Linux executes? I'm trying to integrate my server with Aptana Studio, and it requires the path to the Apache install. I know in Windows this would be "C:\path\to\httpd.exe", but I don't know how this works in Linux.
Is your server listening on port 80? (Usually) only root is allowed to open ports below 1024. Hence the need for sudo.
As you can see, lots of people wonder how to get around this. One possible solution is to perform port-forwarding on your router. (I'm assuming here that you are behind a router...). Then incoming connections on port 80 can be forwarded to e.g. port 8080. Thus only locally does one need to connect to port 8080. (There may be more elegant solutions... somebody else will post them.)
I think generally (on both OS X and Linux - I'm not sure which one you're referring to) the httpd binary is located at: /usr/sbin/httpd
If you need to be able to restart Apache, and you can't do so as root (for whatever reason), then you may have to settle for a non-'well known' port, for example:
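A hypothetical httpd.conf change (the port number is illustrative):
# Listen on an unprivileged port (>= 1024) so root is not required
Listen 8080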
Try this (with PHP):
$a = shell_exec('sudo -u root -S /etc/init.d/apache2 restart < /home/$user/passfile');
The password should be stored in passfile.