I have a VPS running CentOS 6, and with the new Apple requirements for HTTPS access I was looking at using Certbot. The problem is that when I go through the setup process, the following error pops up:
The apache plugin is not working; there may be problems with your
existing configuration. The error was: NoInstallationError('Cannot
find Apache control command apachectl',)
When I run the following
find / -name apachectl
I get
/home/cpeasyapache/src/httpd-2.4/support/apachectl
/usr/local/apache.backup/bin/apachectl
/usr/local/apache/bin/apachectl
Is there a way I can alter the config so that the system can use apachectl? The system came pre-built from UK2.
Thanks
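One hedged workaround, assuming a reasonably recent Certbot: the apache plugin accepts --apache-ctl to point it at a non-standard control script, and the webroot plugin avoids apachectl altogether. The webroot path and domain below are placeholders for your account's document root and site:
# Point the apache plugin at the EasyApache apachectl found above
certbot --apache --apache-ctl /usr/local/apache/bin/apachectl
# Or skip the apache plugin entirely and validate via webroot
certbot certonly --webroot -w /home/youruser/public_html -d example.com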
Thank you for taking the time to help. I am facing an issue with my Apache server, and the story goes like this:
I was running an Ubuntu 18.04 server and my SSL certificate (a Let's Encrypt certificate obtained through Certbot) expired. When I ran the command certbot renew, it gave me errors relating to DNS. I then thought it would be a good idea to simply delete the existing certificate and install a new one, so I googled how to delete an SSL certificate using Certbot and learned about sudo certbot delete. It didn't work as expected: when I restarted the server, Apache didn't start, and when I ran apache2ctl configtest it returned an error saying:
AH00526: Syntax error on line 20 of /etc/apache2/sites-enabled/000-default-le-ssl.conf:
SSLCertificateFile: file '/etc/letsencrypt/live/tomebox.in/fullchain.pem' does not exist or is empty
Action 'configtest' failed.
The Apache error log may have more information.
Can anyone please help me understand and resolve the issue so I can get my website back to normal?
You'll need to edit the /etc/apache2/sites-enabled/000-default-le-ssl.conf file to change the file being referenced, as you have deleted the certificate it points to. You can create a new certificate to reference by following the Certbot instructions, or delete 000-default-le-ssl.conf altogether if you are no longer using SSL.
To edit the file, navigate to the directory in SSH using
cd /etc/apache2/sites-enabled/
and then edit the file as root using
sudo nano 000-default-le-ssl.conf
Ctrl+X will exit and give you the option to save when done. You may want to restart apache2 afterwards:
sudo service apache2 restart
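If you still need SSL, one possible sequence (a sketch, assuming the Apache plugin works on your setup and tomebox.in still points at this server) is to disable the broken SSL vhost so Apache can start, then let Certbot issue a fresh certificate:
sudo a2dissite 000-default-le-ssl
sudo service apache2 restart
sudo certbot --apache -d tomebox.in
Certbot should then recreate /etc/letsencrypt/live/tomebox.in/ and set up the HTTPS virtual host again.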
I'm trying to start the ProFTPD server but I receive the following message:
Starting proftpd (via systemctl): proftpd.service
Job for proftpd.service failed because the control process exited with error code.
See "systemctl status proftpd.service" and "journalctl -xe" for details.
failed!
And when I look for more information about it, I get:
fatal: TLSRSACertificateFile. '/etc/ssl/certs/proftp.crt' does not exists on line 8 '/etc/proftpd/conf.d/virtualmin.conf/'
I have to say I didn't install the ProFTPD server myself; this module came with the Webmin installation.
I hope you can help me understand why proftp.crt does not exist and how I can fix this issue.
Thanks.
I don't know Webmin (I used it once, approximately 20 years ago...), but there should at least be options to disable TLS for the FTP server and to change the path to the certificate, if Webmin is able to manage ProFTPD for you.
A better alternative than turning SSL/TLS off: copy your certificate for the server to that path (you will have to do something similar for the key, I assume). If you do not have one, you can get one or create your own self-signed certificate (I'm not sure whether Webmin can help you with that, but on the command line it's pretty simple to create one with openssl).
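For example, a minimal sketch of creating a self-signed certificate at the path from the error message (the key path and CN are placeholders; check which key file your ProFTPD configuration expects):
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/ssl/private/proftp.key \
  -out /etc/ssl/certs/proftp.crt \
  -subj "/CN=ftp.example.com"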
Come on...
1 - Check whether your plugin is active in Virtualmin.
1.1 - In the shell, check whether ProFTPD is installed:
== ProFTPD install
UBUNTU = apt install proftpd -y
CENTOS = yum install proftpd -y
2 - Check whether your domain has SFTP enabled.
3 - Create an SSL certificate within the domain; once it is created, click the option to use that SSL certificate for SFTP.
VIRTUALMIN // Domain >> Server Configuration >> SSL
4 - Create an SSL certificate for the domain you use to access Webmin.
And use this SSL certificate for Webmin // Virtualmin.
5 - Check the proftpd settings via Webmin (see the configuration sketch below).
Good luck! Send news about your progress ...
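If you would rather fix it by hand, a sketch of the TLS section of /etc/proftpd/conf.d/virtualmin.conf, assuming a certificate and key already exist at these placeholder paths:
<IfModule mod_tls.c>
    TLSEngine on
    TLSRSACertificateFile /etc/ssl/certs/proftp.crt
    TLSRSACertificateKeyFile /etc/ssl/private/proftp.key
</IfModule>
Then check the configuration and restart:
sudo proftpd -t
sudo systemctl restart proftpd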
I am trying to create a localhost Apache Ambari cluster on CentOS 7. I am using Ambari 2.2.2 binaries downloaded and installed from the Ambari repository with the following commands:
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo
yum install ambari-server
ambari-server setup
ambari-server start
Before starting the server I completed all the necessary preparation steps described in the Hortonworks documentation, including the setup of passwordless SSH, which is a frequent cause of problems according to posts found on the internet. I verified it with
ssh root@localhost
During the creation of the cluster, in the "Install Options" window, I enter the name of the host I want to create (localhost in my case) and have already tried both of the options, which are:
providing the RSA private key directly - in this case the next window simply gets stuck in the "Installing" stage and does not go any further, showing no errors
performing manual registration of hosts.
For the second option I have downloaded and installed ambari-agent
yum install ambari-agent
ambari-agent start
In case of manual host registration I am getting the following error
"Host checks were skipped on 1 hosts that failed to register.".
When I click on "Failed", which according to some posts on the internet is supposed to give a more precise description of the problem, I see the following
"Registering with the server...
Registration with the server failed."
As a result, I don't even know where to start searching for the possible causes of this error.
Ambari cluster nodes need to be configured with a Fully Qualified Domain Name (FQDN), and localhost is not an FQDN. You will need to configure the node with an FQDN and then retry the installation. You could use something like localhost.local, which is an FQDN. This requirement, and how to configure the node to meet it, is documented in the prerequisites. From the HDP documentation:
All hosts in your system must be configured for both forward and reverse DNS.
If you are unable to configure DNS in this way, you should edit the /etc/hosts file on every host in your cluster to contain the IP address and Fully Qualified Domain Name of each of your hosts.
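A minimal sketch of that /etc/hosts approach for a single node (the IP address and FQDN are placeholders):
# Set the FQDN on the node (CentOS 7)
sudo hostnamectl set-hostname ambari-node1.example.com
# /etc/hosts - map the node's IP to its FQDN and short name
192.168.1.10   ambari-node1.example.com   ambari-node1
Then register ambari-node1.example.com instead of localhost in the Install Options window.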
I had the same "Registering with the server... Registration with the server failed." problem just recently.
I found a response on the same topic recommending to take a look at the log file located at /var/log/ambari-agent/ambari-agent.log. From there I was able to see that the hostname had been set up incorrectly during some other phase of the installation (I had something like ambari.hadoop instead of localhost). So I went to /etc/ambari-agent/conf/ambari-agent.ini and fixed it there.
I know that I'm digging up quite an old question, but it seems that compiling all of this in one place might help someone with the same problem.
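For reference, the relevant part of /etc/ambari-agent/conf/ambari-agent.ini looks roughly like this (the hostname value is whatever your setup requires, e.g. the FQDN registered with the server):
[server]
hostname=localhost
Restart the agent afterwards with ambari-agent restart so the change takes effect.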
So I want to set up SSL on HAProxy to make the connection secure. I started off by downloading HAProxy through the app store, but later found out that the installed package doesn't support SSL. So I downloaded HAProxy 1.5.14 and compiled it with USE_OPENSSL=1. When I run haproxy -vv I can see that SSL is enabled.
The issue I am facing is that after I compile and then install it (sudo make install), I am unable to find haproxy.cfg. I don't know where it is, so I am unable to configure it to my requirements.
The installation package I got is from the official HAProxy site, and I would like someone's help. Please advise me on how to solve this issue.
Thank you,
Safiul Hasan
The default config file location is:
/etc/haproxy/haproxy.cfg
You can also search your system for the file with this command:
find / -name 'haproxy.cfg'
If haproxy is already running successfully, you can find out which config file it is using by looking at the command that was used to run it:
ps x | grep haproxy
This will result in output like this:
28548 ? S 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
The part after the "-f" is the path to the config file haproxy is currently using.
There is no default haproxy.cfg file; you have to create it from scratch.
Look for some samples on the internet to find one that fits your needs.
You can put your configuration file anywhere and tell haproxy to use it with the "-f" parameter.
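As a starting point, a minimal sketch of an SSL-terminating configuration (assuming HAProxy 1.5+ built with OpenSSL, a combined certificate+key PEM at a placeholder path, and a backend listening on 127.0.0.1:8080):
global
    daemon
    maxconn 256
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend app
backend app
    server app1 127.0.0.1:8080 check
You can validate the file before starting haproxy with: haproxy -c -f /path/to/haproxy.cfg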
I use Debian 8.0 running Apache 2.4.10, and I am trying to add the OpenID Connect module libapache2-mod-auth-openidc, version 1.6.0.
After installing the module, I enable it with the command sudo a2enmod auth_openidc. This works fine, but when I then restart the Apache server with sudo service apache2 restart, it leads to an error:
"Job for apache2 failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details."
The result of
systemctl status apache2.service
shows an error while starting the server, but no detailed information about the error (code=exited, status=1/FAILURE).
And the result of
journalctl -xn
tells me that there are no journals.
If I disable the auth_openidc module, the Apache server starts again without problems.
Details of the Configuration:
Apache runs with its default settings. I did not change anything!
I did not change the auth_openidc module either.
Can someone explain why Apache no longer starts with the auth_openidc module enabled?
After installing libapache2-mod-auth-openidc you will have to configure some settings before the module can be used successfully. Two of the mandatory settings are OIDCRedirectURI and OIDCCryptoPassphrase. Most probably you'll also have to configure client credentials for your OpenID Connect provider. You can take a look at the sample configurations at: https://github.com/pingidentity/mod_auth_openidc#openid-connect-sso-with-google-sign-in
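For illustration, a minimal sketch of such a configuration (for example in /etc/apache2/mods-available/auth_openidc.conf); the provider, client ID/secret, hostname and paths are placeholders you must replace with your own values:
OIDCProviderMetadataURL https://accounts.google.com/.well-known/openid-configuration
OIDCClientID your-client-id.apps.googleusercontent.com
OIDCClientSecret your-client-secret
OIDCRedirectURI https://www.example.com/protected/redirect_uri
OIDCCryptoPassphrase change-this-random-passphrase
<Location /protected/>
    AuthType openid-connect
    Require valid-user
</Location>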
Errors/warnings about the missing configuration directives should be displayed in: /var/log/apache2/error.log
While we're at it, I would also advise you to use the latest version 1.8.1 from https://github.com/pingidentity/mod_auth_openidc/releases