sending email with PHP sendmail not working for primary domain - apache

example.com has a web server and a mail server.
Email sent from the web server to any email address (except @example.com) works.
Email sent to @example.com works from any other source.
The web server is set up to use webmail. It runs Ubuntu 12.04 with Apache and PHP.
Any help appreciated. Thanks.
=== edit
This fixed the problem, thanks
https://serverfault.com/questions/65365/disable-local-delivery-in-sendmail/128450#128450

Sendmail on the web server was trying to deliver the email locally. When I looked at the log I saw it was returning the following error:
550 5.1.1 recipient@example.com... User unknown
After following these steps the email was routed to the correct mail server. My setup is Ubuntu 12.04 with PHP 5, Apache 2 and Sendmail.
Edit the sendmail config file on the Apache server:
sudo nano /etc/mail/sendmail.mc
At the end of the file add the following lines of code to handle email correctly:
define(`MAIL_HUB', `example.com.')dnl
define(`LOCAL_RELAY', `example.com.')dnl
Save the file and exit.
Update the sendmail setup in the command prompt:
sudo sendmailconfig
Follow the prompts; I answered yes to everything.
Restart sendmail:
sudo service sendmail restart
Try sending the email again. It should work now.
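To verify the routing from the web server itself, you can trigger PHP's mail() directly from the shell. This is a minimal sketch (the recipient address is a placeholder) that exercises the same sendmail path the web application uses:
php -r "var_dump(mail('recipient@example.com', 'Sendmail test', 'Routed via MAIL_HUB?'));"
tail -n 20 /var/log/mail.log
mail() returns true once the message is handed to sendmail; the mail log then shows whether it was relayed to the mail server or bounced locally with "User unknown".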

Related

Why can't I see https webpage after using sudo certbot --apache

I've just done a fresh install of Ubuntu 20.04 and followed the Digital Ocean instructions to get my apache server up and running:
https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-20-04
That worked fine for HTTP traffic. I then used the Digital Ocean instructions (which I already knew, but followed anyway) to set up SSL (HTTPS) access:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04
I selected the option to redirect all traffic to https. I opened my firewall using sudo ufw allow 'Apache Full'.
But I am unable to see my sites - the browsers just timeout. I have tried disabling ufw just to see, and nope, nothing.
SSL Labs just gives me an "Assessment failed: Unable to connect to the server" error.
I also ran https://check-your-website.server-daten.de/?q=juglugs.com
and it timed out as well.
I have deleted the Let's Encrypt stuff and run through it again three times with the same result, and now I'm stuck...
Everything I've searched points to a firewall error, but as I've said, I've disabled that and have the same result. The router settings have not been changed since I did my fresh Ubuntu install.
Any help gratefully received.
Thanks in advance.
on8tom answered this one for me: in setting up the new build of Ubuntu, the local IP address of the Apache server had changed, and my Virgin Media Hub only had port 443 forwarded to the old IP address.
Many thanks for pointing me at that (but I should have checked that before posting this - kicking myself!)
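For anyone debugging the same symptom, a few shell checks help separate a local Apache problem from a router or port-forwarding one (a sketch; juglugs.com stands in for your own domain):
sudo ss -tlnp | grep ':443'    # is Apache actually listening on 443?
ip addr show                   # does the server's LAN address match the router's forwarding rule?
curl -vI https://juglugs.com   # run this from outside the LAN, e.g. a phone hotspot
If the first two look right but the external curl times out, the problem is almost certainly the router, as it was here.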

Webmin sends a fatal error when I try to start the ProFTPD Server

I'm trying to start the ProFTPD server but I receive the following message:
Starting proftpd (via systemctl): proftpd.service
Job for proftpd.service failed because the control process exited with error code.
See "systemctl status proftpd.service" and "journalctl -xe" for details.
failed!
And when I dig for more information, I see:
fatal: TLSRSACertificateFile: '/etc/ssl/certs/proftp.crt' does not exist on line 8 of '/etc/proftpd/conf.d/virtualmin.conf'
I have to say I didn't install the ProFTPD server myself; the module came with the Webmin installation.
I hope you can help me understand why the proftp.crt file does not exist and how I can fix this issue.
Thanks.
I don't know Webmin (I used it once, approximately 20 years ago..), but if it is able to manage ProFTPD for you, there should at least be options to disable TLS for the FTP server and to change the path to the certificate.
A better alternative than turning SSL/TLS off: copy your certificate for the server to that path (you will have to do something similar for the key, I assume). If you do not have a certificate, you can get one or create your own self-signed one (not sure whether Webmin can help you with that, but on the command line it's pretty simple to create one with openssl).
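For example, a self-signed certificate could be generated roughly like this (a sketch: the certificate path comes from the error message above, while the key path and subject are placeholders that must match the TLSRSACertificateKeyFile setting in your ProFTPD config):
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example.com" \
  -keyout /etc/ssl/private/proftp.key \
  -out /etc/ssl/certs/proftp.crt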
Come on...
1 - Check if your plugin is active in Virtualmin.
1.1 - In the shell, check if ProFTPD is installed:
== ProFTPD install
UBUNTU = apt install proftpd -y
CENTOS = yum install proftpd -y
2 - Check if your domain has SFTP enabled.
3 - Create an SSL certificate within the domain; once it is created, click the option to use SSL for SFTP.
VIRTUALMIN // Domain >> Server Configuration >> SSL
4 - Create an SSL certificate for the domain you use to access Webmin, and use this SSL for Webmin // Virtualmin.
5 - Check the proftpd settings via Webmin.
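Before restarting, it may also be worth confirming that the files the error complains about exist and that the configuration parses (a sketch; proftpd's -t flag runs a configuration test):
ls -l /etc/ssl/certs/proftp.crt /etc/proftpd/conf.d/virtualmin.conf
sudo proftpd -t
sudo systemctl restart proftpd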
Good luck! Send news about your progress ...

Exim v4.91: Can't enable IGNORE_SMTP_LINE_LENGTH_LIMIT = 1 macro to allow long lines

Ever since upgrading to Exim 4.91, legitimate email notifications are being rejected with an error "T=remote_smtp: message is too big (transport limit = 1)".
This appears to be related to a new ACL in Exim, described here, that blocks messages containing lines longer than 998 octets:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=828801
A macro named IGNORE_SMTP_LINE_LENGTH_LIMIT was supposedly added in v4.88~RC6-2; setting it to 1 disables this ACL.
In my configuration, I have a server that sends email notifications. This server uses another server as a smarthost. I am running CentOS and have a config at /etc/exim/exim.conf on both servers.
I can't seem to disable this ACL no matter what I do.
I have added IGNORE_SMTP_LINE_LENGTH_LIMIT=1 to the top of both servers' exim.conf files and continue to get errors.
Any suggestions on what to do?
I use "one big config-file" (not split-config), and adding
IGNORE_SMTP_LINE_LENGTH_LIMIT=1
to /etc/exim4/exim4.conf.localmacros works.
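On a Debian-style setup that amounts to (a sketch; tee -a appends the macro, and a restart picks it up):
echo 'IGNORE_SMTP_LINE_LENGTH_LIMIT=1' | sudo tee -a /etc/exim4/exim4.conf.localmacros
sudo systemctl restart exim4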
My configuration is also split across multiple files and uses a smarthost. Following these instructions, it works:
Create a new file in the acl directory with nano /etc/exim4/conf.d/acl/00_local and put this in it:
IGNORE_SMTP_LINE_LENGTH_LIMIT=1
Reload the configuration with systemctl reload exim4, or restart the service with systemctl restart exim4.
Send an email and check the Exim logs in /var/log/exim4/mainlog.
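Condensed, the whole procedure looks like this (a sketch assuming Debian's split-config layout; update-exim4.conf, which regenerates the combined configuration, is normally run for you on reload, but running it explicitly does no harm):
echo 'IGNORE_SMTP_LINE_LENGTH_LIMIT=1' | sudo tee /etc/exim4/conf.d/acl/00_local
sudo update-exim4.conf
sudo systemctl reload exim4
sudo tail -f /var/log/exim4/mainlog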

Why does the ownCloud 9.0.1 install on shared hosting fail?

I'm trying to install ownCloud 9.1 on shared hosting (kreativmedia) but I get the following error:
Error
ownCloud is NOT installed
download of ownCloud source file failed.
SSL: certificate subject name '*.owncloud.com' does not match target host name 'download.owncloud.org'
I've tried changing these two options in the setup-owncloud.php file to FALSE:
if (Setup::isCertInfoAvailable()){
    curl_setopt($ch, CURLOPT_CERTINFO, TRUE);
}
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, TRUE);
but I got the same error.
I then tried downloading the files via FTP and running the installer, but it failed with this error:
Error
ownCloud is NOT installed
The selected folder seems to already contain a ownCloud installation. - You cannot use this script to update existing installations.
I have no admin rights on this server, just a Plesk 12 access. Any idea?
I have the same problem. I opened the script and found the URL https://download.owncloud.org/download/community/owncloud-latest.zip
I tried it in a web browser and it downloads fine. I also tried it without HTTPS.
So open the file setup-owncloud.php, go to line 139 (or search for https://download.owncloud.org/download/community/owncloud-latest.zip) and replace https with http. Upload the modified script to the server and try again.
It worked for me.
It looks like you caught a temporary issue with the SSL certificate on download.owncloud.org:
'*.owncloud.com' does not match target host name 'download.owncloud.org'
Right now everything is fine with https://download.owncloud.org, but it forwards to owncloud.org.
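If it happens again, the certificate actually being served can be inspected from any shell (a sketch; openssl prints the subject and issuer of the presented certificate):
echo | openssl s_client -connect download.owncloud.org:443 -servername download.owncloud.org 2>/dev/null | openssl x509 -noout -subject -issuer
A subject of *.owncloud.com on a host named download.owncloud.org would reproduce exactly the mismatch quoted above.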

Vagrant share producing a 400 bad request

I'm using Vagrant with apache2 and specifically the command
vagrant share --https 443
It all starts fine and provides a URL. When I access that URL I'm presented with a 400 error:
Bad Request
Your browser sent a request that this server could not understand.
Apache/2.4.12 (Ubuntu) Server at *.vagrantshare.com Port 443
I have been accessing the Vagrant machine over HTTPS just fine, but it doesn't seem to work with vagrant share.
This is a known Vagrant Share bug: https://github.com/webdevops/vagrant-docker-vm/issues/51
The only workarounds I've seen discussed are to use a custom domain or to use another product entirely (e.g. ngrok) to create the share. See the bug discussion here: https://github.com/mitchellh/vagrant/issues/5493#issuecomment-159792794
Vagrant Share docs for custom domains are here: https://atlas.hashicorp.com/help/vagrant/shares/custom-domains
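As a concrete example of the ngrok route, something like this on the host can stand in for vagrant share (a sketch assuming ngrok is installed and the guest's HTTPS port is forwarded to the host's port 443):
ngrok http https://localhost:443
ngrok then prints a public URL that tunnels to the local HTTPS service, sidestepping the vagrantshare.com bug entirely.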