I have a client for whom I am trying to set up an SSL certificate via SSL For Free, as I have done 100 times before.
I created the file structure under public_html:
.well-known > pki-validation > <uploaded verification file>
I then tried to download the certificate and got the following failure message:
Warning: Your verification URL is not returning the correct contents
to our verification servers. The URL looks like it is blocking bots
and which inadvertently blocks our servers from receiving the correct
content. Contact your host, a professional developer or admin for
further help with fixing it.
I assumed this was due to the robots.txt file, so I had a look; it included the following:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
So I changed it to temporarily allow all bots using:
User-agent: *
Disallow:
I then retried the process, uploading the new verification file, but I still get the same warning message about the URL blocking bots. Has anyone come across this before and know how to fix it?
I've also tried verification via a DNS TXT record and via FTP, both returning the same failure notices.
Hosting is through GoDaddy on Linux cPanel, and the website is built with WordPress.
Typing in example.com/robots.txt returns the correct robots.txt file.
Thank you for all and any help!
I've now fixed this issue. It was indeed to do with the robots.txt file, as I had initially thought. The reason it didn't work straight away is that the old file was still cached, which was blocking bots throughout the site.
I added some code to my .htaccess file to force re-validation of the cache:
ExpiresActive On
ExpiresDefault A1
Header append Cache-Control must-revalidate
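Note that these directives rely on mod_expires and mod_headers; if you're not sure both are enabled on your host, a safer variant (same directives, just guarded so Apache won't error if a module is missing) would be:
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault A1
</IfModule>
<IfModule mod_headers.c>
Header append Cache-Control must-revalidate
</IfModule>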
After doing this, and with my new robots.txt file in place, I was able to install the new SSL certificate on the website, which is now working! I hope this helps anyone having the same issue.
Just remember to remove the above code afterwards, as it disables caching for your site while it is in place.
I resolved this issue within the SSL For Free portal. After clicking renew under the "certificates" link, I hit the "regenerate account" link, then went through the usual renewal steps and it worked.
Related
It's hard to explain my problem in the subject line, but it will make sense, I promise.
I debug locally with a URL other than localhost (due to an Azure AD redirect). I have all the hosts-file entries in place and everything works fine, except for a "Not Secure" warning that sometimes gets in the way of my UI when it calls the back end.
But if I browse the page using localhost with the same port, the issue is not present.
What do I need to configure so that the custom URL works without this warning, the same way localhost does?
So far I have tried importing the certificate into the Personal certificate store (following several tutorials), but it did nothing.
Regards.
I have been struggling to generate a Let's Encrypt SAN SSL certificate for SBS 2011 for a few days. All goes fine until the ACME challenge verification.
I cannot use DNS verification, because my DNS is hosted at my ISP and it takes days for any change to go live, so only HTTP validation can be used.
Where does IIS get stuck?
Simply put, when it tries to serve the extension-less ACME validation file, IIS returns a 404 error. The file is there; the ACME client generates it just fine in the proper folder, but it does not show up via a web browser, just a 404 error due to the MIME type. When testing with a test.html file in the same folder, it gets displayed properly, no problem.
I've already tried:
Adding the MIME type text/plain for the "." and ".*" extensions, but no go
Moved the StaticFile mappings above ExtensionLessUrlHandlers, but still no go
Edited the applicationhost.config file and set the handlers section to Allow: <section name="handlers" overrideModeDefault="Allow" />
Restarted IIS and the whole server, still to no avail
Used different LE clients, but all of them use IIS and get stuck at the same point
Solution from here does NOT work: IIS: How to serve a file without extension?
When I try locally, I always get this 404 error in the browser:
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
Module: IIS Web Core
Notification: MapRequestHandler
Handler: StaticFile
Error Code: 0x80070002
Any more ideas?
Sorry, folks! It was my bad for not being careful enough when passing details to you.
The solution of adding "." as MIME type "text/plain" is the only thing needed in my OP case.
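For anyone else hitting this: dropped into the challenge folder as a web.config, that fix looks something like the fragment below (this just mirrors the MIME-type change described above, so treat it as a sketch):
<configuration>
  <system.webServer>
    <staticContent>
      <!-- serve extension-less ACME challenge files as plain text -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>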
What was wrong in my case was the "autodiscover" sub-domain, which I still do not know where it is being served from, but it is definitely NOT from the "Autodiscover" application under the Default Web Site. As of now, when I browse the "autodiscover.domain.com..." link I still get cached test.html content, even though I've deleted all the test.html files I planted there.
OK, but that's not the subject here.
BTW... the LE test also failed on my firewall's country-blocking rules. Oh my...
Thank you for participating.
I have installed Moodle many times, but this time all the installation steps complete up until the update-profile step (localhost/moodle/user/editadvanced.php?id=2). When I enter the admin details and update the profile, nothing is displayed. When I try to access the Moodle module through localhost, Chrome displays the message "web page has a redirect loop".
The localhost/moodle/admin/index.php page is not redirected. I have reinstalled XAMPP.
In the Apache error log I found the following:
RSA certificate configured for www.example.com:443 does NOT include an
ID which matches the server name
I found a solution somewhere suggesting commenting out the Include conf/extra/httpd-ssl.conf line, but then another error shows up in the error log:
"Sessioncache is not configured"
I have also tried changing port 443, but it didn't work.
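For reference, the session cache is normally defined in httpd-ssl.conf with directives like these (taken from a stock Apache/XAMPP config, so paths may differ per install), which would explain why the error appears once the whole include is commented out while mod_ssl is still loaded:
SSLSessionCache "shmcb:logs/ssl_scache(512000)"
SSLSessionCacheTimeout 300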
Find the moodledata folder. Inside moodledata there are many folders.
1. Delete everything from the cache folder.
2. Delete everything from the sessions folder.
Then restart your browser.
It worked for me. Hope that works for you.
Thanks for the reply. I have found another solution.
While I was accessing a service provided by a Linux server, I got a message that the service had been blocked by security settings. While searching for that problem, I found the solution to both issues.
https://productforums.google.com/forum/#!topic/chrome/DYk8tSV8qM4
Go to Control Panel > Programs, click on Java, then the Security tab, and set the security level to Medium.
Java security was set to High, which was blocking the Moodle application.
Delete the data in the cache and sessions folders inside the moodledata folder. This will stop the redirect loop.
It worked for me, but deleting the whole moodledata folder is not recommended.
Just a guess... but if you are using a certificate, have you tried using https in your config.php?
$CFG->wwwroot = 'https://...'
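For a localhost install like the one described in the question, that would look something like this (the path here is assumed from the question):
$CFG->wwwroot = 'https://localhost/moodle';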
Today I stumbled upon a file on my web host called 'error.log'. I thought I'd take a look.
I see multiple 'file does not exist' errors - there are three types of entries:
robots.txt
missing.html
apple-touch-icon-precomposed.png
I have some guesses about what these files are used for, but would like to know definitively:
What are the files in question?
Should I add them to my server?
What prompts an error log to be written for these? Is it someone explicitly requesting them? If so, who and how?
A robots.txt file is read by web crawlers/robots to tell them which resources on your server they may or may not scrape. It's not mandatory for a robot to honour this file, but the nice ones do. There are some further examples at http://en.wikipedia.org/wiki/Robots.txt. An example file, which would reside in the web root directory, may look like:
User-agent: * # All robots
Disallow: / # Do not enter website
or
User-Agent: googlebot # For this robot
Disallow: /something # do not enter
The apple-touch-icon-precomposed.png is explained at https://stackoverflow.com/a/12683605/722238. In short, it is an icon that iOS devices request when a user saves a page to their home screen.
I believe missing.html is used by some as a customized 404 page. It's possible that a robot is configured to request this file, hence the log entries for it.
You should add a robots.txt file if you want to control the resources a robot will scrape off your server. As said before, it's not mandatory for a robot to read this file.
If you wanted to add the other two files to remove the error messages, you could; however, I don't believe it is necessary. There is nothing to stop joe_random from requesting /somerandomfile.txt from your server, in which case you will get another error message for another file that doesn't exist. You could instead just serve them a customized 404 page.
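If you go that route on Apache, a one-line .htaccess sketch would be something like this (the page name is just illustrative):
ErrorDocument 404 /missing.html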
I have purchased an SSL certificate, enabled the SSL setting, and changed both config files to use https, but when I visit http://bit.ly/TCkEBv the first page is https and the rest are not. How can I fix this?
I realize this is an old thread, but considering the recent Google SSL-everywhere indexing changes, I figured it was relevant. The following tweak will make OpenCart use https in all links. You have to change three characters in system/library/url.php. They deleted this on the forums, which is understandable, but we have run it for a week of production traffic on mixed-SSL multistores with no issues.
WARNING: Your mods may be different; run through them all in a test environment after enabling this, especially any redirect managers. Here is the tweak for 1.5.6:
Open store/system/library/url.php and find $url = $this->url; in an if statement somewhere near line 18. Change it to $url = $this->ssl; and there you go.
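For context, the relevant block looks roughly like this in 1.5.6 (paraphrased, so your copy may differ slightly); the tweak simply makes the non-SSL branch use the SSL base URL as well:
public function link($route, $args = '', $connection = 'NONSSL') {
    if ($connection == 'NONSSL') {
        $url = $this->ssl; // was: $url = $this->url; -- the three-character change
    } else {
        $url = $this->ssl;
    }
    // ... the rest of the method appends index.php?route=... onto $url
}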
PS: There is also a vastly untested method to signal that https is preferred by sending a header with $response->addHeader('Strict-Transport-Security: max-age=31536000');, but I am unsure of the best spot to put it besides index.php. Also, although it works in testing, I am unsure of the all-server implications. The header controller seems logical, but not all OC areas use the header controller :). Experiment with the best placement for that... just don't do it in the $url code above, even if it seems to work.
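If you do experiment with it, one spot that matches the 1.5.x bootstrap (assumed here; check your own index.php) is right after the Response object is created:
// index.php
$response = new Response();
$response->addHeader('Strict-Transport-Security: max-age=31536000'); // browsers will then insist on https for a year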
As per the forum thread, this is not actually a bug, just the way the cart is set up: most pages are not set as HTTPS and will revert to HTTP once you click a non-HTTPS link.
Let's say you have a domain called example.org.
Instead of changing the code, in Apache, you could do this...
In addition to your Domain-SSL.conf, you can copy that configuration to Domain.conf and edit it to use port 80 instead of 443.
Then, add this line in the Server definitions at the top, right before DirectoryIndex...
Redirect / https://example.org
This will simply redirect every request back to the SSL configuration, adding https:// in front of every link. No code changes to OC are required.
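For reference, a minimal sketch of what the port-80 Domain.conf boils down to (server name taken from the example above):
<VirtualHost *:80>
    ServerName example.org
    Redirect / https://example.org
</VirtualHost>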
This has been working on my busy production server for several years without a single problem.