FAILED_RATE_LIMITED - Google managed SSL certificate - ssl

It's been stuck on this status for over a day, and I'm unable to provision a new managed SSL certificate (to attach to a load balancer).
The Google docs say to contact support, but we're not on any support plan (small company). Does anyone know whether this limit resets after a certain period of time, or whether there is a way for customers without a support plan to resolve it?
In case it helps: I'm using Terraform to deploy the complete web application, including all the load balancer parts, the managed instance group, the instance template, the SSL policy, etc. I destroy and bring up the entire project to ensure I haven't missed any settings, but after a few re-deployments I trigger this rate limit. :(
I've tried again after a full day had passed, with no change. I'm getting worried this is not resolvable without paying for a support plan.

Note that Google-managed SSL certificates are based on Let's Encrypt.
Let's Encrypt imposes rate limits to ensure fair usage; the main limit is Certificates per Registered Domain (50 per week).
Failed validations have a limit of 5 failures per account, per hostname, per hour. You can use the Staging Environment, which raises that limit to 60 failures per hour, if you need more flexibility.
As you can see in Creating and Using SSL Certificates, this error means that you may temporarily be rate-limited, and you do need to contact Google to discuss your limit.
If you want to contact GCP support, you can always try the free trial; then, if you are happy with the support and still need it, you can get paid support after the free trial ends.
If you don't want to contact Google that way, you can always try to get some visibility for your issue by filing a report with the Issue Tracker.
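Since every destroy/apply cycle requests a brand-new certificate, one workaround is to keep the managed certificate in a Terraform state you never destroy, so redeploying the rest of the stack reuses the already-provisioned certificate. A minimal sketch, assuming the Google provider's `google_compute_managed_ssl_certificate` resource (the resource name and domain are placeholders):

```hcl
# Keep this in its own state/module, separate from the stack you tear down,
# so "terraform destroy" of the app never deletes and re-provisions the cert.
resource "google_compute_managed_ssl_certificate" "site" {
  name = "site-cert" # placeholder name

  managed {
    domains = ["example.com"] # placeholder domain
  }

  lifecycle {
    prevent_destroy = true # guard against accidental recreation
  }
}
```

The load balancer stack can then reference the certificate by name or via a data source, and only the certificate's one-time provisioning counts against the rate limit.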

Related

firebase ssl generation takes over 24 hours

It's a pity Google does not offer its domain service in Germany; otherwise, launching a React website via Firebase would really be super easy. Great service!
The only problem I faced is that the SSL generation does not seem to work as described in the documentation.
In Firebase Hosting it keeps on saying: Needs setup
And the documentation says on that topic:
In most cases, your DNS A records haven't propagated from your domain name provider to Firebase Hosting servers. Troubleshooting tip: If it's been more than 24 hours, check that you've pointed your records to Firebase Hosting.
By pointing the records, I assume they mean adding two A-type records with the provided IPs.
I added those more than 24 hours ago and they are as valid as they can be; I have checked them multiple times now with my DNS provider, checkdomain.de.
Or am I getting something wrong here?
Thanks for any help!
OK, the problem was that the provider had an additional field outside of the "repository scope" titled "main IP". This IP also translated into an A record, which could be discovered with the command suggested by @FrankvanPuffelen (thanks for that):
dig +noall +answer <your-domain-name>
Once that field's value had also been changed to one of the IPs provided by Firebase, the SSL certificate was generated successfully.
I hope this helps someone else in the same situation, and I'll try to convince Firebase support to add these hints to the documentation.
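For anyone hitting the same thing, here is a sketch of the check that would have caught my stray record. The Firebase IPs and the dig output below are placeholders; substitute the addresses from your Firebase console and the real output of `dig +noall +answer <your-domain-name>`:

```shell
# The A-record IPs Firebase told you to configure (placeholders here):
firebase_ips="151.101.1.195 151.101.65.195"

# Paste your real dig output here; the second line mimics the stray
# "main IP" record my provider had added outside the normal record list.
dig_output='example.com. 3600 IN A 151.101.1.195
example.com. 3600 IN A 203.0.113.10'

# Flag every published A record that is not one of the Firebase IPs:
result=$(echo "$dig_output" | awk '{print $5}' | while read -r ip; do
  case " $firebase_ips " in
    *" $ip "*) echo "$ip ok" ;;
    *)         echo "$ip UNEXPECTED - remove or fix this record" ;;
  esac
done)
echo "$result"
```

Any UNEXPECTED line means provisioning will keep stalling on "Needs setup" until that record is corrected.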

Error SSL NET::ERR_CERT_DATE_INVALID Even SSL Not Expired Yet

My website's traffic dropped recently. I found that users cannot access my website when their computer's time is set incorrectly; however, they can open other websites as usual.
The error says "NET::ERR_CERT_DATE_INVALID" in Google Chrome and "Warning: Potential Security Risk Ahead" in Firefox, so I assumed the problem is the SSL certificate. Previously I used a free certificate from Cloudflare and thought the error appeared because it was free, so I purchased a Dedicated SSL certificate from Cloudflare. But I keep getting the same error.
Is there a solution for this situation?
Changing the users' computer time is not a solution here, because other websites work just fine.
Thank you.
I found that users cannot access my website when their computer's time is set incorrectly.
The expiration of the certificate is checked against the local time of the system. If the local time is wrong, the check might fail even though the certificate has not actually expired yet.
Whether it fails depends on how far the local time is from the validity period in the certificate, i.e. the clock might be wrong enough that some certificates look expired while others do not yet. Some sites use more short-lived certificates and are thus more likely to run into this kind of problem: Let's Encrypt certificates, for example, are only valid for 90 days, while other CAs issue certificates for a year or even longer. And of course, sites which only use HTTP instead of HTTPS don't have this problem, since no certificates are involved in the first place.
Changing the users' computer time is not a solution here, because other websites work just fine.
There is nothing you can do about this from the server side. And while some other sites work just fine for the moment, it is very likely that some sites apart from yours will not work either, so the problem is not restricted to your site only.
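To make the mechanism concrete, here is a sketch using a throwaway self-signed certificate (assumes openssl is installed; the CN is a placeholder). The dates printed are exactly what the browser compares against the client's local clock:

```shell
# Create a throwaway certificate valid for 90 days, roughly a Let's
# Encrypt lifetime (key and cert go to /tmp, CN is a placeholder):
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 90 -subj "/CN=demo.example" 2>/dev/null

# Print the validity window; the check is done against the CLIENT's
# local time, not the server's:
openssl x509 -in /tmp/demo.crt -noout -dates

# -checkend asks "will this cert be expired N seconds from local now?";
# a client whose clock is months ahead fails this same comparison:
openssl x509 -in /tmp/demo.crt -noout -checkend 0 \
  && echo "valid according to this clock"
```

For a live site you would run the same x509 commands against the certificate fetched from port 443 (e.g. via openssl s_client), but the point stands: the comparison happens on the client, so nothing server-side can fix a wrong client clock.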

Google Cloud Directory Sync and AD link through LDAP

I have been working on linking my AD to G Suite with an automatic sync established. I am putting this here because I have had a hard time figuring everything out. I am still not at the end of this procedure, and I would appreciate it if skilled people would contribute to help me, and I guess many others as well, on this topic.
I downloaded the GCDS tool (4.5.7) and installed it on a member server. I tried to go through the steps and failed at all of them, except the first one, authenticating to Google.
Learnt: it is a Java (Sun) based product, and when it comes to authentication or SSL it will throw errors that need to be sorted.
Step 1, Google Auth: done, and very simple as long as you can log on to your GAE account.
Step 2, LDAP config... this was tricky.
I created a service account to use
Learnt:
You need to have the sAMAccountName matching the display name and name as well; only this way could I authenticate.
In most cases you don't need any admin rights; a domain user should be able to read the DN structure from LDAP.
I have the OU structure, but I need LDAP working on the DC (this works somehow).
Learnt:
A simple connection goes through port 389;
SSL would use port 636;
in most cases, GCDS only uses Simple authentication!
Learnt:
With port 389
Domain group policy needed to changed to non LDAP auth required (Domain controller: LDAP server signing requirements changed to none!) to be able to logon - this one is working and good for DEVSERV
Question: Should I use it for PRODSERV or I need to aim to use SSL?
Learnt:
With port 636 (SSL) you need a certificate.
Question: I tried to add a self-signed certificate based on the following article and added it to the trusted root certificates, but Google cannot see it?
The Base DN can be read out with LDP.EXE (the built-in LDAP browser from MS).
Learnt:
You can add just the OU you want; it does not have to be the root of the tree.
Question: does that mean you have implemented extra security?
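As a cross-check on what LDP.EXE shows: for a standard AD setup, the Base DN is usually just the DNS domain name with each label turned into a DC= component. A small sketch (the domain is a placeholder):

```shell
# Derive the default Base DN from the AD DNS domain name:
domain="example.local"
base_dn=$(echo "$domain" | awk -F. '{for (i = 1; i <= NF; i++) printf "DC=%s%s", $i, (i < NF ? "," : "")}')
echo "$base_dn"   # DC=example,DC=local
```

You would then prepend the OU you scoped the sync to, e.g. OU=Staff,DC=example,DC=local.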
Step 3, defining what data it can collect. I picked OU and person.
Learnt
The profile will send extra information to Google, such as job title, phone, etc. I only wanted that for the company signature... Well, it is still not clear whether this can be done. If it is not possible, I cannot see a reason why I should disclose unwanted information to be stored on another server.
Question: can the job description be included in the Google Mail signature?
I keep adding my findings as I work through this, but I would appreciate any input from people who have managed to set it up.
Step 4, searching in the Organisational Unit: confusing again, but it is done. (More to follow.)

Uploading SSL Certificate to AWS Elastic Load Balancer

The SSL certificate on my AWS Elastic Load Balancer is going to expire very soon, and I need to replace it with a new one.
I've got the new certificate/bundle/key uploaded to IAM, but it won't show up in the drop-down in the load balancer settings that should let me choose the certificate to apply.
Here is the output when I run
aws iam list-server-certificates
To my mind this shows that I have uploaded the new certificate to IAM OK. The top certificate in the list is the one which is due to expire any moment now, and the other two are ones I have recently uploaded with the intention of replacing it (they are actually two attempts to upload the same PEM files).
The image below shows that only one certificate is available to choose to apply to the load balancer. Unfortunately, it is the one that is about to expire.
The one thing that does strike me as a little odd is that the certificate name in the drop-down, ptdsslcert, is different from the names in the aws iam list-server-certificates output, even though it is the same certificate that expires imminently.
I'm really stuck here, and if I don't figure this out soon I'm going to have an expired certificate on my domain, so I would really appreciate any help with this.
The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files.
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Although it's hard to guess the specific local-machine configuration issue that resulted in the observed behavior, as noted in the comments this appeared to be an issue where the AWS CLI was using two different sets of credentials to access the two different services, and those two sets of credentials were actually from two different AWS accounts.
The ServerCertificateName returned by the API (accessed through the CLI) should have matched the certificate name shown in the console drop-down for Elastic Load Balancer certificate selection.
The composition of ARNs (Amazon Resource Names) varies by service, but often includes the AWS account number. In this case, the account number shown in the CLI output did not match what was visible in the AWS console... leading to the conclusion that the issue was that an AWS account other than the intended one was being accessed by aws cli.
As cross-confirmed by the differing display names, the "existing" certificate uploaded a year ago may have had the same content, but it was in fact a different IAM entity from the one seen in the drop-down, as the two certificates were associated with entirely different accounts.
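A quick way to spot this failure mode yourself: the account ID is the fifth colon-separated field of an IAM ARN, so you can compare the account in the CLI output with the account the console is logged into. The ARN below is fabricated for illustration:

```shell
# An ARN shaped like the ones `aws iam list-server-certificates` returns:
arn="arn:aws:iam::123456789012:server-certificate/ptdsslcert"

# Field 5 of arn:partition:service:region:account-id:resource is the account:
account_id=$(echo "$arn" | cut -d: -f5)
echo "certificate belongs to account $account_id"
```

If that number differs from the account ID shown in the console, the CLI credentials belong to another account; `aws sts get-caller-identity` confirms which account the CLI is really using.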

How to fix login for google-sites-liberation to backup google apps for domain sites again?

For a few days now, the backup of Google Sites using google-sites-liberation has stopped working.
The call
java -cp google-sites-liberation.jar com.google.sites.liberation.export.Main -d "$DOMAIN" -w wiki -u "$USER" -p "$PASSWORD" -f "$DIR/" 2>&1
which always worked before now fails with:
May 29, 2015 1:48:23 PM com.google.sites.liberation.export.Main doMain
SEVERE: Invalid User Credentials!
Exception in thread "main" java.lang.RuntimeException: com.google.gdata.util.AuthenticationException: Error authenticating (check service name)
at com.google.sites.liberation.export.Main.doMain(Main.java:89)
at com.google.sites.liberation.export.Main.main(Main.java:97)
Caused by: com.google.gdata.util.AuthenticationException: Error authenticating (check service name)
at com.google.gdata.client.GoogleAuthTokenFactory.getAuthException(GoogleAuthTokenFactory.java:614)
at com.google.gdata.client.GoogleAuthTokenFactory.getAuthToken(GoogleAuthTokenFactory.java:490)
at com.google.gdata.client.GoogleAuthTokenFactory.setUserCredentials(GoogleAuthTokenFactory.java:336)
at com.google.gdata.client.GoogleService.setUserCredentials(GoogleService.java:362)
at com.google.gdata.client.GoogleService.setUserCredentials(GoogleService.java:317)
at com.google.gdata.client.GoogleService.setUserCredentials(GoogleService.java:301)
at com.google.sites.liberation.export.Main.doMain(Main.java:79)
... 1 more
I checked the credentials; the account's credentials are correct. However, it is the main account's password, which probably has stricter security settings on Google now.
I tried to find a solution using Google Search, but only stumbled over old suggestions whose solutions are no longer available today. I also did not find a way to add a user/password application login to the account used to back up the wiki.
Does anybody have a pointer on how to fix this and make backups of Google Sites available again?
Any answer is good which offers a solution to back up a site:
Use some other fully^2 automated tool which does the job of copying an entire site to a directory or archive format, for example .tar.bz2.
Change google-sites-liberation so that it uses another authentication method than the one given in the docs, which are a couple of years old now; I did not manage to find one.
Note that the account used for backup must not have full Google Apps for Domains administrator access, as this is crucial.
Please, no external vendor links unless they are from Google. The data of the site(s) must not be shared with a third party; only Google and me.
Note that the process must be fully^2 automated, but I would like to have it even fully^4 automated:
fully^1, because it must run at regular intervals.
fully^2, because it must start without user intervention whatsoever (some people define "fully automated" as starting something manually such that it then runs by itself, while "automated" means having a script which may still ask for some additional input)
fully^3, because it should not involve user intervention to get the process started (like issuing something like a google authenticator token) at the first run (even if it later runs fully^2 automated)
fully^4, because I want to be able to set up the process for several thousand sites in an automated, noninteractive way, where the process which prepares the setup runs on a host which is offline (so the setup can be uploaded to the fully^3 automated system without any additional manual setup steps, for example using IPoAC. YKWIM).
It is not much of a problem if it is only fully^2 automated, as I only want to back up my little single site (only a few thousand pages with attachments). However, I am curious how to get it fully^4 automated, because automating everything (including, but not limited to, the Universe) was my motivation for getting into the computer business several decades ago.
Thanks.
Links:
https://code.google.com/p/google-sites-liberation/ - a bit dated code to retrieve sites
https://www.google.com/settings/takeout - does not include Google Apps for Domains sites
http://blog.famzah.net/2014/08/06/authentication-for-google-sites-liberation/ - the noted account setting is not (no longer) available
I was unable to find any suitable link on how to implement a Google Apps for Domains backup with another tool; all the result pages I looked at (several!) seem to be exclusively from third-party vendors of more or less unknown trustworthiness. So perhaps I am unable to formulate the right Google search on this matter.
Update 2015-06-23:
My scripts run every day, and they tell me if something goes wrong, but not whether they work as intended. So I missed that it suddenly worked for a few days. But today it failed again:
2015-05-27 to 2015-06-11 (15 days) authentication failure
2015-06-12 to 2015-06-22 (11 days) it works again
2015-06-23 (today) authentication failure again
I have no idea why it suddenly worked for 11 days. I'll probably update this question again at the next ok-to-fail transition. ;)
Google uses OAuth2 instead of user account/password authentication.
I fixed the GUI interface.
https://github.com/sih4sing5hong5/google-sites-liberation
But I have no idea how to handle OAuth2 in automated scripts.
I developed a console script in Python which exports Google Sites:
https://github.com/famzah/google-sites-backup
This works with automated scripts. It needs more testing but functions properly for my sites.
Because of the nature of OAuth2, the first time you ever start the script you will need to obtain a token manually by visiting a web page; there is no other way. Once you have done this, the Python script caches the authentication token and the backup works in a completely non-interactive mode. It is Google's decision when this cached token expires.
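The cache-once pattern the script uses can be wrapped so that only the very first run is interactive and everything afterwards is fully^2 automated. A sketch with an illustrative file name and a stand-in for the interactive step (google-sites-backup implements its own version of this):

```shell
TOKEN_FILE="token.json"

if [ -f "$TOKEN_FILE" ]; then
  # Every run after the first: completely non-interactive.
  token=$(cat "$TOKEN_FILE")
else
  # First run only: this stands in for the manual visit-a-web-page
  # OAuth2 step; the obtained token is cached for all later runs.
  token="demo-token"
  printf '%s' "$token" > "$TOKEN_FILE"
fi

echo "using token: $token"
```

Getting to fully^3/fully^4 would mean pre-seeding that token file from the offline preparation host, which works only as long as Google has not expired the cached token.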