Cloud Run with Fastly or Cloudflare

When mapping a custom domain to my Cloud Run services, is the CNAME record a one-time setup, or is it validated on a continuous basis?
I would like to have a CNAME record from Fastly which shields my Cloud Run service.
The same approach works on Cloudflare: a DNS record (without proxy) pointing to the Cloud Run service, then enabling the proxy functionality. Everything seems to work fine with Cloudflare, but I don't know whether it will break in the future. I would also like to be able to do the same with Fastly.

You need to update your DNS records for two things when creating a domain mapping for your Cloud Run service: domain ownership verification and the domain mapping itself.
Domain verification is a one-time operation; once verified, you can clean up the verification DNS records.
The actual mapping of the verified domain to the Cloud Run service requires setting either CNAME or A records, and these must stay in your DNS zone in order for the mapping to keep working.
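To illustrate the second, ongoing requirement, here is a small sketch that checks whether a zone still carries the records a mapping needs. The expected values are the ones Cloud Run's documentation lists at the time of writing (treat them as assumptions and check the docs), and the zone is just a dict standing in for your DNS provider's record set:

```python
# Sketch: verify that the records required by a Cloud Run domain
# mapping are still present in a DNS zone, modeled as a dict.
# Expected values below are assumptions taken from Cloud Run's docs.

CLOUD_RUN_CNAME = "ghs.googlehosted.com."
CLOUD_RUN_A = {"216.239.32.21", "216.239.34.21",
               "216.239.36.21", "216.239.38.21"}

def mapping_intact(zone: dict, name: str) -> bool:
    """Return True if `name` still carries the records the mapping needs.

    A subdomain normally uses a CNAME; a zone apex must use A records,
    since a CNAME cannot coexist with other records at the apex.
    """
    records = zone.get(name, {})
    if records.get("CNAME") == CLOUD_RUN_CNAME:
        return True
    return CLOUD_RUN_A.issubset(records.get("A", set()))

zone = {
    "app.example.com": {"CNAME": "ghs.googlehosted.com."},
    "example.com": {"A": {"216.239.32.21"}},  # incomplete apex setup
}
print(mapping_intact(zone, "app.example.com"))  # True
print(mapping_intact(zone, "example.com"))      # False: A records missing
```

If a proxy such as Cloudflare or Fastly sits in front, the record your DNS serves publicly is the proxy's, but the mapping check above is what Cloud Run's side continues to rely on.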

Related

Cloudflare wildcard DNS entry - still Protected If target IS a CloudFlare Worker?

I see that in CloudFlare’s DNS FAQs they say this about wildcard DNS entries:
Non-enterprise customers can create but not proxy wildcard records.
If you create wildcard records, these wildcard subdomains are served directly without any Cloudflare performance, security, or apps. As a result, Wildcard domains get no cloud (orange or grey) in the Cloudflare DNS app. If you are adding a * CNAME or A Record, make sure the record is grey clouded in order for the record to be created.
What I'm wondering is whether one would still get the benefits of Cloudflare's infrastructure if the target of the wildcard CNAME record IS a Cloudflare Worker, like my-app.my-zone.workers.dev. I imagine that since this is a Cloudflare-controlled resource, it would still be protected against DDoS, for example. Or is so much of Cloudflare's security and performance applied at this initial DNS stage that it will be lost even if the target is a Cloudflare Worker?
Also posted to CloudFlare support: https://community.cloudflare.com/t/wildcard-dns-entry-protection-if-target-is-cloudflare-worker/359763
I believe you are correct that there will be some basic level of Cloudflare services in front of workers, but I don't think you'll be able to configure them at all if accessing the worker directly (e.g. a grey-cloud CNAME record pointed at it). Documentation here is a little fuzzy on the Cloudflare side of things however.
They did add functionality a little while back to show the order of operations of their services, and Workers seem to be towards the end (meaning everything sits in front). However, I would think this only applies if you bind it to a route that is covered under a Cloudflare-enabled DNS entry.
https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/
The good news is that you should be able to test this fairly easily. For example, you can:
1. Set up a Worker with a test route
2. Point a DNS-only (grey cloud) record at it
3. Confirm you can make a request to the Worker
4. Add a firewall rule to block the test route
5. See if you can still make the request to the Worker
This will at least give you an answer on whether your zone settings apply when accessing a worker (even through a grey cloud / wildcard DNS entry). Although it will not answer what kind of built-in / non-configurable services there are in front of workers.
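The probe in the steps above can be scripted. A sketch, with the comparison logic separated out so it can be exercised without a live zone (the URL is a placeholder, and the fetcher is injectable):

```python
from urllib import error, request

def probe(url: str, fetch=None) -> int:
    """Return the HTTP status for `url` (0 on connection failure)."""
    fetch = fetch or (lambda u: request.urlopen(u, timeout=10).status)
    try:
        return fetch(url)
    except error.HTTPError as exc:
        return exc.code          # firewall blocks usually surface as 4xx
    except Exception:
        return 0

def zone_rules_apply(before: int, after: int) -> bool:
    """True if adding the firewall rule changed a working response into
    an error, i.e. zone settings are evaluated on this access path."""
    return before < 400 <= after

# With stubbed responses instead of a live Worker:
print(zone_rules_apply(probe("https://example.test", lambda u: 200),
                       probe("https://example.test", lambda u: 403)))  # True
```

Run `probe` against your real grey-clouded hostname before and after adding the firewall rule; if the status flips to a block, your zone rules are being applied.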

How to achieve high availability for Active Directory LDAPS (Secure LDAP)

We have around 50 applications currently configured with LDAP and around 20 domain controllers. As per security best practice, we have to migrate all these applications from LDAP to LDAPS.
Currently, all applications connect using the domain's NetBIOS name, so there has been no need to worry about high availability.
What is the best design approach to achieve high availability for LDAPS?
We would prefer not to configure individual DC servers as LDAPS servers in the applications.
Note: all the servers (DC and application servers) are enrolled in on-prem PKI.
In my enterprise environment, there is a load balancer with a virtual IP which distributes traffic across multiple DCs. Clients access ad.example.com, and each DC behind ad.example.com has a certificate valid both for hostname.example.com and ad.example.com (via a subject alternative name, SAN). This has the advantage of letting the load balancer manage which hosts are up: if a target does not respond on port 636, it is automatically removed from the virtual IP; when the target begins responding again, it is automatically added back. LDAP clients don't need to do anything unusual to use this high-availability AD LDAPS solution. The downside is that the server admin has ongoing maintenance as DCs are replaced: we build a new server and then remove the old one, which retires the old IP, and the new IP must be added to the load balancer's virtual IP configuration.
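A sketch of the kind of health probe the load balancer runs per DC (just a TCP connect check against the LDAPS port; a production probe would also complete the TLS handshake and perhaps a bind):

```python
import socket

LDAPS_PORT = 636

def is_up(host: str, port: int = LDAPS_PORT, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pool_members(dcs: list[str], probe=is_up) -> list[str]:
    """The DCs that currently belong behind the virtual IP:
    members whose probe fails are dropped automatically."""
    return [dc for dc in dcs if probe(dc)]

# With a stubbed probe, a dead DC simply drops out of the pool:
print(pool_members(["dc1", "dc2", "dc3"], probe=lambda h: h != "dc2"))
# ['dc1', 'dc3']
```

The load balancer does exactly this loop on an interval, which is why clients pointed at ad.example.com never notice a single DC going away.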
Another approach would be to use DNS to find the domain controllers: there are SRV records registered both for a site's domain controllers and for all domain controllers. Something like _ldap._tcp.SiteName._sites.example.com will give you the DCs in example.com's SiteName site; for all DCs in the example.com domain, look up _ldap._tcp.example.com. This approach, however, requires the LDAP client to be modified to perform the DNS lookups. The advantage is that the DCs manage their own DNS entries: no one needs to remember to add a new DC to the DNS service records.
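For clients doing the SRV lookup themselves, record selection follows the usual SRV rules: lowest priority first, weighted-random within a priority. A sketch of that selection step, with the records hard-coded rather than fetched (the Python stdlib has no SRV resolver, so a real client would use a library such as dnspython for the lookup itself; the hostnames below are hypothetical):

```python
import random

# (priority, weight, port, target) tuples, as an SRV lookup of
# _ldap._tcp.example.com might return them -- made-up values.
SRV = [
    (0, 100, 636, "dc1.example.com."),
    (0, 100, 636, "dc2.example.com."),
    (10, 0, 636, "backup-dc.example.com."),
]

def pick_dc(records, rng=random):
    """Choose a target: lowest priority wins; within that priority,
    choose weighted-randomly according to the records' weights."""
    best = min(r[0] for r in records)
    group = [r for r in records if r[0] == best]
    total = sum(r[1] for r in group)
    if total == 0:
        return rng.choice(group)[3]
    roll = rng.uniform(0, total)
    for priority, weight, port, target in group:
        roll -= weight
        if roll <= 0:
            return target
    return group[-1][3]

print(pick_dc(SRV))  # dc1 or dc2; backup-dc only if the others vanish
```

This is also where the trade-off shows: the DCs register these records automatically, but the failover logic now lives in every client instead of in one load balancer.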

Action Required: S3 shutting down legacy application server capacity

I got a mail from Amazon Web Services about S3 stating the details below:
"We are writing to you today to let you know about changes which impact your use of the Amazon Simple Storage Service (S3). In efforts to best serve our customers, we have improved the systems powering the Amazon S3 API and are in the process of shutting down legacy application server capacity. We have detected access on the legacy capacity for Amazon S3 buckets that you own. The legacy capacity is no longer in service, as the DNS entry for the S3 endpoint no longer includes the IP addresses associated with it. We will be shutting down the legacy capacity and retiring the set of IP addresses fronting this capacity after April 1, 2020."
I want to find out which legacy system I am using, and how to prevent this from affecting my services.
Imagine you had a web site, www.example.com.
In DNS, that name was pointed to your web server at 203.0.113.100.
You decide to buy a new web server, and you give it a new IP address, let's say 203.0.113.222.
You update the DNS for example.com to point to 203.0.113.222. Within seconds, traffic starts arriving at the new server. Over the coming minutes, more and more traffic arrives at the new server, and less and less arrives at the old server.
Yet, for some strange reason, a few of your site's prior visitors are still hitting that old server. You check the DNS and it's correct. Days go by, then weeks, and somehow a few visitors who used your old server before the cutover are still hitting it.
How is that possible?
That's the gist of the communication here from AWS. They see your traffic arriving on unexpected S3 server IP addresses, for no reason that they can explain.
You're trying to connect to the right endpoint -- that's not the issue -- the problem is that for some reason you have somehow "cached" (using the term in a very imprecise sense) an old DNS lookup and are accessing a bucket by hitting a wrong, old S3 IP address.
If you have a Java backend service accessing S3, those are notorious for holding on to DNS lookups forever (the JVM's networkaddress.cache.ttl security property controls how long successful lookups are cached). You might need to restart that service, and look into enabling correct re-resolution behavior, which is, as I understand it, not how Java behaves by default. (Not claiming to be a Java expert, but I've encountered this sort of DNS behavior many times.)
If you have an HAProxy or Nginx server front-ending an S3 bucket and it has been up for a while, it might need a restart, and you should look into configuring it to re-resolve DNS periodically rather than only at startup. I ran into exactly this issue once, years ago, except my HAProxy was forwarding requests to Amazon CloudFront on only 1 of the several IP addresses it could have been using. They took that CloudFront edge server offline (or it failed, or whatever) and the DNS was updated... but my proxy never re-queried DNS, so it just kept trying and failing until I restarted it. Then I fixed it so that it periodically repeated the DNS lookup and always had a current address.
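The periodic re-lookup fix amounts to honoring a TTL on cached answers instead of caching forever. A minimal sketch of that idea (resolver and clock are injectable, so the expiry logic is visible without any network access; the addresses are the example ones from above):

```python
import socket
import time

class TtlCache:
    """Cache name -> address, re-resolving once the TTL lapses.
    A forever-caching proxy is effectively this with an infinite TTL."""

    def __init__(self, ttl=60.0, resolve=socket.gethostbyname,
                 clock=time.monotonic):
        self.ttl, self.resolve, self.clock = ttl, resolve, clock
        self._cache = {}  # name -> (address, expiry time)

    def lookup(self, name):
        hit = self._cache.get(name)
        if hit and hit[1] > self.clock():
            return hit[0]                  # still fresh: reuse answer
        addr = self.resolve(name)          # expired or new: ask DNS again
        self._cache[name] = (addr, self.clock() + self.ttl)
        return addr

# Stubbed demo: the address changes in DNS, and the cache picks up the
# new one once the TTL lapses (no network involved).
answers = iter(["203.0.113.100", "203.0.113.222"])
now = [0.0]
cache = TtlCache(ttl=60, resolve=lambda n: next(answers),
                 clock=lambda: now[0])
print(cache.lookup("backend.example"))  # 203.0.113.100
now[0] = 61.0
print(cache.lookup("backend.example"))  # 203.0.113.222
```

A proxy that behaves like this keeps working when AWS rotates the IPs behind an S3 endpoint; one that caches forever is exactly the traffic their mail is describing.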
If you have your own DNS resolver servers, you might want to verify that they aren't somehow misbehaving, and ensure that you don't for some reason have any static /etc/hosts (or equivalent) host entries for anything related to S3.
There could be any number of causes but I'm confident at least in my interpretation of what they say is happening.

Is it possible to host only a subdomain on Cloudflare

I want to host only a subdomain on Cloudflare. I do not want to change the nameservers of my main domain to theirs. Is that really possible?
Yes, this is possible; however, it needs to be set up via a Cloudflare Partner, or you need to be on the Business or Enterprise plan. Partners can set up domains via a CNAME record instead of moving nameservers.
There is a complete list of partners at: https://www.cloudflare.com/hosting-partners
We use this at Creare: it allows us to set up a client's site on Cloudflare, yielding the performance and security benefits without altering their nameservers (where that is impractical or the client doesn't want us to). We provide this option without them needing a Business or Enterprise plan, which keeps the price lower for the client.

One domain name "load balanced" over multiple regions in Google Compute Engine

I have a service running on Google Compute Engine. I've got a few instances in Europe in a target pool and a few instances in the US in another target pool. At the moment I have a domain name hooked up to the Europe target pool's IP, and it load-balances between those instances very nicely.
Now, can I configure the Compute Engine load balancer so that the one domain name is connected to both regions? All load-balancing rules seem to be scoped to a single region, and I don't know how I could get all the instances involved.
Thanks!
You can point one domain name (A record) at multiple IP addresses, e.g. mydomain.com -> 196.240.7.22 and 204.80.5.130, but this setup will send roughly half the users to the U.S. and the other half to Europe, regardless of where they are.
What you probably want to look for is a service that provides geo-aware or geo-located DNS. A few examples include loaddns.com, Dyn, or geoipdns.com, and it also looks like there are patches to do the same thing with BIND.
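A toy illustration of the difference between the two answers above (plain multi-A-record DNS vs. a geo-aware answer; the region labels and the pairing of IPs to regions are made up for the example):

```python
import itertools

EU_IP, US_IP = "196.240.7.22", "204.80.5.130"

# Plain multi-A-record DNS: answers rotate regardless of who asks.
_rotation = itertools.cycle([EU_IP, US_IP])

def plain_dns(client_region):
    return next(_rotation)              # client location is ignored

# Geo-aware DNS: the answer depends on where the query comes from.
def geo_dns(client_region):
    return EU_IP if client_region == "EU" else US_IP

print(plain_dns("EU"), plain_dns("EU"))  # alternates: users bounce regions
print(geo_dns("EU"), geo_dns("US"))      # each user gets the nearby pool
```

With plain round-robin, a European user lands on the US pool half the time; a geo-aware (or latency-based) service answers each resolver with the nearby pool's IP.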
You could handle this in DNS instead. Google does not offer a DNS service as part of their platform at the moment. You can use Amazon's Route 53 to route your requests: it has a nice feature called latency-based routing, which routes clients to different IP addresses (in your case, target pools) based on measured latency. You can find more information here: http://aws.amazon.com/about-aws/whats-new/2012/03/21/amazon-route-53-adds-latency-based-routing/
With Google's HTTP load balancing, you can balance traffic over VMs in different regions behind a single external IP, which eliminates the need for geo-DNS. Have a look at the docs:
https://developers.google.com/compute/docs/load-balancing/
Hope it helps.