I use mod_rpaf to get the real client IPs in the logs. However, it requires the load balancer's IP. Earlier, I used to look for the "ELB-HealthChecker/1.0" user agent to get the ELB's IP. But strangely, today I can see two health check requests on each of the instances every time, from two different IPs. My ELB and the EC2 instances are in the same availability zone.
Has anyone faced a similar situation? Is this expected behavior or some anomaly?
It is normal to see a variety of ELB IP addresses. ELBs scale up and down with traffic to your site and those actions will cause you to see different IP addresses.
Using mod_rpaf with ELB is never going to be a reliable solution.
If you want to know the source IP that the ELB saw, you can use the last value in the X-Forwarded-For list.
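If you'd rather pull that value out in application code than patch mod_rpaf, a minimal Python sketch of the idea (assuming the ELB is the only proxy in front of your instances, so the last entry is the address the ELB appended) might look like this:

```python
def client_ip_from_xff(xff_header, remote_addr):
    """Return the client IP as seen by the ELB, falling back to the socket peer."""
    if not xff_header:
        return remote_addr
    # The ELB appends the client address it saw as the last entry.
    return xff_header.split(",")[-1].strip()

# A client behind a corporate proxy (192.168.1.5) connecting from 203.0.113.50:
print(client_ip_from_xff("192.168.1.5, 203.0.113.50", "10.0.1.12"))  # -> 203.0.113.50
```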
See also this blog post for a patch to mod_rpaf.
I see that in CloudFlare’s DNS FAQs they say this about wildcard DNS entries:
Non-enterprise customers can create but not proxy wildcard records.
If you create wildcard records, these wildcard subdomains are served directly without any Cloudflare performance, security, or apps. As a result, Wildcard domains get no cloud (orange or grey) in the Cloudflare DNS app. If you are adding a * CNAME or A Record, make sure the record is grey clouded in order for the record to be created.
What I'm wondering is whether one would still get the benefits of Cloudflare's infrastructure if the target of the wildcard CNAME record IS a Cloudflare Worker, like my-app.my-zone.workers.dev. I imagine that since this is a Cloudflare-controlled resource, it would still be protected against DDoS, for example. Or is so much of the Cloudflare security and performance handled at this initial DNS stage that it will be lost even if the target is a Cloudflare Worker?
Also posted to CloudFlare support: https://community.cloudflare.com/t/wildcard-dns-entry-protection-if-target-is-cloudflare-worker/359763
I believe you are correct that there will be some basic level of Cloudflare services in front of workers, but I don't think you'll be able to configure them at all if accessing the worker directly (e.g. a grey-cloud CNAME record pointed at it). Documentation here is a little fuzzy on the Cloudflare side of things however.
They did add functionality a little while back to show the order of operations of their services, and Workers seem to be towards the end (meaning everything sits in front). However, I would think this only applies if you bind it to a route that is covered under a Cloudflare-enabled DNS entry.
https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/
The good news is you should be able to test this fairly easily (there's a rough sketch after this list). For example, you can:
1. Set up a Worker with a test route
2. Point a DNS-only (grey cloud) record at it
3. Confirm you can make a request to the Worker
4. Add a firewall rule to block the test route
5. See if you can still make the request to the Worker
This will at least give you an answer on whether your zone settings apply when accessing a Worker (even through a grey cloud / wildcard DNS entry), although it will not answer what kind of built-in / non-configurable services there are in front of Workers.
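As a rough way to run that comparison, here is a small Python sketch using the requests library; the two URLs are placeholders for your own grey-clouded hostname and your Worker's workers.dev address, not anything Cloudflare prescribes:

```python
import requests

URLS = [
    "https://test.my-zone.example/worker-route",  # grey-cloud record pointing at the Worker (placeholder)
    "https://my-app.my-zone.workers.dev/",        # direct workers.dev hostname (placeholder)
]

def probe(url):
    resp = requests.get(url, timeout=10)
    # A zone firewall block typically shows up as a 403 from Cloudflare;
    # a normal Worker response suggests the rule was not applied on this path.
    print(url, resp.status_code, resp.headers.get("cf-ray"))

for url in URLS:
    probe(url)
```

Run it once before and once after adding the firewall rule and compare the status codes for each URL.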
I got an email from Amazon S3 web services stating the details below:
"We are writing to you today to let you know about changes which impact your use of the Amazon Simple Storage Service (S3). In efforts to best serve our customers, we have improved the systems powering the Amazon S3 API and are in the process of shutting down legacy application server capacity. We have detected access on the legacy capacity for Amazon S3 buckets that you own. The legacy capacity is no longer in service, as the DNS entry for the S3 endpoint no longer includes the IP addresses associated with it. We will be shutting down the legacy capacity and retiring the set of IP addresses fronting this capacity after April 1, 2020."
I want to find out which legacy system I am using, and how to prevent this from affecting my services.
Imagine you had a web site, www.example.com.
In DNS, that name was pointed to your web server at 203.0.113.100.
You decide to buy a new web server, and you give it a new IP address, let's say 203.0.113.222.
You update the DNS for example.com to point to 203.0.113.222. Within seconds, traffic starts arriving at the new server. Over the coming minutes, more and more traffic arrives at the new server, and less and less arrives at the old server.
Yet, for some strange reason, a few of your site's prior visitors are still hitting that old server. You check the DNS and it's correct. Days go by, then weeks, and somehow a few visitors who used your old server before the cutover are still hitting it.
How is that possible?
That's the gist of the communication here from AWS. They see your traffic arriving on unexpected S3 server IP addresses, for no reason that they can explain.
You're trying to connect to the right endpoint -- that's not the issue -- the problem is that for some reason you have somehow "cached" (using the term in a very imprecise sense) an old DNS lookup and are accessing a bucket by hitting a wrong, old S3 IP address.
If you have a Java backend service accessing S3, those are notorious for holding on to DNS lookups forever. You might need to restart that service, and look into how to resolve that issue and enable correct behavior, which is -- as I understand it -- not how Java behaves by default. (I'm not claiming to be a Java expert, but I've encountered this sort of DNS behavior many times.)
If you have an HAProxy or Nginx server that's front-ending an S3 bucket and has been up for a while, it might need a restart, and you should look into how to configure it so that it doesn't resolve DNS only at startup. I ran into exactly this issue once, years ago, except my HAProxy was forwarding requests to Amazon CloudFront on only 1 of the several IP addresses it could have been using. They took that CloudFront edge server offline, or it failed, or whatever, and the DNS was updated... but my proxy was not able to re-query DNS, so it just kept trying and failing until I restarted it. Then I fixed it so that it periodically repeated the DNS lookup, so it always had a current address.
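As a minimal illustration of that "periodically repeat the DNS lookup" idea, here is a Python sketch that re-queries DNS for an endpoint on a schedule and reports when the answer set changes; the hostname and interval are placeholders, not anything AWS-specific:

```python
import socket
import time

HOSTNAME = "my-bucket.s3.amazonaws.com"  # placeholder endpoint name
INTERVAL_SECONDS = 60

def resolve(hostname):
    # Return the current set of address answers for the hostname.
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

previous = resolve(HOSTNAME)
print("initial answers:", previous)
while True:
    time.sleep(INTERVAL_SECONDS)
    current = resolve(HOSTNAME)
    if current != previous:
        print("DNS answers changed:", previous, "->", current)
        previous = current
```

If a long-running client keeps hitting addresses that no longer show up in output like this, that client is the one holding on to stale lookups.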
If you have your own DNS resolver servers, you might want to verify that they aren't somehow misbehaving, and you might want to ensure that you don't for some reason have any /etc/hosts (or equivalent) static host entries for anything related to S3.
There could be any number of causes but I'm confident at least in my interpretation of what they say is happening.
I am using a Network Load Balancer in front of a private VPC with API Gateway. Basically, for the APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is that performance is terrible (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB's DNS name, the response is very good (less than 100 ms).
Can somebody point me to what the issue is? Did I screw up some configuration while creating the NLB?
I have been researching for the past 2 days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group on the EC2 instances to allow the entire VPC CIDR range (or at least the port the health checks are using).
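For reference, a hedged boto3 sketch of that security group change is below; the group ID, port, and CIDR are placeholders for your own environment:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the NLB health checks in from anywhere inside the VPC.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",      # security group on the EC2 targets (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,                   # port the health checks probe (placeholder)
        "ToPort": 80,
        "IpRanges": [{
            "CidrIp": "10.0.0.0/16",      # VPC CIDR range (placeholder)
            "Description": "NLB health checks",
        }],
    }],
)
```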
I set up an NLB cluster with two servers (Windows Server 2008 R2). Each server has one NIC, which I set up with a static IP address. I assigned the cluster an internet name (MyCluster) and a static IP address. A third box is acting as a client, sending TCP data (over WCF) to the cluster IP I configured (the static IP). I am observing the NLB cluster from the NLB manager on one of the nodes; both nodes are green, i.e. started. However, I only see traffic coming in to one of the NLB servers. When I suspend it, I see traffic going to the other NLB server, and so on. I was expecting traffic to be split equally between them. I can't figure out what I missed; any tips please?
If you need more detailed information please ask, not sure how much detail to put in here.
Thanks.
By default, a port rule created with a Filtering mode of multiple host will use single affinity. In other words, multiple requests from the same client will get directed to the same host. To see traffic going to both hosts try accessing the cluster from multiple clients. You could also set the affinity to "none", but this can lead to other problems.
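To see why that is, here is a toy Python sketch (an illustration of the affinity idea, not the actual Windows NLB algorithm): with single affinity the chosen host depends only on the client's source IP, so every connection from one client lands on the same node, while with affinity "none" the source port participates too, so one client's connections can spread across nodes:

```python
import hashlib

HOSTS = ["NLB-NODE-1", "NLB-NODE-2"]

def pick_host(client_ip, client_port, affinity="single"):
    # Single affinity: hash only the client IP. None: include the port as well.
    key = client_ip if affinity == "single" else f"{client_ip}:{client_port}"
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return HOSTS[digest % len(HOSTS)]

# One client opening several connections:
for port in (50000, 50001, 50002):
    print(pick_host("203.0.113.10", port, affinity="single"),
          pick_host("203.0.113.10", port, affinity="none"))
```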
There's good information on the affinity parameter and how to use it in the NLB help file.
Does anyone know how AWS ELB with SSL works behind the scenes? Running an nslookup on my ELB's domain name, I get 4 unique IP addresses. If my ELB is SSL-enabled, is it possible for AWS to share these same IPs with other SSL-enabled ELBs (not necessarily owned by me)?
As I understand it, for an HTTPS request the hostname is inside the encrypted part of the request. If that is the case, does AWS have to give each SSL-enabled ELB unique IP addresses that are never shared with anyone else's SSL ELB instance? Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Does anyone know how AWS ELB with SSL works behind the scenes? [...] Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Elastic Load Balancing (ELB) itself employs a scalable architecture, meaning the number of unique IP addresses assigned to your ELB does in fact vary depending on the capacity needs and respective scaling activities of your ELB. See the section Scaling Elastic Load Balancers within Best Practices in Evaluating Elastic Load Balancing (which provides a pretty detailed explanation of the Architecture of the Elastic Load Balancing Service and How It Works):
The controller will also monitor the load balancers and manage the capacity [...]. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, [...]. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered [...]. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones. [emphasis mine]
This is further detailed in section DNS Resolution, including an important tip for load testing an ELB setup:
When Elastic Load Balancing scales, it updates the DNS record with the new list of IP addresses. [...] It is critical that you factor this changing DNS record into your tests. If you do not ensure that DNS is re-resolved or use multiple test clients to simulate increased load, the test may continue to hit a single IP address when Elastic Load Balancing has actually allocated many more IP addresses. [emphasis mine]
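To make that concrete, here is a small Python sketch of a load-test loop that re-resolves the ELB's DNS name before every batch and spreads requests across whatever addresses come back, instead of pinning to the first IP it ever saw; the hostname and batch sizes are placeholders:

```python
import http.client
import socket

ELB_HOSTNAME = "my-elb-123456789.us-east-1.elb.amazonaws.com"  # placeholder
BATCHES = 10
REQUESTS_PER_IP = 5

def current_ips(hostname):
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

for batch in range(BATCHES):
    ips = current_ips(ELB_HOSTNAME)  # fresh DNS answer set for each batch
    print(f"batch {batch}: resolved {ips}")
    for ip in ips:
        for _ in range(REQUESTS_PER_IP):
            conn = http.client.HTTPConnection(ip, 80, timeout=10)
            # Connect by IP but send the real hostname in the Host header
            # so the ELB routes the request normally.
            conn.request("GET", "/", headers={"Host": ELB_HOSTNAME})
            conn.getresponse().read()
            conn.close()
```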
The entire topic is explored in much more detail in Shlomo Swidler's excellent analysis The “Elastic” in “Elastic Load Balancing”: ELB Elasticity and How to Test It, which meanwhile also refers to the aforementioned Best Practices in Evaluating Elastic Load Balancing by AWS; the AWS document basically confirms his analysis but lacks the illustrative step-by-step samples Shlomo provides.