Action Required: S3 shutting down legacy application server capacity

I got an email from Amazon Web Services stating the details below:
"We are writing to you today to let you know about changes which impact your use of the Amazon Simple Storage Service (S3). In efforts to best serve our customers, we have improved the systems powering the Amazon S3 API and are in the process of shutting down legacy application server capacity. We have detected access on the legacy capacity for Amazon S3 buckets that you own. The legacy capacity is no longer in service, as the DNS entry for the S3 endpoint no longer includes the IP addresses associated with it. We will be shutting down the legacy capacity and retiring the set of IP addresses fronting this capacity after April 1, 2020."
I want to find out which legacy system I am using, and how to keep this change from affecting my services.

Imagine you had a web site, www.example.com.
In DNS, that name was pointed to your web server at 203.0.113.100.
You decide to buy a new web server, and you give it a new IP address, let's say 203.0.113.222.
You update the DNS for www.example.com to point to 203.0.113.222. Within seconds, traffic starts arriving at the new server. Over the coming minutes, more and more traffic arrives at the new server, and less and less arrives at the old one.
Yet, for some strange reason, a few of your site's prior visitors are still hitting that old server. You check the DNS and it's correct. Days go by, then weeks, and somehow a few visitors who used your old server before the cutover are still hitting it.
How is that possible?
That's the gist of the communication here from AWS. They see your traffic arriving on unexpected S3 server IP addresses, for no reason that they can explain.
You're trying to connect to the right endpoint -- that's not the issue -- the problem is that something on your side has "cached" (using the term in a very imprecise sense) an old DNS lookup and is accessing your bucket by hitting a wrong, old S3 IP address.
If you have a Java backend service accessing S3, those are notorious for holding on to DNS lookups forever. You might need to restart that service, and look into how to enable correct re-resolution behavior, which is -- as I understand it -- not how Java behaves by default. (Not claiming to be a Java expert, but I've encountered this sort of DNS behavior many times.)
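As a minimal sketch of the usual fix: the JVM caches successful DNS lookups according to the networkaddress.cache.ttl security property (indefinitely when a security manager is installed), so bounding that TTL before the first lookup forces periodic re-resolution. The 60-second value here is just an illustration:

    import java.net.InetAddress;
    import java.security.Security;

    public class DnsTtlExample {
        public static void main(String[] args) throws Exception {
            // Must be set before the first lookup; afterwards the JVM
            // re-resolves names at most every 60 seconds.
            Security.setProperty("networkaddress.cache.ttl", "60");
            // Don't cache failed lookups for more than 10 seconds.
            Security.setProperty("networkaddress.cache.negative.ttl", "10");

            // Any later lookup (e.g., by an AWS SDK client created after
            // this point) honors the TTLs set above.
            System.out.println(InetAddress.getByName("s3.amazonaws.com"));
        }
    }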
If you have an HAProxy or Nginx server that's front-ending for an S3 bucket and has been up for a while, it might need a restart, and you should look into configuring it so that it doesn't resolve DNS only at startup. I ran into exactly this issue once, years ago, except my HAProxy was forwarding requests to Amazon CloudFront using only one of the several IP addresses it could have been using. They took that CloudFront edge server offline, or it failed, or whatever, and the DNS was updated... but my proxy wasn't able to re-query DNS, so it just kept trying and failing until I restarted it. Then I fixed it so that it periodically repeated the DNS lookup, so it always had a current address.
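For HAProxy (1.6 or later), a runtime "resolvers" section is the usual way to make it re-query DNS instead of pinning whatever it resolved at startup. A minimal sketch, assuming a resolver at 10.0.0.2 and a hypothetical bucket hostname:

    resolvers mydns
        nameserver dns1 10.0.0.2:53
        hold valid 10s

    backend s3_backend
        # "resolvers mydns" makes HAProxy re-resolve this hostname at
        # runtime rather than keeping the IP it got at startup.
        server s3 example-bucket.s3.amazonaws.com:443 resolvers mydns check

Nginx has an analogous resolver directive; note that proxy_pass only re-resolves at runtime when the hostname is supplied through a variable.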
If you have your own DNS resolver servers, you might want to verify that they aren't somehow misbehaving, and you might want to ensure that you don't for some reason have any /etc/hosts (or equivalent) static host entries for anything related to S3.
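A quick way to check for this kind of drift is to compare what public DNS currently returns with what the host itself resolves (the endpoint and resolver here are just examples); a stale resolver or a leftover hosts entry shows up as a mismatch:

    # What DNS currently says, asked directly of a public resolver:
    dig +short s3.amazonaws.com @8.8.8.8

    # What this host actually resolves, including /etc/hosts entries:
    getent hosts s3.amazonaws.com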
There could be any number of causes but I'm confident at least in my interpretation of what they say is happening.

Related

AWS - NLB Performance Issue

I am using a Network Load Balancer in front of a private VPC integration in API Gateway. Basically, for the APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is that performance is terrible (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB DNS name, the response is very good (less than 100 ms).
Can somebody point me to the issue? Did I screw up some configuration while creating the NLB?
I have been researching for the past 2 days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group (on the EC2 instances) to allow the entire VPC CIDR range (or at least the port the health checks are using).
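A minimal sketch of that change with the AWS CLI (the security group ID and VPC CIDR are placeholders for your own):

    # Allow health-check traffic from anywhere in the VPC on the
    # target port (here 80) into the instances' security group.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 80 \
        --cidr 10.0.0.0/16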

How do I send requests for a specific domain to Apache without it serving that domain yet?

I have an AWS EC2 server that hosts 3 domains with Apache 2. This server sits behind an AWS ELB load balancer which sends it requests. If I want to update this server, instead of taking the server down, I can create a new identical EC2 server and install all the software using the same scripts that built the first server and when it is ready I can add the new server to the ELB and then remove the old server. This gives me zero downtime which is great.
But before I remove the old server, how do I test the new server to prove everything is working and it is serving those 3 domains? DNS points to the ELB for these domains, the ELB sends the requests to the server, and the Apache install on the server routes the traffic to the appropriate site depending on what subdomain was requested. Is there a way to make a request to the new server via IP address, since that is the only way to address it before it is behind the ELB, but tell it I want to make a request to a specific subdomain? If not, how else can I prove all 3 sites are running and working properly without just adding it to the ELB, removing the old server, and crossing my fingers?
P.S. Sorry for the poor title. Please edit it if you can think of a better one that better represents what I am asking.
Use the ELB health check to perform the check. I recommend enabling Apache's server-status module (mod_status). Point the health check at /server-status; if it returns 200 for a certain period of time, the ELB will mark the instance as active and healthy.
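A minimal mod_status sketch for the Apache side (the allowed CIDR is an assumption; restrict it to wherever your ELB health checks originate):

    <IfModule mod_status.c>
        <Location "/server-status">
            SetHandler server-status
            # Only the VPC (and therefore the ELB) may reach the status page.
            Require ip 10.0.0.0/16
        </Location>
    </IfModule>

Separately, to spot-check each vhost on the new server directly before it joins the ELB (the question's original ask), one common approach is to pin the domain name to the new server's IP with curl; the IP here is a placeholder:

    curl --resolve www.example.com:80:203.0.113.50 http://www.example.com/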

Verifying individual servers in a load balancing configuration

Here is my situation. Recently, my production environment has been burned by a few Windows updates that caused some production servers to stop responding. While we have since resolved the issue of both of the servers (which are in a load balancing configuration) getting updates on the same day, the question arose: how do we check that the application running on each server is still working? If we call the load balancing IP, we may or may not hit a server that is working. So if the update takes out the application on one server, how do we know that this has happened?
The only idea I have for this is to purchase 2 more SSL certificates, allocate 2 IP addresses, and assign one to each server. This way I would be guaranteed to know each server is up (we have a 3rd-party service pinging our servers). But I have to believe that there is a better way to do this?
Please note that I am a .Net developer by trade with only an extremely small smattering of networking and IIS experience, but I'm what my small company has. So please assume I don't know where a lot of stuff is and dumb down the answer.
The load balancer maintains a live status of the servers (based on timeouts or HTTP health checks). It uses this status to route traffic only to active servers.
Generally, LBs have a dashboard through which you can check this status. If not, you can check its logs.

Understanding Apache Traffic

I run a 2GB RAM Linode (Ubuntu) that hosts a few WordPress websites. Recently my server has been OOMing and crashing, and I have been up all night trying to find out what's causing it. I have discovered that I get an enormous influx of traffic (a tiny DoS) that brings the whole thing down.
I have access logs set up across all of the virtual hosts, and I am using tcptrack to monitor activity on the server.
The traffic appearing in my access logs does not account for the traffic I am seeing in tcptrack; i.e., there are a dozen IP addresses that are constantly opening and closing connections on the server, but they are nowhere to be seen in the access logs for each virtual host.
Clearly it's because these IPs are not actually hitting the virtual hosts, but I have tried to set up access logs to monitor server-wide traffic so that I can see what requests they're making, and I'm really struggling.
Can anyone please point me in the right direction? Perhaps tcptrack is just too simplistic to provide any meaningful insight?
Start using mod_security:
https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_Apache
Debian has it, which means Ubuntu likely does as well. You should also make sure the kernel is set up properly; search Google for SYN cookies. Look into iptables/Shorewall, etc. Shorewall is a package that wraps iptables. iptables can be configured to detect floods and start dropping packets.
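A minimal sketch of that kernel and iptables hardening (the rate numbers are arbitrary examples to tune against your real traffic):

    # Enable SYN cookies so a SYN flood can't exhaust the backlog.
    sysctl -w net.ipv4.tcp_syncookies=1

    # Accept at most 25 new TCP connections per second (burst of 50);
    # drop anything beyond that.
    iptables -A INPUT -p tcp --syn -m limit --limit 25/s --limit-burst 50 -j ACCEPT
    iptables -A INPUT -p tcp --syn -j DROP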

Cocoa server with user friendly automatic port forwarding or external ip lookup

I am coding a Mac app which will be a server that serves files to each user's mobile device.
The issue with this, of course, is getting the actual IP/port of the server host, as it will usually be inside a home network. If the IP/port changes, it's no big deal, as I plan to send that info to a middleman server first and have my mobile app get the info from there.
I have tried UPnP with https://code.google.com/p/tcmportmapper/ but even though I know my router supports UPnP, the library does not work as intended.
I even tried running a TURN server on my Amazon EC2 instance, but I had a very hard time figuring out what messages to send it to get the info I need.
Since last night I've been experimenting with Google's libjingle, but I'm having a hard time even getting the provided iOS example to run.
Any advice on getting this seemingly difficult task accomplished?
The port of your app will not change. The IP change could be handled by posting your server's IP to a web service every hour, or whatever time period you want.
The server should request a URL like http://your-web-service.com/serverip.php?ip=your-updated-ip and have serverip.php handle the rest (put it into a MySQL DB or something).
When your client starts, it should ask your site for the IP and then connect to your server with that.
This is a pretty common way of handling this type of thing.
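A minimal sketch of the server-side half in Swift, assuming the serverip.php endpoint described above and using api.ipify.org (a public what-is-my-IP service, chosen here just for illustration) to discover the external address:

    import Foundation

    func publishCurrentIP() {
        // Discover the current public IP via an external lookup service.
        let lookup = URL(string: "https://api.ipify.org")!
        URLSession.shared.dataTask(with: lookup) { data, _, _ in
            guard let data = data,
                  let ip = String(data: data, encoding: .utf8) else { return }
            // Report it to the middleman web service from the answer above.
            let update = URL(string: "http://your-web-service.com/serverip.php?ip=\(ip)")!
            URLSession.shared.dataTask(with: update) { _, _, _ in }.resume()
        }.resume()
    }

    // Re-publish once an hour, and once immediately at startup.
    _ = Timer.scheduledTimer(withTimeInterval: 3600, repeats: true) { _ in
        publishCurrentIP()
    }
    publishCurrentIP()
    RunLoop.main.run()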