AWS CloudSearch - re-resolve DNS to IP regularly?

In order to submit queries to a CloudSearch domain, the documentation says that "You should also re-resolve the endpoint DNS to an IP address regularly". But I cannot find a way to do that using the AWS JavaScript SDK.

Depending on the context in which you're using the SDK (e.g. in-browser, Node.js, ...), you probably do not need to touch this.
DNS caching is fairly low-level stuff that is usually handled for you by whatever framework you're using. For example, if using Node.js, the DNS time-to-live (TTL) can be handled via the dns module (https://nodejs.org/api/dns.html#dns_dnspromises_resolveany_hostname), whereas browsers maintain their own DNS caches (defaulting to about 60 seconds for Chrome and Firefox; see "How long google chrome and firefox cache DNS records").
To be safe, I would just check the DNS cache TTL that applies to your specific implementation.
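If it turns out you do need to manage this yourself from a long-running Node.js process, here is a minimal sketch (it does not touch the AWS SDK itself, and the endpoint name is a placeholder) of re-resolving the endpoint and honoring the TTL that DNS returns:

// Sketch: periodically re-resolve a CloudSearch endpoint, honoring the DNS TTL.
// The hostname below is a placeholder, not a real domain.
const dns = require('dns').promises;

const ENDPOINT = 'search-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com';
let cached = { address: null, expires: 0 };

async function resolveEndpoint() {
  const now = Date.now();
  if (cached.address && now < cached.expires) return cached.address;
  // resolve4 with { ttl: true } also returns the record's TTL (in seconds)
  const [record] = await dns.resolve4(ENDPOINT, { ttl: true });
  cached = { address: record.address, expires: now + record.ttl * 1000 };
  return record.address;
}

You could then hand the resolved address to whatever HTTP layer you use, or simply rely on the fact that a fresh lookup happens once the TTL expires.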

Related

If I change web hosting and re-point my domain to it, can it still read secure cookies from the previous server?

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.
The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.
According to RFC 2965, section 3.3.1 (which might or might not be followed by browsers), unless the port is explicitly restricted via the Port attribute of the Set-Cookie2 header, the cookie may be sent to any port.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information," and some lines later, "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also, keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I jumped between http://localhost:3000 and http://localhost:4000, Chrome would pass in the same cookie; each service would not understand the cookie and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.
This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify a port number in the domain and the cookie will not be shared. In practice, this doesn't work in several browsers and you will run into other issues. So this is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.
An alternative way to get around the problem is to make the name of the session cookie port-specific. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
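As a rough illustration (assuming Node.js with Express and express-session, neither of which is part of the question), naming the cookie after the port might look like this:

const express = require('express');
const session = require('express-session');

const port = Number(process.env.PORT) || 8080; // whatever this instance listens on
const app = express();

app.use(session({
  name: 'mysession' + port,   // e.g. mysession8080 vs mysession8000
  secret: 'replace-me',       // placeholder secret
  resave: false,
  saveUninitialized: false,
}));

app.listen(port);

Here the cookie name is derived from the port the instance listens on, so two instances on 8000 and 8080 never overwrite each other's session cookie.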
In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.
I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second one, I always got logged out of the first one, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
It's optional.
The port may be specified so that cookies can be port-specific. It's not strictly necessary; the web server / application must take care of this.
Source: German Wikipedia article, RFC 2109, Section 4.3.1

Cloudflare wildcard DNS entry - still protected if target IS a Cloudflare Worker?

I see that in CloudFlare’s DNS FAQs they say this about wildcard DNS entries:
Non-enterprise customers can create but not proxy wildcard records.
If you create wildcard records, these wildcard subdomains are served directly without any Cloudflare performance, security, or apps. As a result, Wildcard domains get no cloud (orange or grey) in the Cloudflare DNS app. If you are adding a * CNAME or A Record, make sure the record is grey clouded in order for the record to be created.
What I'm wondering is whether one would still get the benefits of Cloudflare's infrastructure if the target of the wildcard CNAME record IS a Cloudflare Worker, like my-app.my-zone.workers.dev? I imagine that since this is a Cloudflare-controlled resource, it would still be protected from DDoS, for example. Or does so much of the Cloudflare security and performance happen at this initial DNS stage that it will be lost even if the target is a Cloudflare Worker?
Also posted to CloudFlare support: https://community.cloudflare.com/t/wildcard-dns-entry-protection-if-target-is-cloudflare-worker/359763
I believe you are correct that there will be some basic level of Cloudflare services in front of workers, but I don't think you'll be able to configure them at all if accessing the worker directly (e.g. a grey-cloud CNAME record pointed at it). Documentation here is a little fuzzy on the Cloudflare side of things however.
They did add functionality a little while back to show the order of operations of their services, and Workers seem to be towards the end (meaning everything sits in front). However, I would think this only applies if you bind it to a route that is covered under a Cloudflare-enabled DNS entry.
https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/
The good news is you should be able to test this fairly easily. For example, you can:
Set up a Worker with a test route (a minimal responder sketch follows below)
Point a DNS-only (grey cloud) record at it
Confirm you can make a request to the Worker
Add a firewall rule to block the test route
See if you can still make the request to the Worker
This will at least give you an answer on whether your zone settings apply when accessing a worker (even through a grey cloud / wildcard DNS entry). Although it will not answer what kind of built-in / non-configurable services there are in front of workers.
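For the first step, a minimal Worker is enough to tell whether requests still reach it once the firewall rule is in place. This is just a sketch of a trivial responder (service-worker syntax), not a specific recommendation:

// Respond with a fixed body so it's obvious the request reached the Worker.
addEventListener('fetch', (event) => {
  event.respondWith(new Response('worker reachable', { status: 200 }));
});

If the request still succeeds after the firewall rule is added, your zone settings are not being applied on that path.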

Action Required: S3 shutting down legacy application server capacity

I got a mail from Amazon S3 web services stating the details below:
"We are writing to you today to let you know about changes which impact your use of the Amazon Simple Storage Service (S3). In efforts to best serve our customers, we have improved the systems powering the Amazon S3 API and are in the process of shutting down legacy application server capacity. We have detected access on the legacy capacity for Amazon S3 buckets that you own. The legacy capacity is no longer in service, as the DNS entry for the S3 endpoint no longer includes the IP addresses associated with it. We will be shutting down the legacy capacity and retiring the set of IP addresses fronting this capacity after April 1, 2020."
I want to find out which legacy system I am using, and how to prevent it from affecting my services.
Imagine you had a web site, www.example.com.
In DNS, that name was pointed to your web server at 203.0.113.100.
You decide to buy a new web server, and you give it a new IP address, let's say 203.0.113.222.
You update the DNS for example.com to point to 203.0.113.222. Within seconds, traffic starts arriving at the new server. Over the coming minutes, more and more traffic arrives at the new server, and less and less arrives at the old server.
Yet, for some strange reason, a few of your site's prior visitors are still hitting that old server. You check the DNS and it's correct. Days go by, then weeks, and somehow a few visitors who used your old server before the cutover are still hitting it.
How is that possible?
That's the gist of the communication here from AWS. They see your traffic arriving on unexpected S3 server IP addresses, for no reason that they can explain.
You're trying to connect to the right endpoint -- that's not the issue -- the problem is that for some reason you have somehow "cached" (using the term in a very imprecise sense) an old DNS lookup and are accessing a bucket by hitting a wrong, old S3 IP address.
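One quick way to see what the endpoint currently resolves to, so you can compare it with the addresses your services are actually connecting to (e.g. from netstat or proxy logs), is a one-off lookup; the bucket endpoint below is a placeholder:

// Print the addresses the S3 endpoint currently resolves to (Node.js).
const dns = require('dns').promises;

dns.resolve4('my-bucket.s3.us-east-1.amazonaws.com')
  .then((addresses) => console.log('current S3 addresses:', addresses))
  .catch(console.error);

If the addresses your service is actually hitting are not in that list, something in your stack is holding on to a stale lookup.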
If you have a Java backend service accessing S3, those can be notorious for holding on to DNS lookups forever. You might need to restart that service, and look into how to resolve that issue and enable correct behavior (the JVM's networkaddress.cache.ttl security property controls how long successful lookups are cached), which is -- as I understand it -- not how Java behaves by default. (Not claiming to be a Java expert, but I've encountered this sort of DNS behavior many times.)
If you have an HAProxy or Nginx server that's front-ending for an S3 bucket and has been up for a while, those might need a restart and you should look into how to correctly configure them not to resolve DNS only at startup. I ran into exactly this issue once, years ago, except my HAProxy was forwarding requests to Amazon CloudFront on only 1 of the several IP addresses it could have been using. They took that CloudFront edge server offline, or it failed, or whatever, and the DNS was updated... but my proxy was not able to re-query DNS so it just kept trying and failing until I restarted it. Then I fixed it so that it periodically repeated the DNS lookup so it always had a current address.
If you have your own DNS resolver servers, you might want to verify that they aren't somehow misbehaving, and you might want to ensure that you don't for some reason have any static /etc/hosts (or equivalent) host entries for anything related to S3.
There could be any number of causes but I'm confident at least in my interpretation of what they say is happening.

How can I use Mixpanel in Iran?

Mixpanel uses "SoftLayer", which blocks all requests from IPs coming from Iran. Is there a workaround to redirect these requests through IPs in another country in order to bypass their filter and send the data to Mixpanel?
There are multiple ways, depending on your configuration and platform.
What is your hosting? If it's shared, then your options are limited, but if you deployed your application on a dedicated server or VPS, you can route your traffic via a transparent proxy or through a VPN tunnel. There are many services for that, too!
For example, Squid is a well-documented and easy-to-use service for that! But keep in mind that it works better on Linux. You can read these articles for configuring a transparent proxy with Squid: On Ubuntu, On CentOS.
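As a rough sketch of the proxy route (assuming a Node.js backend, the https-proxy-agent package, and a proxy you run outside the blocked IP ranges; all of these are assumptions, not part of the question), a Mixpanel /track call could be tunnelled like this:

const https = require('https');
const { HttpsProxyAgent } = require('https-proxy-agent'); // assumed package (recent versions export this name)

// Placeholder proxy address and Mixpanel token.
const agent = new HttpsProxyAgent('http://my-proxy.example.com:3128');
const payload = Buffer.from(JSON.stringify({
  event: 'Signed Up',
  properties: { token: 'YOUR_MIXPANEL_TOKEN', distinct_id: 'user-1' },
})).toString('base64');

https.get('https://api.mixpanel.com/track?data=' + payload, { agent }, (res) => {
  console.log('Mixpanel responded with', res.statusCode);
});

The same pattern applies to any other outbound request; the important part is that the proxy itself sits outside the blocked ranges.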
But given the circumstances, I recommend using an open-source analytics system such as:
Matomo (formerly known as Piwik)
Open Web Analytics
Heap (a famous Iranian event site, Evand, was using Heap)
You can connect through a VPN tunnel. It works like this: you connect to a computer somewhere else (in your case, in another country), and then you connect from that computer to the rest of the internet. So, from the rest of the internet, it looks like you're somewhere else.
You can check out ProtonVPN, they have VPN tunnels through a bunch of countries.

Changing server IP after connecting to CloudFlare

I recently signed up for CloudFlare to take advantage of the security features the service provides. Specifically, I'm interested in its use against DDoS attacks (which are a problem I'm facing).
My web application employs nginx as a reverse proxy (with gunicorn as the application server). The Ubuntu-based virtual machine - procured via Azure - has a static/reserved IP (used as a VIP). I've read that after connecting to CloudFlare, it's best practice to change server IP so that malicious actors can't directly DDOS the said server.
Being a newbie, I'm unsure whether this guideline was applicable to the public VIP (virtual IP) or to the internal IP (which is entirely different). Can someone please conceptually and functionally clarify this for me? Can really use some help in setting this up!
What services like CloudFlare do is act like a CDN for your website. They become the front end of your content delivery to clients, and they have a vast network for doing so (resources, i.e. bandwidth, which are what a DDoS consumes). Your IP is then known only to the anti-DDoS service provider, which fetches the content and delivers it on your behalf.
You see, if the IP is leaked by any means, the whole defense mechanism becomes useless, since attackers can point directly at your machine, whereas CloudFlare's DNS would otherwise distribute requests across its network and serve clients from there.
Since your website was up for a while before you migrated to CloudFlare, your current public IP is known to attackers, and hiding behind CloudFlare is useless: they don't need to ask CloudFlare's DNS service and can attack your server directly. This is the reason you need a new (public) IP, and the new one should not be revealed by any means. Just set it in your CloudFlare panel and don't use it for other purposes.
I faced attacks too and used CloudFlare to prevent them; however, I have since learned how to perform those attacks myself, and also how to bypass CloudFlare and take down a protected website. The best practice is to secure your server by yourself. Using nginx as a reverse proxy is a good option.