What is a Ray ID (Cloudflare)?

Every time I visit a website that is using Cloudflare's Under Attack Mode, it shows me the usual text telling me to wait a few seconds while Cloudflare verifies that I am not a bot. Every time I reload the page, the Ray ID changes.
What is the purpose of a Ray ID? Is it some kind of session ID?

It is a unique ID which can be used by the website operator (and Cloudflare support) to debug issues. The Ray ID is actually returned in the headers (as CF-RAY) of most responses that pass through Cloudflare, just not as visibly as what you see in I'm Under Attack mode.
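If you want to see this for yourself, here is a minimal sketch (assuming Node 18+ with its global fetch, and with example.com standing in for a Cloudflare-proxied site) that reads the CF-RAY response header:

```typescript
// Minimal sketch: read the Ray ID from Cloudflare's CF-RAY response header.
// Assumes a server-side runtime (e.g. Node 18+); browsers typically will not
// expose this header on cross-origin responses. example.com is a placeholder.
const res = await fetch("https://example.com/");
const rayId = res.headers.get("cf-ray"); // value shape is illustrative, e.g. "7d2f8a1b2c3d4e5f-FRA"
console.log(rayId ?? "no CF-RAY header: probably not behind Cloudflare");
```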

I will try to explain both in simple words: Cloudflare and the Ray ID.
Cloudflare:
This is a service sitting in front of your servers that handles incoming requests and allows or blocks responses depending on multiple security parameters. The service provides website security, performance optimization, and DDoS protection, and it acts as a reverse proxy.
When a request is sent to Cloudflare, it first hits one of the edge servers (the Content Delivery Network) strategically placed around the world. This lets cached content be served quickly, without a round trip to the origin server to fetch the content.
Ray ID:
A Ray ID is a unique identifier generated by Cloudflare's edge servers. It is included in the response headers of each request processed by those servers and is used to help identify specific requests when troubleshooting issues. Beyond troubleshooting, it also makes request tracing, geolocation lookup, and auditing easier.
Hope that clears it up.

Related

This site or app is sending too much traffic to rawgit.com.

This site or app is sending too much traffic to rawgit.com. Please contact its owner and ask them to use cdn.rawgit.com instead, which has no traffic limit.
I am getting the above notification on my website. Can anyone help me resolve this issue? How can I remove it?
RawGit has two domains for serving your GitHub files:
cdn.rawgit.com is for when you request files from the site a lot (updates take a while to show).
rawgit.com is for testing (updates are visible immediately).
You need to use the cdn.rawgit.com link, as you are sending too much traffic.
For example:
https://rawgit.com/Oisins/Mod/master/.project
becomes
https://cdn.rawgit.com/Oisins/Mod/master/.project
See http://rawgit.com/ for more info.
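If you have a lot of such links, a small helper can do the rewrite. A minimal TypeScript sketch (the function name toCdnRawgit is mine):

```typescript
// Hypothetical helper: rewrite a rawgit.com URL to its cdn.rawgit.com form.
function toCdnRawgit(url: string): string {
  const u = new URL(url);
  if (u.hostname === "rawgit.com") {
    u.hostname = "cdn.rawgit.com";
  }
  return u.toString();
}

console.log(toCdnRawgit("https://rawgit.com/Oisins/Mod/master/.project"));
// -> https://cdn.rawgit.com/Oisins/Mod/master/.project
```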

How many active, simultaneous connections can a web server accept?

I know this is a difficult question but here it is, in context:
Our company has a request to build a WordPress website for a certain client. The caveat is that, on one day per year, for a period of about 20 minutes, 5,000 - 10,000 people will attempt to access the home page of this website. Their purpose: Only to acquire an outbound link to another site.
My concern is, no matter what kind of hosting we provide, the server may reject the connections after a certain number of connections are reached.
Any ideas on this?
This does not depend on WordPress. WordPress is basically software to render web pages: it helps you quickly modify the content of a page. Other software, for instance Apache, accepts the connections and hands the requests off to, for instance, WordPress.
Apache can be configured to accept more connections; I think the default is about 200. Whether that is bad really depends. If the purpose is only to hand out another URL, connections will be terminated quickly, so that is not really an issue. If, on the other hand, you want to generate an entire page using PHP and MySQL, it can take some time before a client is satisfied, and in that case 200 connections are perhaps not sufficient.
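For reference, raising that cap in Apache 2.4's prefork MPM looks roughly like this (the numbers are purely illustrative, not a recommendation; sensible values depend on available RAM and the memory footprint of each worker process):

```
# Illustrative prefork MPM tuning only; the right values depend on RAM
# and workload. MaxRequestWorkers was called MaxClients in Apache 2.2.
<IfModule mpm_prefork_module>
    ServerLimit          500
    MaxRequestWorkers    500
</IfModule>
```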
As B-Lat points out, you can use cloud computing platforms like Google App Engine or Microsoft Azure that provide a lot of server capacity but only bill clients for the resources they consume. In other words, you can accept thousands of connections at once, but you do not pay for that capacity on the other days, when clients visit your website less often.

Google+: determine changes in the network

I am trying to determine changes in the Google+ network in an efficient manner (profile changes). My first idea was to use the eTags of the People.List and People.Get. My assumption was that the eTag in the List (person) would be the same as the one in the Get. This is not the case!
I would rather not fetch the details of all the people in the network and check the eTag for each of them; I would run out of daily API calls very quickly in that scenario.
Are there any other ways of determining the changes in the network?
Thanks!
I'm not aware of a way to notify your service when changes occur on a user's profile. I don't think that eTags will work for what you are trying to do, and the client libraries should already be using eTags to manage any query caching. You can perform a few tricks to make queries lighter on your backend, though:
Batch API calls
Use a fields filter to get only the data that matters to your application
If you are running out of quota, you can also request to have your limits raised from the Google APIs console by clicking the Quotas link on the left. The developer relations team from Google+ reviews these requests regularly and will raise your quota limits if your usage justifies it.
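For illustration, a fields filter on people.list looked roughly like this (a sketch against the now-retired Google+ REST API; YOUR_API_KEY is a placeholder, and real people.list calls also required OAuth authorization):

```typescript
// Sketch only: trim a people.list response with a fields filter so each call
// returns just the properties you need. The API key is a placeholder, and the
// Google+ API this targets has since been shut down.
const url =
  "https://www.googleapis.com/plus/v1/people/me/people/visible" +
  "?fields=items(id,etag,displayName)" +
  "&key=YOUR_API_KEY";

const res = await fetch(url);
const people = await res.json();
console.log(people.items?.length, "profiles fetched");
```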

Is it a bad idea to have the web browser query another API instead of my site providing it?

Here's my issue. I have a site that provides some investing services. I pay for end-of-day data, which is all I really need for my service, but it feels a bit odd when people check in during the day and it only displays yesterday's closing price. End of day is fine for my analytics, but I want to display delayed quotes on my site.
According to Yahoo's YQL FAQ, if you use IP-based authentication you are limited to 1,000 calls/day/IP. If my site grows I may exceed that, so I was thinking of pushing this request to the people browsing my site, since it is extremely unlikely that the same IP will visit my site 1,000 times a day (my site itself has no use for this info). I would call a URL from their browser, then parse the results so I can display them in the site's template.
I'm new to web development, so I'm wondering: is it common practice, or a bad idea, to have the user's browser make the API call itself?
It is not a bad idea at all:
You stretch those rate limits this way;
Your server will respond faster (since it does not have to contact the API);
Your page will load faster because the initial response is smaller;
You can load the remaining data from the API asynchronously while your UI is already responsive.
Generally speaking, it is a great idea to talk to APIs from the client. It is more dynamic, you spread traffic, you gain responsiveness, etc.
The biggest downside I can think of is that you depend on the availability of other services. On the other hand, your server(s) will be stressed less because the traffic is spread out.
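A minimal browser-side sketch of the pattern (the quote endpoint and JSON shape are hypothetical placeholders for whatever quote API you actually use):

```typescript
// Browser-side sketch: fetch a delayed quote directly from the visitor's
// browser and drop it into the page. quotes.example.com and the { price }
// response shape are hypothetical; substitute your real quote API.
async function showQuote(symbol: string): Promise<void> {
  const el = document.getElementById("quote");
  if (!el) return;
  try {
    const res = await fetch(`https://quotes.example.com/v1/${symbol}`);
    const data: { price: number } = await res.json();
    el.textContent = `${symbol}: ${data.price}`;
  } catch {
    el.textContent = "Quote temporarily unavailable"; // the availability downside
  }
}

showQuote("AAPL");
```

Note the catch branch: since the browser now depends on a third-party service, the page should degrade gracefully when that service is down.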
Hope this helped a bit! Cheers!

Do I need a Content Delivery Network if my audience is in one city?

So I asked a question earlier about building a social network website with lots of images. The problem is that the more users there are, the more images the website will have, and I was afraid it would take a LONG time for the images to load on the client side.
How to handle A LOT of images in a webpage?
The feedback I got was to get a content delivery network. Based on my limited knowledge, a content delivery network is a series of computers containing copies of data, and clients access certain servers/computers depending on where they are in the world? What if I'm planning to release my website only for a university, only for students? Would I need something like a CDN for my images to load instantly, or would I need to rent a REALLY expensive server? Thanks.
The major holdup with having lots of images is the number of requests the browser has to make to the server, and, in turn, the number of requests the server has to queue up and send back.
While one benefit of a CDN is location (it will load assets from the nearest physical server), the other benefit is that it is another server. So instead of one server having to queue up and deliver all 20 file requests, it can perhaps do 10 while the other server simultaneously does 10.
Would you see a huge benefit? Well, it really doesn't matter at this point. Having too much traffic is a really good problem to have. Wait until you actually have that problem, then you can figure out what your best option is at that point.
If your target audience will not be very large, you shouldn't have a big problem with images loading. A content delivery network is useful when you have a large application with a distributed user base and very high traffic. Below that, you shouldn't have a problem.
Hardware stress aside, another valuable reason for using a CDN is that browsers limit the number of simultaneous connections to a single host. Say the browser is limited to 6 connections per host and one page load involves 10 images, 3 CSS files, and 3 JavaScript files. If all 10 of those images come from the one host, it will take a while to work through all 16 of those requests; if the 10 images are instead loaded from a CDN that uses different hosts, that load time can be drastically reduced.
Even if all your users are geographically close, they may have very different network topologies to reach your hosting provider. If you choose a hosting provider that has peering agreements with all the major ISPs that provide service in your town, then a CDN may not provide you much benefit. If your hosting provider has only one peer, which may itself be poorly connected to the ISPs in your town, then a CDN may provide a huge benefit if it can remove latency for some or all of your users.
If you can measure latency to your site from all the major ISPs in your area to your hosting provider, that will help you decide if you need a CDN to help shorten the hops between your content and your clients.