I'm currently developing a REST API for a project. I prefer to stick to industry conventions in the design, so I have spent some time investigating best practices around a few topics, including the implementation of rate limits. The tendency I have observed is that rate limiting is typically implemented through several time windows at once, such as seconds, minutes, hours and days.
Naturally, rate limiting is needed to protect the backend server from being overloaded with traffic, but what is the reasoning for implementing multiple rate limit restrictions? I mean, wouldn't it be sufficient to "only" implement one time restriction, e.g. a per-minute limit?
What is the purpose of implementing both minute and hourly limits on the same endpoint? As stated, I have spent quite some time investigating the matter, but I have not found an explanation for the implementation of multiple limits. Thank you in advance.
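To make the setup concrete, here is a minimal sketch of the kind of multi-window check I mean, with invented limits and a toy in-memory counter store standing in for whatever a real API would use (Redis, an API gateway plugin, etc.): each request is tested against every configured window and rejected as soon as any one of them is exhausted.

```python
import time
from collections import defaultdict

# Invented example limits: 60 requests per minute AND 1000 requests per hour.
LIMITS = {60: 60, 3600: 1000}          # window length in seconds -> max requests

# Toy in-memory fixed-window counters; a real API would use a shared store.
_counters = defaultdict(int)

def allow_request(client_id, now=None):
    """Allow the request only if it fits inside *every* configured window."""
    now = time.time() if now is None else now
    keys = []
    for window, limit in LIMITS.items():
        bucket = int(now // window)             # fixed-window bucketing
        key = (client_id, window, bucket)
        if _counters[key] >= limit:
            return False                        # any exhausted window rejects the request
        keys.append(key)
    for key in keys:                            # count it only once all windows allow it
        _counters[key] += 1
    return True

# A burst of 70 rapid requests: the minute window starts rejecting at request 61,
# even though the hourly budget of 1000 is nowhere near exhausted.
results = [allow_request("client-a") for _ in range(70)]
print(results.count(True), "allowed,", results.count(False), "rejected")
```

As far as I can tell from this, the short window would smooth out bursts while the longer window caps sustained volume that stays just under the per-minute limit, but I have not found that reasoning spelled out anywhere.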
Is it normal for GA4 ecommerce purchase data to be noticeably less accurate than the database records (even 20 to 30% lower)? What is standard here? Also, please explain the reasons for the discrepancy between the database records and the tracking, just so I get a proper understanding.
Well, the most obvious reason is adblockers. They would block your tracking.
The percentage depends on how likely a particular audience is to block it. We generally expect it to be around 10%, but, as an example, it can reach even 50% when you look at STEM student traffic.
Another common issue is poorly implemented tracking, which, in the case of conversions, is more likely to double-count them than undercount them, but undercounting is possible too.
Finally, GA4's interface is still not a very reliable tool for accessing your data, so you may want to take a glance at the data in BigQuery (if you're comfortable with SQL), just to make sure that what you're seeing in the GA4 interface is really what's in the data.
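If you do go that route, a minimal sanity check can look like the sketch below. It assumes the standard GA4 daily BigQuery export is enabled (the `events_*` tables); the project and dataset names are invented for the example. Compare the output against the order table in your own database.

```python
# Hypothetical project/dataset names; assumes the standard GA4 daily export schema.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

QUERY = """
SELECT
  event_date,
  COUNT(DISTINCT ecommerce.transaction_id) AS purchases,
  SUM(ecommerce.purchase_revenue)          AS revenue
FROM `my-gcp-project.analytics_123456789.events_*`
WHERE event_name = 'purchase'
  AND _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
GROUP BY event_date
ORDER BY event_date
"""

for row in client.query(QUERY).result():
    # Compare these daily figures against your backend's order records.
    print(row.event_date, row.purchases, row.revenue)
```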
I have been using Pusher for some time now. I always assumed "Real time" meant "instantaneous". Lately I stumbled upon this article: https://en.wikipedia.org/wiki/Real-time_computing, and a sentence grabbed my attention:
"Real-time programs must guarantee response within specified time
constraints"
They give an example based on audio processing:
"Consider an audio DSP example; if a process requires 2.01 seconds to
analyze, synthesize, or process 2.00 seconds of sound, it is not
real-time. However, if it takes 1.99 seconds, it is or can be made
into a real-time DSP process."
My questions:
1. Does this definition only apply to hardware/electronic devices, or can it be applied to software too?
2. If it applies to software, does it apply to remote services like Pusher?
3. What is the time constraint for Pusher to be considered "Real time"?
4. What is the time constraint for other services like WebRTC, Firebase?
Sorry for the lengthy post that doesn't specifically answer your question, but I hope it will help you better understand where the "real time" definition comes from.
Yes, it is an understandable assumption that "real time" means "instantaneous". But if you really start to think about it, you will soon find out that "instantaneous" is difficult to define.
What does instantaneous mean? A response time of 0 (zero) seconds (as in 0 s, 0 ms, 0 ns, 0 ps) from the time of the command to the time of the response is physically impossible. We can then try to say that instantaneous means the command-response time is perceived as instantaneous, i.e. it would not be seen as a delay. But then... what exactly does "perceived as instantaneous" mean? Perceived by humans? OK, that is good, we are getting somewhere.

The human eye and the brain's image processing form a very complex machine, and it does not really work in simple frames per second, but we can use data to approximate some numbers. A human eye can "perceive an image flashed on the screen for 1/250th of a second". That would be 0.004 seconds, or 250 fps. So by this approximation a graphical program would be real time if it had a response time < 0.004 seconds, or ran at more than 250 fps. But we know that in practice games are perceived as smooth by most people at just 60 fps, or 0.01666 seconds per frame. So now we have two different answers. Can we somehow justify them both? Yes. We can say that in theory real time would mean 0.004 seconds, but in practice 0.01666 seconds is enough.
We could be happy and stop here, but we are on a journey of discovery. So let's think further. Would you want a "real time" avionics automation system to have a 0.01666-second response time? Would you deem a 0.01666-second response time acceptable for a "real time" nuclear plant system? Would an oil control system, where a valve physically takes 15 seconds to close, be called "real time" if the command-to-completion time is 0.01666 seconds? The answer to all these questions is most definitely no. Why? Answer that and you answer why "real time" is defined as it is: "Real-time programs must guarantee response within specified time constraints".
I am sorry, I am not familiar at all with "Pusher", but I can answer your first question and part of your second one: "real time" can be applied to any system that needs to "react" or respond to some form of input. Here "system" is more generic than you might think. A brain would qualify, but in the context of engineering it means the whole stack: hardware + software.
Does this definition only apply to hardware/electronic devices, or can it be applied to software too?
It applies to software too. Anything that has hard time constraints. There are real-time operating systems, for example, and even a real-time specification for Java.
If it applies to software, does it apply to remote services like Pusher?
Hard to see how, if a network is involved. More probably they just mean 'timely', or maybe it's just a sloppy way of saying 'push model', as the name implies. Large numbers of users on this site seem to think that 'real-time' means 'real-world'. In IT it means a system that is capable of meeting hard real-time constraints. The Wikipedia definition you cited is correct but the example isn't very satisfactory.
What is the time constraint for Pusher to be considered "Real time"?
The question is malformed. The real question is whether Pusher can actually meet hard real-time constraints at all, and only then what their minimum value might be. It doesn't seem at all likely without operating system and network support.
What is the time constraint for other services like WebRTC, Firebase?
Ditto.
Most interpretations of the term "real-time" refer to the traditional static type, often referred to as "hard real-time." Although there is not much of a consensus on the meanings of the terms "hard real-time" and "soft real-time," I provide definitions, based on scientific first principles, of these and other essential terms in Introduction to Fundamental Principles of Dynamic Real-Time Systems.
I have been trying to find out what different server setups equate to, in theory, in terms of concurrent page requests, and the answer always seems to be soaked in voodoo and sorcery. What is the approximate maximum number of concurrent page requests for the following setups?
apache+php+mysql(1 server)
apache+php+mysql+caching (like memcached or similar; still one server)
apache+php+mysql+caching+dedicated Database Server (2 servers)
apache+php+mysql+caching+dedicatedDB+loadbalancing(multi webserver/single dbserver)
apache+php+mysql+caching+dedicatedDB+loadbalancing(multi webserver/multi dbserver)
+distributed (amazon cloud elastic) -- I know this one is "as much as you can afford" but it would be nice to know when to move to it.
I appreciate any constructive criticism. I am just trying to figure out when it's time to move from one implementation to the next, because each comes with its own implementation effort, whether programming-wise or setup-wise.
In your question you talk about caching, and this is probably one of the most important factors in a web architecture regarding performance and capacity.
Memcache is useful, but before that you should be ensuring proper HTTP cache directives on your server responses. This does two things: it reduces the number of requests and it speeds up server response times (if you have Apache configured correctly). This can be improved further by using an HTTP accelerator like Varnish and a CDN.
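Your stack is Apache + PHP, but purely to make the directive idea concrete, here is a minimal Flask sketch (the route and max-age values are invented for the example); equivalent headers can just as well be set from PHP or via Apache's mod_expires/mod_headers.

```python
# Minimal illustration of HTTP cache directives (values are arbitrary examples).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/products")
def products():
    resp = jsonify([{"id": 1, "name": "widget"}])
    # Shared caches (Varnish, a CDN) may keep this for 5 minutes, browsers for
    # 1 minute; the ETag allows cheap conditional revalidation afterwards.
    resp.headers["Cache-Control"] = "public, max-age=60, s-maxage=300"
    resp.add_etag()
    return resp

if __name__ == "__main__":
    app.run()
```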
Another factor to consider is whether your system is stateless. Stateless usually means that it doesn't store sessions on the server and reference them with every request. A good systems architecture relies on state as little as possible: the less state, the more horizontally scalable the system. Most people introduce state when confronted with personalisation, i.e. serving up different content for different users. In such cases you should first investigate using HTML5 session storage (i.e. storing the complete user data in JavaScript on the client, obviously over HTTPS) or, if the data set is smaller, secure JavaScript cookies. That way you can still serve up cached resources and then personalise with JavaScript on the client.
Finally, your stack includes a database tier, another potential bottleneck for performance and capacity. If you are only reading data from the system, then again it should be quite easy to scale horizontally. If there are both reads and writes, it's typically better to split them: keep the read/write data in one database and the read-only data in another. You can then use more appropriate methods to scale each.
These setups do not spit out a single answer that you can then compare to each other. The answer will vary on way more factors than you have listed.
Even if they did spit out a single answer, it would be just one metric out of dozens. What makes this the most important metric?
Even worse, none of these alternatives is free. There is engineering effort and maintenance overhead in each of them, which cannot be analysed without understanding your organisation, your app and your cost/revenue structures.
Options like AWS not only involve development effort but may "lock you in" to a solution so you also need to be aware of that.
I know this response is not complete, but I am pointing out that this question touches on a large complicated area that cannot be reduced to a single metric.
I suspect you are approaching this from exactly the wrong end. Do not go looking for technologies and then figure out how to use them. Instead profile your app (measure, measure, measure), figure out the actual problem you are having, and then solve that problem and that problem only.
If you understand the problem and you understand the technology options then you should have an answer.
If you have already done this and the problem is concurrent page requests then I apologise in advance, but I suspect not.
I was recently given the task of rebuilding an existing RIA. The new RIA that I've designed is based on Silverlight, with a WCF service to connect to MS SQL Server. This is my first time doing something like this, so I'm not sure how to design the entire thing.
Basically, the client can look through graphs of "stocks" (allowing the client to choose different time periods, settings, etc.). I've essentially written the whole application, but I'm not sure how to put it together.
The graphs are supposed to be directly based on the database, and to create the datapoints on the graph, some calculations need to be done (not very expensive ones).
The problem I'm having is deciding where to put the calculations (client-side or server-side? Or half and half?).
What factors should I look for to help me decide where the calculations should be done? And how can I go about optimizing this (caching, etc)?
Obviously this is a very broad subject, so I'm not expecting an immediate answer, but any help/pointing in the right direction/resources would be appreciated.
A few tips for this kind of app.
Put as much logic as possible on the client.
Make the client responsible for session data, making all your server code stateless.
Try to minimize traffic to and from the server (bigger requests are more efficient than multiple smaller ones), so consolidate requests when possible.
If this project is likely to grow beyond its current feature set, I think it's probably a good idea to perform the calculations client-side. This can avoid scaling issues, because you're using all the client-side CPUs rather than your single, precious server CPU. This does, however, rely on being able to transfer the required data to the client efficiently; otherwise you replace a processor bottleneck with a network bottleneck.
As for caching, it depends on your inputs: what variables can users of the client affect? If any of the variables they can alter are discrete (i.e. they can only take a fixed set of values) then they're candidates for caching. For example, if a user can select an arbitrary date range of stock variations to view, that's probably not so useful; if, however, they can only select a year, then you could cache your data sets by year (download each data set to the client and perform your calculation). I'd not worry about caching too much unless you find there's a real performance problem; it'll only make your code more complex, so don't add it until you have proven you need it.
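A toy sketch of that "cache by discrete key" idea, in Python rather than the Silverlight/C# of your project, with made-up stand-ins for the data fetch and the per-point calculation:

```python
# Toy illustration of caching by a discrete key (the year).
# fetch_stock_data / compute_point are hypothetical stand-ins for the real
# service/database call and the per-point calculation.

def fetch_stock_data(year: int) -> list[float]:
    return [float(year % 100 + i) for i in range(12)]    # fake monthly prices

def compute_point(price: float) -> float:
    return round(price * 1.02, 2)                        # fake "not very expensive" calculation

_cache: dict[int, list[float]] = {}

def get_yearly_series(year: int) -> list[float]:
    if year not in _cache:                               # only the first request per year pays
        _cache[year] = [compute_point(p) for p in fetch_stock_data(year)]
    return _cache[year]

print(get_yearly_series(2011)[:3])   # computed
print(get_yearly_series(2011)[:3])   # served from the cache
```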
One other thing: if this project is unlikely to be a long-term concern, then implement the calculations wherever is easiest and fastest; you can revisit this if the project becomes more important later on.
Be REALLY REALLY careful about implementing client-side caching. Caching is INSANELY hard to do right while maintaining performance, security and correctness. Note that your DB Server's caching mechanism is already likely to be way better than any local caching mechanism you're likely to implement in less than 2 weeks' effort!
I would urge you to do as much work on the back-end as possible and to limit your client to render the data in a manner that is appropriate for your users. While many may balk at this suggestion, it's based on a number of observations from building many such systems in the past:
If you're going to filter some of the data returned by your service, you've just wasted thousands of clock cycles shipping data that need never have left your server.
If you're going to sort your data, your DB could have done the sorting for you (often using otherwise idle CPU ticks) while waiting for the data to be read from its disks; a small illustration follows after this list.
Your server most likely has more CPU and RAM available than your clients, and it has a surprising amount of "free time" to use for sorting, filtering, running inline calculations, etc., while it's waiting for disks to read sectors.
As Roman suggested: Minimize your round-trips between your client and your server as much as possible.
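As a simplified illustration of the filtering/sorting points above (in Python with SQLite, table and column names invented for the example): let the database do the WHERE and ORDER BY so that only the rows the client needs ever cross the wire.

```python
# Contrast: filtering/sorting in SQL vs. pulling everything and doing it in app code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (symbol TEXT, day TEXT, close REAL)")
conn.executemany(
    "INSERT INTO prices VALUES (?, ?, ?)",
    [("ACME", "2011-01-03", 10.2), ("ACME", "2011-01-04", 10.8), ("INIT", "2011-01-03", 55.0)],
)

# Let the database filter and sort: only the rows the client needs leave the server.
rows = conn.execute(
    "SELECT day, close FROM prices WHERE symbol = ? ORDER BY day",
    ("ACME",),
).fetchall()

# The wasteful alternative is SELECT * FROM prices, then filtering and sorting in
# application (or worse, client) code after shipping every row over the wire.
print(rows)
```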
But perhaps most importantly:
1. BEFORE YOU START DESIGNING YOUR SYSTEM, state your performance goals.
2. Design what you think will achieve those goals. Try to find bottlenecks in your design, particularly areas where you make blocking calls, and re-design those areas to use async patterns wherever you can.
3. Build your intended solution.
4. Measure your actual performance under actual real-world load.
5. If you're within your expected performance goals, then you're done.
6. If not, work out where you're spending too long and tune the design of that portion of the system. Go to step 3.
Don't try to build the perfect system in one try - chances are that you won't manage it, no matter how hard you try, for a variety of reasons including user expectations, your server's ability to process the required load, your clients' ability to handle the returned data, your network's ability to carry the traffic, etc.
They're a little old now, but I suggest you read through some of the earlier posts at http://blogs.msdn.com/richardt for more thoughts around designing and constructing Service Oriented and distributed systems.
Does anyone use a rule of thumb to estimate the effort required for testing as a percentage of the effort required for development? And if so, what percentage do you use?
From my experience, 25% of effort is spent on analysis; 50% on design, development and unit testing; and the remaining 25% on testing. Most projects will fit within a +/-10% variance of this rule of thumb, depending on the nature of the project, knowledge of resources, quality of inputs and outputs, etc. One can add a project management overhead within these percentages, or as an overhead on top in the 10-15% range.
The Google Testing Blog discussed this problem recently:
So a naive answer is that writing tests carries a 10% tax. But we pay taxes in order to get something in return.
(snip)
These benefits translate to real value today as well as tomorrow. I write tests because the additional benefits I get more than offset the additional cost of 10%. Even if I don't include the long-term benefits, the value I get from tests today is well worth it. I am faster at developing code with tests. How much faster? Well, that depends on the complexity of the code. The more complex the thing you are trying to build is (more ifs/loops/dependencies), the greater the benefit of tests.
When you're estimating testing you need to identify the scope of your testing - are we talking unit, functional, UAT, interface, security, performance, stress or volume testing?
If you're on a waterfall project you probably have some overhead tasks that are fairly constant. Allow time to prepare any planning documents, schedules and reports.
For a functional test phase (I'm a "system tester", so that's my main point of reference) don't forget to include planning! A test case often needs at least as much effort to extract from requirements/specs/user stories as it will take to execute. In addition, you need to include some time for defect raising and retesting. For a larger team you'll need to factor in test management: scheduling, reporting, meetings.
Generally my estimates are based on the complexity of the features being delivered rather than a percentage of development effort. However, this does require access to at least a high-level set of instructions. Years of doing testing enable me to work out that a test of a particular complexity will take x hours of effort for preparation and execution. Some tests may require extra effort for data setup. Some tests may involve negotiating with external systems and have a duration far in excess of the effort required.
In the end, though, you need to review it in the context of the overall project. If your estimate is well above that for BA or Development then there may be something wrong with your underlying assumptions.
I know this is an old topic but it's something I'm revisiting at the moment and is of perennial interest to project managers.
Some years ago, in a safety-critical field, I heard something like one day of unit testing per ten lines of code.
I have also observed 50% of the effort going to development and 50% to testing (not only unit testing).
Are you talking about automated unit/integration tests or manual tests?
For the former, my rule of thumb (based on measurements) is 40-50% added to development time, i.e. if developing a use case takes 10 days (before any QA and serious bug-fixing happens), writing good tests takes another 4 to 5 days - though this should ideally happen before and during development, not afterwards.
When you speak of tests, you could mean waterfall or agile test development. In an agile environment, developers should spend 50% of their time developing and maintaining tests.
But that extra 50% will save you time when refactoring and manual verification come around.
Testing time is probably more closely correlated to feature scope than development time. I'd also argue (perhaps controversially) that testing time is correlated to the skill of your development team.
For a 6-to-9-month development effort, I demand an absolute minimum of 2 weeks of testing time, performed by actual testers (not the development team) who are well-versed in the software they will be testing (i.e., the 2 weeks does not include ramp-up time). This is for a project that has ~5 developers.
Gartner stated in October 2006 that testing typically consumes between 10% and 35% of the work on a system integration project. I assume that this applies to the waterfall method. It is quite a wide range, but a lot depends on the amount of customisation to a standard product and the number of systems to be integrated.
The only time I factor in extra time for testing is if I'm unfamiliar with the testing technology I'll be using (e.g. using Selenium tests for the first time). Then I factor in maybe 10-20% for getting up to speed on the tools and getting the test infrastructure in place.
Otherwise testing is just an innate part of development and doesn't warrant an extra estimate. In fact, I'd probably increase the estimate for code done without tests.
EDIT: Note that I'm usually writing code test-first. If I have to come in after the fact and write tests for existing code that's going to slow things down. I don't find that test-first development slows me down at all except for very exploratory (read: throw-away) coding.
Judge by yesterday's weather. How long did it take last time? Are you trending longer or shorter? Each shop is different.
Most agile shops need a lot less time, have drastically fewer defects, and quicker time to resolve them because of TDD. Even so, most agile shops have some measurable time spent with testing/QC.
If this is the first test run for this application, then the answer is "let's see", followed by an attempt. It depends on:
- how quickly you can get questions answered,
- how testable it is,
- how many features/functions there are,
- how many defects are discovered,
- how quickly issues are resolved,
- how many times the code cycles through testing, and
- how many times testing is blocked by bugs.
There is no way to tell. You could call it 50% or 175% or more, and not be wrong. Why not make a rough guess and multiply by Pi? It won't be much worse than any other answer you can make up.
You should (must) know how long it takes now and whether it's getting faster or slower, and whether the coverage is increasing or decreasing. With those three bits of information, you should be able to guess quite well.