Maximum Requests Parse server can handle - parse-server

How many requests can Parse Server handle? I read somewhere that it is 10,000, but somewhere else someone wrote 14,000. Can anyone tell me the exact number of requests it can handle?

On the free plan, Parse limits you to 30 requests/second, which works out to about 1,800 requests/minute.

Related

How to split a long timeout API call into smaller ones

I have an API js file that gets called by a cron job via a curl GET request.
This js file basically queries an external API via await fetch and saves some data from the response to MongoDB via await .. updateOne. The problem is that this happens in a loop for about 500 different values, and it takes more than 10 seconds to finish, whereas my server's timeout limit for serverless functions is 10 seconds.
So how can I split it into multiple "GET" requests?
Isn't doing a for loop inside the API js file the same, since it would still count as a single operation?
Every time I google this with different keywords I find unrelated material; am I missing something, or is such a case just rare? I'm new to the whole cron job/serverless functions thing; if this is not the correct place to ask, please point me to where I should post it within Stack Exchange.
Two potential solutions:
The brute-force method would be to increase the timeout setting. You can do this via serverless.yml, either in the provider section or directly in the function definition; a sketch follows below. (The maximum timeout for an AWS Lambda is 900 seconds, or 15 minutes.) (Not relevant here, as you are on Vercel: the timeout is 900 seconds on Enterprise and 60 seconds on Pro, but 10 seconds on the free plan.)
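A minimal sketch of the serverless.yml route, assuming AWS and a Node runtime (the function name and values here are illustrative, not from your project):

    # serverless.yml
    provider:
      name: aws
      runtime: nodejs18.x
      timeout: 120        # default for all functions, in seconds

    functions:
      syncExternalApi:    # hypothetical function name
        handler: handler.sync
        timeout: 300      # per-function override; AWS caps this at 900s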
Doing the for loop inside the Lambda function wouldn't change much, but you can break the work down into multiple cron jobs which you can parameterise. E.g. imagine a cron job which goes through a staff list to do some processing on a daily basis. You could change the cron job to accept a range of letters which filters the staff list by last name. So instead of one cron job you would run four: A-F, G-M, N-S and T-Z. (In your case, try to find a parameter which splits the 500 values into equally sized buckets; see the sketch below.)
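Here is a minimal sketch of that idea as a Vercel-style serverless function, assuming Node 18+ (for the global fetch) and hypothetical names throughout (ALL_VALUES, the external URL, the database and collection names):

    // api/sync.js - the cron calls this endpoint several times with
    // different ranges instead of once for all ~500 values, e.g.
    //   curl "https://myapp.example/api/sync?offset=0&limit=100"
    //   curl "https://myapp.example/api/sync?offset=100&limit=100"  ...etc.
    import { MongoClient } from "mongodb";

    const client = new MongoClient(process.env.MONGODB_URI);
    const ALL_VALUES = [/* the ~500 values, however they are sourced */];

    export default async function handler(req, res) {
      const offset = Number(req.query.offset ?? 0);
      const limit = Number(req.query.limit ?? 100);
      const batch = ALL_VALUES.slice(offset, offset + limit);

      await client.connect(); // no-op if already connected
      const col = client.db("mydb").collection("results"); // placeholder names

      for (const value of batch) {
        const response = await fetch(`https://external.example/api/${value}`);
        const data = await response.json();
        await col.updateOne({ _id: value }, { $set: { data } }, { upsert: true });
      }
      res.status(200).json({ processed: batch.length, offset, limit });
    }

Each invocation then only has to stay under the 10-second limit for its own slice, and the cron schedule fires one curl per range.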
As you are billed by duration and memory consumption with serverless (at least with AWS), it probably doesn't make a lot of sense to split it, so increasing the timeout setting might be the easier solution. But I don't know your full context, so this is just a guess.

Profiling an application with Redis and getting ObjectNative::WaitTimeout

I am profiling an application which times out when there are cache misses. In my code I am batching: I send multiple batches (size = 100) in parallel to fetch data from Redis. These requests run in parallel, and I know that a single batch of 200 does not time out, so ideally multiple batches of 100 should not time out either.
But I do see timeouts, and when profiling the application I see that about 68 percent of the time is spent in this code:
    ConnectionMultiplexer.ExecuteSyncImpl   670 ms
    OTHER ObjectNative::WaitTimeout         648 ms
    BLOCKING
Can someone give some insight into what this means? Does it mean queuing is happening, and how can I figure out where the issue might be? Any pointers will be helpful.
Thanks.

Finding an application's scalability point using JMeter

I am trying to find an application's scalability point using JMeter. I define the scalability point as "the minimum number of concurrent users beyond which any increase no longer increases the throughput per second".
I am using the following technique: schedule my load test to run for an hour, starting a new thread that sends SOAP/XML-RPC requests every 30 seconds. I do this by setting my number of threads to 120 and my ramp-up period to 3600 seconds.
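For reference, a non-GUI run of such a plan can be parameterised from the command line; the property names here (threads, rampup) are illustrative and would be referenced in the test plan via ${__P(threads)} and ${__P(rampup)}:

    # 120 threads ramped over 3600 s = one new thread every 30 s
    jmeter -n -t scalability-test.jmx -Jthreads=120 -Jrampup=3600 -l results.jtl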
Then I look at the TOTAL row's throughput in my Summary Report listener. A new thread is added every 30 seconds, and the total throughput rises until it plateaus, in my case at about 123 requests per second once 80 of the threads are active. It then slowly drops to 120 requests per second as the last 20 threads are added. I conclude that my application's scalability point is 123 requests per second with 80 active users.
My question: is this a valid way to find an application's scalability point, or is there a different technique I should be trying?
From a technical perspective, what you're doing does answer your question for one specific user scenario, though I think you might be missing the big picture.
First of all, keep in mind that the actual HTTP requests you're sending and the ramp-up times can often affect what you call a scalability point. Are your requests hitting a cache? Are they not random enough? Are they too random? Do they represent real-world requests? Is a 30-second interval going to give you the same results as 20 seconds, or 10 seconds?
From my personal experience, it's MUCH easier and more intuitive to look at graphs when trying to analyze app performance. It's not just a question of raw numbers but also of looking at trends and rates of change.
For example, here is a benchmark of the ghost.org blogging platform using JMeter, with an interactive JMeter results graph:
http://blazemeter.com/blog/ghost-performance-benchmark

Simultaneous queries in Solr

Hi,
I am deploying a Solr server containing more than 30M docs. Currently I am testing search performance, and the results are very dependent on the number of simultaneous queries I execute:
1 simultaneous query: 2516ms
2 simultaneous queries: 4250, 4469 ms
3 simultaneous queries: 5781, 6219, 6219 ms
4 simultaneous queries: 6484, 7203, 7719, 7781 ms
...
The Jetty thread pool is configured with the defaults:
    <New class="org.mortbay.thread.BoundedThreadPool">
      <Set name="minThreads">10</Set>
      <Set name="lowThreads">50</Set>
      <Set name="maxThreads">10000</Set>
    </New>
I would like to know if there is any setting I can tune to decrease the impact of simultaneous requests on response times.
Solrconfig is also at the defaults, except that caching is disabled (to measure worst cases) and mergeFactor=5 (searching will be requested more than updating).
Thanks in advance
Why are you trying to do this with caching turned off? What exactly are you trying to measure?
You have effectively forced Solr (Lucene) to perform every search from the disk. What you are actually measuring is concurrency of Java itself combined with your OS and disk throughput. This has nothing to do with Jetty or Solr.
Caches are your friend. You really should be using them in any sort of production capacity. In my opinion, you should measure your throughput under load while varying the cache sizes to see what the trade-off is between cache size and throughput.
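For those experiments, the relevant knobs live in the <query> section of solrconfig.xml; a minimal sketch with illustrative sizes (tune size and autowarmCount between runs):

    <query>
      <filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
      <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>
      <documentCache class="solr.LRUCache" size="512" initialSize="512"/>
    </query>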
Please check out this IBM tutorial for Solr.
It was a great help to me.
Hope you will find your answer. :-)

What's the "average" requests per second for a production web application?

I have no frame of reference for what's considered "fast"; I'd always wondered about this but have never found a straight answer...
OpenStreetMap seems to have 10-20 per second
Wikipedia seems to get 30,000 to 70,000 per second, spread over 300 servers (100 to 200 requests per second per machine, most of which are caches)
Geograph is getting 7000 images per week (1 upload per 95 seconds)
Not sure anyone is still interested, but this information was posted about Twitter (and here too):
The Stats
Over 350,000 users. The actual numbers are, as always, very super super top secret.
600 requests per second.
Average 200-300 connections per second. Spiking to 800 connections per second.
MySQL handled 2,400 requests per second.
180 Rails instances. Uses Mongrel as the "web" server.
1 MySQL Server (one big 8 core box) and 1 slave. Slave is read only for statistics and reporting.
30+ processes for handling odd jobs.
8 Sun X4100s.
Process a request in 200 milliseconds in Rails.
Average time spent in the database is 50-100 milliseconds.
Over 16 GB of memcached.
When I go to the control panel of my webhost, open up phpMyAdmin, and click on "Show MySQL runtime information", I get:
This MySQL server has been running for 53 days, 15 hours, 28 minutes and 53 seconds. It started up on Oct 24, 2008 at 04:03 AM.
Query statistics: Since its startup, 3,444,378,344 queries have been sent to the server.
    Total        3,444 M
    per hour     2.68 M
    per minute   44.59 k
    per second   743.13
That's an average of 743 MySQL queries every single second for the past 53 days!
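As a quick sanity check, that per-second figure follows directly from the reported uptime:

    uptime = 53d 15h 28m 53s
           = 53*86400 + 15*3600 + 28*60 + 53 = 4,634,933 s
    rate   = 3,444,378,344 queries / 4,634,933 s ≈ 743.1 queries/s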
I don't know about you, but to me that's fast! Very fast!!
Personally, I like seeing both analyses every time: requests/second and average time/request, and I love seeing the max request time on top of that. It is easy to flip between them: if you have 61 requests/second, you can convert that to 1000 ms / 61 requests ≈ 16.4 ms per request.
To answer your question: we have been doing a huge load test ourselves and found that it varies across the various Amazon hardware we use (the best value was the 32-bit medium-CPU instance when it came down to $$/event/second), and our rate ranged from 29 requests/second/node up to 150 requests/second/node.
Better hardware of course gives better results, but not the best ROI. Anyway, this post was great, as I was looking for some parallels to see if my numbers were in the ballpark, and I shared mine as well in case someone else is looking. Mine is purely loaded as high as it can go.
NOTE: thanks to the requests/second analysis (not ms/request), we found a major Linux issue that we are trying to resolve, where Linux (we tested servers in C and Java) freezes all calls into the socket libraries when under too much load, which seems very odd. The full post can be found here:
http://ubuntuforums.org/showthread.php?p=11202389
We are still trying to resolve it, as the fix gives us a huge performance boost: our test goes from 2 minutes 42 seconds down to 1 minute 35 seconds, so we see a 33% performance improvement. Not to mention that the worse the DoS attack is, the longer these pauses are, so all CPUs drop to zero and stop processing. In my opinion, server processing should continue in the face of a DoS, but for some reason it freezes up every once in a while during the DoS, sometimes for up to 30 seconds!!!
ADDITION: We found out it was actually a JDK race-condition bug. It was hard to isolate on big clusters, but when we ran 10 setups of 1 server and 1 data node each, we could reproduce it every time and then just look at the server/data node it occurred on. Switching the JDK to an earlier release fixed the issue. We were on JDK 1.6.0_26, I believe.
That is a very open, apples-to-oranges type of question.
You are asking:
1. What is the average request load for a production application?
2. What is considered fast?
These don't necessarily relate.
Your average number of requests per second is determined by the following (a back-of-the-envelope example follows the list):
a. the number of simultaneous users
b. the average number of page requests they make per second
c. the number of additional requests (e.g. AJAX calls, etc.)
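As an illustration only (the numbers here are made up, not from any real site):

    requests/sec ≈ users × (page requests/sec per user + extra requests/sec per user)
    e.g. 2,000 users × (0.1 + 0.2) requests/sec/user = 600 requests/sec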
As to what is considered fast: do you mean how many requests a site can take? Or whether a piece of hardware is considered fast if it can process xyz requests per second?
Note that hit-rate graphs will show sinusoidal patterns, with 'peak hours' maybe 2x or 3x the rate you get while your users are sleeping. (This can be useful when you're scheduling the daily batch-processing jobs to run on the servers.)
You can see the effect even on 'international' (multilingual, localised) sites like Wikipedia.
Less than 2 seconds per user, usually - i.e. users who see responses slower than this think the system is slow.
Now you tell me how many users you have connected.
You can search for "slashdot effect analysis" to find graphs of what you would see if some aspect of the site suddenly became popular in the news - e.g. this graph on the wiki.
Web applications that survive tend to be the ones that can generate static pages instead of putting every request through a processing language.
There was an excellent video (I think it might have been on ted.com? I think it might have been by the Flickr web team? Does someone know the link?) with ideas on how to scale websites beyond a single server, e.g. how to allocate connections among the mix of read-only and read-write servers to get the best effect for various types of users.
I have a customer that uses our software on commercial web app servers. The software runs on 40 servers and is a 10-year-old Java API.
4,000 TPS.