Concurrency and parallel processing are two different things.
I know that FastAPI supports concurrency. It can handle multiple API requests concurrently using async and await.
What I want to know is whether FastAPI also supports multiprocessing and parallel processing of requests.
If yes, how can I implement parallel processing of requests?
I have searched a lot, but everywhere I only found material about concurrency. I am new to FastAPI. Thanks for your help!
When running your app with Uvicorn or Gunicorn, you can specify how many workers/processes you want. In Uvicorn, you just need to pass --workers N as an argument, and in Gunicorn it's pretty much the same, with --workers=N. That will start N processes all receiving requests at the same time.
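For illustration, here is a minimal sketch (the file name main.py and the endpoint are made up, not taken from the question). Saving it and running uvicorn main:app --workers 4 starts four separate processes; the same can be done programmatically, as long as the app is passed as an import string:

# main.py -- hypothetical example
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def read_root():
    # Each worker is a full OS process, so requests can be
    # handled in parallel across CPU cores.
    return {"message": "handled by one of the worker processes"}

if __name__ == "__main__":
    import uvicorn
    # workers > 1 requires the app to be given as an import string
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=4)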
I have a Thread Group with 200 threads. I need to send out 200,000 API requests (with different HTTP request paths) concurrently. Can any of the JMeter timers accomplish this? Thank you for your help!
Your use case is not very clear. In order to send 200,000 concurrent requests you need the same number of threads (virtual users) under the Thread Group. Once that is done, you can ensure they are executed at exactly the same moment in time by adding a Synchronizing Timer.
HTTP request paths can be parameterised using, for example, the CSV Data Set Config.
There are also the Parallel Controller and Parallel Sampler plugins, which you can consider as an alternative; you can install them using the JMeter Plugins Manager.
Let's say I had a couple of servers each running multiple Scrapy spider instances at once. Each spider is limited to 4 concurrent requests with CONCURRENT_REQUESTS = 4. For concreteness, let's say there are 10 spider instances at once so I never expect more than 40 requests max at once.
If I need to know at any given time how many concurrent requests are active across all 10 spiders, I might think of storing that integer on a central redis server under some "connection_count" key.
My idea was then to write some downloader middleware that schematically looks like this:
import redis

class countMW(object):
    r = redis.Redis()  # client for the central "connection_count" key

    def process_request(self, request, spider):
        self.r.incr("connection_count")   # Increment the redis key

    def process_response(self, request, response, spider):
        self.r.decr("connection_count")   # Decrement the redis key
        return response

    def process_exception(self, request, exception, spider):
        self.r.decr("connection_count")   # Decrement the redis key
However, with this approach the connection count under the central key can exceed 40. I even see counts > 4 for a single running spider (when the network is under load), and even when the redis store is replaced with a simple counter attribute on the spider instance itself, which rules out lag in updates to the remote redis server as the cause.
My theory for why this doesn't work is that even though the request concurrency per spider is capped at 4, Scrapy still creates and queues more than 4 requests in the meantime, and those extra requests call process_request, incrementing the count long before they are fetched.
Firstly, is this theory correct? Secondly, if it is, is there a way I could increment the redis count only when a true fetch is occurring (when the request becomes active), and decrement it similarly?
In my opinion it is better to customize the scheduler, as that fits the Scrapy architecture better and gives you full control over the request-emitting process:
Scheduler
The Scheduler receives requests from the engine and enqueues them for feeding them later (also to the engine) when the engine requests them.
https://doc.scrapy.org/en/latest/topics/architecture.html?highlight=scheduler#component-scheduler
For example, you can find some ideas about how to customize the scheduler here: https://github.com/rolando/scrapy-redis
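A rough sketch of that idea, for illustration only: the class below subclasses Scrapy's default scheduler (scrapy.core.scheduler.Scheduler, which is not a formally stable API, so verify it against your Scrapy version) and bumps the counter only when the engine actually pulls a request for downloading. The redis client and key name are assumptions carried over from the question; the decrements would still live in the middleware's process_response/process_exception.

# settings.py (hypothetical):
#   SCHEDULER = "myproject.counting_scheduler.CountingScheduler"

import redis
from scrapy.core.scheduler import Scheduler

class CountingScheduler(Scheduler):
    # Counts a request only when it is handed out for downloading,
    # not when it is merely enqueued.
    r = redis.Redis(host="localhost", port=6379)  # adjust to your central server

    def next_request(self):
        request = super().next_request()
        if request is not None:
            self.r.incr("connection_count")
        return request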
Your theory is partially correct. Usually requests are made much faster than they can be fulfilled, and the engine will give, not some, but ALL of these requests to the scheduler. But these queued requests are not processed, and thus do not call process_request, until they are fetched.
There is a slight lag between when the scheduler releases a request and when the downloader begins to fetch it, and this allows for the scenario you observe, where more than CONCURRENT_REQUESTS requests appear active at the same time. Since Scrapy processes requests asynchronously, this bit of sloppy double counting is baked in; so, how to deal with it? I'm sure you don't want to run synchronously.
So the question becomes: what is the motivation behind this? Are you just curious about the inner workings of Scrapy? Or do you have some ISP bandwidth cost limitations to deal with, for example? Because we have to define what we really mean by concurrency here.
When does a request become "active"?
When the scheduler releases it?
When the downloader begins to fetch it?
When the underlying Twisted deferred is created?
When the first TCP packet is sent?
When the first TCP packet is received?
Perhaps you could add your own scheduler middleware for finer-grained control, and perhaps take inspiration from Downloader.fetch.
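If "when the downloader begins to fetch it" is the definition you want, newer Scrapy releases (2.0+, as far as I know) expose request_reached_downloader and request_left_downloader signals that fire at exactly that boundary, which would let you count without touching the scheduler at all. A hedged sketch as an extension, with the redis client and key name again assumed from the question (enable it via the EXTENSIONS setting):

import redis
from scrapy import signals

class ActiveRequestCounter:
    # Counts requests only while they are inside the downloader.

    def __init__(self):
        self.r = redis.Redis(host="localhost", port=6379)

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.entered, signal=signals.request_reached_downloader)
        crawler.signals.connect(ext.left, signal=signals.request_left_downloader)
        return ext

    def entered(self, request, spider):
        self.r.incr("connection_count")

    def left(self, request, spider):
        # Fires whether the fetch succeeded or failed.
        self.r.decr("connection_count")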
There's an API used by multiple users. Is it possible to implement rate limiting using Redis?
It would be interesting to know how to do it for two slightly different cases:
No more than X requests per second from same user.
No more than X concurrent requests from same user.
The API is implemented as stateless Ruby processes running on multiple servers.
You can look at the rack-attack gem. It can use Redis to store information about requests for throttling.
I know it's not Ruby, but I've implemented a rate limiter using ioredis, and the Redis commands are easily transferable. ioredis is a Redis client for Node.js, but the Redis calls are the same.
Here's a gist of rolling-window and lockout-style rate limiters.
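The gist itself isn't reproduced here, but a rolling-window limiter along those lines translates to Python (redis-py) roughly like this; the key format, limit, and window are made-up values for illustration:

import time
import redis

r = redis.Redis()

def allow_request(user_id, limit=10, window_seconds=1):
    # Rolling window: at most `limit` requests per `window_seconds` per user.
    key = f"ratelimit:{user_id}"
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - window_seconds)  # drop entries outside the window
    pipe.zadd(key, {str(now): now})                      # record this request
    pipe.zcard(key)                                      # count requests still in the window
    pipe.expire(key, window_seconds + 1)                 # let idle keys expire
    _, _, count, _ = pipe.execute()
    return count <= limit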
For the second part of your question, I'm not sure what you mean by concurrent requests. Aren't requests transient and sequential by definition? Do you mean concurrent connections, i.e. the number of devices a user has connected at one time? You would only need to keep track of the number of connections in that case (no need for a timer).
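On that reading, a plain per-user counter is enough. A minimal sketch, again with redis-py and made-up key names; the caller is responsible for releasing the slot when the connection or request finishes:

import redis

r = redis.Redis()

def acquire_slot(user_id, max_concurrent=5):
    # Returns True if the user may open another concurrent request/connection.
    key = f"concurrent:{user_id}"
    if r.incr(key) > max_concurrent:
        r.decr(key)   # over the limit: roll back and reject
        return False
    return True

def release_slot(user_id):
    # Call when the request/connection finishes, whether it succeeded or failed.
    r.decr(f"concurrent:{user_id}")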
The Heroku Dev Center, on the page about using worker dynos and background jobs, states that you need to use workers + queues to handle API calls, such as fetching an RSS feed, as the operation may take some time if the remote server is slow, and doing this on a web dyno would block it from receiving additional requests.
However, from what I've read, it seems to me that one of the major points of Node.js is that it doesn't suffer from blocking under these conditions due to its asynchronous event-based runtime model.
I'm confused because wouldn't this imply that it would be ok to do API calls (asynchronously) in the web dynos? Perhaps the docs were written more for the Ruby/Python/etc use cases where a synchronous model was more prevalent?
Node.js is an implementation of the reactor pattern. It runs a single event loop, and certain blocking operations (file system access, DNS lookups, some crypto) are offloaded to a small libuv thread pool, which defaults to 4 threads. Once that pool is saturated, further operations of that kind queue up behind it.
A common misconception about Node.js is that it is a system that lets you do many things at once. That is not quite the case: it lets you do other things while waiting on I/O-bound tasks, within the limits described above.
Any CPU-bound task executes on the main event loop, meaning it will block everything else.
This means that if your "job" is I/O-bound, like putting things in a database, you can probably get away without a worker dyno. This of course depends on how much you plan to have going on at once. Remember, any task you run in your main app takes resources away from other incoming requests.
Generally it is not recommended for things like this: if you have a job that does real processing, it belongs in a queue and should be executed in its own process or thread.
Now I plan to use Scrapy in a more distributed way, and I'm not sure whether the spiders/pipelines/downloaders/schedulers and the engine are all hosted in separate processes or threads. Could anyone share some info about this? And can we change the process/thread count for each component? I know there are two settings, CONCURRENT_REQUESTS and CONCURRENT_ITEMS; they determine the number of concurrent threads for the downloaders and pipelines, right? And if I want to deploy spiders/pipelines/downloaders on different machines, I need to serialize the items/requests/responses, right?
Thanks very much for your help!
Thanks,
Edward.
Scrapy is single threaded. It uses the Reactor pattern to achieve concurrent network requests. This is done using the Twisted Framework.
People wanting to distribute Scrapy usually try to implement some messaging framework; some use Redis, others try RabbitMQ.
Also have a look at Scrapyd
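To make the Redis route a bit more concrete: the scrapy-redis project linked earlier essentially swaps the scheduler and dupe filter for Redis-backed ones, so each machine runs its own single-threaded Scrapy process while requests and items are serialized through Redis. A sketch of the settings (names taken from the scrapy-redis README; double-check them against the version you install):

# settings.py
SCHEDULER = "scrapy_redis.scheduler.Scheduler"              # requests are queued in Redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"  # dedupe shared across machines
SCHEDULER_PERSIST = True                                    # keep the queue between runs
REDIS_URL = "redis://localhost:6379"

ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 300,            # serialized items pushed to Redis
}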