I have a spider which fetches the latest URLs, based on a particular date range, from a paginated webpage. Once it has collected all the latest URLs, the spider has to be closed. How do I close a spider?
I referred to this question: Force stop the spider
But raising an exception to close the spider does not appeal to me.
Is there any other way to achieve the same?
You should use the CloseSpider extension.
The conditions for closing a spider can be configured through the following settings:
CLOSESPIDER_TIMEOUT
CLOSESPIDER_ITEMCOUNT
CLOSESPIDER_PAGECOUNT
CLOSESPIDER_ERRORCOUNT
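For example, these can go in your project's settings.py (or a spider's custom_settings dict); the values below are placeholders to tune for your crawl:

```python
# settings.py -- example values; a condition set to 0 is disabled
CLOSESPIDER_TIMEOUT = 3600     # close the spider after 3600 seconds of crawling
CLOSESPIDER_ITEMCOUNT = 100    # close after 100 items have been scraped
CLOSESPIDER_PAGECOUNT = 0      # page-count condition disabled
CLOSESPIDER_ERRORCOUNT = 10    # close after 10 errors
```

When one of the conditions triggers, Scrapy shuts the spider down cleanly with a corresponding close reason (e.g. closespider_itemcount), so no exception needs to be raised from your own code.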
Can you "get" multiple pages in parallel with chromedriver?
I am using Python, and as far as I understand Selenium provides a windowing API but does not allow opening a new window for a new driver.get() call. Trying to fetch a page while another is still loading has proven problematic for me, although I may have been using it incorrectly.
Currently I am opening a number of chromedriver sessions in parallel, which in turn results in several times as many Chrome processes being open. This can get intimidating, although it seems to work. I am just wondering now whether calling driver.get(url) on an existing session (after a previous page was retrieved) might open extra tabs/windows in the "internal" Chrome process and bloat memory.
You can open links in a new window with Selenium by forcing a context click. A couple of suggestions:
https://stackoverflow.com/a/19152396/1387701
https://stackoverflow.com/a/45582818/1387701
You can then use the switch_to command to switch between these windows.
See more at https://www.browserstack.com/guide/how-to-switch-tabs-in-selenium-python
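A minimal sketch of that switch_to flow, assuming chromedriver is installed; the newest_window helper and the URLs are illustrative, not part of Selenium's API:

```python
def newest_window(handles_before, handles_after):
    """Return the single window handle that appeared after opening a new window."""
    new = set(handles_after) - set(handles_before)
    if len(new) != 1:
        raise RuntimeError("expected exactly one new window handle")
    return new.pop()

if __name__ == "__main__":
    # selenium imported here so the pure helper above stays importable without a browser
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://example.com")
    before = driver.window_handles
    driver.execute_script("window.open('https://example.org');")  # open a second window
    driver.switch_to.window(newest_window(before, driver.window_handles))
    # ... work in the new window, then switch back:
    driver.switch_to.window(before[0])
```

Note that each window still belongs to the same driver session, so driver.get() calls are serialized per session; extra windows add memory in the same Chrome process rather than enabling truly parallel fetches.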
I'm used to running spiders one at a time, because we mostly work with scrapy crawl and on scrapinghub, but I know that one can run multiple spiders concurrently, and I have seen that middlewares often have a spider parameter in their callbacks.
What I'd like to understand is:
the relationship between Crawler and Spider. If I run one spider at a time, I'm assuming there's one of each. But if you run more spiders together, like in the example linked above, do you have one crawler for multiple spiders, or are they still 1:1?
is there in any case only one instance of a middleware of a certain class, or do we get one per-spider or per-crawler?
Assuming there's one, what are the crawler.settings in the middleware creation (for example, here)? In the documentation it says that those take into account the settings overridden in the spider, but if there are multiple spiders with conflicting settings, what happens?
I'm asking because I'd like to know how to handle spider-specific settings. Take again the DeltaFetch middleware as an example:
enabling it seems to be a global matter, because DELTAFETCH_ENABLED is read from the crawler.settings
however, the sqlite db is opened in spider_opened and is a unique instance variable (i.e., not depending on the spider); so if you have more than one spider and the instance is shared, when the second spider is opened, the old db is lost. And if you have only one instance of the middleware per spider, why bother passing the spider as a parameter?
Is that a correct way of handling it, or should you rather have a dict spider_dbs indexed by spider name?
This code is part of my Scrapy spider:
from datetime import datetime, timedelta
from scrapy.exceptions import CloseSpider

# scraping data from the page has been done before this line
publish_date_datetime_object = datetime.strptime(publish_date, '%d.%m.%Y.').date()
yesterday = (datetime.now() - timedelta(days=1)).date()
if publish_date_datetime_object > yesterday:
    continue  # skip ads newer than yesterday
if publish_date_datetime_object < yesterday:
    raise CloseSpider('---STOP---DATE IS OLDER THAN YESTERDAY')
# after this comes the ItemLoader and yield
This is working fine.
My question is: is the Scrapy spider the best place for this code/logic?
I do not know how to implement it anywhere else.
Maybe it could be implemented in a pipeline, but AFAIK the pipeline is evaluated after the scraping has been done, which means I would need to scrape all ads, even those I do not need.
The scale is 5 ads from yesterday versus 500 ads on the whole page.
I see no benefit in moving the code to a pipeline if that means processing (downloading and scraping) 500 ads when I only need 5 of them.
It is the right place if you need your spider to stop crawling once something indicates there is no more useful data to collect.
It is also the right way to do it: raising a CloseSpider exception with a verbose closing-reason message.
A pipeline would be more suitable only if there were items still worth collecting after the threshold is detected; if they are ALL disposable, stopping in the spider avoids wasting resources.
I am running Scrapy using its internal API and everything is well and good so far. But I noticed that it is not fully using the concurrency of 16 set in the settings. I have changed the delay to 0 and everything else I can. Looking at the HTTP requests being sent, it is clear that Scrapy is not downloading 16 sites at all points in time; sometimes it is downloading only 3 to 4 links, even though the queue is not empty at that moment.
When I checked the core usage, I found that of the two cores, one is at 100% and the other is mostly idle.
That is when I learned that Twisted, the library on top of which Scrapy is built, is single-threaded, which is why only a single core is used.
Is there any workaround to convince Scrapy to use all the cores?
Scrapy is based on the Twisted framework. Twisted is an event-loop-based framework, so it does cooperative scheduling rather than multiprocessing. That is why your Scrapy crawl runs in just one process. You can, however, technically start two spiders using the code below:
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

process = CrawlerProcess()
process.crawl(MySpider1)
process.crawl(MySpider2)
process.start()  # the script will block here until all crawling jobs are finished
And there is nothing stopping you from using the same spider class for both.
The process.crawl method takes *args and **kwargs to pass to your spider, so you can parametrize your spiders using this approach. Say your spider is supposed to crawl 100 pages; you can add start and end parameters to your spider class and do something like:
process.crawl(YourSpider, start=0, end=50)
process.crawl(YourSpider, start=51, end=100)
Note that both crawlers will have their own settings, so if you have 16 concurrent requests set for your spider, the two combined will effectively have 32.
In most cases scraping is less about CPU and more about network access, which is non-blocking in Twisted anyway, so I am not sure this would give you a huge advantage over simply setting CONCURRENT_REQUESTS to 32 in a single spider.
PS: Consider reading this page to understand more https://doc.scrapy.org/en/latest/topics/practices.html#running-multiple-spiders-in-the-same-process
Another option is to run your spiders with Scrapyd, which lets you run multiple processes concurrently. See the max_proc and max_proc_per_cpu options in the documentation. If you do not want to solve your problem programmatically, this could be the way to go.
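For reference, those limits live in Scrapyd's configuration file (scrapyd.conf); a minimal sketch with example values:

```ini
[scrapyd]
; 0 means no fixed cap; the limit becomes max_proc_per_cpu * number of CPUs
max_proc = 0
; up to 4 concurrent Scrapy processes per CPU
max_proc_per_cpu = 4
```

On the asker's 2-core machine this would allow up to 8 crawl processes, each with its own Twisted reactor and therefore its own core-bound event loop.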
I have a spider currently crawling, and I want it to stop collecting links now and just crawl everything it has already collected. Is there a way to do this? I cannot find anything so far.
Scrapy offers different ways to stop the spider (apart from pressing Ctrl+C), which you can find in the CloseSpider extension documentation.
You can put those settings in your settings.py file, for example:
CLOSESPIDER_TIMEOUT = 20  # stop crawling after 20 seconds