Scrapy returns strange output but the same URL works with curl - scrapy

I am running the Scrapy command below:
scrapy shell "https://www.vr.de/service/filialen-a-z/a.html"
The data I get in return does not contain any of the actual information.
If I use curl to fetch the same URL, the data is accurate and complete.
Can someone please advise me what I am doing wrong?

Use custom headers in the scrapy shell:
>>> url = 'https://www.vr.de/service/filialen-a-z/a.html'
>>> headers = {"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3"}
>>> r = scrapy.Request(url, headers=headers)
>>> fetch(r)
2021-05-08 16:12:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.vr.de/service/filialen-a-z/a.html> (referer: None)
>>> for data in response.css('div.module.module-teaser.ym-clearfix'):
...     print(data.css('a::attr("href")').get())
...     print(data.css('div.text::text').get())
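For completeness, the same idea works in a small standalone spider; here is a minimal sketch reusing the URL, User-Agent and selectors from the shell session above (the spider name is made up for illustration):

import scrapy


class VrBranchesSpider(scrapy.Spider):
    name = "vr_branches"  # hypothetical name, for illustration only

    def start_requests(self):
        url = "https://www.vr.de/service/filialen-a-z/a.html"
        headers = {
            "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3"
        }
        yield scrapy.Request(url, headers=headers, callback=self.parse)

    def parse(self, response):
        # Same selectors as in the shell session above
        for data in response.css("div.module.module-teaser.ym-clearfix"):
            yield {
                "href": data.css("a::attr(href)").get(),
                "text": data.css("div.text::text").get(),
            }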

Related

selenium send_keys() on a site did not recognise Cyrillic input

I'm a newbie, stuck at the beginning of a big journey.
My problem is:
I have a site where I try to input information in Cyrillic.
When I input the information manually, everything is fine.
Even when I input the information using Selenium with Chrome in English, everything is fine.
But with Cyrillic input through Selenium, nothing happens.
The option '--lang=ru' did not help.
A new Chrome profile with the language set in preferences - same thing, nothing happens.
PS: I tested another site through Selenium and Chrome, and Cyrillic works fine there.
Please help me.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

opt = webdriver.ChromeOptions()
opt.add_argument(r"--user-data-dir=...")
opt.add_argument(r'--profile-directory= "..."')
chrome_locale = 'RU'
opt.add_argument("--lang={}".format(chrome_locale))
opt.add_argument("user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.5249.119 Safari/537.36")
opt.add_experimental_option("excludeSwitches", ["enable-logging"])
opt.add_extension(EXTENSION_PATH)
DRIVER = webdriver.Chrome(service=serv, options=opt)
DRIVER.implicitly_wait(0.5)
url = 'my_site'
DRIVER.get(url)
# authentication
# input information
print('# searching for DEBTS')
_input = DRIVER.find_element(By.XPATH, '...')
_input.click()
_input.send_keys('иванов')
time.sleep(5)
_input.send_keys(Keys.ENTER)
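No accepted answer is shown here, but one workaround that is sometimes suggested when send_keys silently drops non-Latin text is to set the field's value through JavaScript and then dispatch an input event; a hypothetical sketch reusing the placeholder XPath from above, not a confirmed fix:

# Hypothetical workaround sketch, not from the original thread:
# bypass send_keys by writing the value via JavaScript, then fire an
# 'input' event so any JS listeners on the field still react.
_input = DRIVER.find_element(By.XPATH, '...')
DRIVER.execute_script(
    "arguments[0].value = arguments[1];"
    "arguments[0].dispatchEvent(new Event('input', {bubbles: true}));",
    _input,
    'иванов',
)
_input.send_keys(Keys.ENTER)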

How can I use a random user agent whenever I send a request?

I know how to use a random (fake) user agent in Scrapy, but after I run Scrapy I can see only one random user agent in the terminal. So I guessed that maybe settings.py only runs once when I start Scrapy. If Scrapy really works like this and sends 1000 requests to some web page to collect 1000 items, it will just send the same user agent every time, which I think makes it very easy to get banned.
Can you tell me how I can send a random user agent every time Scrapy sends a request to a website?
I used this library in my Scrapy project, and set Faker as the user agent in settings.py:
https://pypi.org/project/Faker/
from faker import Faker
fake = Faker()
Faker.seed(fake.random_number())
fake_user_agent = fake.chrome()
USER_AGENT = fake_user_agent
This is what I wrote in settings.py. Will it work well?
If you are setting USER_AGENT in your settings.py like in your question then you will just get a single (random) user agent for your entire crawl.
You have a few options if you want to set a fake user agent for each request.
Option 1: Explicitly set User-Agent per request
This approach involves setting the user-agent in the headers of your Request directly. In your spider code you can import Faker as you do above, but then call e.g. fake.chrome() on every Request. For example:
# At the top of your file
from faker import Faker
# This can be a global or class variable
fake = Faker()
...
# When you make a Request
yield Request(url, headers={"User-Agent": fake.chrome()})
Option 2: Write a middleware to do this automatically
I won't go into this because you might as well use one that already exists.
Option 3: Use an existing middleware to do this automatically (such as scrapy-fake-useragent)
If you have lots of requests in your code, Option 1 isn't so nice, so you can use a middleware to do this for you. Once you've installed scrapy-fake-useragent, you can set it up in your settings file as described on its page.
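If you don't have the package yet, installation is the usual pip route (shown here for completeness):
pip install scrapy-fake-useragent
Then, in settings.py: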
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
    'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
}

FAKEUSERAGENT_PROVIDERS = [
    'scrapy_fake_useragent.providers.FakeUserAgentProvider',
    'scrapy_fake_useragent.providers.FakerProvider',
    'scrapy_fake_useragent.providers.FixedUserAgentProvider',
]
Using this you'll get a new user-agent per Request, and if a Request fails you'll also get a new random user-agent. One of the key parts of this setup is FAKEUSERAGENT_PROVIDERS, which tells the middleware where to get the user agent from. The providers are tried in the order they are defined, so the second will be tried if the first one fails for some reason (that is, if getting the user agent fails, not if the Request fails). Note that if you want to use Faker as the primary provider, you should put it first in the list:
FAKEUSERAGENT_PROVIDERS = [
    'scrapy_fake_useragent.providers.FakerProvider',
    'scrapy_fake_useragent.providers.FakeUserAgentProvider',
    'scrapy_fake_useragent.providers.FixedUserAgentProvider',
]
There are other configuration options (such as using a random Chrome-like user agent), listed in the scrapy-fake-useragent docs.
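As an illustration, restricting both providers to Chrome-style user agents might look roughly like the snippet below; the two setting names are written from memory and are an assumption to verify against the scrapy-fake-useragent README:

# settings.py -- assumed setting names, check the scrapy-fake-useragent docs before relying on them
FAKER_RANDOM_UA_TYPE = "chrome"            # UA type used by FakerProvider
FAKE_USERAGENT_RANDOM_UA_TYPE = "chrome"   # UA type used by FakeUserAgentProvider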
Example spider
Here is an example spider. For convenience I set the settings inside the spider, but you can put these into your settings.py file.
# fake_user_agents.py
from scrapy import Spider


class FakesSpider(Spider):
    name = "fakes"
    start_urls = ["http://quotes.toscrape.com/"]

    custom_settings = dict(
        DOWNLOADER_MIDDLEWARES={
            "scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
            "scrapy.downloadermiddlewares.retry.RetryMiddleware": None,
            "scrapy_fake_useragent.middleware.RandomUserAgentMiddleware": 400,
            "scrapy_fake_useragent.middleware.RetryUserAgentMiddleware": 401,
        },
        FAKEUSERAGENT_PROVIDERS=[
            "scrapy_fake_useragent.providers.FakerProvider",
            "scrapy_fake_useragent.providers.FakeUserAgentProvider",
            "scrapy_fake_useragent.providers.FixedUserAgentProvider",
        ],
    )

    def parse(self, response):
        # Print out the user-agent of the request to check they are random
        print(response.request.headers.get("User-Agent"))

        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
Then if I run this with scrapy runspider fake_user_agents.py --nolog the output is
b'Mozilla/5.0 (Macintosh; PPC Mac OS X 10 11_0) AppleWebKit/533.1 (KHTML, like Gecko) Chrome/59.0.811.0 Safari/533.1'
b'Opera/8.18.(Windows NT 6.2; tt-RU) Presto/2.9.169 Version/11.00'
b'Opera/8.40.(X11; Linux i686; ka-GE) Presto/2.9.176 Version/11.00'
b'Opera/9.42.(X11; Linux x86_64; sw-KE) Presto/2.9.180 Version/12.00'
b'Mozilla/5.0 (Macintosh; PPC Mac OS X 10 5_1 rv:6.0; cy-GB) AppleWebKit/533.45.2 (KHTML, like Gecko) Version/5.0.3 Safari/533.45.2'
b'Opera/8.17.(X11; Linux x86_64; crh-UA) Presto/2.9.161 Version/11.00'
b'Mozilla/5.0 (compatible; MSIE 5.0; Windows NT 5.1; Trident/3.1)'
b'Mozilla/5.0 (Android 3.1; Mobile; rv:55.0) Gecko/55.0 Firefox/55.0'
b'Mozilla/5.0 (compatible; MSIE 9.0; Windows CE; Trident/5.0)'
b'Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10 11_9; rv:1.9.4.20) Gecko/2019-07-26 10:00:35 Firefox/9.0'

Amazon detects scrapy instantly. How to prevent captcha?

I am trying to scrape one web page from Amazon with the help of Scrapy 2.4.1 over the shell. Without any prior scraping, Amazon instantly asks for a captcha.
I am setting a different user agent as the only countermeasure, even though I have never scraped the page before:
scrapy shell -s USER_AGENT="Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36"
Get one page:
>>> fetch('https://www.amazon.de/Eastpak-Provider-Rucksack-Noir-Black/dp/B0815FZ3C6/')
>>> view(response)
Results in a captcha question.
I also tried it with headers:
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0", "Accept-Encoding":"gzip, deflate", "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "DNT":"1","Connection":"close", "Upgrade-Insecure-Requests":"1"}
>>> req = Request("https://www.amazon.de/Eastpak-Provider-Rucksack-Noir-Black/dp/B0815FZ3C6/", headers=headers)
>>> fetch(req)
This also results in a captcha question, while the main page can be scraped in this case.
How does Amazon detect that this is a bot, and how can I prevent that?

Scrapy gets blocked even with Selenium; Selenium on its own doesn't?

I am trying to scrape data off a website. Scrapy on its own didn't work (I get HTTP 403), which led me to believe there are some UI-based countermeasures (e.g. checking for resolution).
Then I tried Selenium; a very basic script clicking its way through the website works just fine. Here's the relevant excerpt of what works:
driver.get(start_url)
try:
    link_next = driver.wait.until(EC.presence_of_element_located(
        (By.XPATH, '//a[contains(.,"Next")]')))
    link_next.click()
Now, in order to store the data, I'm still going to need Scrapy. So I wrote a script combining Scrapy and Selenium.
class MyClass(CrawlSpider):
    ...
    start_urls = [
        "domainiwanttocrawl.com?page=1",
    ]

    def __init__(self):
        self.driver = webdriver.Firefox()
        self.driver.wait = WebDriverWait(self.driver, 2)

    def parse(self, response):
        self.driver.get(response.url)

        while True:
            try:
                link_next = self.driver.wait.until(EC.presence_of_element_located(
                    (By.XPATH, '//a[contains(.,"Next")]')))
                self.driver.wait = WebDriverWait(self.driver, 2)
                link_next.click()

                item = MyItem()
                item['source_url'] = response.url
                item['myitem'] = ...
                return item
            except:
                break

        self.driver.close()
But this will also just result in HTTP 403. If I add something like self.driver.get(url) to the __init__ method, that will work, but nothing beyond that.
So in essence: the Selenium get function continues to work, whereas whatever Scrapy does under the hood with what it finds in start_urls gets blocked. But I don't know how to "kickstart" the crawling without the start_urls. It seems that somehow Scrapy and Selenium aren't actually integrated yet.
Any idea why and what I can do?
Scrapy is a pretty awesome scraping framework; you get a ton of stuff for free. But if it is getting 403s straight out of the gate, then it's basically completely incapacitated.
Selenium doesn't hit the 403 and you get a normal response. That's great, but not because Selenium is the answer; Scrapy is still dead in the water, and it's the workhorse here.
The fact that Selenium works means you can most likely get Scrapy working with a few simple measures. Exactly what it will take is not clear (there isn't enough detail in your question), but the link below is a great place to start.
Scrapy docs - Avoid getting banned
Putting some time into figuring out how to get Scrapy past the 403 is the route I recommend. Selenium is great and all, but Scrapy is the juggernaut when it comes to web-scraping. With any luck it won't take much.
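To make that concrete, a first pass usually pairs a realistic user agent with some politeness settings along the lines of that docs page; a minimal settings.py sketch with illustrative values (not taken from the question):

# settings.py -- illustrative values only
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"
DOWNLOAD_DELAY = 2                  # slow the crawl down
CONCURRENT_REQUESTS_PER_DOMAIN = 2  # keep per-domain concurrency low
COOKIES_ENABLED = False             # some sites use cookies to spot crawler behaviour
AUTOTHROTTLE_ENABLED = True         # back off automatically when the site slows down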
Here is a util that might help: agents.py. It can be used to get a random user agent from a list of popular user agents (circa 2014).
>>> for _ in range(5):
...     print(agents.get_agent())
...
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D257 Safari/9537.53
Mozilla/5.0 (Windows NT 6.3; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0
Below is a basic way to integrate get_agent with Scrapy. (It's not tested, but should point you in the right direction).
import scrapy
from scrapy.http import Request

from agents import get_agent

EXAMPLE_URL = 'http://www.example.com'


def get_request(url):
    headers = {
        'User-Agent': get_agent(),
        'Referer': 'https://www.google.com/'
    }
    return Request(url, headers=headers)


class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        yield get_request(EXAMPLE_URL)
Edit
Regarding user agents, it looks like this might achieve the same thing a bit more easily: scrapy-fake-useragent

Scrapy redirects to homepage for some urls

I am new to the Scrapy framework and am currently using it to extract articles from multiple 'Health & Wellness' websites. For some of the requests, Scrapy redirects to the homepage (this behavior is not observed in a browser). Below is an example:
Command:
scrapy shell "http://www.bornfitness.com/blog/page/10/"
Result:
2015-06-19 21:32:15+0530 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-06-19 21:32:15+0530 [default] INFO: Spider opened
2015-06-19 21:32:15+0530 [default] DEBUG: Redirecting (301) to <GET http://www.bornfitness.com/> from <GET http://www.bornfitness.com/blog/page/10/>
2015-06-19 21:32:16+0530 [default] DEBUG: Crawled (200) <GET http://www.bornfitness.com/> (referer: None)
Note that the page number in the URL (10) is a two-digit number. I don't see this issue with URLs that have a single-digit page number (8, for example).
Result:
2015-06-19 21:43:15+0530 [default] INFO: Spider opened
2015-06-19 21:43:16+0530 [default] DEBUG: Crawled (200) <GET http://www.bornfitness.com/blog/page/8/> (referer: None)
When you have trouble replicating browser behavior with Scrapy, you generally want to look at what is being communicated differently when your browser talks to the website compared with when your spider does. Remember that a website is (almost always) designed to interact with web browsers, not to be nice to web crawlers.
For your situation, if you look at the headers being sent with your scrapy request, you should see something like:
In [1]: request.headers
Out[1]:
{'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Encoding': 'gzip,deflate',
'Accept-Language': 'en',
'User-Agent': 'Scrapy/0.24.6 (+http://scrapy.org)'}
If you examine the headers sent by a request for the same page by your web browser, you might see something like:
Request Headers
GET /blog/page/10/ HTTP/1.1
Host: www.bornfitness.com
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36
DNT: 1
Referer: http://www.bornfitness.com/blog/page/11/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Cookie: fealty_segment_registeronce=1; ... ... ...
Try changing the User-Agent in your request. This should allow you to get around the redirect.
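For example, in the scrapy shell that could look like the command below (the user agent string is simply the desktop Chrome one from the browser headers above; any recent browser user agent should do):
scrapy shell -s USER_AGENT="Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" "http://www.bornfitness.com/blog/page/10/"
In a spider you can set USER_AGENT in settings.py instead, or pass headers={"User-Agent": ...} on each Request, as shown in the earlier answers in this thread.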