I'm a newbie, stuck at the beginning of a big journey.
My problem is:
I have a site where I try to input information in Cyrillic.
When I input the information manually, everything is fine.
Even when I input the information through Selenium with Chrome in English, everything is fine.
But with Cyrillic through Selenium, nothing happens.
The option '--lang=ru' did not help.
A new Chrome profile with the language set in its preferences changed nothing either.
PS: I tested another site through Selenium and Chrome, and Cyrillic works fine there.
Please help me.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

opt = webdriver.ChromeOptions()
opt.add_argument(r"--user-data-dir=...")
opt.add_argument(r"--profile-directory=...")
chrome_locale = 'ru'
opt.add_argument("--lang={}".format(chrome_locale))
opt.add_argument("user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.5249.119 Safari/537.36")
opt.add_experimental_option("excludeSwitches", ["enable-logging"])
opt.add_extension(EXTENSION_PATH)  # path to a .crx extension
DRIVER = webdriver.Chrome(service=serv, options=opt)  # serv is a chromedriver Service
DRIVER.implicitly_wait(0.5)

url = 'my_site'
DRIVER.get(url)
# authentication
##### input information
print('# searching for DEBTS')
_input = DRIVER.find_element(By.XPATH, '...')
_input.click()
_input.send_keys('иванов')  # the Cyrillic text that never appears
time.sleep(5)
_input.send_keys(Keys.ENTER)
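For reference, this is roughly how the language can be forced from ChromeOptions; a minimal sketch only, assuming Chrome's standard intl.accept_languages preference (the URL is just a placeholder for a quick navigator.language check), not the exact profile setup used above.

# Sketch: forcing the browser language to Russian from ChromeOptions.
# "--lang" and the intl.accept_languages pref are standard Chrome settings;
# this is an illustration, not the exact profile configuration used above.
from selenium import webdriver

opt = webdriver.ChromeOptions()
opt.add_argument("--lang=ru")
opt.add_experimental_option("prefs", {"intl.accept_languages": "ru,ru-RU"})

driver = webdriver.Chrome(options=opt)
driver.get("https://example.com/")  # placeholder URL for a quick check
print(driver.execute_script("return navigator.language"))
driver.quit()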
Environment:
Chrome Driver 92.0.4515
I found that selenium.select_frame takes nearly 3 minutes to switch from the main window to the frame.
The issue only occurs with headless Chrome; normal Chrome still works fine.
Any solutions would be highly appreciated! Thank you!
My webdriver's arguments:
options.add_argument(f"--window-size={width},{height}")
options.add_argument("--headless")
options.add_argument("--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36 Edg/84.0.522.59")
options.add_argument("--no-sandbox")
options.add_argument("--allow-running-insecure-content")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--disable-extensions")
options.add_argument("--start-maximized")
I know how to use a random (fake) user agent in Scrapy, but after I run Scrapy I see only one random user agent in the terminal. So I guessed that settings.py is only evaluated once when Scrapy starts. If Scrapy really works like this and sends 1000 requests to some web page to collect 1000 items, it will just send the same user agent every time. Surely that makes it easy to get banned, I think.
Can you tell me how I can send a random user agent every time Scrapy sends a request to a website?
I used this library in my Scrapy project and set a Faker-generated value as the user agent in settings.py:
https://pypi.org/project/Faker/
from faker import Faker
fake = Faker()
Faker.seed(fake.random_number())
fake_user_agent = fake.chrome()
USER_AGENT = fake_user_agent
This is what I wrote in settings.py. Can it work well?
If you are setting USER_AGENT in your settings.py like in your question then you will just get a single (random) user agent for your entire crawl.
You have a few options if you want to set a fake user agent for each request.
Option 1: Explicitly set User-Agent per request
This approach involves setting the user-agent directly in the headers of your Request. In your spider code you can import Faker as you do above, but then call e.g. fake.chrome() for every Request. For example:
# At the top of your file
from faker import Faker
from scrapy import Request

# This can be a global or class variable
fake = Faker()
...

# When you make a Request
yield Request(url, headers={"User-Agent": fake.chrome()})
Option 2: Write a middleware to do this automatically
I won't go into this, because you might as well use one that already exists (see Option 3).
Option 3: Use an existing middleware to do this automatically (such as scrapy-fake-useragent)
If you have lots of requests in your code, Option 1 isn't so nice, so you can use a middleware to do this for you. Once you've installed scrapy-fake-useragent, you can set it up in your settings file as described in its documentation:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
    'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
}

FAKEUSERAGENT_PROVIDERS = [
    'scrapy_fake_useragent.providers.FakeUserAgentProvider',
    'scrapy_fake_useragent.providers.FakerProvider',
    'scrapy_fake_useragent.providers.FixedUserAgentProvider',
]
Using this you'll get a new user-agent for each Request, and if a Request fails you'll also get a new random user-agent. One of the key parts of the setup is FAKEUSERAGENT_PROVIDERS. This tells the middleware where to get the user-agent from. The providers are tried in the order they are defined, so the second will be tried if the first one fails for some reason (that is, if getting the user-agent fails, not if the Request fails). Note that if you want to use Faker as the primary provider, you should put it first in the list:
FAKEUSERAGENT_PROVIDERS = [
    'scrapy_fake_useragent.providers.FakerProvider',
    'scrapy_fake_useragent.providers.FakeUserAgentProvider',
    'scrapy_fake_useragent.providers.FixedUserAgentProvider',
]
There are other configuration options (such as using a random Chrome-like user-agent), listed in the scrapy-fake-useragent docs.
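For example, to restrict the generated values to Chrome-like user agents, the README documents settings along these lines (verify the exact names against the version you install):

# In settings.py -- restrict generated user agents to Chrome-like ones.
# Setting names are taken from the scrapy-fake-useragent README; double-check
# them against the installed version before relying on them.
FAKE_USERAGENT_RANDOM_UA_TYPE = "chrome"   # used by FakeUserAgentProvider
FAKER_RANDOM_UA_TYPE = "chrome"            # used by FakerProvider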
Example spider
Here is an example spider. For convenience I set the settings inside the spider, but you can put these into your settings.py file.
# fake_user_agents.py
from scrapy import Spider


class FakesSpider(Spider):
    name = "fakes"
    start_urls = ["http://quotes.toscrape.com/"]

    custom_settings = dict(
        DOWNLOADER_MIDDLEWARES={
            "scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
            "scrapy.downloadermiddlewares.retry.RetryMiddleware": None,
            "scrapy_fake_useragent.middleware.RandomUserAgentMiddleware": 400,
            "scrapy_fake_useragent.middleware.RetryUserAgentMiddleware": 401,
        },
        FAKEUSERAGENT_PROVIDERS=[
            "scrapy_fake_useragent.providers.FakerProvider",
            "scrapy_fake_useragent.providers.FakeUserAgentProvider",
            "scrapy_fake_useragent.providers.FixedUserAgentProvider",
        ],
    )

    def parse(self, response):
        # Print out the user-agent of the request to check they are random
        print(response.request.headers.get("User-Agent"))

        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
Then if I run this with scrapy runspider fake_user_agents.py --nolog, the output is:
b'Mozilla/5.0 (Macintosh; PPC Mac OS X 10 11_0) AppleWebKit/533.1 (KHTML, like Gecko) Chrome/59.0.811.0 Safari/533.1'
b'Opera/8.18.(Windows NT 6.2; tt-RU) Presto/2.9.169 Version/11.00'
b'Opera/8.40.(X11; Linux i686; ka-GE) Presto/2.9.176 Version/11.00'
b'Opera/9.42.(X11; Linux x86_64; sw-KE) Presto/2.9.180 Version/12.00'
b'Mozilla/5.0 (Macintosh; PPC Mac OS X 10 5_1 rv:6.0; cy-GB) AppleWebKit/533.45.2 (KHTML, like Gecko) Version/5.0.3 Safari/533.45.2'
b'Opera/8.17.(X11; Linux x86_64; crh-UA) Presto/2.9.161 Version/11.00'
b'Mozilla/5.0 (compatible; MSIE 5.0; Windows NT 5.1; Trident/3.1)'
b'Mozilla/5.0 (Android 3.1; Mobile; rv:55.0) Gecko/55.0 Firefox/55.0'
b'Mozilla/5.0 (compatible; MSIE 9.0; Windows CE; Trident/5.0)'
b'Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10 11_9; rv:1.9.4.20) Gecko/2019-07-26 10:00:35 Firefox/9.0'
I am trying to scrape one web page from Amazon with the help of Scrapy 2.4.1 over the shell. Without any prior scraping, Amazon instantly asks for captcha entries.
I am setting a different user agent as the only countermeasure, and I have never scraped the page before:
scrapy shell -s USER_AGENT="Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36"
Get one page:
>>> fetch('https://www.amazon.de/Eastpak-Provider-Rucksack-Noir-Black/dp/B0815FZ3C6/')
>>> view(response)
Results in a captcha question.
I also tried it with headers:
headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0", "Accept-Encoding":"gzip, deflate", "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "DNT":"1","Connection":"close", "Upgrade-Insecure-Requests":"1"}
>>> from scrapy import Request
>>> req = Request("https://www.amazon.de/Eastpak-Provider-Rucksack-Noir-Black/dp/B0815FZ3C6/", headers=headers)
>>> fetch(req)
This also results in a captcha question, even though the main page can be scraped in this case.
How does Amazon detect that this is a bot, and how can I prevent that?
Description:
I have upgraded the docker-selenium version to 3.141.59-zinc (from 3.141.59-europium), and the acceptance test started failing because header info (set through the proxy server) is not found on the server side. If I change the image from zinc back to europium, everything works fine.
Log trace with 3.141.59-europium:
Remote address of request printed at server side: 127.0.0.1
Headers: {accept-language=en-US,en;q=0.9, host=localhost:39868, upgrade-insecure-requests=1, user=123456789, accept-encoding=gzip, deflate, br, user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36,
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8, via=1.1 browsermobproxy}
Log trace with 3.141.59-zinc :
Remote address of request printed at server side: 0:0:0:0:0:0:0:1
Headers: {sec-fetch-mode=navigate, sec-fetch-site=none, accept-language=en-US,en;q=0.9, host=localhost:42365, upgrade-insecure-requests=1, connection=keep-alive, sec-fetch-user=?1, accept-encoding=gzip, deflate, br, user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36, accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9}
To Reproduce
1. Create a Proxy object with host and port.
2. Set the proxy in the webdriver capabilities:
   DesiredCapabilities cap = DesiredCapabilities.chrome();
   cap.setCapability(CapabilityType.PROXY, proxy);
3. Set the proxy header:
   proxyServer.addHeader("user", "123456789");
4. Access the application:
   driver.get("http://localhost:/welcome");
5. Check the proxy header "user"; it should be 123456789.
Expected behaviour
I am setting a header with user=123456789, which is not getting passed when using webdriver 3.141.59-zinc. If I manually call the URL using URLConnection with the proxy, it works (so there is no issue with the proxy server).
Also, if I use the IP address instead of localhost, it works fine (the proxy header is available in the request at the server). So I guess the new version of webdriver 3.141.59-zinc is ignoring the proxy for localhost. I also tried setting noProxy to null/"" but it did not work.
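For illustration, the noProxy experiment expressed with the Selenium Python bindings looks roughly like this (a sketch only: the proxy host/port, Grid URL and application URL are placeholders, and it mirrors rather than reproduces the Java setup above):

# Sketch of the proxy + noProxy setup using the Selenium Python bindings.
# Host/port, the Grid URL and the application URL are placeholders.
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType

proxy = Proxy()
proxy.proxy_type = ProxyType.MANUAL
proxy.http_proxy = "localhost:8080"   # BrowserMob proxy address (placeholder)
proxy.ssl_proxy = "localhost:8080"
proxy.no_proxy = ""                   # the noProxy value being experimented with

caps = webdriver.DesiredCapabilities.CHROME.copy()
proxy.add_to_capabilities(caps)

driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # standalone-chrome container (placeholder)
    desired_capabilities=caps,
)
driver.get("http://localhost:8000/welcome")  # placeholder application URL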
Environment
OS: Oracle Linux Server release 7.5
Docker-Selenium image version: 3.141.59-zinc
Docker version: 17.06.2-ol
Note: Using standalone chrome in headless mode
I am trying to scrape data off a website. Scrapy on its own didn't work (I get HTTP 403), which led me to believe there are some UI-based countermeasures (e.g. checking for resolution).
Then I tried Selenium; a very basic script clicking its way through the website works just fine. Here's the relevant excerpt of what works:
driver.get(start_url)
try:
    link_next = driver.wait.until(EC.presence_of_element_located(
        (By.XPATH, '//a[contains(.,"Next")]')))
    link_next.click()
Now, in order to store the data, I'm still going to need Scrapy. So I wrote a script combining Scrapy and Selenium.
from scrapy.spiders import CrawlSpider
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class MyClass(CrawlSpider):
    ...
    start_urls = [
        "domainiwanttocrawl.com?page=1",
    ]

    def __init__(self):
        self.driver = webdriver.Firefox()
        self.driver.wait = WebDriverWait(self.driver, 2)

    def parse(self, response):
        self.driver.get(response.url)

        while True:
            try:
                link_next = self.driver.wait.until(EC.presence_of_element_located((By.XPATH, '//a[contains(.,"Next")]')))
                self.driver.wait = WebDriverWait(self.driver, 2)
                link_next.click()

                item = MyItem()
                item['source_url'] = response.url
                item['myitem'] = ...
                return item
            except:
                break

        self.driver.close()
But this will also just result in HTTP 403. If I add something like self.driver.get(url) to the __init__ method, that will work, but nothing beyond that.
So in essence: the Selenium get function continues to work, whereas whatever Scrapy does under the hood with what it finds in start_urls gets blocked. But I don't know how to "kickstart" the crawling without the start_urls. It seems that somehow Scrapy and Selenium aren't actually integrated yet.
Any idea why and what I can do?
Scrapy is a pretty awesome scraping framework; you get a ton of stuff for free. But if it is getting 403s straight out of the gate, then it's basically completely incapacitated.
Selenium doesn't hit the 403, and you get a normal response. That's great, but not because Selenium is the answer; Scrapy is still dead in the water, and it's the workhorse here.
The fact that Selenium works means you can most likely get Scrapy working with a few simple measures. Exactly what it will take is not clear (there isn't enough detail in your question), but the link below is a great place to start.
Scrapy docs - Avoid getting banned
Putting some time into figuring out how to get Scrapy past the 403 is the route I recommend. Selenium is great and all, but Scrapy is the juggernaut when it comes to web-scraping. With any luck it won't take much.
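To give a flavour of what those measures look like, here are some of the knobs that docs page talks about (all standard Scrapy settings; the values are only illustrative and need tuning for the target site):

# settings.py -- illustrative values for the usual "avoid getting banned" knobs.
# These are standard Scrapy settings; tune the values for the site in question.
USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
)
COOKIES_ENABLED = False            # some sites track crawlers via cookies
DOWNLOAD_DELAY = 2                 # be polite; combine with AutoThrottle
CONCURRENT_REQUESTS_PER_DOMAIN = 1
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10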
Here is a util that might help: agents.py. It can be used to get a random user agent from a list of popular user agents (circa 2014).
>>> for _ in range(5):
... print agents.get_agent()
...
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D257 Safari/9537.53
Mozilla/5.0 (Windows NT 6.3; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0
Below is a basic way to integrate get_agent with Scrapy. (It's not tested, but should point you in the right direction).
import scrapy
from scrapy.http import Request
from agents import get_agent

EXAMPLE_URL = 'http://www.example.com'


def get_request(url):
    headers = {
        'User-Agent': get_agent(),
        'Referer': 'https://www.google.com/'
    }
    return Request(url, headers=headers)


class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        yield get_request(EXAMPLE_URL)
Edit
Regarding user agents, it looks like this might achieve the same thing a bit more easily: scrapy-fake-useragent