Set headers - Capybara with Mechanize or Selenium

In my Cucumber features, I need to add a key/value pair to the HTTP headers when I request a page with Capybara using the Mechanize driver (or perhaps the Selenium driver).
I'm using Capybara 1.1.1, Mechanize 2.0.1, and Selenium 2.5.0.
But how?
Here are my step definitions:
When /^set some headers$/ do
  # set some headers here
  visit('/url')
end

Then /^some result$/ do
  # check page responds to header
end
Many thanks,
Rim

If you're using Mechanize, you should be able to set headers on the request like this:
When /^set some headers$/ do
  # set some headers here
  page.driver.agent.request_headers = {"X-Header" => "value"}
  visit('/url')
end
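As for the Selenium driver: WebDriver itself has no API for setting request headers, so with Selenium you are left with workarounds such as a header-modifying browser extension or an intercepting proxy; both approaches are shown in the answers further down.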

Related

How to continue request session with cookies in Selenium?

Is it possible to continue a request session in Selenium with all its cookies? I have seen many people doing it the other way around, but I have no clue how to do it properly the way I want.
def open_selenium_session(self):
    # get or set cookies
    driver = get_chromedriver(self.proxy, use_proxy=True, user_agent=True)
    driver.get("https://www.instagram.com")
    cookies = driver.get_cookies()
    for cookie in cookies:
        self.session.cookies.set(cookie['name'], cookie['value'])
In case you have previously stored the cookies from an active session using pickle:
import pickle
pickle.dump(driver.get_cookies(), open("cookies.pkl", "wb"))
You can always set them back as follows:
# loading the stored cookies
cookies = pickle.load(open("cookies.pkl", "rb"))
for cookie in cookies:
    # adding the cookies to the session through the webdriver instance
    driver.add_cookie(cookie)
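One caveat worth spelling out, since it is exactly what the discussions referenced below are about: add_cookie() only works once the browser is already on the matching domain. A minimal sketch of restoring the session, assuming the cookies were saved from instagram.com as above:
import pickle
from selenium import webdriver

driver = webdriver.Chrome()
# navigate to the cookie's domain first; adding a cookie for a domain the
# browser is not on raises InvalidCookieDomainException
driver.get("https://www.instagram.com")

for cookie in pickle.load(open("cookies.pkl", "rb")):
    driver.add_cookie(cookie)

driver.refresh()  # reload so the restored cookies take effect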
Reference
You can find a couple of detailed discussions in:
org.openqa.selenium.InvalidCookieDomainException: Document is cookie-averse using Selenium and WebDriver
selenium.common.exceptions.InvalidCookieDomainException: Message: invalid cookie domain while executing tests in Django with Selenium

Use the JMeter WebDriver Sampler (Selenium plugin) with the HTTP Header Manager and see results in Dynatrace

I'd like to know how I can use the HTTP Header Manager in JMeter if I use the Selenium WebDriver Sampler.
I know that the standard HTTP Header Manager tool exists in JMeter, but it only applies when I use HTTP Request samplers in my test. In this case, I use only the WebDriver Sampler with Java 1.8. The goal is to see in Dynatrace the tags that I send from JMeter. Is it possible to do that? And if the answer is positive, how can I do it? Thanks for your help!
The WebDriver Sampler doesn't respect the HTTP Header Manager.
WebDriver itself doesn't support working with HTTP headers, and the feature is unlikely to ever be implemented.
So the options are:
Use an extension like ModHeader, but in this case you will have to switch from the WebDriver Sampler to the JSR223 Sampler. Example code:
def options = new org.openqa.selenium.chrome.ChromeOptions()
options.addExtensions(new File('/path/to/modheaders.crx'))
def capabilities = new org.openqa.selenium.remote.DesiredCapabilities()
capabilities.setCapability(org.openqa.selenium.chrome.ChromeOptions.CAPABILITY, options)
def driver = new org.openqa.selenium.chrome.ChromeDriver(capabilities)
driver.get('http://example.com')
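Note that '/path/to/modheaders.crx' is a placeholder for wherever the packed ModHeader extension lives, and the headers to inject are configured in ModHeader itself rather than through WebDriver.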
Use a proxy like BrowserMob as the proxy for the WebDriver and configure it to add headers to each intercepted request. Example initialization code (you can put it into the aforementioned JSR223 Sampler, somewhere in a setUp Thread Group):
def proxy = new net.lightbody.bmp.BrowserMobProxyServer()
def proxyPort = 8080
proxy.setTrustAllServers(true)
proxy.addRequestFilter((request, contents, info) -> {
    request.headers().add('your header name', 'your header value')
    return null
})
proxy.start(proxyPort)
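The WebDriver instance then has to be configured to use localhost and the chosen proxyPort as its HTTP proxy, so that its traffic actually passes through the request filter.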

How to write a custom Downloader Middleware for Selenium and Scrapy?

I am having an issue communicating between Selenium and Scrapy objects.
I am using Selenium to log in to some site; once I get that response, I want to use Scrapy's functionality to parse and process it. Can someone please help me write a downloader middleware so that every request goes through the Selenium webdriver and the response is passed back to Scrapy?
Thank you!
It's pretty straightforward: create a middleware with a webdriver and use process_request to intercept the request, discard it, and use the URL it had to pass it to your Selenium webdriver:
from scrapy.http import HtmlResponse
from selenium import webdriver

class DownloaderMiddleware(object):
    def __init__(self):
        self.driver = webdriver.Chrome()  # your chosen driver

    def process_request(self, request, spider):
        # only process tagged requests, or delete this check if you want all of them
        if not request.meta.get('selenium'):
            return
        self.driver.get(request.url)
        body = self.driver.page_source
        # encoding is required because page_source is a str, not bytes
        response = HtmlResponse(url=self.driver.current_url, body=body, encoding='utf-8')
        return response
The downside of this is that you have to get rid of the concurrency in your spider, since the Selenium webdriver can only handle one URL at a time. For that, see the settings documentation page.
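For completeness, wiring the middleware in might look like the following sketch (the module path myproject.middlewares and the priority 543 are assumptions, not from the original answer):
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.DownloaderMiddleware': 543,
}
CONCURRENT_REQUESTS = 1  # selenium drives a single browser, so serialize requests

# in the spider, tag the requests that should go through selenium
import scrapy

class MySpider(scrapy.Spider):
    name = 'my_spider'

    def start_requests(self):
        yield scrapy.Request('https://example.com/login',
                             meta={'selenium': True},
                             callback=self.parse)

    def parse(self, response):
        pass  # parse and process the selenium-rendered response as usual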

AutoAuth Firefox plugin didn't work

Good day to all.
I use Selenium WebDriver to run automated tests, but the development site uses HTTP Basic authentication. I found the AutoAuth add-on for Firefox: it saves the login/password so you don't have to type the credentials each time.
But the plugin doesn't save the credentials. I reinstalled the add-on and Firefox and deleted the cookies, but nothing helped. On the same machine, under another user, the plugin works successfully. Has anybody had this problem and resolved it?
I have already written to the author of the add-on.
The https://login:passwd@host approach doesn't help either...
Do you mean the plugin is not working when invoked with WebDriver? The simple way is to create a profile and pass that profile to WebDriver.
Here is the way: create a Firefox profile, install that add-on in it, and save the credentials.
Then call the saved profile in WebDriver:
ProfilesIni allProfiles = new ProfilesIni();
FirefoxProfile profile = allProfiles.getProfile("selenium");
WebDriver driver = new FirefoxDriver(profile);
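(The "selenium" profile has to exist beforehand: create it with Firefox's profile manager, e.g. by running firefox -P, and install the add-on and credentials in it.)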
Thank You,
Murali
If it's HTTP Basic authentication, then you can set the credentials in the URL. Note that this requires setting the "network.http.phishy-userpass-length" preference to enable it.
Here is a working example with Selenium / Firefox / Python:
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference("network.http.phishy-userpass-length", 255)
driver = webdriver.Firefox(profile)
driver.get("http://admin:admin@the-internet.herokuapp.com/basic_auth")
The approach I've used very successfully is to set up an embedded BrowserMob proxy server (in Java code) and register a RequestInterceptor to intercept all incoming requests (that match the host / URL pattern in question).
When you have a request that would otherwise need Basic auth, add an Authorization HTTP header with the required credentials: 'Basic ' plus the Base64-encoded 'user:pass' string. So for 'foo:bar' you'd set the value Basic Zm9vOmJhcg==.
Start the server, set it as a web proxy for Selenium traffic, and when a request is made that requires authentication, the proxy will add the header, the browser will see it, verify the credentials, and not need to pop up the dialog. You won't need to deal with the dialog at all. (A sketch appears after the list of benefits below.)
Other benefits:
It's a pure HTTP solution, it works the same across all browsers and operating systems.
No need for any hard-to-automate add-ons or plugins, or any manual intervention.
No need for custom profiles, custom preferences etc.
You control credentials in your test code, and don't store them elsewhere.
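The answer above describes doing this in Java; as a rough equivalent, here is a sketch using the browsermob-proxy Python bindings and the Selenium 3-era Firefox API (the binary path is a placeholder, and proxy.headers() adds the header to every request rather than a filtered subset):
import base64

from browsermobproxy import Server
from selenium import webdriver

server = Server("/path/to/browsermob-proxy")  # placeholder path to the BrowserMob binary
server.start()
proxy = server.create_proxy()

# 'Basic ' + base64("user:pass"); for foo:bar this is Zm9vOmJhcg==
token = base64.b64encode(b"foo:bar").decode("ascii")
proxy.headers({"Authorization": "Basic " + token})

profile = webdriver.FirefoxProfile()
profile.set_proxy(proxy.selenium_proxy())
driver = webdriver.Firefox(profile)

# the proxy injects the header, so no auth dialog should appear
driver.get("http://the-internet.herokuapp.com/basic_auth")

driver.quit()
server.stop()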

Stubbing request.host in Capybara 2.0 / Cucumber

My application needs to act differently based on certain request.host values. I test this behavior with Cucumber. Before Capybara 2.0, I was able to mimic the right request.host value by executing this Cucumber step:
Given /^the url starts with "([^"]*)"$/ do |url|
  Capybara.app_host = "http://#{url}"
end
But now with Capybara 2.0.1, my browser actually navigates to the set URL, instead of staying on my test server and pretending to be from that URL.
So my question is: how do I correctly "stub request.host" in Capybara 2.0?
I managed to get through not by stubbing request.host but by setting both the Rails default host and Capybara's app_host inside the step:
Given /^the url starts with "([^"]*)"$/ do |host|
  default_url_options[:host] = host
  Capybara.app_host = "http://" + host
end
Hope that helps.