I use Cucumber with Ruby for browser automation testing, and I need to print the browser's cookies.
Manually, I can view the cookies in the browser like this:
1. Inspect element in the browser
2. Open the Application tab
3. Read the cookies there
How can I print the browser cookies in Cucumber/Capybara?
I have tried
puts Capybara.current_session.driver
but it prints this:
#<Capybara::Selenium::Driver:0x007fcbf52e2250>
Since feature tests (which is what Capybara was/is designed for) really shouldn't be dealing with cookies (test user-visible things, not implementation details), there is no Capybara cookie API. This means any access is going to be driver dependent. In your case you appear to be using Selenium, so it would be
page.driver.browser.manage.all_cookies
I tried this and it solved my problem:
Capybara.current_session.driver.browser.manage.cookie_named("browser_id")[:value]
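For the original goal of printing every cookie rather than a single one, the same Selenium API shown above can be iterated: page.driver.browser.manage.all_cookies returns an array of hashes, so each entry's [:name] and [:value] can be printed in a loop.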
Related
I need to automate a scenario where I verify that the connection to a URL is secure, without any SSL certificate issues.
I am using Selenium WebDriver with Java, and the browsers on which I need this are IE and Chrome. I cannot share a URL exhibiting the issue due to security restrictions.
Issue to be caught
I found an old question (Check if website has any ssl certificate warnings using Selenium webDriver), but it was answered only for Firefox, so it is of no use in my case.
Regarding browser (or rather browser-profile) settings, you can add an expectation for specific elements; see the attached image for an example in IE.
In Chrome those links cannot be inspected, so let's try something like this:
driver.get(urlWithSslIssue);
Thread.sleep(5000); // of course, better wait methods (explicit waits) are available

WebElement lnkAdvancedSettings = driver.findElement(By.partialLinkText("ADVANCED"));
lnkAdvancedSettings.click();
Thread.sleep(5000);

WebElement lnkContinue = driver.findElement(By.partialLinkText("Continue"));
// lnkContinue.click();
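If driving the warning page itself proves brittle, one alternative worth noting (outside Selenium entirely) is to attempt the TLS handshake directly and let the library's certificate verification surface the problem. A minimal Python sketch, with a hypothetical hostname:

import socket
import ssl

hostname = "example.com"  # hypothetical host under test

# A default context verifies the certificate chain and the hostname.
context = ssl.create_default_context()

try:
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("certificate OK, subject:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    print("certificate problem:", err.verify_message)

This catches expired, self-signed, and mismatched certificates the same way a browser does, without depending on the warning page's markup.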
What are the ways of getting the cookies from a website, other than the following?
Inspecting them in the browser
Using Selenium WebDriver's driver.get_cookies() command
You can also run JavaScript via Selenium and use document.cookie.
It returns the cookies as a single string:
key=value; key=value
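As a sketch of that approach in Python (the page URL is hypothetical):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com")  # hypothetical page

# document.cookie is a single string: "key=value; key=value; ..."
# Note: HttpOnly cookies are not visible here, unlike driver.get_cookies().
raw = driver.execute_script("return document.cookie;")

# Split the string into a dict for easier lookups and assertions.
cookies = dict(pair.split("=", 1) for pair in raw.split("; ") if pair)
print(cookies)

driver.quit()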
Using PHP Codeception and the Facebook WebDriver PHP wrapper, is it in general possible to get the environment variables of the actual page request made by PhantomJS or a real browser?
Maybe this is just my misunderstanding of the technology behind acceptance tests, but given that a testing framework like Codeception requests a page using PhantomJS or a real browser like Chrome or Firefox, I would expect to have access to e.g. the $_SERVER global variable. Unfortunately I cannot find any methods providing this in the Codeception WebDriver module or the Facebook PHP WebDriver wrapper.
Specifically, I have a page which is supposed to use SSL only, so a 301 redirect is expected to happen when visiting the page.
I need an acceptance test case in Codeception to check just that, and checking the $_SERVER['HTTPS'] global variable should do it.
First I tried to match the URL against 'https://', but the WebDriver wrapper method _getCurrentUrl() delivers only the URI part, without protocol and host.
Then I tried to read the $_SERVER variable inside a custom Helper action, but accessed directly it looks like it comes from the CLI environment, not from a browser request.
No, you can't access $_SERVER in acceptance tests, because $_SERVER lives on the server side and all you have is a client.
If you want to check the complete URL, you can use the getCurrentURL method of the WebDriver instance; it can be accessed in the same way as the _getCurrentUri method in your helper.
public function checkUrl()
{
    $url = $this->getModule('WebDriver')->webDriver->getCurrentURL();
    // do your checks here, e.g. assert that $url starts with "https://"
}
If you are already using the WebDriver module:
$currentUrl = $I->executeJS('return jQuery(location).attr("href");');
I'm testing a webpage using Selenium (either IDE or WebDriver). The webpage has a "search" function, basically just a GET call with params. The JavaScript also prints the JSON returned from the search call to the console, i.e. something like console.log(data), and I'm able to inspect the response data in the Firefox console.
My question is: is there any way I can capture this data from the Firefox console in Selenium (so that I can inspect it further and make assertions)? Writing a direct GET request (e.g. from Python) does not work, since the search URL is protected by a login page.
Thanks.
AFAIK Selenium doesn't provide any built-in API/method to work with the console.
You can redirect the console output to a file and read it from there.
Link: How to redirect Firefox console output to file.
It was also possible at one point using Firebug; not sure if that still works.
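If switching browsers is an option, Chrome's driver does expose console messages through WebDriver's logging interface; a minimal Python sketch (Selenium 4, hypothetical URL):

from selenium import webdriver

# Assumption: Chrome instead of Firefox; Firefox's driver does not
# expose console messages through this logging API.
options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})

driver = webdriver.Chrome(options=options)
driver.get("https://example.com/search?q=test")  # hypothetical search page

# Each entry is a dict with 'level', 'message' and 'timestamp';
# the console.log(data) output appears in 'message'.
for entry in driver.get_log("browser"):
    print(entry["message"])

driver.quit()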
We have a bunch of redirects in our Apache configuration. I would like to automate testing them with Selenium, which led me to some problems:
Call a URL, but assert on the redirected page
Check the URL of the browser after the redirect
Check the response header to determine the type of redirect (301, 302)
Maybe Selenium is not the best solution for this. Any other suggestions?
Selenium-RC has a traffic-capture mode, enabled with selenium.start("captureNetworkTraffic=true");, which will let you capture HTTP responses, including redirects and error codes.
Here is an excellent resource on how to capture and process/format this information once retrieved. It uses Python, but it should give you a start.
For checking the URL of the browser, you could use selenium.getLocation();
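Since the question allows for tools other than Selenium: the redirect type (301 vs. 302) is easiest to assert with a plain HTTP client that does not follow redirects. A minimal sketch with Python's requests library (URL is hypothetical):

import requests

# Hypothetical URL covered by one of the Apache redirect rules.
response = requests.get("http://example.com/old-page", allow_redirects=False)

# 301 = permanent redirect, 302 = temporary; the Location header
# holds the target the browser would have been sent to.
assert response.status_code in (301, 302)
print(response.status_code, "->", response.headers["Location"])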
In Python's WebDriver implementation:
driver = webdriver.Firefox()
print(driver.current_url)
http://selenium.googlecode.com/svn/trunk/docs/api/py/webdriver_remote/selenium.webdriver.remote.webdriver.html#module-selenium.webdriver.remote.webdriver