How to set a cookie in Geb / Selenium with PhantomJS

How do you set a cookie in Geb? I'm running into the following error with the example below:
org.openqa.selenium.InvalidCookieDomainException: {"errorMessage":"Can only set Cookies for the current domain" ....
I've also tried explicitly setting the cookie domain using the Cookie builder, but that only causes another exception: org.openqa.selenium.UnableToSetCookieException: {"errorMessage":"Unable to set Cookie"}
Note that I used to have a baseUrl in the GebConfig.groovy file, but I have removed it as well. Other than the PhantomJS driver config, there are no settings in the config file.
I'm on OS X and using the latest PhantomJS version (1.3.0 JAR, and 2.1.1 driver for OS X).
Note that the example DOES work with the Chrome WebDriver, for some reason.
import geb.spock.GebSpec
import org.openqa.selenium.Cookie

class SetCookieIT extends GebSpec {
    def "Cookie example"() {
        given:
        def options = driver.manage()

        when:
        go "https://www.wikipedia.org/"

        then:
        !options.getCookieNamed("my-geb-cookie")

        when:
        options.addCookie(new Cookie("my-geb-cookie", "foobar"))
        go "https://www.wikipedia.org/"

        then:
        title == "Wikipedia"
        options.getCookieNamed("my-geb-cookie").value == "foobar"
    }
}

Wikipedia is not spelt with an "ie" in the domain name, and "org.com" also looks very strange. Maybe next time you could provide an example which is actually executable and does something meaningful. :-7
For me this works nicely:
package de.scrum_master.stackoverflow

import geb.spock.GebReportingSpec
import org.openqa.selenium.Cookie

class SetCookieIT extends GebReportingSpec {
    def "Cookie example"() {
        given:
        def options = driver.manage()

        when:
        go "https://www.wikipedia.org/"

        then:
        !options.getCookieNamed("my-geb-cookie")

        when:
        options.addCookie(new Cookie("my-geb-cookie", "foobar"))
        go "https://www.wikipedia.org/"

        then:
        title == "Wikipedia"
        options.getCookieNamed("my-geb-cookie").value == "foobar"
    }
}
If you have any further problems, please update your question and provide an SSCCE reproducing the actual problem.
Update after the question was modified: The problem with PhantomJS is that it refuses to create cookies if you do not explicitly specify the domain. This works:
options.addCookie(new Cookie("my-geb-cookie", "foobar", ".wikipedia.org", "/", null))
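For anyone driving PhantomJS through Selenium's Python bindings instead of Geb, the same rule applies. A minimal sketch, assuming an older selenium release that still ships the PhantomJS driver and a phantomjs binary on the PATH (both assumptions, since PhantomJS support has since been removed from Selenium):

from selenium import webdriver

# Assumption: selenium < 4 with a phantomjs binary on the PATH.
driver = webdriver.PhantomJS()
driver.get("https://www.wikipedia.org/")

# PhantomJS refuses the cookie unless the domain is given explicitly.
driver.add_cookie({
    "name": "my-geb-cookie",
    "value": "foobar",
    "domain": ".wikipedia.org",
    "path": "/",
})

driver.get("https://www.wikipedia.org/")
assert driver.get_cookie("my-geb-cookie")["value"] == "foobar"
driver.quit()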

Related

chrome:headless (macOS) results in '1) AssertionError: expected 'about:blank' to include $target page'

I am using TestCafe in combination with gherkinTestcafe (steps) / cucumber.
I am also using environment variables so that I can run my tests in two different environments.
My code is as follows, although from debugging I don't believe this is strictly code related; it seems more related to:
chrome:headless
the environment
the version of Chrome / macOS
import Enviorments from "../../../../../../AEM_Engine/Enviorment/Enviorments";
import { Helper } from "../../../../../TestActions/Test_specific/Career_helper";
import { AddAuthCredentialsHook } from "../../../../../TestActions/BasicAuth";

const { Before, Given, Then } = require('cucumber');

let publisher = new Publish();
let aemEnv = new Enviorments();
let helper = new Helper;
let careersPage = '/career';

Before('@basicAuth', async testController => {
    const addAuthCredentialsHook = new AddAuthCredentialsHook('$someUserName', '$somePassword');
    await testController.addRequestHooks(addAuthCredentialsHook);
});

Before('@disableCookie', async testController => {
    await testController.addRequestHooks(publisher.mockCookieResponse);
});

Given('I am at Careers page', async testController => {
    await publisher.Navigate(testController, aemEnv.frontEndURL + careersPage);
    await publisher.verifyURL(testController, aemEnv.frontEndURL + careersPage);
});
.
.
.
When I wait for the script to run, I get
1) AssertionError: expected 'about:blank' to include $expectedPage
As I mentioned, I don't believe the problem is in the code. Even if I remove the step for verifying the current URL location, the test fails on the next step instead.
Tests pass on:
Chrome (with UI shell)
Other browsers (Firefox, Safari), headless or with UI shell
The second (staging) environment
When the tests are run and TestCafe starts, I get the following info:
Running tests in:
- HeadlessChrome 99.0.4844 / Mac OS X 10.15.7
Feature: Careers Page Available
(node:87344) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
I tried re-installing some packages, re-writing some of the steps, and adding flags to clear the cache, change the Chrome port, and similar, but nothing worked.
Any thoughts on what might be causing this and how to solve it?

How to fill JavaScript form using Python?

I want to use Python to fill this form.
I tried using Mechanize, but this is a Microsoft Forms form which uses JavaScript and has no form tag and no GET/POST URL. Maybe BeautifulSoup/Selenium can do this, but I do not have any experience in scraping JS forms. Can anyone help me out and suggest how to go about this?
Here's what I've tried; Mechanize is unable to recognize any form on the page:
import mechanize

def main():
    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.set_handle_refresh(False)
    br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
    response = br.open("https://forms.office.com/Pages/ResponsePage.aspx?id=8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u")
    for form in br.forms():
        print("Form name:", form.name)  # prints nothing
        print(form)  # prints nothing

if __name__ == '__main__':
    main()
Selenium works fine.
You'll need to install the components:
Install Selenium: pip install selenium
Make sure you download the correct chromedriver (or other driver) for your browser and OS versions and add it to your PATH.
Then this runs:
from selenium import webdriver
driver = webdriver.Chrome()
url = "https://forms.office.com/Pages/ResponsePage.aspx?id=8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u"
driver.get(url)
name = driver.find_element_by_xpath("//div[@class='question-title-box'][.//span[text()='NAME']]/following-sibling::*//input")
name.send_keys("hello, World")
setionSelection = "F"
section = driver.find_element_by_xpath("//div[@class='question-title-box'][.//span[text()='Section']]/following-sibling::*//input[@value='" + setionSelection + "']")
section.click()
date = driver.find_element_by_xpath("//input[contains(@placeholder, 'Please input date')]")
date.send_keys("01/12/2020")
submit = driver.find_element_by_xpath("//div[text()='Submit']")
submit.click()
The XPaths are a little long, but they're based on the question text, so they're potentially stable.
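As a side note, the find_element_by_xpath helpers were removed in Selenium 4, so with a current selenium package the same lookup goes through the By-based API. A minimal sketch using the NAME XPath from the answer above (the explicit wait is an addition, not part of the original answer):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://forms.office.com/Pages/ResponsePage.aspx?id=8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u")

# Wait for the dynamically rendered form before querying it.
name_xpath = "//div[@class='question-title-box'][.//span[text()='NAME']]/following-sibling::*//input"
name = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, name_xpath))
)
name.send_keys("hello, World")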
For an alternative approach: when you say there is no POST URL, did you check devtools? That exposes the destination of the form:
Request URL: https://forms.office.com/formapi/api/aebbf9f0-23da-49e3-98bf-32171abbc9bc/users/f70e502c-96b2-4239-aca3-764dea371071/forms('8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u')/responses
Request Method: POST
It also exposes the payload... This is the first submit:
{startDate: "2020-08-17T10:40:18.504Z", submitDate: "2020-08-17T10:40:18.507Z",…}
answers: "[{"questionId":"r8f09d63e6f6f42feb2f8f4f8ed3f9389","answer1":"Hello, World"},{"questionId":"r28fe12073dfa47399f8ce95ae679dccf","answer1":"G"},{"questionId":"r8f9e9fedcc2e410c80bfa1e0e3ef9750","answer1":"2020-08-28"}]"
startDate: "2020-08-17T10:40:18.504Z"
submitDate: "2020-08-17T10:40:18.507Z"
Those POST URL UUIDs/GUIDs and question IDs seem to be static for this form; they don't change each time I run the form. This is the second run:
{startDate: "2020-08-17T10:43:48.544Z", submitDate: "2020-08-17T10:43:48.546Z",…}
answers: "[{"questionId":"r8f09d63e6f6f42feb2f8f4f8ed3f9389","answer1":"test me"},{"questionId":"r28fe12073dfa47399f8ce95ae679dccf","answer1":"G"},{"questionId":"r8f9e9fedcc2e410c80bfa1e0e3ef9750","answer1":"2020-08-12"}]"
startDate: "2020-08-17T10:43:48.544Z"
submitDate: "2020-08-17T10:43:48.546Z"
Once you've captured this, you'll probably be able to do it through the API without a GUI.
... Just to make sure, I tried it and I get success...
import requests
url = "https://forms.office.com/formapi/api/aebbf9f0-23da-49e3-98bf-32171abbc9bc/users/f70e502c-96b2-4239-aca3-764dea371071/forms('8Pm7rtoj40mYvzIXGrvJvCxQDveyljlCrKN2Teo3EHFUQVNaWDlYRkhYR09JRTZWRFpKTTNIQU9HUC4u')/responses"
myobj = {"startDate":"2020-08-17T10:48:40.118Z","submitDate":"2020-08-17T10:48:40.121Z","answers":"[{\"questionId\":\"r8f09d63e6f6f42feb2f8f4f8ed3f9389\",\"answer1\":\"Hello again, World\"},{\"questionId\":\"r28fe12073dfa47399f8ce95ae679dccf\",\"answer1\":\"F\"},{\"questionId\":\"r8f9e9fedcc2e410c80bfa1e0e3ef9750\",\"answer1\":\"2020-08-26\"}]"}
x = requests.post(url, data = myobj)
My answers are just hard coded into the data object but it seems to work.
Remember to pip install requests if you don't already have it
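If you want to confirm programmatically that the direct POST was accepted, checking the response is enough. A small follow-up sketch, reusing url and myobj from the snippet above; treating any 2xx status as success is an assumption, since the exact code returned by the Forms endpoint isn't shown here:

import requests

# url and myobj exactly as defined in the snippet above.
x = requests.post(url, data=myobj)

print(x.status_code)   # assumed: a 2xx code means the submission was accepted
print(x.text[:200])    # start of the response body, useful for debugging
x.raise_for_status()   # raises if the endpoint answered with an error status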

Feature with tag still being run when configured not to

I have a main feature file where I have included a "setup" feature file that should add some test data. This setup feature file has an annotation that I have called @ignore. However, even after following the instructions in this Can't be enable to @ignore annotation for the features SO answer, I am still seeing the setup feature file being run outside of the main test feature.
Main feature file, unsubscribe_user.feature:
Feature: Unsubscribe User

Background:
  * def props = read('properties/user-properties.json')
  * url urlBase
  * configure headers = props.headers
  * def authoriZation = call read('classpath:basic-auth.js') { username: 'admin', password: 'admin' }
  * def testDataSetup = call read('classpath:com/meanwhileinhell/app/karate/feature/mockserver/testDataSetup.feature') { data1: #(props.data1), data2: #(props.data2) }

Scenario: Unsubscribe user
  ...
  ...

Scenario: Remove test data
  * def testDataTearDown = call read('classpath:com/meanwhileinhell/app/karate/feature/mockserver/testDataTearDown.feature') { data1: #(props.data1), data2: #(props.data2) }
  ...
testDataSetup.feature file
@ignore
Feature: Add data to REST Mock Server

Background:
  * url mockServerUrlBase

Scenario: Add data
  * print 'Adding test data'
  Given path 'mapping'
  And request { data1: '#(data1)', data2: '#(data2)' }
  When method post
  Then status 201
Now, in my Java runner class, I have added @KarateOptions(tags = "~@ignore").
import org.junit.runner.RunWith;

import com.intuit.karate.KarateOptions;
import com.intuit.karate.junit4.Karate;

import cucumber.api.CucumberOptions;

@RunWith(Karate.class)
@CucumberOptions(features = "classpath:com/meanwhileinhell/app/karate/feature/unsubscribe_user.feature")
@KarateOptions(tags = "~@ignore")
public class KarateTestUnSubscribeUserRunner {
}
However, I can still see my print statement in my setup feature being called, and two POSTs being performed. I have also tried running my suite with the following command-line options, but again, I still see the feature file run twice.
./gradlew clean test -Dkarate.env=local -Dkarate.options="--tags ~@ignore" --debug
Am I following this wrong somewhere? Is there something I can add to my karate-config.js file? I am using Karate version 0.9.0.
Annotations only work on the "top-level" feature, not on "called" features.
If your problem is that the features are being run even when not expected, you must be missing something, or some Java class is running without you knowing it. So please follow this process and we can fix it: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
EDIT: I think I got it: please don't mix CucumberOptions; in fact we deprecated it, so use only KarateOptions. Even that is not recommended from 0.9.5 onwards, and you should move to JUnit 5.
Read the docs: https://github.com/intuit/karate#karate-options

Groovy URL getText() returns a PasswordAuthentication instance

I am trying to download the content of a password-protected Gerrit URL in a Jenkins pipeline Groovy script. HTTPBuilder is not accessible, so I am using the URL class with an Authenticator:
// To avoid the pipeline bailing out, since PasswordAuthentication data is non-serializable
@NonCPS
def getToString(data) {
    data.toString()
}

def fetchCommit(host, project, version) {
    withCredentials([usernamePassword(credentialsId: 'my-credentials',
                                      usernameVariable: 'user',
                                      passwordVariable: 'PASSWORD')]) {
        proj = java.net.URLEncoder.encode(project, 'UTF-8')
        echo "Setting default authentication"
        Authenticator.default = {
            new PasswordAuthentication(env.user, env.PASSWORD as char[])
        } as Authenticator
        echo "https://${host}/a/projects/${proj}/commits/${version}"
        url = "https://${host}/a/projects/${proj}/commits/${version}".toURL()
        result = getToString(url.getText())
        echo "${result}"
    }
}
The result is a PasswordAuthentication instance, and not the expected data:
[Pipeline] echo
java.net.PasswordAuthentication#3938b0f1
I have been wrestling with this for a while. I have tried different ways to set up the authentication and read the data, but those mostly end up with an exception. Using eachLine() on the URL does not enter the closure at all. The job also exits far too quickly, giving the impression it does not even try to make a connection.
Refs:
https://kousenit.org/2012/06/07/password-authentication-using-groovy/

Setting referer in Selenium

I'm working with the Selenium remote driver to automate actions on a site. I can open the page I need directly by engineering the URL, as the site's URL schema is very consistent. This speeds up the script, as it does not have to work through several pages before it gets to the one it needs.
To make the automation seem organic, is there a way to set a referral page in Selenium?
If you're checking the referrer on the server, then using a proxy (as mentioned in other answers) will be the way to go.
However, if you need access to the referrer in JavaScript, using a proxy will not work. To set the JavaScript referrer I did the following:
Go to the referral website
Inject this JavaScript onto the page via the Selenium API: document.write('<script>window.location.href = "<my website>";</script>')
I'm using a Python wrapper around Selenium, so I cannot provide the function you need to inject the code in your language, but it should be easy to find; a sketch is shown below.
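For the Python case specifically, those two steps can be done with get() plus execute_script(). A minimal sketch; both URLs are placeholders:

from selenium import webdriver

driver = webdriver.Chrome()

# Step 1: load the referral page first (placeholder URL).
driver.get("https://referring-site.example/")

# Step 2: navigate away via JavaScript so the target page sees the referrer.
driver.execute_script('window.location.href = arguments[0];',
                      "https://my-website.example/")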
What you are looking for is referer spoofing.
Selenium does not have an inbuilt method to do this; however, it can be accomplished by using a proxy such as Fiddler.
Fiddler also provides an API-only version of the FiddlerCore component, and programmatic access to all of the proxy's settings and data, thus allowing you to modify the headers of the HTTP request.
Here is a solution in Python to do exactly that:
https://github.com/j-bennet/selenium-referer
I described the use case and the solution in the README. I think the GitHub repo won't go anywhere, but I'll quote the relevant pieces here just in case.
The solution uses libmproxy to implement a proxy server that only does one thing: add a Referer header. The header is specified as a command-line parameter when running the proxy. Code:
# -*- coding: utf-8 -*-
"""
Proxy server to add a specified Referer: header to the request.
"""
from optparse import OptionParser
from libmproxy import controller, proxy
from libmproxy.proxy.server import ProxyServer


class RefererMaster(controller.Master):
    """
    Adds a specified referer header to the request.
    """

    def __init__(self, server, referer):
        """
        Init the proxy master.
        :param server: ProxyServer
        :param referer: string
        """
        controller.Master.__init__(self, server)
        self.referer = referer

    def run(self):
        """
        Basic run method.
        """
        try:
            print('Running...')
            return controller.Master.run(self)
        except KeyboardInterrupt:
            self.shutdown()

    def handle_request(self, flow):
        """
        Adds a Referer header.
        """
        flow.request.headers['referer'] = [self.referer]
        flow.reply()

    def handle_response(self, flow):
        """
        Does not do anything extra.
        """
        flow.reply()


def start_proxy_server(port, referer):
    """
    Start proxy server and return an instance.
    :param port: int
    :param referer: string
    :return: RefererMaster
    """
    config = proxy.ProxyConfig(port=port)
    server = ProxyServer(config)
    m = RefererMaster(server, referer)
    m.run()


if __name__ == '__main__':
    parser = OptionParser()
    parser.add_option("-r", "--referer", dest="referer",
                      help="Referer URL.")
    parser.add_option("-p", "--port", dest="port", type="int",
                      help="Port number (int) to run the server on.")
    popts, pargs = parser.parse_args()
    start_proxy_server(popts.port, popts.referer)
Then, in the setUp() method of the test, the proxy server is started as an external process using pexpect, and stopped in tearDown(). The proxy() method returns proxy settings to configure the Firefox driver with:
# -*- coding: utf-8 -*-
import os
import sys

import pexpect
import unittest
from selenium.webdriver.common.proxy import Proxy, ProxyType

import utils


class ProxyBase(unittest.TestCase):
    """
    We have to use our own proxy server to set a Referer header, because Selenium does not
    allow to interfere with request headers.

    This is the base class. Change `proxy_referer` to set different referers.
    """

    base_url = 'http://www.facebook.com'
    proxy_server = None
    proxy_address = '127.0.0.1'
    proxy_port = 8888
    proxy_referer = None
    proxy_command = '{0} {1} --referer {2} --port {3}'

    def setUp(self):
        """
        Create the environment.
        """
        print('\nSetting up.')
        self.start_proxy()
        self.driver = utils.create_driver(proxy=self.proxy())

    def tearDown(self):
        """
        Cleanup the environment.
        """
        print('\nTearing down.')
        utils.close_driver(self.driver)
        self.stop_proxy()

    def proxy(self):
        """
        Create proxy settings for our Firefox profile.
        :return: Proxy
        """
        proxy_url = '{0}:{1}'.format(self.proxy_address, self.proxy_port)
        p = Proxy({
            'proxyType': ProxyType.MANUAL,
            'httpProxy': proxy_url,
            'ftpProxy': proxy_url,
            'sslProxy': proxy_url,
            'noProxy': 'localhost, 127.0.0.1'
        })
        return p

    def start_proxy(self):
        """
        Start the proxy process.
        """
        if not self.proxy_referer:
            raise Exception('Set the proxy_referer in child class!')
        python_path = sys.executable
        current_dir = os.path.dirname(__file__)
        proxy_file = os.path.normpath(os.path.join(current_dir, 'referer_proxy.py'))
        command = self.proxy_command.format(
            python_path, proxy_file, self.proxy_referer, self.proxy_port)
        print('Running the proxy command:')
        print(command)
        self.proxy_server = pexpect.spawnu(command)
        self.proxy_server.expect_exact(u'Running...', 2)

    def stop_proxy(self):
        """
        Override in child class to use a proxy.
        """
        print('Stopping proxy server...')
        self.proxy_server.close(True)
        print('Proxy server stopped.')
I wanted my unit tests to start and stop the proxy server without any user interaction, and could not find any Python samples doing that, which is why I created the GitHub repo (link above).
Hope this helps someone.
Not sure if I understand your question correctly, but if you want to override your HTTP requests, there is no way to do it directly with WebDriver. You must run your requests through a proxy. I prefer using BrowserMob; you can get it through Maven or similar.
ProxyServer server = new ProxyServer(proxy_port); // net.lightbody.bmp.proxy.ProxyServer
server.start();
server.setCaptureHeaders(true);

Proxy proxy = server.seleniumProxy(); // org.openqa.selenium.Proxy
proxy.setHttpProxy("localhost").setSslProxy("localhost");

server.addRequestInterceptor(new RequestInterceptor() {
    @Override
    public void process(BrowserMobHttpRequest browserMobHttpRequest, Har har) {
        browserMobHttpRequest.addRequestHeader("Referer", "blabla");
    }
});

// configure it as a desired capability
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(CapabilityType.PROXY, proxy);

// start the driver
driver = new FirefoxDriver(capabilities);
Or black/whitelist anything:
server.blacklistRequests("https?://.*\\.google-analytics\\.com/.*", 410);
server.whitelistRequests("https?://*.*.yoursite.com/.*. https://*.*.someOtherYourSite.*".split(","), 200);
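For what it's worth, the same BrowserMob idea is also reachable from Python via the browsermob-proxy package. A rough sketch, assuming a local BrowserMob Proxy install (the launcher path and header value are placeholders, and the client API may differ slightly between versions):

from browsermobproxy import Server
from selenium import webdriver

# Placeholder path to your local BrowserMob Proxy launcher.
server = Server("/path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Ask the proxy to add a Referer header to outgoing requests.
proxy.headers({"Referer": "https://referring-site.example/"})

# Route the browser through the proxy.
options = webdriver.ChromeOptions()
options.add_argument("--proxy-server={}".format(proxy.proxy))
driver = webdriver.Chrome(options=options)

driver.get("https://my-website.example/")

driver.quit()
server.stop()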