How can I leverage the CumulusCI keyword "Open Test Browser" when launching a Salesforce URL on BrowserStack?
Project background:
I am building an automation framework (for functional/regression tests) for a Salesforce product using Robot Framework, SeleniumLibrary and CumulusCI. To scale up and run the same tests on multiple browser/OS combinations, I am integrating it with BrowserStack.
Implementation level details:
RFW and CumulusCI are integrated correctly and work perfectly against any scratch org locally in Chrome or Firefox. Here I can directly leverage the full power of CumulusCI keywords such as "Open Test Browser", which knows the org details (instance_url, username, password) and has an access token, so logging into the org is a cakewalk. Launching the Salesforce URL this way does not prompt me for an email verification code.
*** Settings ***
Resource    cumulusci/robotframework/Salesforce.robot
Library     cumulusci.robotframework.CumulusCI    ${ORG}
Library     SeleniumLibrary    timeout=20
Library     OperatingSystem
Library     Collections
Library     XML
Library     String
Library     BuiltIn
*** Variables ***
${BSUser}         myBSkey
${BSAccessKey}    s******************b
${BSUrl}          http://${BSUser}:${BSAccessKey}@hub.browserstack.com/wd/hub
### Login Page Locators
${signOn_username}    //input[@id='username']
${signOn_password}    //input[@name='pw']
${loginButton}        //input[@name='Login']
### Home Page Locators
${SetupRecentlyViewed}    //div[@class='module-header']/div/header/h2/span
*** Test Cases ***
Connect RFW with BS
    ${instance_url}    ${username}    ${password}    Log my Org Info    # User keyword
    Setup BS Browser    ${instance_url}    ${username}    ${password}    # User keyword
*** Keywords ***
Log my Org Info
    &{OrgInfoDict}=    Get Org Info    # CumulusCI keyword
    ${instance_url}=    Get From Dictionary    ${OrgInfoDict}    instance_url
    ${username}=    Get From Dictionary    ${OrgInfoDict}    username
    ${password}=    Get From Dictionary    ${OrgInfoDict}    password
    [Return]    ${instance_url}    ${username}    ${password}
Setup BS Browser
    [Arguments]    ${instance_url}    ${username}    ${password}
    Open Browser    url=${instance_url}    remote_url=${BSUrl}    desired_capabilities=browser:Safari,browser_version:12.0,os:OS X,os_version:Mojave,browserstack.video:True
    Maximize Browser Window
    Login to Salesforce    ${username}    ${password}

Login to Salesforce
    [Arguments]    ${Username}    ${Password}
    Input Text    ${signOn_username}    ${Username}
    Input Text    ${signOn_password}    ${Password}
    Click Element    ${loginButton}
    ${Pass}=    Run Keyword And Return Status    Wait Until Page Contains Element    ${SetupRecentlyViewed}
    Run Keyword If    '${Pass}' == 'True'    Log    SF Home page loaded successfully
    ...    ELSE    Fail    SF Home page did not load successfully
Actual Issue:
After referring to RFW and BrowserStack integration guides on the internet (links below), I was able to connect to BrowserStack, but only via the SeleniumLibrary keyword "Open Browser". I also tried to fetch all of my org info and pass it on to BrowserStack to log into the Salesforce URL. When I do this, because I am using plain Selenium outside the CumulusCI context, my Salesforce org now challenges me with email verification. Hence I feel I am falling short of understanding how to make use of the CumulusCI context when running tests on BrowserStack.
Ref: https://www.swtestacademy.com/browserstack-robotframework-integration/ and https://github.com/knightjacky/Robot-BrowserStack
Workarounds tried:
I tried creating a webdriver and then using "Open Test Browser", but that doesn't work either because, as expected, it does not have the context of the CumulusCI org.
*** Keywords ***
Create a BS WebDriver
    # Some code that creates a desired-capabilities dict...
    # ${executor}=    Evaluate    str('${BSUrl}')
    # Create Webdriver    Remote    desired_capabilities=${desired_capabilities}    command_executor=${executor}
    # Open Test Browser    # CumulusCI keyword
Expected thoughts:
Please share ideas on how I can tweak my current implementation to use CumulusCI keywords rather than relying on raw Selenium when integrating with BrowserStack.
At the moment, the Salesforce keyword Open Test Browser doesn't support using the Create Webdriver keyword. However, it's easy to duplicate what Open Test Browser does. The key is to use a URL which contains the properly encoded credentials.
The keyword Login URL from the CumulusCI library will return an appropriate URL. You can then use this URL with any browser, no matter how the browser was opened.
Example:
The following example uses a default browser on BrowserStack and then logs in to my default org. You can also pass an org into the Login URL keyword if you wish.
*** Settings ***
Resource    cumulusci/robotframework/Salesforce.robot
Suite Setup       Setup BS Browser
Suite Teardown    Delete records and close browser

*** Variables ***
${BSUser}         <your username here>
${BSAccessKey}    <your access key here>
${BSUrl}          http://${BSUser}:${BSAccessKey}@hub.browserstack.com:80/wd/hub
*** Keywords ***
Setup BS Browser
    Create Webdriver    Remote    command_executor=${BSUrl}
    ${login_url}    Login URL
    Go To    ${login_url}
    Wait until loading is complete

*** Test Cases ***
Example using 'create webdriver'
    Capture Page Screenshot
Note: The salesforce keyword Open Test Browser does a bit more than just open the browser: it also installs two location strategies. If you're not calling Open Test Browser and you want to use those location strategies, you will need to register them yourself:
Add Location Strategy    text     Locate Element By Text
Add Location Strategy    title    Locate Element By Title
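For reference, the "properly encoded credentials" URL is built from the org's instance URL and session token (the Login URL keyword assembles it for you from the org's access token, so you normally never construct it by hand). A minimal stdlib sketch of the encoding step, using a frontdoor-style endpoint and hypothetical placeholder values:

```python
from urllib.parse import quote, urljoin

def build_login_url(instance_url, access_token):
    # Salesforce accepts a session id on its frontdoor.jsp endpoint;
    # the token must be percent-encoded to survive as a query-string value.
    sid = quote(access_token, safe="")
    return urljoin(instance_url, "/secur/frontdoor.jsp") + "?sid=" + sid

# Hypothetical placeholder org and token, for illustration only:
url = build_login_url("https://example-dev-ed.my.salesforce.com",
                      "00Dxx0000000000!AQEAQExample/Token+Chars")
print(url)
```

Because the credentials travel in the URL itself, this works with any browser session, including one opened remotely on BrowserStack, which is exactly why the example above pairs Login URL with a plain Go To.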
Related
So I have figured out how to get started and open a Kameleo browser profile using Python. However, I can't find the session ID and port the Chrome browser was started with. I think I have this, but my session ID is throwing an error.
I was expecting the /profiles/{guid}/start endpoint to return a JSON dictionary with the session id and port; it would also be nice to have that under the /profiles/{guid}/status HTTP call. I couldn't find it in the SwaggerHub documentation.
This is the code I'm using:
from kameleo.local_api_client.kameleo_local_api_client import KameleoLocalApiClient
from kameleo.local_api_client.builder_for_create_profile import BuilderForCreateProfile

client = KameleoLocalApiClient()
base_profiles = client.search_base_profiles(
    device_type='desktop',
    browser_product='chrome'
)

# Create a new profile with recommended settings
# for browser fingerprinting protection
create_profile_request = BuilderForCreateProfile \
    .for_base_profile(base_profiles[0].id) \
    .set_recommended_defaults() \
    .build()
profile = client.create_profile(body=create_profile_request)

# Start the browser
client.start_profile(profile.id)
According to the documentation, you don't need to get the port and the session ID manually, as you can make the connection to the browser through the Kameleo.CLI.exe port.
If you keep reading the README you will find an example that showcases the W3C WebDriver connection.
# Connect to the running browser instance using WebDriver
options = webdriver.ChromeOptions()
options.add_experimental_option("kameleo:profileId", profile.id)
driver = webdriver.Remote(
    command_executor=f'{kameleoBaseUrl}/webdriver',
    options=options
)
# Use any WebDriver command to drive the browser
# and enjoy full protection from Selenium detection methods
driver.get('https://google.com')
I could also find this code in Kameleo's example repository.
I am trying to read the console output of a webpage (specifically, I need the POST/GET/PUT AJAX calls) with RF and Selenium. I have found some help online but I cannot seem to make it work. The Python script I have is:
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_logs2(driver):
    # enable browser logging (done at driver-creation time, kept for reference):
    # d = DesiredCapabilities.CHROME
    # d['goog:loggingPrefs'] = {'browser': 'ALL'}
    # driver = webdriver.Chrome(desired_capabilities=d)
    a = driver.get_log('browser')
    # print messages; iterate over the saved list, because calling
    # get_log() a second time returns nothing -- the log is drained on read
    for entry in a:
        print(entry)
    print("finished")
    return a
I call this script from RF, after having done some operations on the webpage, so I need to pass to this function the page exactly as it is after the actions I took. To do that I do:
${seleniumlib}=    Get Library Instance    SeleniumLibrary
Log    ${seleniumlib._drivers.active_drivers}[0]
${message} =    Get Logs2    ${seleniumlib._drivers.active_drivers}[0]
I get an empty message as a result, but I know the console is not empty. Can you help? Thanks.
Here is a solution using entirely Robot Framework, no additional user library.
The logic is the same: set the correct browser capabilities to enable logging, then use the Get Library Instance keyword to retrieve the webdriver instance, and call get_log('browser') on it.
*** Settings ***
Library    SeleniumLibrary

*** Variables ***
&{browser logging capability}    browser=ALL
&{capabilities}    browserName=chrome    version=${EMPTY}    platform=ANY    goog:loggingPrefs=${browser logging capability}

*** Test Cases ***
Browser Log Cases
    Open Browser    https://stackoverflow.com    Chrome    desired_capabilities=${capabilities}
    ${log entries}=    Get Browser Console Log Entries
    Log    ${log entries}
    [Teardown]    Close All Browsers

*** Keywords ***
Get Browser Console Log Entries
    ${selenium}=    Get Library Instance    SeleniumLibrary
    ${webdriver}=    Set Variable    ${selenium._drivers.active_drivers}[0]
    ${log entries}=    Evaluate    $webdriver.get_log('browser')
    [Return]    ${log entries}
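Each entry that get_log('browser') returns is a plain dict with level, source, message and timestamp keys, so the POST/GET/PUT calls the question asks about can be filtered in ordinary Python once the entries are retrieved. A minimal sketch over hypothetical sample entries (the real messages depend on what the page logs):

```python
# Hypothetical sample entries, shaped like Chrome's get_log('browser') output:
sample_entries = [
    {"level": "INFO", "source": "console-api", "timestamp": 1,
     "message": "POST https://example.com/api/items 201"},
    {"level": "SEVERE", "source": "network", "timestamp": 2,
     "message": "GET https://example.com/missing 404"},
    {"level": "INFO", "source": "console-api", "timestamp": 3,
     "message": "page ready"},
]

def ajax_entries(entries):
    # Keep only entries whose message mentions an HTTP verb of interest.
    verbs = ("POST ", "GET ", "PUT ")
    return [e for e in entries if any(v in e["message"] for v in verbs)]

for e in ajax_entries(sample_entries):
    print(e["message"])
```

Note that the Chrome log is drained on read: a second get_log('browser') call returns an empty list, so capture the result once and filter the saved list.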
I have a Selenium Grid working with local and remote nodes from a test started in Robot Framework. I do not want to control Selenium startup from within the robot test; I just want to say: give me access to a Selenium node that is using a gateway which is the device under test (DUT). I want, in essence, to access specific nodes based on desired criteria, roughly mgmt in testbed1, client-interface in testbed1. Do I need a grid per testbed? It seems I need to associate a grid with a specific testbed and have another Selenium instance running for the management interface.
Client ----------- DUT --------- Server
node | | mgmt interface
192.168/24 | |
| |
localhost ---- 10.0.1.1
hub + node + robot runner
I tried specifying platform=LINUX for the client node and platform=UNIX for the local node, and that's not working in Open Browser. Now I am attempting to use robotframework_selenium2library, since I thought it was a drop-in replacement!
https://github.com/detro/ghostdriver/blob/master/README.md may provide a way to separate instances, but it is still one instance running on a specific node; if I disable Firefox on the client browsers and use that for manager access it will give me what I need (hack! hack! hairball).
The key to solving this is to know which instance of Selenium you are referring to when you call Open Browser. That can be controlled either by being explicit and referring to the remote instance as client_web.Open Browser, or by using the keyword Set Library Search Order.
*** Settings ***
Library    SeleniumLibrary    120    ${CLIENT_IP}    ${SELENIUM_SERVER_PORT}    WITH NAME    client_web

*** Variables ***
${MANAGER_BROWSER}    Firefox
${BROWSER}            Firefox

*** Test Cases ***
Verify Two Browsers Using Search Order
    [Setup]    Test Case Setup
    [Tags]    noncritical
    Set Library Search Order    SeleniumLibrary
    ${wb_index} =    Open Browser    ${DUT}    ${MANAGER_BROWSER}
    Set Library Search Order    client_web
    ${wb_index} =    Open Browser    ${facebook}    ${BROWSER}
    Maximize Browser Window
    Select Window    main
    Wait Until Page Contains    ${page text}    10s
    Log    browser-index:${wb_index}
    Comment    Set suite variable Manager_Browser with call to open specific browser
    [Teardown]    Local Test Case Teardown
Environment:
- Selenium 2.39 Standalone Server
- PHP 5.4.11
- PHPUnit 3.7.28
- Chrome V31 & ChromeDriver v2.7
I'm testing a website which invokes a lot of advertisement systems, such as Google Ads.
The browser takes a lot of time connecting to the external ad links, even when all the elements of the page have already loaded.
If my internet connection is slow when I run my tests on a webpage, Selenium waits a very long time, since the ad links respond slowly.
Under this condition, Selenium usually waits for over 60 seconds and then throws a timeout exception.
I'm not sure how Selenium works, but it seems that Selenium has to wait for a signal of the webpage's full loading, then pulls the DOM to find elements.
I want to make Selenium operate on elements without waiting for the connections to the external ad links.
Is there a way to do that? Thank you very much.
I would suggest you make use of a proxy. BrowserMob integrates well with Selenium and is very easy to use:
// start the proxy
ProxyServer server = new ProxyServer(4444);
server.start();
// get the Selenium proxy object
Proxy proxy = server.seleniumProxy();
// This line will automatically return http.200 for any request going to google analytics
server.blacklistRequests("https?://.*\\.google-analytics\\.com/.*", 200);
// configure it as a desired capability
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(CapabilityType.PROXY, proxy);
// start the browser up
WebDriver driver = new FirefoxDriver(capabilities);
"I'm not sure how Selenium works, but it seems that Selenium has to wait for a sign of the webpage's full loading, then pulls the DOM to find elements."
It is pretty much like this. The default loading strategy is "NORMAL", which the WebDriver spec defines as:

    NORMAL of type DOMString
    The remote end MUST wait until the "document.readyState" of the frame currently handling commands equals "complete", or there are no more outstanding network requests other than XMLHttpRequests.
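Building on the quoted spec: "normal" is only one of three W3C page load strategies. Setting the pageLoadStrategy capability to "eager" or "none" tells the remote end to stop waiting earlier, which is another way to keep slow ad requests from blocking element lookups. A sketch of the capability in plain Python (in Selenium it would be passed via the browser options or desired capabilities when the session is created):

```python
# The three W3C page load strategies and what the remote end waits for:
PAGE_LOAD_STRATEGIES = {
    "normal": "document.readyState == 'complete' (the default)",
    "eager": "document.readyState == 'interactive' (DOM ready; subresources such as ads may still be loading)",
    "none": "return as soon as navigation has started",
}

# The capability as it would appear in a New Session request payload:
capabilities = {"pageLoadStrategy": "eager"}
print(capabilities)
```

With "eager", driver.get() returns once the DOM is parsed, so slow third-party ad requests no longer hold up element lookups; explicit waits then cover any late-arriving elements.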
I finally found a simple solution for my situation.
I decided to block these ad requests and tried some firewall and proxy tools, for example Comodo and Privatefirewall.
Comodo is too heavy and complex, Privatefirewall doesn't support wildcards, and a firewall would interrupt the tests. In the end I chose a proxy tool, CCProxy; the trial version is enough.
I created a rule for localhost so it can request my test website's domain only, and all other requests are rejected.
Running a test took about 1-2 minutes before and only 30 seconds now; it's noticeably more stable and faster without connecting to the useless ad links.
Here are the configuration steps:
1. Launch CCProxy with Administrator privileges (set "Run as administrator" in the file's properties).
2. Click Options, select Auto Startup, select Auto Detect for Local IP Address, and click OK.
3. Create a txt file and enter your domains, e.g. "*.rong360.com*;*.rong360.*;".
4. Click Account and select PermitOnly for Permit Category;
click New and enter 127.0.0.1 for IP Address/Range;
select WebFilter and click the E button at the right side to create a filter;
click the ... button, select the text file you created in Step 3,
select PermittedSites and click OK;
then click OK again.
5. Click OK to return to the main UI of CCProxy.
6. Launch IE and configure the local proxy as 127.0.0.1:808;
other browsers will pick up this configuration automatically too.
Now you can run the tests again; you'll feel better if you have the same conditions :)
I want to use Selenium WebDriver but I am unable to do so, because when I run my code I get the following exception.
My code is very basic:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.google.com.bh")
assert "Google" in driver.title
driver.close()
Exception Message
selenium.common.exceptions.WebDriverException: Message: '<HTML><HEAD>\n<TITLE>Access Denied</TITLE>\n</HEAD>\n<BODY>\n<FONT face="Helvetica">\n<big><strong></strong></big><BR>\n</FONT>\n<blockquote>\n<TABLE border=0 cellPadding=1 width="80%">\n<TR><TD>\n<FONT face="Helvetica">\n<big>Access Denied (authentication_failed)</big>\n<BR>\n<BR>\n</FONT>\n</TD></TR>\n<TR><TD>\n<FONT face="Helvetica">\nYour credentials could not be authenticated: "Credentials are missing.". You will not be permitted access until your credentials can be verified.\n</FONT>\n</TD></TR>\n<TR><TD>\n<FONT face="Helvetica">\nThis is typically caused by an incorrect username and/or password, but could also be caused by network problems.\n</FONT>\n</TD></TR>\n<TR><TD>\n<FONT face="Helvetica" SIZE=2>\n<BR>\nFor assistance, contact your network support team.\n</FONT>\n</TD></TR>\n</TABLE>\n</blockquote>\n</FONT>\n</BODY></HTML>\n'
It opens Firefox, but after that it is unable to connect to Google or any other local sites.
The exception occurs at driver = webdriver.Firefox().
I googled around and followed the link on SO, but unfortunately I still get the same error.
I cannot run as the root user. I changed my proxy settings and set the No Proxy element for localhost as well, as mentioned in the link.
I am using Python 2.7 and have installed Selenium 2.31.
I also tried setting a proxy:
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType

myProxy = "*********:8080"
proxy = Proxy({
    'proxyType': ProxyType.MANUAL,
    'httpProxy': myProxy,
    'ftpProxy': myProxy,
    'sslProxy': myProxy,
    'noProxy': 'localhost,127.0.0.1,*.abc'
})
driver = webdriver.Firefox(proxy=proxy)
I also tried setting the proxy to the system's proxy, i.e. in the above code, 'proxyType': ProxyType.SYSTEM, but it again gives the above exception message.
Is there a place where I have to set my username and password?
Any help would be appreciated!
Remove the proxy settings from all the browsers on the system manually; I had IE, Firefox and Google Chrome.
When I removed the proxy settings from all the browsers and enabled the proxy only in Firefox, it worked without giving any errors. I do not know the exact reason why this works; it may have to do with the registry settings on Windows, which I am not sure about.
After doing the above, I ran the basic code and it worked fine.
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.google.com.bh")
assert "Google" in driver.title
driver.close()
I didn't set the proxy explicitly either; by default it had taken the system's proxy settings. I hope this helps others facing a similar issue.