How to handle cross-domain testing in Selenium

How do I handle cross-domain functionality in Selenium? Can anyone explain, please?
For example, I need to open google.com and Gmail using the same Selenium session object. I was seeing a permission-denied error; I tried *iehta and proxy injection mode as well, but neither worked. Can you help me out?

I found this answer on stackexchange.com [1]:
You should be able to do so while using browsers with elevated security privileges like *chrome for Firefox. So you could just do
selenium.open("newURL");
in your test. The problem with changing the URL is that it changes the domain, and normal Selenium browser mode is restricted by JavaScript's Same Origin Policy; as mentioned above, browsers with elevated security privileges should get you going.
I suppose this is the point where you are trying to load another URL in the same Selenium session:
sel.open("www.google.com");
sel.waitForPageToLoad(stimeout);
First, don't use waitForPageToLoad; the open API takes care of it.
Now if sel.open does not work, then you should definitely encounter an error. Don't keep your method in a try/catch block, and see the error you encounter....
[1]: https://sqa.stackexchange.com/questions/761/can-the-base-url-be-changed-in-the-same-browser-session-using-selenium-rc
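
To make that concrete, here is a minimal Java sketch of a single elevated-privilege session opening two domains, assuming a Selenium RC server running on localhost:4444 (the class and method names are the standard Selenium RC client API):

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class CrossDomainSession {
    public static void main(String[] args) {
        // "*chrome" launches Firefox with elevated security privileges,
        // which relaxes the Same Origin Policy restriction
        Selenium selenium = new DefaultSelenium(
                "localhost", 4444, "*chrome", "http://www.google.com/");
        selenium.start();
        selenium.open("http://www.google.com/");
        // open() waits for the load itself, so no waitForPageToLoad is needed
        selenium.open("https://mail.google.com/"); // a different domain
        selenium.stop();
    }
}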

If you can't open two different domains with one Selenium object, try using a different object for each domain (e.g. an object called seleniumGoogle and an object called seleniumGmail).
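
A hedged sketch of that two-object approach, reusing the Selenium RC client from the example above (the variable names come from the suggestion; everything else is an assumption about your setup):

Selenium seleniumGoogle = new DefaultSelenium(
        "localhost", 4444, "*firefox", "http://www.google.com/");
Selenium seleniumGmail = new DefaultSelenium(
        "localhost", 4444, "*firefox", "https://mail.google.com/");
seleniumGoogle.start();
seleniumGmail.start();
// each session stays on its own domain, so the Same Origin Policy
// never has to be crossed
seleniumGoogle.open("http://www.google.com/");
seleniumGmail.open("https://mail.google.com/");
seleniumGoogle.stop();
seleniumGmail.stop();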

Related

Persits ASPPDF ImportFromUrl ServerXMLHTTP Error: The request has timed out

We have a test website that uses Persits ASPPDF to build a PDF using the ImportFromUrl method. It works fine on our test domain, but when I use the same code on another domain (and crucially perhaps, a sub-domain) I get the "MSXML2::ServerXMLHTTP Error: The request has timed out." error.
This leads me to think it's related to the problem outlined in
https://support.persits.com/show.asp?code=PS080709171
"the calling Active Server Page (ASP) should not send requests to an ASP in the same virtual directory or to another virtual directory in the same pool or process. This can result in poor performance due to thread starvation."
So perhaps the configurations of the two servers hosting the two sites (test and live) differ - and if so, in what way? Or is it that you can't run this method on a sub-domain? Any guidance out there, please?
I've had the same issue for weeks and finally found out what the problem was. In my case, it was because I had set the options that allow debugging of classic ASP code to True, without which I could not debug using Visual Studio. Setting those options back to False fixed the issue.

"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to log in, they will be presented with the above error stating that the search engine is not available. If they try again immediately, everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is two fold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above, but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full text search of user data. Each time a user is created or updated the user is re-indexed. In this case during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients request to make this service optional for embedded applications or small scale scenarios where Elasticsearch may not be required. While this is not currently in plan, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure: I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm and response times due to the spin-up and spin-down of resources?
Do you see any exceptions in the log when a user sees this error on the login?
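
For readers hitting the same symptom, a quick way to rule out Elasticsearch itself is to poll it directly. A minimal Java sketch, assuming the default host/port of localhost:9200 and the fusionauth_user index mentioned above (both are assumptions about your deployment):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ElasticsearchCheck {
    public static void main(String[] args) throws Exception {
        // _cluster/health and _count are standard Elasticsearch endpoints
        String[] endpoints = {
            "http://localhost:9200/_cluster/health",
            "http://localhost:9200/fusionauth_user/_count"
        };
        for (String endpoint : endpoints) {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setConnectTimeout(5000); // fail fast if ES is unreachable
            conn.setReadTimeout(5000);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                System.out.println(endpoint + " -> " + in.readLine());
            }
        }
    }
}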

WebDriver (Selenium 2) - How to make Selenium operate on elements without waiting to connect to external ad links?

Environment:
- Selenium 2.39 Standalone Server
- PHP 5.4.11
- PHPUnit 3.7.28
- Chrome V31 & ChromeDriver v2.7
I'm testing a website which invokes a lot of advertisement systems, such as Google Ads.
The browser takes a lot of time to connect to the external ad links, even when all the elements of the page have already loaded.
If my internet connection is slow when I run my tests on a webpage, Selenium waits for a very long time, since the ad links respond slowly. Under this condition, Selenium usually waits for over 60 seconds and then throws a timeout exception.
I'm not sure how Selenium works, but it seems that Selenium has to wait for a sign that the webpage has fully loaded, and then pulls the DOM to find elements.
I want to make Selenium operate on elements without waiting for the connections to external ad links.
Is there a way to do that? Thank you very much.
I would suggest that you make use of a proxy. BrowserMob integrates well with Selenium and is very easy to use:
// package names assume BrowserMob Proxy's legacy API; adjust to your version
import net.lightbody.bmp.proxy.ProxyServer;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

// start the proxy
ProxyServer server = new ProxyServer(4444);
server.start();
// get the Selenium proxy object
Proxy proxy = server.seleniumProxy();
// this line will automatically return HTTP 200 for any request going to Google Analytics
server.blacklistRequests("https?://.*\\.google-analytics\\.com/.*", 200);
// configure it as a desired capability
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(CapabilityType.PROXY, proxy);
// start the browser up
WebDriver driver = new FirefoxDriver(capabilities);
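
If ad networks rather than analytics are the slow responders, the same call can short-circuit those hosts too (the domain pattern below is just an illustrative placeholder; substitute the ad hosts your page actually loads):

// hypothetical example: black-hole a typical ad-serving domain as well
server.blacklistRequests("https?://.*\\.doubleclick\\.net/.*", 200);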
I'm not sure how Selenium works, but it seems that Selenium has to wait for a sign that the webpage has fully loaded, and then pulls the DOM to find elements.
It is pretty much like this. The default loading strategy is "NORMAL", which means:
NORMAL of type DOMString
The remote end MUST wait until the "document.readyState" of the frame currently handling commands equals "complete", or there are no more outstanding network requests other than XMLHttpRequests.
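
As an aside, newer Selenium releases expose this directly as a configurable page-load strategy. A minimal sketch, assuming Selenium 4's Java bindings (this API does not exist in the 2.39 setup above):

import org.openqa.selenium.PageLoadStrategy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

FirefoxOptions options = new FirefoxOptions();
// NONE returns control as soon as the initial document is received,
// without waiting for outstanding requests such as ad links
options.setPageLoadStrategy(PageLoadStrategy.NONE);
WebDriver driver = new FirefoxDriver(options);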
I finally found a simple solution for my situation.
I decided to block these ad requests and tried some firewall and proxy tools, for example Comodo and PrivateFirewall.
Comodo is too heavy and complex, PrivateFirewall doesn't support wildcards, and a firewall would interrupt the tests. In the end I chose a proxy tool, CCProxy; the trial version is enough.
I created a rule for localhost so that it can only request my test website's domain, and all other requests are rejected.
Running a test used to take about 1-2 minutes and now takes only 30 seconds; it's noticeably more stable and faster without connecting to the useless ad links.
Here are the configuration steps:
1. Launch CCProxy with Administrator privilege (you should set it to run as Administrator in the file properties).
2. Click Options, select Auto Startup, select Auto Detected for Local IP Address, and click OK.
3. Create a txt file and enter your domains, like "*.rong360.com*;*.rong360.*;".
4. Click Account and select Permit Only for Permit Category; click New and enter 127.0.0.1 for IP Address/Range; select Web Filter and click the E button at the right side to create a filter; click the ... button, select the text file you created at step 3, select Permitted Sites, and click OK. Click OK again.
5. Click OK to return to the main UI of CCProxy.
6. Launch IE and configure the local proxy as 127.0.0.1:808; other browsers will use this configuration automatically too.
Now you can run the tests again; you'll feel better if you have the same situation :)

Sporadic invalid_request 400 errors connecting to Shopify /admin/oauth/access_token

I am using a Java raw HTTP client to connect to the Shopify API (specifically, using Play Framework with the non-default sync driver, which is actually the JDK's default driver).
My application usually manages to connect successfully and convert the temporary access token into a permanent one by calling the /admin/oauth/access_token endpoint.
However, sometimes I get this error result from the API:
Generic Error(400)
{"error":"invalid_request"}
I haven't been able to reproduce the issue with my test stores - I've tried installing a fresh store and reinstalling existing stores after uninstalling them. I'm not sure why this call sometimes fails or how to debug it. The API call still continues to succeed for some stores using our application.
Some things that I am doing:
Even if the URL of the store is on a custom domain, I always use the https://foo.myshopify.com/admin/oauth/access_token URL and not the URL of the custom domain, to prevent a redirect.
I always use an https URL and never an http one, again to prevent a redirect (we noticed a few issues with redirects with the Java HTTP client, so we aim to have zero redirects).
A thread I found about this error suggests possible problems with our SSL certificates; however, I don't think this is my problem, because some requests work for us, and the result of running openssl on our machine doesn't show any issues.
How should I proceed? Open a support ticket with Shopify?
FYI, I see that this specific problem only started yesterday, on Feb 19 2013, so it might be a temporary issue.
FYI, the problem was caused by reusing a temporary access code.
Our fault - Shopify could have been clearer in their error message, though.
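
For anyone debugging the same call, here is a hedged Java sketch of the token exchange (the endpoint and parameter names follow Shopify's OAuth flow; the shop host and credentials are placeholders). Note that the temporary code is single-use - exchanging it a second time is exactly what produces the 400 invalid_request above:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ShopifyTokenExchange {
    public static void main(String[] args) throws Exception {
        // always the myshopify.com host over https, to avoid redirects
        URL url = new URL("https://foo.myshopify.com/admin/oauth/access_token");
        String body = "client_id=YOUR_API_KEY"
                + "&client_secret=YOUR_API_SECRET"
                + "&code=TEMPORARY_CODE"; // single-use temporary access code
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}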

Error in Selenium Testcase playback

I always get this error whenever I try to play back a test case in Selenium:
[error] Permission denied for <http://www.facebook.com> to get property Location.href
It sounds like you are bumping into JavaScript's same-domain security policy.
See here: http://www.codingforums.com/showthread.php?t=117050.
Without more information about your test case it's hard to be specific, but the basic problem might be this:
JavaScript has a same-domain policy for security reasons. That means it cannot touch other domains.
In the example in the linked resource, the user was able to replace
top.document.location.href = searchLocation;
with
window.open(searchLocation, "_top");
to solve the problem.