Selenium C# SendKeys vs manual input - different behavior of tested webpage

My goal is to automatically test a "register new user" webpage. When registering manually (without Selenium), the result page behaves differently than after registering via the Selenium script.
I use
foreach (var c in message) { target.SendKeys(c.ToString()); }
to input the characters into the text fields.
It seems that the framework the website is built on somehow recognizes that a script is doing the registration. I wonder if there is any way other than target.SendKeys to avoid the website behaving differently under Selenium than under manual testing.
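One avenue worth trying (a sketch, not a guaranteed fix) is to bypass SendKeys entirely: set the field's value through JavaScript and then dispatch the input and change events that client-side frameworks typically listen for. The element id below is an assumption; message is the string from the question:
// Hypothetical sketch (C#): set the value via JavaScript and fire the
// events a client-side framework usually binds to. The element id is assumed.
IWebElement target = driver.FindElement(By.Id("username"));
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
js.ExecuteScript(
    "arguments[0].value = arguments[1];" +
    "arguments[0].dispatchEvent(new Event('input', { bubbles: true }));" +
    "arguments[0].dispatchEvent(new Event('change', { bubbles: true }));",
    target, message);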

Related

How to test integration of two separate URLs in Selenium

I have an administrator section and a front-end section. I have to upload a document in the administrator section (e.g. www.admin.com) and moderate it; the document will then display on the front end.
Now I have to open the front-end application in the same session and validate whether the correct data is displayed.
Please suggest an approach.
The upload dialog itself cannot be driven with Selenium; you can use AutoIt to achieve it.
It is possible to open two different tabs and work with different URLs in a single session.
Example(C#):
1. To open a new tab:
IWebElement body = driver.FindElement(By.TagName("body"));
body.SendKeys(Keys.Control + 't');
or
((IJavaScriptExecutor)driver).ExecuteScript("window.open('your url','_blank');");
2. To get the window handles (window IDs) of the opened tabs:
var handles = driver.WindowHandles;
3. To switch between browser tabs:
driver.SwitchTo().Window(handles[1]);
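Putting the steps together, a minimal end-to-end sketch (the URLs are placeholders based on the question):
// Sketch: open the admin site, open the front end in a second tab,
// and switch between them. URLs are placeholders.
IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("http://www.admin.com");        // admin section
((IJavaScriptExecutor)driver).ExecuteScript(
    "window.open('http://www.frontend.com','_blank');");  // front end in a new tab
var handles = driver.WindowHandles;
driver.SwitchTo().Window(handles[1]);                     // work in the front-end tab
// ... validate that the moderated document is displayed ...
driver.SwitchTo().Window(handles[0]);                     // back to the admin tab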
If you do web interface tests with Java + Selenium, I advise you to use the NoraUi open-source framework.
This framework manages multiple applications and multiple data sets (user, manager, ...).

Capture HTML after an element has been changed

I am looking for an automated way to capture HTML after I perform some actions on a web page.
For example, I select an item in a dropdown and the HTML changes; I want to capture that HTML and dump it into a file. As a result, I will end up with many different HTML files on my hard drive.
I was thinking it might be possible to achieve this with Selenium, or maybe some other plugin that would let me save the HTML to a file automatically.
Do you mean the source code?
for Python:
driver.page_source
for Java:
driver.getPageSource();
You can run this code after each step where the page changes.
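In C#, the equivalent property is PageSource, and dumping it to a new file after each action could look like this (a sketch; the file naming is arbitrary):
// Sketch (C#): save the current page source to a timestamped file.
// Requires using System; using System.IO; and an initialized driver.
string html = driver.PageSource;
string path = $"capture_{DateTime.Now:yyyyMMdd_HHmmss_fff}.html";
File.WriteAllText(path, html);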
I think what you are asking to do will not be that easy. There are other questions on SO (e.g. this one) asking the same thing, and there aren't really any good answers. I tried googling for a few minutes to find a way to do this; I would expect something more like a browser plug-in to exist that does this for you.
If I were forced to code this using Selenium I would do something like the following...
Create a script that launches the browser and navigates to the page you want to track. At a user-defined interval, the script grabs the page source and compares it to the last capture. If the source is not the same, it diffs the two pages and writes the diff to disk. I'm sure there are a number of diffing libraries you could find and use.
The problems with this approach...
If too many changes happen within the defined interval, you get one glob of changes and cannot tell which action made which change.
If you make the interval too small, you may run into performance issues.
Probably the most significant issue: you run several tests and then go back and look at the diffs, but you have no way to tell which changes correspond to which action, since you can't tie the two together other than by order of occurrence.
What might be cool is if you could inject a button into the page that, when clicked, pops up an input dialog where you type a label for the upcoming action's diff. For example, you click the button and type "choose price" - OK. Now you select a price from a dropdown. The next time you click the button, the script detects the click, does a quick diff, and writes it to disk under the "choose price" label... or something like that.
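A rough C# sketch of the polling approach described above; the interval, URL, and output naming are assumptions, and a real diffing library would replace the naive full-snapshot comparison:
// Sketch (C#): poll the page source at a fixed interval and write a new
// capture whenever it changes. A diff library could store only the delta.
IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("http://example.com"); // page to track (placeholder)
string last = driver.PageSource;
int capture = 0;
while (true)
{
    System.Threading.Thread.Sleep(1000); // user-defined interval
    string current = driver.PageSource;
    if (current != last)
    {
        File.WriteAllText($"capture_{capture++}.html", current);
        last = current;
    }
}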
I found an answer to my own question.
Start a Selenium chromedriver server.
Connect to it with a Selenium client; all changes can then be captured using the code example below:
Code:
import java.net.URL;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

// Connect to a chromedriver server already running on port 9515.
WebDriver driver = new RemoteWebDriver(new URL("http://127.0.0.1:9515"), DesiredCapabilities.chrome());
driver.get("http://google.com");
By by = By.tagName("div");
List<WebElement> oldDivs = driver.findElements(by);
while (true) {
    try {
        List<WebElement> newDivs = driver.findElements(by);
        // Compare with equals(): != compares list references and is always true.
        if (!oldDivs.equals(newDivs)) {
            for (WebElement element : newDivs) {
                String a = element.getAttribute("a");
                String b = element.getAttribute("b");
                System.out.println(a + " :" + b);
            }
            oldDivs = newDivs; // remember the last snapshot
        }
        Thread.sleep(500); // poll at a modest interval
    } catch (Exception e) {
        System.err.println(e);
    }
}

How to call a chrome function from a browser content page in XULRunner

I am converting code that originally ran as remote signed jar files in Firefox to use XULRunner instead. There are several reports that are implemented as web pages with an output option; the options include an HTML page or a report viewer written in XUL and JavaScript.
When the user submits the form, and the report viewer is selected, then I need to open a chrome window. Obviously this cannot be done directly for security reasons. I want to provide a function or use some sort of message passing method to signal to the containing chrome what needs to happen.
Can this be done and if so how? Things I am considering:
1) Adding a function to the content window's window or document object
2) Some sort of message passing function
3) Some sort of custom event send/receive
4) A special URL form with a handler such as repviewer://repname/parameters
There is a quite elaborate article on this topic on MDN. The best way to achieve this without jeopardizing security is to send a generic event from your web page. The top XUL document should call addEventListener() with the fourth parameter set to true which will allow it to receive such untrusted events. Data can be passed through an attribute of the event target, the XUL document can then inspect that attribute.

Selenium - click a link/button in a JavaScript popup

I am using Selenium to test a web app for which most of the Selenium test cases are already written. I have no idea how it works; I just build the project, go to the link provided in the browser, and the tests start running. All the tests are written manually, not generated.
I am using Ruby, and doing something like this to click a link/button in a JavaScript popup:
def methodName()
  clickAndWait("<Id of the link in js popup that I want to click>")
  assertText("<text I need to check>")
end
This method is then called in a '.test' file, but it never works for a JavaScript popup; for everything else it's all good!
Help!
Popups are often in a different context, either a frame or a window, and when you call assertText, Selenium ignores these contexts. Use the switchTo function (not sure of the exact syntax in Ruby) to switch to the popup before calling assertText.
In Ruby, you can use something like this:
driver.switch_to.alert.accept
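For comparison, a C# version of the same idea (a sketch; whether you need Alert() or Window() depends on whether the popup is a JavaScript alert or a separate window, and the element id is an assumption):
// C# sketch: switch to the popup's context before interacting with it.
// For a JavaScript alert:
driver.SwitchTo().Alert().Accept();
// For a popup window (assumes the popup is the second handle):
var handles = driver.WindowHandles;
driver.SwitchTo().Window(handles[1]);
driver.FindElement(By.Id("popup-link")).Click(); // assumed id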

Screen Scraping - still not working

I have browsed through many posts on this and tried some of the suggestions, but I still don't fully understand it.
I would like to scrape HTML pages where a script runs that displays a link only after clicking. Some posts mentioned Firebug and others talked about reverse engineering the code I need, but after trying to reverse engineer it, I still don't see how to get the data after tracing the script function.
jQuery('.category-selector').toggle(
  function() {
    var categoryList = jQuery('#category-list');
    categoryList.css('top', jQuery(this).offset().top + 43);
    jQuery('.category-selector img').attr('src', '/images/up_arrow.png');
    categoryList.removeClass('nodisplay');
  },
  function() {
    var categoryList = jQuery('#category-list');
    jQuery('.category-selector img').attr('src', '/images/down_arrow.png');
    categoryList.addClass('nodisplay');
  }
);
jQuery('.category-item a').click(
  function() {
    idToShow = jQuery(this).attr('id').substr(9);
    hideAllExcept(jQuery('#category_' + idToShow));
    jQuery('.category-item a').removeClass('activeLink');
    jQuery(this).addClass('activeLink');
  }
);
I am using VB.NET, and some sites were easy using Firebug, where looking at the script I was able to pull the data I needed. What would I do in this scenario? The link is http://featured.typepad.com/ and the categories are what I am trying to access. Notice the URL does not change.
Appreciate any responses.
My best suggestion would be to use Selenium for screen scraping. It is normally used for automated website testing but would fit your case well. I've used it to screen scrape AJAX pages on multiple occasions where the page was heavily JavaScript dependent.
http://seleniumhq.org/projects/ide/
You can write your screen scraping code to run in .NET, and it can drive Firefox or IE to do the scraping.
With Selenium, you record a screen scraping session with the Selenium IDE in Firefox (look for the Firefox extension in the link above). That session can be exported either as an HTML template or as C# code; it may be able to output VB as well.
You then copy the C# or VB.NET output into a Selenium .NET project that you create, and run the project through NUnit.
I'd suggest looking online for some help with getting Selenium started and working, but this should get you on your way.
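For example, the exported code could be wrapped in an NUnit fixture like this (a sketch; the locators are taken from the jQuery source quoted in the question, but treat them as assumptions about the live page):
// NUnit + Selenium sketch (C#).
using System.IO;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class TypepadScrapeTest
{
    private IWebDriver driver;

    [SetUp]
    public void Start() { driver = new FirefoxDriver(); }

    [Test]
    public void ScrapeCategories()
    {
        driver.Navigate().GoToUrl("http://featured.typepad.com/");
        driver.FindElement(By.CssSelector(".category-selector")).Click(); // open the category list
        driver.FindElement(By.CssSelector(".category-item a")).Click();   // pick the first category
        File.WriteAllText("category.html", driver.PageSource);            // dump the resulting HTML
    }

    [TearDown]
    public void Stop() { driver.Quit(); }
}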