Design concerns about Page Object Model Pattern - selenium

I wonder if something is wrong with our approach to the POM. In many examples on the internet I find this pattern used so that your test script looks like this:
pageObject.doSomethingWithElement(parameters...)
For us it was more natural to do it like this:
pageObject.element.doSomething(parameters...)
This way we don't need to implement what can be done with an element in the page class; we simply invoke those actions in the test script. For some reason that felt more natural to us, similar to the fact that there is
System.out.println()
System.err.println()
instead of
System.printlnOut()
System.printlnErr()
Are we missing some disadvantage of our approach?

There's no inherent advantage or disadvantage in choosing between those two models. But I think you are slightly misunderstanding the recommended approach. The idea is not pageObject.doSomethingWithElement, but to do something with a functionality of the page: pageObject.doSomething, if you want.
If you look at the Selenium examples, for instance, one of the first is public HomePage loginAs(String username, String password). Nothing in this function is about elements; it's about the functionality of the page, easily expressed verbally, without any reference to the elements involved. I can read this function as: when the user is on the Login page, and the user provides a username and password, on successful login the user is redirected to the Home page. Which sort of provides a natural BDD interpretation of the page model.
The advantage of such an approach is that your tests are much more readable. Instead of something like:
loginPage.username.setValue(...)
loginPage.password.setValue(...)
loginPage.loginButton.submit()
// how do I get a homepage from here?
this model allows you to have
HomePage homePage = loginPage.loginAs(...)
Done!
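For illustration, here is a minimal sketch of what such a page object might look like in Java with Selenium WebDriver; the locators and the HomePage class are assumptions, since the real ones depend on the application:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    // Hypothetical locators; the real ones depend on the application's markup.
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // The page exposes functionality, not elements: tests call loginAs()
    // and never touch the locators above.
    public HomePage loginAs(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
        return new HomePage(driver);
    }
}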
Also, from a maintenance perspective: if developers change elements on the Login page, it might matter for the 20 tests that deal with login functionality. But you want the other 980 tests you have to be completely unaffected by that change, since they only use login on their way to testing other parts of the functionality. They can remain completely oblivious to changes in the login elements, as long as login itself keeps working as it should.
So I think the choice is not between various ways your page could express element-related functions, but whether your page should expose elements at all, or concentrate on the functionality it provides, regardless of the elements.
I recommend reading the page I quoted above; it gives a very good idea of what the page model is all about.

Related

Behat using BeforeScenario vs Given

For something that REALLY needs to be set up before every scenario, @BeforeScenario is the tool to use.
Sometimes there are things that only need to be set up for some scenarios, but for a significant proportion of them. For example, the scenario needs a "regular user account" to exist, then goes on to log in as that user and do some stuff.
You can make a @BeforeScenario @RegularUser method that will run for each scenario tagged @RegularUser. So scenarios end up looking like:
@RegularUser
Scenario: login as a regular user
Given I am on the login page
When I login to a regular user account
Then the welcome screen is displayed
The alternative is:
Scenario: login as a regular user
Given a regular user exists
And I am on the login page
When I login to a regular user account
Then the welcome screen is displayed
"a regular user exists" would be associated with the method that creates the regular user.
With the first approach, I can make an @AfterScenario @RegularUser method that will delete the user after the scenario ends. So that seems "a good thing".
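Behat implements these hooks as tagged methods in a PHP context class; purely as an illustration of the paired create/delete idea, the same tag-scoped hook concept in Cucumber-JVM (Java) would look like this:
import io.cucumber.java.After;
import io.cucumber.java.Before;

public class RegularUserHooks {
    // Runs only before scenarios tagged @RegularUser.
    @Before("@RegularUser")
    public void createRegularUser() {
        // create the "regular user" account (details are app-specific)
    }

    // Paired cleanup: runs after each @RegularUser scenario,
    // even if the scenario failed part-way through.
    @After("@RegularUser")
    public void deleteRegularUser() {
        // delete the account created above
    }
}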
But the second approach looks nicer to me from a BDD viewpoint. Its limitation is that nothing then happens at the end of the scenario to delete the user.
Which approach is the better practice?
In my opinion, I would go with the first option.
The problem is that neither option is all that good, because normally you have the context of the scenario added in the feature narrative:
In order to ..
As a Regular User
I need to ..
That should show what your intention was in running this scenario. The process of creating the RegularUser doesn't add value to the scenario itself.
If you need confirmation within the scenario that a regular user exists, then a Given step is OK. But here, in my opinion, that is not the point, so the second option is not so good - we have the narrative context for adding information like that.
That is why I think the first option (not ideal, but better than the second) is the good solution.
I don't know of Behat functionality to set up the user from the context, but the tag option should be more flexible.

What should I test in views?

Testing and RSpec are new to me. Currently I'm using RSpec with Shoulda and Capybara to test my application. It's all fine to test models, controllers, helpers, routing and requests. But what exactly should I test in views? Actually I want to test everything in views, including the DOM, but I also don't want to overdo things.
These three things would be a good starting point:
1. Use Capybara to start at the root of your site, and have it click on links and whatever until it gets to the view you want tested.
2. Make sure whatever content is supposed to be on the page is actually showing up on the page. So, if the 'user' went to the Product 1 page, make sure all the Product 1 content is actually there.
3. If different users see different content, make sure that's working. So, if Admin users see Admin-type buttons, make sure the buttons are there when the user is an Admin, and that they aren't when the user isn't.
Those 3 things are a pretty good base. Even the first one is a big win: it will catch any kind of weird view syntax errors you may have accidentally introduced, since the syntax error will fail the test.
At my work we use RSpec only for unit testing.
For business or behavior testing we use Cucumber, which is much more readable for the business and IT guys.
It's like a contract that you sign with your business, or like documentation that you can execute.
Have a look at Cucumber: http://cukes.info/
I use view specs to verify that the view uses the IDs and classes I depend on in my jQuery code.
And to test different versions of the same page. E.g.:
I would not want to create two full request or feature specs to check that a new user sees welcome message A and a returning user sees welcome message B. Instead I would pick one of the cases, write a request or feature spec for it, and then write an additional view spec that tests both cases.
Rails Test Prescriptions might be interesting for you, since it has a chapter dedicated to view testing.

Dynamic url shortening script for text input

We are looking for a script which automatically detects a URL as you type it into a text input and shortens it, before you press "submit". The shortening service used is http://yourls.org/
Have you tried implementing one yourself? Deploy the shortener to your own web site (it's written in PHP, as far as I can see from a cursory glance at the web site) and provide a simple Ajax endpoint which will dynamically perform a shortening conversion, then implement calls to that from the main page using JavaScript.
You might want to impose a reasonable delay to allow the user to finish typing, to avoid performing lots of unnecessary conversions of bogus URLs (which may require, e.g., writes to a file or database - I haven't looked at how the referenced library does things).
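As a rough sketch of that endpoint idea, here is a Java servlet that relays the shortening request to a YOURLS install's standard HTTP API (yourls-api.php); the host, signature token, and servlet mapping are placeholders you'd adapt:
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ShortenServlet extends HttpServlet {
    // Placeholder YOURLS endpoint and API signature (set in the YOURLS admin).
    private static final String YOURLS_API = "http://example.com/yourls-api.php";
    private static final String SIGNATURE = "your-secret-signature";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String longUrl = req.getParameter("url");
        if (longUrl == null || longUrl.isEmpty()) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing url");
            return;
        }
        // Ask YOURLS to shorten the URL and relay its JSON reply to the browser.
        String query = "?signature=" + URLEncoder.encode(SIGNATURE, StandardCharsets.UTF_8)
                + "&action=shorturl&format=json"
                + "&url=" + URLEncoder.encode(longUrl, StandardCharsets.UTF_8);
        HttpURLConnection conn =
                (HttpURLConnection) new URL(YOURLS_API + query).openConnection();
        try (InputStream in = conn.getInputStream()) {
            resp.setContentType("application/json");
            in.transferTo(resp.getOutputStream());
        }
    }
}
The page's JavaScript would then call this endpoint from a keyup handler after the inactivity delay suggested above, and replace the typed URL with the short one from the response.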
I'm not sure what you're trying to achieve; if you create new shortened URLs for each substring before the user has finished typing the full URL, you will just fill your database with needless entries.
I don't see how shortening a URL before it's finished makes sense.
If you want to relieve the user from the arduous task of clicking the submit button, then initiate the submit using javascript (jQuery, or something). I'm not sure if that's what you want to do.
http://monkeytooth.net/2010/12/htaccess-php-how-to-wordpress-slugs/
This is a simple means of implementing the concept; it's a lot easier than one would think. Querying a DB, or some other means of matching the slug/id with the one found in the URL, wouldn't be too hard either. The linked article doesn't really go in depth on what to do next, but catching and breaking the URL apart is the essential process that makes it work. I have personally used this method on several sites and it works like a charm for me and the sites it was used on.

How does Google track search result clicks? Is this the best way?

As the question states, I'm trying to figure out how Google tracks clicks on search results. When you view the source, you find the following:
<a href="http://www.yahoo.com/" onmousedown="return rwt(this, …)"><em>Yahoo</em>!</a>
The function rwt is the following, and it's pretty messy:
window.rwt = function(b, d, e, g, h, f, i, j) {
    var a = encodeURIComponent || escape, c = b.href.split("#");
    b.href = ["/url?sa=t\x26source\x3dweb", d ? "&oi=" + a(d) : "", e ? "&cad=" + a(e) : "", "&ct=", a(g), "&cd=", a(h), "&url=", a(c[0]).replace(/\+/g, "%2B"), "&ei=7_C2SbqXBMW0-AbU4OWnCw", f ? "&usg=" + f : "", i, c[1] ? "#" + c[1] : ""].join("");
    b.onmousedown = "";
    return true;
};
So it looks like Google is changing the href of the a tag to /url?... which I'm assuming is where their tracking is. From LiveHeaders in Firefox, it looks like this page is redirecting the browser to the original href of the a tag.
Is this correct and is this the best method of tracking clicks on links on your site, such as ads?
It's actually changing the href of the link rather than the window location. It's setting b.href, and b refers to the link itself. This runs in onmousedown, so when you release the mouse and the click is handled you magically get sent to that new href.
Any click tracking pretty much comes down to sending the user to some equivalent of Google's /url?... script, counting the click, and performing a 302 redirect to the real destination.
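As a sketch of that counting-redirect idea in Java (a servlet; the parameter name and in-memory counter are illustrative stand-ins for whatever your ad system actually uses):
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ClickTrackerServlet extends HttpServlet {
    // In-memory counters for illustration; a real system would persist these.
    private final Map<String, AtomicLong> clicks = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String target = req.getParameter("url");
        if (target == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        // Count the click against the destination (or an ad ID, etc.).
        // A real implementation should also whitelist targets so the
        // endpoint can't be abused as an open redirect.
        clicks.computeIfAbsent(target, k -> new AtomicLong()).incrementAndGet();
        // 302 redirect to the real destination.
        resp.sendRedirect(target);
    }
}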
This javascript href replacement has the advantage of automatically filtering out any robots that don't run scripts. The downside is that it also filters out any real people that have javascript disabled. If, like Google, you just care which link is most popular with your real human users, this works out quite well. The clicks that you do record should be representative of real human traffic, and you can safely ignore the clicks from non-javascript users because they probably have the same preferences anyway.
Most adverts just link straight to the counting URL with no javascript replacement. This means that you definitely count every real click on the link, but you need to worry about filtering out requests from robots, since they'll now see your counting URL too.
Which you prefer really depends on why you want to track the clicks.
I think most people expect ads to click through via some sort of tracking system, so I wouldn't worry too much about copying this particular JavaScript implementation. As much as anything, it's probably there to ensure that the user sees the correct link in the browser's status bar, that various other interesting bits of info (search terms, position in the result set at the time, who you are, etc.) are sent across (without you realising it), and that the links still work if JavaScript is disabled.
Generally, yes: directing the user through a tracking page with the ID of the ad they clicked on, and possibly some additional indication of where they came from, is sensible. That way you aren't relying on other mechanisms (such as JS event handlers) to track clicks on the links; it's certainly the way most ad systems I've used work.

Writing SEO-friendly pages that can be toggled public or private

Our application wants to be able to create static, searchable pages based on user profile information, which would be linkable to other public profiles.
I am looking at LinkedIn as an example...it seems like they actually auto-generate the page to be a static file that is indexable and searchable.
Can someone suggest how we would do this? I am thinking there would need to be a cron job that runs and writes the path and file name.
The user may want to keep the whole page private, in which case I imagine it would need to delete it.
There are a lot of sub-requirements, but that's the general concept, and I wanted to start getting ideas and feedback.
Thanks.
You can do without the cron job if you generate the static pages in real time whenever the profile information is created or updated, or whenever the user changes the setting that makes their info public/private. This way you are not constantly looping through all users, and you do not depend on another component (your cron job) running.
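A minimal sketch of that real-time approach in Java, assuming a hypothetical save hook and a web-root directory for the generated pages (both names are placeholders):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProfilePagePublisher {
    // Placeholder directory served directly by the web server.
    private final Path publicDir = Paths.get("/var/www/public-profiles");

    // Hypothetical hook, called whenever a profile is saved or its
    // public/private setting changes.
    public void onProfileSaved(String username, String renderedHtml,
                               boolean isPublic) throws IOException {
        Path page = publicDir.resolve(username + ".html");
        if (isPublic) {
            // (Re)write the static page so crawlers always see fresh content.
            Files.write(page, renderedHtml.getBytes(StandardCharsets.UTF_8));
        } else {
            // The profile went private: remove the indexable page.
            Files.deleteIfExists(page);
        }
    }
}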
One alternative would be to adopt an explicit RESTful information architecture so that a profile resource ("page") is addressable with a permanent URL. The resulting resource could be a static page. Or not. That would be an implementation detail invisible to the search engine crawler and any web browser accessing the resource.
umnik700's answer is fairly dead-on if you're not considering issues related to authentication or who gets to see what. Consider the difference between the profiles you see when you're logged into Facebook versus those same profiles' publicly facing, searchable counterparts. Even MySpace, with a lot less consideration for search engine privacy, has viewability that depends on your relationship to the other person, defaulting, for private profiles, to "This profile has been set to private by the user" or something to that effect.
If you're looking to suddenly scale out a social tool where individuals are sharing their personal information, I would suggest umnik700's answer (dynamically generate the content, but not the URLs, for public versions of the profile) with the following corollary: you need to be able to support privacy preferences varying from extremely strict to completely open, and default to a version that errs on the stricter, more private side. If you're just now pushing out searchable personal content when there never was any way to find it outside the site before, it's important not to abuse information given under different pretenses.
I know this probably requires more scalability and added functionality than you were hoping this project would take, but doing otherwise could well be taken as a violation of your user base's tacit trust. The best strategy for this will probably require you to lean on your database more anyway, so it might be time to rework it a bit - including adding some privacy preferences.