How do I resolve ambiguity in Capybara? Because of the design, I need links with the same values on a page, but I can't write a test since I get the error
Failure/Error: click_link("#tag1")
Capybara::Ambiguous:
Ambiguous match, found 2 elements matching link "#tag1"
The reason I can't avoid this is the design: I'm trying to recreate the Twitter page, with tweets/tags on the right and the tags on the left of the page, so it's inevitable that identical links show up on the same page.
My solution is
first(:link, link).click
instead of
click_link(link)
Such behavior of Capybara is intentional, and I believe it shouldn't be worked around as suggested in most of the other answers.
Versions of Capybara before 2.0 returned the first matching element instead of raising an exception, but the maintainers of Capybara later decided that was a bad idea and that raising is better: in many situations, returning the first element quietly gives you an element other than the one the developer wanted.
The most upvoted answer here recommends using first or all instead of find, but:
all and first don't wait for an element matching the locator to appear on the page, whereas find does wait
all(...).first and first won't protect you from the situation where another element matching the locator appears on the page in the future, in which case you may end up with the wrong element
So it's advised to choose another, less ambiguous locator: for example, select the element by id, class, or another CSS/XPath locator so that only one element matches it.
As a note, here are some locators that I usually find useful when resolving ambiguity:
find('ul > li:first-child')
It's more useful than first('ul > li') as it will wait until the first li appears on the page.
click_link('Create Account', match: :first)
It's better than first(:link, 'Create Account').click as it will wait until at least one Create Account link appears on the page. However, I believe it's better to choose a unique locator that doesn't appear on the page twice.
fill_in('Password', with: 'secret', exact: true)
exact: true tells Capybara to find only exact matches, i.e. not to match "Password Confirmation".
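For the original question's layout, a minimal sketch (the .sidebar-tags class is an assumption for illustration, not something from the question) of a less ambiguous locator that still waits:

# Only the sidebar copy of the link can match this locator;
# find still waits for it to appear before clicking.
find('.sidebar-tags a', text: '#tag1').click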
The above solution works great, but for those curious you can also use the following syntax.
click_link(link_name, match: :first)
You can find more information here:
http://taimoorchangaizpucitian.wordpress.com/2013/09/06/capybara-click-link-different-cases-and-solutions/
NEW ANSWER:
You can try something like
all('a').select {|elt| elt.text == "#tag1" }.first.click
There may be a way to do this which makes better use of the available Capybara syntax -- something along the lines of all("a[text='#tag1']").first.click -- but I can't think of the correct syntax offhand and I can't find the appropriate documentation. That said, it's a bit of a strange situation to begin with, having two <a> tags with the same id, class, and text. Is there any chance they are children of different divs? You could then do your find within the appropriate segment of the DOM. (It would help to see a bit of your HTML source.)
OLD ANSWER: (where I thought '#tag1' meant the element had an id of "tag1")
Which of the links do you want to click on? If it's the first (or it doesn't matter), you can do
find('#tag1').click
Otherwise you can do
all('#tag1')[1].click
to click the second one.
You can ensure that you find the first one using match:
find('.selector', match: :first).click
But importantly, you probably do not want to do this, as it leads to brittle tests that ignore the duplicate-output code smell. That in turn leads to false positives: tests that keep passing when they should have failed, because you removed one matching element and the test happily found the other one.
The better bet is to use within:
within('#sidebar') do
find('.selector').click
end
This ensures that you're finding the element you expect to find, while still leveraging Capybara's auto-wait and auto-retry capabilities (which you lose if you use all('.selector')[1].click), and it makes the intent much clearer.
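As a fuller illustration, here is a minimal sketch of a feature spec using this pattern (the #sidebar id, root_path, and the final expectation are assumptions for illustration, not taken from the question):

# spec/features/tag_links_spec.rb
require 'rails_helper'

RSpec.feature 'Tag links' do
  scenario 'clicking a tag from the sidebar' do
    visit root_path
    # Scoping the click keeps find's auto-wait/auto-retry and makes
    # the intent (the sidebar copy of the link) explicit.
    within('#sidebar') do
      click_link('#tag1')
    end
    expect(page).to have_content('#tag1')
  end
end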
To add to the existing body of knowledge here:
For JS tests, Capybara has to keep two threads (one for RSpec, one for Rails) and a second process (the browser) in sync. It does this by waiting (up to the configured maximum wait time) in most matchers and node-finding methods.
Capybara also has methods that don't wait, primarily Node#all. Using them is like telling your specs that you'd like them to fail intermittently.
The accepted answer suggests page.first('selector'). This is undesirable, at least for JS specs, because Node#first uses Node#all.
That said, Node#first will wait if you configure Capybara like so:
# rails_helper.rb
Capybara.wait_on_first_by_default = true
This option was added in Capybara 2.5.0 and is false by default.
As Andrei mentioned, you should instead use
find('selector', match: :first)
or change your selector. Either will work well regardless of config or driver.
To further complicate things, in old versions of Capybara (or with a config option enabled), #find will happily ignore ambiguity and just return the first matching element. This isn't great either, as it makes your specs less explicit, which I imagine is why it's no longer the default behavior. I'll leave out the specifics because they've been discussed above already.
More resources:
https://thoughtbot.com/blog/write-reliable-asynchronous-integration-tests-with-capybara
https://makandracards.com/makandra/20829-threads-and-processes-in-a-capybara-selenium-session
According to this post, you can fix it via the "match" option:
Capybara.configure do |config|
config.match = :prefer_exact
end
Considering all the above options, you can also try this:
find("a", text: text, match: :prefer_exact).click
If you are using Cucumber, you can follow this approach too.
You can pass the text as a parameter from the scenario steps, which turns this into a generic step you can reuse.
Something like: When a user clicks on "text" link
And in the step definition: When(/^(?:user) clicks on "([^"]*)" (?:link)$/) do |text| (see the sketch below)
This way, you can reuse the same step, minimizing the lines of code, and it becomes easy to write new Cucumber scenarios.
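A minimal sketch of the full step definition (the step wording and the regex are illustrative; adjust them to match your feature files):

# features/step_definitions/link_steps.rb
When(/^a user clicks on "([^"]*)" link$/) do |text|
  # match: :first avoids Capybara::Ambiguous when the same link text
  # appears more than once on the page
  click_link(text, match: :first)
end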
To avoid the ambiguous match error in Cucumber:
Solution 1
first("#tag1").click
Solution 2
cucumber features/filename.feature --guess
Related
I'm currently in the process of setting up Cypress for my project. So far we're only using Testing Library for frontend tests, and reading the Cypress documentation has gotten me a bit confused, as the two libraries seem to have opposite philosophies with regard to how you're supposed to query for elements.
Testing Library basically says to test what the user can see/touch and only use data-testid if all else fails. Cypress, on the other hand, states that best practice is to query elements by data-testid / data-cy attributes.
I feel conflicted between the two approaches. I get the point that we should test what the user actually sees (Testing Library). But I also get that those things often change (Cypress) and we need to spend time updating tests whenever we make small changes (e.g. "Ok" -> "Done"). And when testing with data-cy attributes, are we not also ignoring accessibility / screen readers?
What are your thoughts on this?
React Testing Library (RTL) is specifically made for writing tests from a user's perspective. From their Guiding Principles:
The more your tests resemble the way your software is used, the more confidence they can give you.
Meaning, RTL wants you to use accessibility queries like getByRole and only fall back to getByTestId for cases where you can't match by role or text, or where it doesn't make sense (e.g. the text is dynamic).
However, thanks to the render method allowing us to specify props (compared to Cypress), we have much more flexibility and may entirely omit dynamic text.
Cypress, on the other hand, runs with all dependencies. In the case of dynamic content from a CMS or a multi-language application, things are not that easy using getByRole("heading", {name: /welcome/i }). In this case, the recommendation of testIds makes sense.
My personal recommendation is to use accessibility query selectors in both Cypress and RTL, unless the text is dynamic. In that case, testIds in Cypress and a combination of testIds and accessibility query selectors provide the best solution.
It should also be noted that Playwright and Cypress test-generator tools select by accessibility query selectors.
I thought about this a lot for a few days before answering this question, and I even ran some tests; after that I came to the conclusion that the Cypress approach is the best.
The main reason that led me to this answer is that when Testing Library says we should test what the user actually sees, that principle already applies in Cypress even when we use a data-testid. Suppose you have a button that exists in the DOM but is not visible: if you select that button by its data-testid and try to click it, Cypress will return an error saying the button is not visible, and if you really want to perform the action you must pass force: true. The same happens if the button is not clickable or if there is another element in front of it.
Cypress already checks by default, on click and type actions, that the element:
is scrolled into view
is visible
is not disabled
is not detached
is not readonly
is not animating
is not covered
and it fires the event at a descendant element when needed
Also, fetching the element by text, placeholder, or class does not guarantee that the element is actually visible to the user, as the element can be in the DOM and still not be visible for various reasons.
So the best way to make tests easier to maintain and read, and to avoid flaky tests, is to use data-testid and, whenever possible or necessary, combine locating the element with an assertion that it is visible. Example:
cy.get('[data-testid="button"]').should('be.visible')
I hope this contributes to the discussion, and I would love to hear other people's points of view.
Hi people from StackOverflow,
I'm taking over someone's work, and my predecessor created a test set using Cucumber, Capybara and Selenium.
I'm familiar with the lot, but I've got a question concerning his way of finding text on a page.
His implementation:
expect(page).to have_content(text)
My implementation:
page.has_content?(text)
What I've noticed is that the first implementation often fails because the automation is 'too quick' for the website to load its page. The latter seems a more robust implementation, perhaps because of its simplicity?
Can someone tell me if there's a right or a wrong, or whether these two are fundamentally different? I've been trying to search the web but have not really found a solid conclusion.
Thanks in advance!
have_content raises an exception when its expectation fails, and it should be used much more often in most test suites than has_content?. has_content? is basically just a wrapper around have_content that catches the exception and returns true or false, and it's meant for use with conditionals
if page.has_content?(...)
# click something
else
# click something else
end
Your predecessor is using Capybara correctly: if you are testing to make sure a page has specific content, you should be using have_content. has_content? will never fail the test (it just silently returns false and continues on). If your have_content assertions are failing because the site is too slow, you probably need to increase Capybara.default_max_wait_time (or figure out why page load times are so long).
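A minimal sketch of that configuration (the value of 5 seconds is just an example; tune it for your app):

# spec/rails_helper.rb (or wherever Capybara is configured)
Capybara.default_max_wait_time = 5 # seconds; the default is 2

# have_content then retries until the text appears or the wait time elapses
expect(page).to have_content(text)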
Ummm... I think there is one more thing.
expect(page).to have_content(text) is the RSpec style, while page.has_content?(text) is what you'd reach for with Minitest-style assertions. It just depends on what type of testing you use in your project, I guess.
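For reference, a small sketch of what each style looks like in practice (has_content? only does something useful when wrapped in an assertion or a conditional):

# RSpec expectation style: waits, retries, and fails the example on a miss
expect(page).to have_content(text)

# Plain-assertion style (e.g. Minitest): wrap the boolean wrapper in an assert
assert page.has_content?(text)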
I wrote an XPath like the one below:
//div[contains(@id,'ext-element-')]/table[2]/tbody/tr/td/div/span
The same XPath sometimes finds the particular element, but sometimes it throws
ElementNotFoundException.
Is there any convenient way to solve this problem?
The more elements there are in the XPath (e.g. tbody/tr/td/div), the more opportunities there are for it to break (sometimes for mysterious reasons).
Wherever possible, use descendant to skip them, for example:
//div[contains(@id,'ext-element-')]/table[2]/descendant::span[contains(@id, 'spanId')]
Or just use a double slash // (meaning any descendant, at any depth):
//div[contains(@id,'ext-element-')]/table[2]//span[contains(@id, 'spanId')]
Shorter, yes, but less readable (it's easy to miss a slash and then wonder what happened). Still, I mostly prefer the double slash.
The use of "Axis names" can make your xPaths more robust.
Here are some resources:
http://seleniumworks.blogspot.de/2014/03/xpath-selenium-uses-part-ii.html
https://www.simple-talk.com/dotnet/.net-framework/xpath,-css,-dom-and-selenium-the-rosetta-stone/
http://www.guru99.com/xpath-selenium.html
Does the step prior to this cause a change to the DOM (e.g. a new page load or an AJAX request that changes the page)? If yes, then it is likely a timing problem: the element is sometimes not found because the request for it happened while the page was still loading. You should instead wait for the element to exist, then find the element and do whatever comes next.
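A minimal sketch of such a wait, using the Ruby Selenium bindings as an example (the question doesn't say which language or bindings are in use, driver is assumed to be your WebDriver instance, and the timeout is arbitrary):

require 'selenium-webdriver'

wait = Selenium::WebDriver::Wait.new(timeout: 10) # seconds
# Retries the lookup until the element exists or the timeout is hit
element = wait.until { driver.find_element(:xpath, "//div[contains(@id,'ext-element-')]/table[2]//span") }
element.click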
Not sure if this will help you much, but I was having a similar issue where driver.find_element_by_xpath(...) would return information initially, but running the same thing a couple of seconds later would result in the 'Element Not Found' exception. So I imported time and put a sleep(2) right after my driver.get(...), and that fixed it for me.
Hope that helps.
Always try to go with an absolute path first; otherwise try adding indexes to the XPath; otherwise go for the third option of a relative path (keeping it minimal).
I want to evaluate my content blocks before running my test suite, but the closures' property names are in bytecode already. I'm looking for the cleanest solution (compared with parsing the source manually).
I already tried the solution outlined in this post (and I'd still wind up doing some regex/parsing), but I could only get it to work via the script execution engine. It failed in the IDE and GroovyConsole. Rather than embedding a Groovy script in the project's code, I thought I'd try using Geb's native classes.
Is building on the suggestion about extending Geb Navigators here viable for Geb's PageContentSupport class whose contentTemplates contain a LinkedHashMap of exactly what I need? If yes, would someone provide guidance? If no, any suggestions?
It is currently not possible to get hold of all content elements for a given page/module. Feel free to create an issue for this in Geb's bug tracker, but remember that all Geb can provide is either a list of content element names or a map from those names to the closures that create the elements.
Having that information isn't a generic solution to your problem, because content elements can take parameters, and there are situations where content elements become available on the page only after some other actions are performed (for example, you have to click a button to reveal a section of the page that uses AJAX to retrieve its content). So I'm afraid that simply going over all elements and checking that they don't throw any errors will not cut it.
I'm still struggling to see what "evaluating" all content elements prior to running the suite would buy you. Are you after verifying that your content elements still work, to get faster feedback than running the whole suite? I'm pretty sure you won't be able to fully automate detection of content definitions that no longer work. In my view it will be more effort than it's worth.
I am trying to capture an element (the delete button in Gmail) which has a variable XPath.
The XPath is something like this:
//*[#id=':rr']/div/div[4]/div[1]/div[1]/div[1]/div/div/div[2]/div[3]
Can somebody kindly help?
No, this is where the IDE falls behind, and for good reason. It, along with other 'XPath-ified' tools (e.g. the 'XPath' right-click option in Firebug), will only take a guess at where something is located in the DOM.
By that I mean it walks down the tree and sees where the element is in relation to the other elements, i.e. it'll walk down one set of tr elements, see that there are 7 of them, and therefore know that the first one can be accessed using [1], the next one using [2], and so on.
It doesn't, and really can't, know what is unique enough for you to use. That's why it's down to you to figure it out.
As for Gmail specifically, I would suggest you either fall back to Gmail's basic HTML mode, so the markup is easier to deal with, or stop completely and use an appropriate set of APIs in whatever language you are using to deal directly with the mailboxes in that account.
Though if you do this, you'll need to dump the IDE altogether; essentially this is beyond the IDE, and it's a decision you need to make yourself. The IDE is not designed for this.
Still, a tip would be to see what's near the delete button. Is there a static element, one that has the same attributes all the time, near it? You can get that element and walk through the DOM to your 'delete button'.
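For example, a hedged sketch (the attribute values are made up; inspect the real markup for something genuinely stable) that starts from a stable neighbour and walks to the button:
//span[@id='someStableLabel']/following-sibling::div[@role='button']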