The Velocity documentation mentions test coverage as a primary feature of the framework, but I'm having a hard time finding specific instructions for getting started.
Is there support for test coverage reporting in Meteor / Velocity (either directly or via a third-party package)?
You can't currently do this, but it is on the road map.
This used to work: https://github.com/xolvio/meteor-coverage but it no longer works with the new mirror approach. I'll update this answer when we get it working again.
We are three testers preparing to set up an automation project with Selenium and Java. What are the steps for environment setup, script integration, running the test cases, and getting the results for the whole project suite?
There are a few things you need in order to allow multiple engineers to work on the same framework.
Step 1) Create the framework. If you already know how to do this and have working tests, you can skip this stage; if not, please follow the tutorial I link below.
http://toolsqa.com/selenium-webdriver/
Step 2) Create a repo. My preference is GitHub; you can use any Git host, but I will post the guide for setting one up with GitHub (it's a similar process for all of them). This will allow you to merge code properly without causing conflicts.
https://help.github.com/articles/create-a-repo/
Step 3) Use a source control client to push, pull, and fetch from your GitHub repo. You can do this from the command prompt, but I find cloning the repo with a program like SourceTree really easy, so I've posted that guide below.
https://confluence.atlassian.com/get-started-with-sourcetree
If you follow these 3 guides, you will be able to have your automation test scripts on GitHub by the end of the day.
If you have any more questions please do not hesitate to ask.
All the best, Jack
The easiest and most logical way to do this would be to create one branch in your version control system (Git, SVN, etc.) and have each person set up the dev environment in the same way. Work exactly like developers: pull code before you check in/commit (this will ensure that one small error does not break your framework) and commit to resolving conflicts during merges (to ensure you don't step on each other's toes).
Also, before you kick off, agree on a coding standard (including package naming, design-pattern usage, and file and method naming); if this is in sync with the dev coding standards in your company, even better.
There will be a few hiccups along the way, but experience is the best way to create a process for your development and check-in practices.
Good luck with your new project and happy coding!
You have asked two questions; in my opinion, the answers are:
How can multiple automation testers work on the same Selenium project? You can use any version control system; GitHub is the best option and gives you a lot of facilities, and all three of you can work on the same project at the same time. Alternatively, you can go for a centralized version control system like Subversion (via TortoiseSVN), which is less commonly used these days. I would suggest GitHub.
What are the steps for environment setup, script integration, running the test cases, and getting the results for the whole project suite? It depends on various factors, such as the application and the kind of framework you want to use. There are several framework styles widely used for automation testing - modular, data-driven, keyword-driven, and BDD - supported by tools like Cucumber and TestNG, or, if you have the bandwidth and time, you can design a custom framework to suit your needs.
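As a rough illustration of the data-driven style with Selenium, Java, and TestNG, a test might look something like this sketch (the URL, locators, and credentials are placeholders, not from a real application):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.Assert;
    import org.testng.annotations.*;

    public class LoginTest {
        private WebDriver driver;

        @BeforeClass
        public void setUp() {
            driver = new ChromeDriver(); // assumes chromedriver is on the PATH
        }

        // Each row is one test case: username, password, expected outcome
        @DataProvider(name = "credentials")
        public Object[][] credentials() {
            return new Object[][] {
                {"validUser", "validPass", true},
                {"validUser", "wrongPass", false}
            };
        }

        @Test(dataProvider = "credentials")
        public void loginTest(String user, String pass, boolean shouldSucceed) {
            driver.get("https://example.com/login");              // placeholder URL
            driver.findElement(By.id("username")).sendKeys(user); // placeholder locators
            driver.findElement(By.id("password")).sendKeys(pass);
            driver.findElement(By.id("loginButton")).click();
            boolean loggedIn = !driver.findElements(By.id("logout")).isEmpty();
            Assert.assertEquals(loggedIn, shouldSucceed);
        }

        @AfterClass
        public void tearDown() {
            driver.quit();
        }
    }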
I hope this sheds some light on your queries.
Thanks
Hello Test Automation Experts, Managers,
Seeking advice on how to transition from manual testing to automation testing.
I have been a manual tester for over 8 years. I wish to switch to automation testing, so I attended training in Selenium WebDriver and Java (we used TestNG and Maven). Though I am able to write moderately complex automation scripts, I am finding it hard to get interviews or attract the attention of a hiring manager or recruiter just by telling them I have knowledge of automation.
What should I do to get one step closer so that a hiring manager will show interest in me?
I am very much keen on moving to test automation and willing to spend the required time to make this happen.
Should I write a blog showcasing my skills in automation, or post my samples on GitHub?
Looking forward to receiving your advice, as I have been totally lost and frustrated these last few months attempting to switch to automation testing.
Thanks in advance
As most of the experts from the Selenium, Mozilla, Google Chrome, and IE communities are active and visit Stack Overflow on a regular basis, you can try the following ideas to get noticed:
As a beginner, go through the relevant discussion threads under the Frequent tab on Stack Overflow and start getting your hands dirty with code.
When you have gathered ample knowledge, start answering questions and become a Stack Overflow volunteer.
Earn Bronze/Silver/Gold badges on Selenium / WebDriver / Java / Python / C# / NodeJS / Ruby / PHP / Perl tags.
Start writing a technical blog.
Prepare videos on technical aspects of Selenium / WebDriver / Java / Python / C# / NodeJS / Ruby / PHP / Perl and publish them.
The best outcome could be that you end up becoming a Selenium committer.
My suggestion would be:
To start with, try to write an automation script for a publicly available web application. Example: a Selenium script to log in to Amazon, select a specific product, add it to the basket, and go all the way to the payments page (see the sketch after this list). Next, automate some other sites.
Create a blog where you explain step by step how you achieved your task.
Upload your scripts to GitHub and provide that link in your blog.
Ask and answer questions on Stack Overflow. Be active in Facebook Selenium/automation groups.
Practice Java programming on sites like HackerRank, HackerEarth, etc. Upload your solutions to GitHub as well.
Keep exploring deeper, keep giving interviews, and never stop learning.
List all of this work as personal projects on your resume.
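For instance, a first script along those lines might look roughly like the sketch below (using the Selenium 4 Java bindings; the shop URL and every locator are hypothetical placeholders rather than a real site's, and a real site would need more robust waits):

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class ShoppingFlow {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            try {
                // 1. Log in (placeholder URL and locators)
                driver.get("https://www.example-shop.com/login");
                driver.findElement(By.id("email")).sendKeys("user@example.com");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("signInButton")).click();

                // 2. Search for a product and open the first result
                driver.findElement(By.name("q")).sendKeys("headphones");
                driver.findElement(By.cssSelector("button[type='submit']")).click();
                wait.until(ExpectedConditions.elementToBeClickable(
                        By.cssSelector(".product-link"))).click();

                // 3. Add to basket and continue to the payment page (stop before paying)
                wait.until(ExpectedConditions.elementToBeClickable(By.id("add-to-cart"))).click();
                driver.findElement(By.id("proceed-to-checkout")).click();
                wait.until(ExpectedConditions.urlContains("payment"));
            } finally {
                driver.quit();
            }
        }
    }

Writing one flow like this end to end, then blogging about each step, gives you something concrete to show a hiring manager.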
All the best.
I am new to the Testing Arena. I am working with a very heavy ExtJs application.
And I am looking for the best testing tool.
I came across a bunch of tools, but can't seem to make a decision.
1) Siesta 2) Jasmine 3) Riatest
I want to be able to deploy these tests easily on a CI server.
Siesta and Jasmine can both be used with PhantomJS to automate the tests, but which one is better and easier to use?
As long as I can generate various clicks correctly and capture output, I'm cool.
Any help is appreciated.
Our company is moving from a Java-based client to an ExtJS web and mobile application. We use QTP/UFT for Java automation, which is slow, buggy, expensive, and cannot get past the DOM easily, so I started investigating Siesta recently. It seems like a viable option in my book, but I admit I haven't checked out the other applications.
The initial setup with Siesta took longer than expected, but its event recorder makes the transition gratifying, even though the recorded scripts still require debugging. I'm in QA and know how to script using Python, Bash, etc., but it's definitely a learning curve to transition from VBScript to ExtJS/Siesta JavaScript. They have an open-source version and a free 45-day trial to check out.
I've read about HTML Robot and SmartBear. Here's a post on the Sencha forums that talks about different automation software. Sencha also plans to release some kind of automation involving SenchaCmd during SenchaCon 2015 this April 7 to 9.
You should pick a tool that covers your needs and improves your software quality.
Jasmine is good for unit tests without much GUI interaction; you should use it to test your domain logic (e.g. stores, models, ...). Jasmine can run in any environment; a simple server with a Node.js runtime is enough.
For regression tests, the choice is yours. Which tool are you comfortable with? Choosing a tool is one part; using it is another. Riatest seems to be a Windows application - are you able to run that on your CI server?
Evaluate them with your dev team and then make a choice for the long run.
I am looking for a generic web-testing automation framework that can be used to automate the testing of various web-based applications. I would prefer a C#-based framework, as that is the language I am most familiar with, but a framework in any other language will also do, provided it does not use a proprietary/licensed language.
The framework should have an open-source, free-of-cost license model.
I searched for Selenium-based frameworks on Google and SO but could not come up with any that have source code available. It would be good if the framework encapsulated all the functionality provided by Selenium WebDriver and/or Selenium RC and empowered the functional tester to create and maintain tests as human-readable scripts.
Requirements of the framework:
The framework code should avoid hard-coding test steps. My idea is to maintain the test scripts outside the automation framework code, so that they can easily be modified if needed. The framework should read through the step tables and the data tables and run the tests accordingly, along the lines of the sketch below.
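To make the idea concrete, here is a rough sketch of the kind of step-table runner I have in mind (written in Java purely for illustration, although I would prefer C#; the keywords, CSV format, and locator syntax are all invented for the example):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Reads a CSV step table of the form: keyword,locator,value
    //   open,,https://example.com/login
    //   type,id=username,alice
    //   click,id=loginButton,
    public class StepTableRunner {
        public static void main(String[] args) throws IOException {
            WebDriver driver = new ChromeDriver();
            List<String> steps = Files.readAllLines(Paths.get("steps.csv"));
            try {
                for (String step : steps) {
                    String[] cols = step.split(",", 3);
                    String keyword = cols[0].trim();
                    String locator = cols.length > 1 ? cols[1].trim() : "";
                    String value   = cols.length > 2 ? cols[2].trim() : "";
                    switch (keyword) {
                        case "open":  driver.get(value); break;
                        case "type":  driver.findElement(toBy(locator)).sendKeys(value); break;
                        case "click": driver.findElement(toBy(locator)).click(); break;
                        default: throw new IllegalArgumentException("Unknown keyword: " + keyword);
                    }
                }
            } finally {
                driver.quit();
            }
        }

        // Only "id=" locators are supported in this sketch.
        private static By toBy(String locator) {
            return By.id(locator.substring("id=".length()));
        }
    }

The step tables themselves would live outside the framework, so functional testers can edit them without touching the code.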
If there is no such framework available right now, then we could collectively build one in an open-source community model.
P.S.
I have read a little about the Hermes Framework and Robot Framework but have not yet tried them; any help is welcome.
The good side of this problem: there are a lot of flexible tools and approaches you can put together to build a flexible, reliable, and robust test automation framework.
The hard part: yes, there is no out-of-the-box solution, and you'll need to find and combine lots of tools in order to solve this test automation puzzle.
What I would recommend:
First you need to choose a unit-testing framework. This is a tool which identifies separate methods in code as tests, so you can run them together or separately and get the run results, such as pass or fail.
My personal opinion is that MS-Test, the testing tool which ships with Visual Studio 2013 (and also the Express Edition), is good enough. Other alternatives are NUnit and Gallio Icarus.
All unit-testing frameworks include a mechanism for making assertions inside the test. The capability of the assertion class depends on the given unit-testing framework. Here I would like to recommend a popular library which works great with any of these unit-testing frameworks.
This is Fluent Assertions (also available from the NuGet repository).
That’s a hard moment. You need to decide: are you going to use the PageObject approach in order to build your test automation framework, or you are going to choose simpler approach, without heavy utilization of the Object Oriented Programming.
Properly designed Page Objects makes your test automation code much maintainable. Utilizing the OOP – you can do a magic in your code: write less to do more. Although, such approach requires more skill.
Here are a good articles on this topic:
Maintainable Automated UI Tests
And this one:
Tips to Avoid Brittle UI Tests
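To make the PageObject idea concrete, here is a minimal sketch (in Java for brevity - the same shape works in C#; the page URL and locators are invented for the example):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Encapsulates one page: locators and actions live here, not in the tests.
    public class LoginPage {
        private final WebDriver driver;
        private final By userField   = By.id("username"); // placeholder locators
        private final By passField   = By.id("password");
        private final By loginButton = By.id("login");

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        public LoginPage open() {
            driver.get("https://example.com/login"); // placeholder URL
            return this;
        }

        // Returns the next page object, so tests read as a chain of user actions.
        public DashboardPage loginAs(String user, String password) {
            driver.findElement(userField).sendKeys(user);
            driver.findElement(passField).sendKeys(password);
            driver.findElement(loginButton).click();
            return new DashboardPage(driver);
        }
    }

    // A second, minimal page object so the example is self-contained.
    class DashboardPage {
        private final WebDriver driver;

        DashboardPage(WebDriver driver) {
            this.driver = driver;
        }

        public boolean isDisplayed() {
            return !driver.findElements(By.id("dashboard")).isEmpty();
        }
    }

A test then talks only to page objects - for example new LoginPage(driver).open().loginAs("alice", "secret") - so locator changes stay inside the page classes instead of leaking into every test.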
The alternative to the PageObject approach is a scripted approach. This approach can also be successful and requires less time to get started.
Coypu is a good, usable example of such a framework for Selenium WebDriver.
All the popular unit-testing frameworks support data-driven tests. The best support is in NUnit - you can run/re-run the tests and see the test generated for each individual data row in the test tree.
MS-Test supports reading data from different data sources (text files, Excel, MSSQL, etc.), but it is not possible to re-run the test for an individual data row. There is, however, a hack for this - Ms-Test Rows.
For my data-driven tests, I am using a great library - Linq to Excel.
I have a lot more to say. There are so many approaches to building a test automation framework - and there is no ready-made solution yet.
I am trying to build one according to my testing methodology - SWD.Starter.
This project is still in its early development stages, but you'll probably find a few tips there on how to build and organize test automation code.
I've implemented https://github.com/leblancmeneses/RobustHaven.IntegrationTests based on my prior experience on large projects "trying" to implement full end-to-end testing.
I've been using this, and it has a lot of useful extensions for general Selenium, AngularJS, and Kendo UI work. Since this framework is not obtrusive, you could just use these extensions without using anything else.
I'm using this on my latest project and everyone is loving it.
There are a lot of BDD/spec frameworks (SpecFlow, MSpec, NSpec, StoryQ) to help wire the behavior of your system to tests.
What I've learned:
Make it frictionless for any .NET developer/tester to begin writing and running tests.
Most frameworks fail here because they require installing additional plugins into Visual Studio.
Mine uses standard NUnit.
Logically you would think that a feature is a class file and scenarios are [Test] methods; to accommodate some of these frameworks, each scenario has to be its own class file.
Use the original spec to create stubs of your tests - hopefully readable code.
I used SpecFlow back in 2010, so things might have changed. I generated my tests from my BDD document. A year later, when I went to add more tests and update existing ones, I felt I was wasting more time on ceremony than on writing the code I really wanted - so I stopped using it.
My approach uses T4 to generate stubs - the developer has the choice to generate from a feature file, to generate for a specific scenario, or not to use generated code at all.
How is state shared across steps / nested steps?
Most frameworks use a Dictionary<string, object>, accessed from a context object, to keep data from being hardcoded in your tests.
Mine uses viewmodels and pointers to those viewmodels - if you're using something like AngularJS, you are already using viewmodels in your server-side display/editor templates and in your AngularJS controllers, so why not reuse them in your tests? (See the sketch after this list.)
start early with CI - make development transparent
My project has ResultDiff, which takes the NUnit testresult.xml file, the folder location of your Gherkin feature files, and an output JSON file; read the description of why this is important in the screenshot: https://github.com/leblancmeneses/RobustHaven.IntegrationTests#step-5-ci-setup-resultdiff
Example:
Modified means business and developers have a mismatch of Gherkin statements - did something change that we need to talk about?
What is missing? A dashboard to render the .json file created by ResultDiff. It's on my backlog.
A centralized dashboard that supports multiple environments (branches of your code) will show all stakeholders (business and developers) the status of the features being developed.
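To illustrate the shared-state point above, here is a hypothetical sketch of a typed context shared between steps (this is the general idea only, not the framework's actual API, and it is in Java rather than C# for consistency with the other sketches here):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical scenario context: steps share state through typed view models
    // instead of loose string keys scattered across the tests.
    public class ScenarioContext {
        private final Map<Class<?>, Object> viewModels = new HashMap<>();

        public <T> void put(T viewModel) {
            viewModels.put(viewModel.getClass(), viewModel);
        }

        @SuppressWarnings("unchecked")
        public <T> T get(Class<T> type) {
            return (T) viewModels.get(type);
        }
    }

    // Example view model that several steps can read and update
    class OrderViewModel {
        public String productId;
        public int quantity;
    }

A "given" step populates an OrderViewModel and calls context.put(order); a later "then" step retrieves it with context.get(OrderViewModel.class), so the shared data is never hardcoded inside the step bodies.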
There is a framework named "omelet", which is built in Java on top of TestNG for Selenium.
It supports cross-browser, parallel testing, blends easily with your CI tools, and has some cool reporting features with step-level reports.
Running your test cases on BrowserStack or Selenium Grid has never been as easy as with omelet - it takes only a few config changes.
If you want to give it a try, follow the 5-minute tutorial available on the website; there is an archetype available on Maven Central, plus many more features.
The stable version is 1.0.4, and we are currently looking for people to contribute to the project.
Documentation over here
Github link
I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests.
I've tried using Cobertura for J2ME but after days of wrestling with it I failed to get any results from it. (I believe that the instrumentation is un-done by the RAPC compilation). And despite this message, the project seems to be dead.
I've looked at JInjector but the project seems very incomplete. There is little (if any) documentation and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail.
I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE.
I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects.
Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them?
If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?
I can't speak for Coberatura or JInjector, because I don't know how they collect test coverage probe data.
What is critical is how this data is captured (does it need Java runtime support only available in standard Java VMs?) and how it is exported to the test coverage display/report-generation tools.
Our SD Java Test Coverage tool instruments your source code; at runtime this produces an array of native Java booleans representing the coverage data, without the need for any special VM support. Normally this array is exported directly to a file, which is read by the test coverage display mechanism, via a TCVDump method provided with the test coverage tool as your application exits.
Java (and the other programming languages used) in embedded systems often requires custom methods to extract the test coverage data. You might need to code a special dump procedure (in Java) to write out that boolean array to an accessible place. Our experience with building such custom dump procedures is that they are generally pretty simple (a few dozen lines); the real trick is deciding how and where to put the data so that it can be easily moved to the target file. Mostly this is just a peculiar pair of copies: the first copies the boolean array to some staging location, and the second writes the staged data into the destination file. (The standard TCVdump method is provided in source form to enable this kind of customization.)
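As a rough illustration of the general pattern (plain Java SE for brevity, and not our tool's actual generated code): the instrumenter adds a probe array and sets a flag whenever a branch executes, and a small dump routine writes the array to whatever output the device permits.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical sketch of instrumentation-style coverage probes.
    public class CoverageProbes {
        // One flag per instrumented branch, set to true when that branch executes.
        static final boolean[] HITS = new boolean[2];

        static int max(int a, int b) {
            if (a > b) {
                HITS[0] = true; // then-branch executed
                return a;
            } else {
                HITS[1] = true; // else-branch executed
                return b;
            }
        }

        // Custom TCVdump-style routine: write the probe array to whatever
        // storage is reachable, so it can be copied to the reporting tool later.
        static void dump(OutputStream out) throws IOException {
            for (boolean hit : HITS) {
                out.write(hit ? '1' : '0');
            }
            out.flush();
        }

        public static void main(String[] args) throws IOException {
            max(2, 1); // exercises only the then-branch
            try (OutputStream out = new FileOutputStream("coverage.txt")) {
                dump(out);
            }
        }
    }

On a BlackBerry you would replace the FileOutputStream with whatever output the device permits, such as writing to the SD card or sending the bytes over a network connection.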
While I haven't specifically looked at BlackBerry, if you can write the data anywhere, you can pretty much be assured you can achieve this. We've had success with other embedded hand-set systems, such as Symbian, doing this.
If you want a complete overview of how to generally instrument code for test coverage following this strategy, see this paper: Branch Coverage for Arbitrary Languages Made Easy
I was actively involved with JInjector while working at Google. We were able to use it to successfully obtain code coverage for BlackBerry applications. The application lifecycle for BlackBerry apps is less predictable than J2ME, and we found we had to tweak the application code to ensure the coverage data was gathered. I didn't personally work on the BlackBerry apps; several other engineers did. I'd hoped we'd create an example BlackBerry application and make it available on the JInjector site, but events and life got in the way.
If you would be willing to provide a sample BlackBerry app with some unit tests, I'd be willing to spend a few hours trying to help you get code coverage working. I'm not actively working with either J2ME or BlackBerry (I'm currently working on Android apps when I have time to experiment with mobile), so I'm quite rusty. I have a day job that doesn't involve much mobile test automation; however, I continue to work on ways to improve test automation for mobile apps, e.g. http://code.google.com/p/mwta/downloads/list for Android test automation.
I'm julianharty at gmail.com