As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Does anyone have any examples or experience of writing UI automation in a functional language? I'm currently learning F# to get a better understanding of functional concepts, and I'm having difficulty working out how an automated UI test would be structured in a functional language. It seems easy to use the same page/screen object patterns I would in Java or C#, but given my lack of experience I'm curious whether there's a different approach I've missed.
Your biggest win with a functional language will come from not having to use classes at all, while still being able to when they are the right answer. F# also allows for a nice, clean DSL-looking test suite because of type inference and the syntax. Common actions (example: logging in) are easily abstracted into a function and called within a test. Any function that is very specific to a page can be added to that page's module along with its defining features (CSS selectors etc.).
Here is an example of a test written with canopy:
test(fun _ ->
    //description of the test
    describe "registering a user"
    //go to root
    url "/"
    //ensure that you are on the login page
    on "/Account/LogOn"
    //click the registration link
    click "form a[href='/Account/Register']"
    //verify that you were redirected
    on "/Account/Register"
    //set the value of the input to the email address specified
    "#Email" << "username@example.com"
    //set the value of the input to "Password"
    "#Password" << "Password"
    //set the value of the confirmation input to "Password"
    "#PasswordConfirmation" << "Password"
    //click the register button
    click "input[value='register']"
    //verify that you were redirected
    on "/"
    //log off after test
    url "/account/logoff"
)
More about canopy
I've written a web automation framework/library in F# (also one in Ruby), and thus far, while I wouldn't call its style purely functional, it doesn't have any classes. Almost everything is a function, and your test suite is a list of functions that are run.
github page
some examples
At under 500 LoC there are only 3 modules: the main set of functions to interact with your page, a simple test runner, and some configuration variables. At this point this paradigm has worked really well for me. I don't use classes for page definitions because, for me, a page definition is simply the CSS selectors I use. A module with a bunch of values fulfills this need nicely.
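To make the idea concrete in language-neutral terms, here is a minimal sketch (in Python, with hypothetical names; canopy itself does this with F# modules and let-bound values) of a page defined as a module of selector values plus plain functions, with a fake browser standing in for a real driver so the sketch is runnable:

```python
# "Login page" as a module of values: just named selectors, no class needed.
LOGIN_EMAIL = "#Email"
LOGIN_PASSWORD = "#Password"
LOGIN_SUBMIT = "input[value='Log on']"

def log_in(browser, email, password):
    """A common action abstracted into a plain function, reusable across tests."""
    browser.write(LOGIN_EMAIL, email)
    browser.write(LOGIN_PASSWORD, password)
    browser.click(LOGIN_SUBMIT)

class FakeBrowser:
    """Stand-in for a real driver so the sketch runs without a browser."""
    def __init__(self):
        self.actions = []
    def write(self, selector, value):
        self.actions.append(("write", selector, value))
    def click(self, selector):
        self.actions.append(("click", selector))

browser = FakeBrowser()
log_in(browser, "user@example.com", "Password")
print(len(browser.actions))  # three recorded actions
```

A test then reads as a sequence of such function calls, which is exactly the flat, DSL-like shape the canopy example above has.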
Give it a shot, I think you will find it to be an excellent way to accomplish your goals.
Sorry, first-time post, so it won't let me show more links. Look on GitHub and you can see the source at /canopy/canopy/canopy.fs
You seem to have answered your own question: F# supports OOP, OOP is a good fit in this case, and imperative vs. functional is a separate question from structure here.
So use classes and methods just like you would in C#, but write the unit tests themselves in a functional manner.
I am going to develop an application for OS X and I need some scripting engine for it.
The purpose of the scripts is to receive text on their input (an HTML file in most cases), parse it in some way, and return the data to my app. These scripts should be easily editable by the users, so they should have a commonly used syntax like C or Pascal.
Can you suggest some lightweight solution for this?
Thanks!
PS. I am new to OS X development, trying to switch from Windows...
Two suggestions:
JavaScript: try the V8 engine (http://code.google.com/p/v8/). Very popular, with syntax likely familiar to many.
Lua (http://www.lua.org): extremely lightweight and simple to embed. If your script authors write scripts for World of Warcraft, for example, they will already know Lua.
In general AppleScript/Automator actions are easy for the end user to work with since the technology includes a GUI for building scripts without much programming knowledge. For experienced developers used to other languages, they can be a bit too friendly/loose and have a somewhat different syntax (more like plain English). The good thing is that they can also call other languages as needed, so a developer familiar with Perl or whatever could incorporate that into an AppleScript or Automator action.
Since you're talking about parsing text, Perl itself would be a good solution - again there's some difference in syntax, but the scripts can be rather compact and the basics of parsing aren't too difficult to learn. I haven't personally incorporated Perl into an OS X app, I've just used it on the command line, so I don't know if there are any pitfalls to that approach.
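Whichever engine you embed, the user scripts will follow the same shape: take HTML text in, return extracted data. Here is a rough illustration of that contract, written in Python with its standard-library HTML parser purely for the sake of example (your users would write the equivalent in Lua, JavaScript, or Perl):

```python
# A user-supplied "parse this HTML, hand back data" script, sketched in Python.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every anchor tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def extract_links(html_text):
    """The entry point the host app would call: text in, data out."""
    parser = LinkExtractor()
    parser.feed(html_text)
    return parser.links

print(extract_links('<p><a href="/one">1</a> <a href="/two">2</a></p>'))
# → ['/one', '/two']
```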
One additional advantage to AppleScript is that you can make your application itself scriptable so that users could automate the functions of your application into a larger workflow.
I would suggest downloading the free TextWrangler application by Bare Bones Software, or a similar developer's text editor, to see how they incorporate scripting into the application. This may give you additional insight into your approach.
Lua seems to be a good choice.
I'm looking to adopt Agile Development for a project based on web2py on the backend and Ember on the front end. For that I would like to use Behavior Driven Development (BDD) tools like Cucumber and Capybara for Rails. An implicit requirement is that the members of the team writing the user stories should be able to write and run BDD tests without deep knowledge of the code being developed.
I think that Cucumber.js combined with Zombie.js or Selenium would be a good approach, but then there are Jasmine and Mocha. Both claim to enable BDD testing for JavaScript, but I have the feeling they are better suited to unit testing than to testing web applications by simulating how a real user would interact with the application.
Can anyone who has tried BDD with Cucumber.js, Jasmine or Mocha share their point of view as to which one would be the better choice for BDD with javascript?
Also, are there any other alternatives to consider?
For a full BDD testing Stack you could use:
1) cucumber.js + selenium + Soda (or other adapter for node) + node.js
or
2) cucumber.js + zombie.js + node.js
Personally I would go with the second option, since cucumber.js generates stub JavaScript code after parsing your scenarios/features/step definitions written in Gherkin syntax. You can use this code, set up your zombie.js world, and provide all the necessary assertion helper functions for your test suites, and you are all set. The only advantages I see in Selenium are its WebDriver capabilities (Sauce Labs etc.) and the record functionality, but I think the syntax used in zombie.js to drive the tests is pretty straightforward, and you may not need all the functionality Selenium provides.
As for Mocha and Jasmine: if you want Gherkin syntax, neither will provide it, but if you prefer to write all your tests in RSpec-style syntax you can go with one of them instead of cucumber.js. It all depends on how important the Gherkin style is to you.
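The core mechanism behind Cucumber-style tools is simple: step definitions register a pattern, and the runner matches each plain-text Gherkin line against them and calls the matching function. A toy illustration of that mechanism (in Python rather than JavaScript, with made-up step names; this is not cucumber.js itself):

```python
# Toy Gherkin-style step matcher: regex patterns mapped to step functions.
import re

STEPS = []

def step(pattern):
    """Decorator that registers a step definition under a regex."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'I visit "(.+)"')
def visit(world, url):
    world["page"] = url

@step(r'I should see "(.+)"')
def should_see(world, text):
    assert text in world["page"]

def run_scenario(lines):
    """Match each scenario line against the registered steps and run it."""
    world = {}
    for line in lines:
        for pattern, fn in STEPS:
            match = pattern.search(line)
            if match:
                fn(world, *match.groups())
                break
        else:
            raise AssertionError("no step definition for: " + line)
    return world

result = run_scenario(['Given I visit "/Account/Register"',
                       'Then I should see "Register"'])
```

This is why non-programmers can write the feature files: the Gherkin text stays free of code, and only the step definitions (written once by developers) contain the automation logic.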
I've had a "philosophy" disagreement with a coworker and I would like to hear community thoughts on the two sides presented (or, an even better third option).
Basically: We have a JSON API that returns a friends list. The result looks something like:
[{"name":"Bob", "uid":12345, "level":4}, {"name":"George", "uid":23456, "level":6}]
There are the normal sorts of "mutual friend" requirements in place that influence the response.
The disagreement is basically over which is better,
Tests that assert on "features" of the response:
def test_results_are_sorted_by_name():
    .. <setup 2 friends> ..
    response = controller.getFriendsList()
    assertLessThan(response[0].name, response[1].name)

def test_blocked_users_are_not_returned():
    .. <setup some friends and block one, storing the id in blocked_uid> ..
    response = controller.getFriendsList()
    for friend in response:
        assertNotEqual(friend.uid, blocked_uid)
Tests that assert on a literal response:
def test_results_are_sorted_by_name():
    .. <setup 2 friends> ..
    response = controller.getFriendsList()
    expectedResponse = {...}
    assertEqual(response, expectedResponse)

def test_blocked_users_are_not_returned():
    .. <setup some friends and block one, storing the id in blocked_uid> ..
    response = controller.getFriendsList()
    expectedResponse = {...}
    assertEqual(response, expectedResponse)
Which is better, and why?
Are there other options which are better than both?
Pure opinion here, but:
Testing literals encourages bad behavior: literal tests are so fragile and break so often that people learn to respond to test failures by updating the literal. Even if they broke something, they may convince themselves the new output is correct, update the literal, and move on, unintentionally cementing a bug in with a test and ensuring it will not be fixed.
Feature testing is the right way to go. There are ways to go wrong with it - you can easily write a feature test that looks right but passes even when your code is broken - but the best remedy is to pair your feature tests with integration tests, to ensure that the service actually does what you expect.
You need to test based on features, not literal responses.
In principle, this allows people to change the format (add fields, etc) in ways that don't break what you have already written tests for. When they do this, your tests won't break, which is what they should do. (If it's bad for new fields to be added, write a test for that.)
In practice, literal text tests sometimes get fixed by just pasting in the new string, and real problems get, well, pasted over without anyone giving them enough thought.
If you want to make sure the response isn't changed in some way that passes tests but then doesn't work with the recipients of the data, include some basic level of integration test.
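The fragility difference is easy to demonstrate with runnable data (hypothetical response values based on the question's example). When a harmless new field is added to the payload, the feature assertion keeps passing while the literal comparison breaks:

```python
# Response after someone adds an innocuous new "avatar" field.
response = [
    {"name": "Bob", "uid": 12345, "level": 4, "avatar": "bob.png"},
    {"name": "George", "uid": 23456, "level": 6, "avatar": "g.png"},
]

# Feature test: assert only on the property you actually care about.
def results_are_sorted_by_name(resp):
    return all(a["name"] <= b["name"] for a, b in zip(resp, resp[1:]))

# Literal test: assert on the exact payload, including fields you never cared about.
expected = [
    {"name": "Bob", "uid": 12345, "level": 4},
    {"name": "George", "uid": 23456, "level": 6},
]

print(results_are_sorted_by_name(response))  # True: survives the new field
print(response == expected)                  # False: the literal test now fails
```

The literal test's failure here carries no information about sorting or blocking; it only reports that the payload changed at all, which is what trains people to paste in the new literal.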
I have seen some people who refuse to use Interface Builder and prefer to make everything using code. Isn't this a bit tedious and doesn't it take longer? Why would people do that?
This is usually a holdover from working in other environments with other UI builders. A lot of UI builder programs are viewed as newbie hand-holding at best and outright harmful at worst. Interface Builder is unusual in that it's actually the preferred way to create interfaces for the platform.
Some people don't like mixing code functionality into interface designs. Another example is when Flash devs would include lots of code snippets directly in the stage (.fla files) rather than in separate .as files. With XIBs it's not as big a problem, since they are XML and can be merged quite easily when using source control. I personally like using XIBs because we have a team of devs and designers - splitting up the workload is nice. The designers can easily port their Photoshop/Fireworks designs into XIBs, and we can focus on the functionality.
Sometimes you want to do something that the UI builder can't quite handle (these situations aren't common, but they do come up now and then). Sometimes you may feel you have better control over what's happening when you write the code yourself. Me, I prefer to let the UI builders do it as much as possible, but sometimes it doesn't always work that nicely, and I sometimes have had to write the code myself.
Possibly because the Interface Builder is another tool to understand. Also, it's useful to know how to do things programmatically in case nibs don't give you enough functionality.
Background
The QA department where I work has a lot of automated blackbox tests that interact with our applications via the GUI and the command line. Currently, the automated tests output their results to standard out where we then manually enter the final pass/fail result into a spreadsheet.
We would prefer to have a system where the automated test automatically saves detailed test results to a file. We would then have a web page that the testers and developers could access to view the detailed test results and any necessary attachments. It would generate reports of the test results by project and version number.
Question:
What system would you recommend for test report generation? We need a system where our tests will automatically be inserted into new reports and that is preferably open source. I'm interested in what your company actually uses or what you have found useful in managing test results.
Our QA department is capable of building a simplified version of this system from scratch, however we would prefer not to reinvent this.
We are now using Testopia. It is integrated with Bugzilla, and it is nice to have everything in the same place. It uses the same XML-RPC API as Bugzilla.
reStructuredText is a very happy medium between writing to stdout and formal documentation. There are several scripts to convert from reST to other formats such as HTML.
You could mostly keep the system you have in place - you'd only have to add a couple of "tags" around the text, and unlike HTML tags these are more readable characters. In fact, it's very close to the Markdown you use when asking/answering here on Stack Overflow.
The stdout text remains very readable by humans, and then it's as simple as adding one script to the chain to render it to HTML or PDF, for instance.
This page has a very good example of what it looks like in plain text and rendered forms.
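As a sketch of the idea (with made-up project and test names), the test harness could emit reStructuredText instead of bare prose, which stays readable on a terminal and can later be rendered to HTML with a tool such as docutils:

```python
# Build a reStructuredText report from a list of (test_name, passed) results.
def rst_report(project, version, results):
    title = "%s %s test results" % (project, version)
    lines = [title, "=" * len(title), ""]  # reST title with underline
    for name, passed in results:
        lines.append("- %s: %s" % (name, "PASS" if passed else "**FAIL**"))
    return "\n".join(lines) + "\n"

report = rst_report("MyApp", "1.2", [("login", True), ("logout", False)])
print(report)
```

The bullet list and underlined title are plain reST, so the same text that scrolls by on stdout is already the input for the HTML version of the report.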
Maven has an extensive site mechanism. It does require you to bend to its will, though, so that might rule it out for you.
Once configured, you get a standard set of reports generated on each build that can be packaged as a JAR if you wish, or deployed directly to your build results site. There are plugins for many of the major reporting tools, such as Cobertura/Emma, JUnit, JDepend, etc.
The maven-site-plugin publishes its own sites if you want to have a look.