Avoid Maven cyclic dependency with scope test - Selenium

I have the following problem. Library "A" contains custom JSF components. Library "B" contains Selenium tests for these custom components, and some others (PrimeFaces JSF components).
When I develop "A", I want to test any change using "B" in order to check whether I've broken something. So "A" needs "B" as a dependency in test scope.
When I develop "B", I want to test any change using "A", deploying a website on the fly at test time in order to check whether I've broken something. So "B" needs "A" as a dependency in test scope.
So, what is the best way to avoid the cyclic dependency and achieve my goal?

You may have to choose between:
having B's tests run against an "A-like" app that is not A at all, to avoid having A among B's dependencies;
having a new C test module that has A and B as dependencies, where C tests B with A. But this option starts to become harder to maintain.
In general you don't need to test your test tool, unless the tool does something complex. In that case I write small test sets that are autonomous.
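A minimal sketch of the C-module option, assuming hypothetical Maven coordinates (the com.example group and the a, b and c-tests artifact IDs are made up): C depends on both A and B in test scope, so neither A nor B ever has to depend on the other.

```xml
<!-- c/pom.xml: hypothetical integration-test module depending on A and B -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>c-tests</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- A: the custom JSF components under test -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>a</artifactId>
      <version>1.0-SNAPSHOT</version>
      <scope>test</scope>
    </dependency>
    <!-- B: the Selenium test library for those components -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>b</artifactId>
      <version>1.0-SNAPSHOT</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

With this layout, A and B each build independently, and only C's test phase sees both.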

Related

Is there a way to find all tests associated with a specific class on any level?

I'm using IntelliJ IDEA to program a project in Java. I'm trying to test a class which I've changed, and I want to get a list of all the tests that use it at any level.
Is there a way to do it in IntelliJ? Any external tool apart from IDEA would be good too.
I know how to get the direct unit tests (that's possible using Ctrl+Shift+T), but I also want the other tests that exercise the class at a more complex level.
I want to find tests that might use class C as follows:
Test A runs class B, which uses class C.
So I want to find Test A and others like it.

Tests naming convention for unexported functions in Go

I can't trace where I know it from, but normally if one writes a test for a method Foo, the corresponding test is named TestFoo.
If one tests an unexported function, say foo, what should the name of the test be then?
My question comes from the fact that the JetBrains IDE for Go, when asked to generate a test for an unexported function, generates something like Test_foo.
This behavior may make sense, because if you have Foo and its unexported counterpart foo in the same package, you'd want to distinguish their tests somehow (at least for the jump-to-test feature in the IDE).
So is there any convention on naming tests after unexported functions?
BTW:
documentation for the Go testing package says that a test is executed if it is:
any function of the form
func TestXxx(*testing.T)
where Xxx can be any alphanumeric string (but the first letter must not be in [a-z]) and serves to identify the test routine.
This means that any test having an underscore in its name shouldn't be executed by go test. However, we all know that such tests work just fine.
My initial confusion with this originated from 2 things:
1) an assumption that an underscore is allowed in a test function name. This assumption was backed by tons of major open-source projects that do this;
2) the fact that JetBrains' IDE generates tests with names containing underscores.
The answer to my own question: there is a convention (I'd rather say a guideline from Google), which many projects violate: an underscore should not be used in a test name.
I've voted to close my own question and created a bug report in the JetBrains bug tracker:
https://youtrack.jetbrains.com/issue/GO-5185

Karma-Jasmine: Conflicting source files

I have a pretty nice Gulp-based Karma-Jasmine unit test workflow going. My question is about how to avoid source conflicts in one's tests. Without my having to do anything, Karma-Jasmine has automagically exposed my src files to Jasmine and detected my tests. I can't see how this would be useful in a real codebase where things don't fit the happy path.
Example: create two files you would like to test that both implement a function with the same name, Test(). One returns true, the other false. Which one is your test actually testing? Do I have any control over this? I want to be able to test both (forget telling me about better JS design; that is obvious).

Unit Testing in Xcode 5

I've been asked to debug a prototype iPad app (written in Objective-C). I thought a good approach would be to write a series of unit tests (identifying bugs and helping me familiarise myself with the code). Though I have written unit tests before, I've never used Xcode or Objective-C; or a Mac, for that matter.
The problem is that the code as it stands won't currently build - there are a large number of errors. I'm wondering if there is a way to unit test certain parts of the code using Xcode without having to build the entire project, or do I need to identify what's causing all of the errors and eliminate these first?
I would say it depends on how deeply linked the components are: if the error-producing components are separate enough (i.e. they only communicate with/are used by themselves), then you could simply remove them from the build.
However, if the components are also necessary for the remainder of the app (the parts you want to test), then you would need to fix the errors first, as otherwise you couldn't really test the full functionality in your unit tests.

TestCase scripting framework

For our webapp testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks.
We're currently looking at Selenium 2 + Gallio + xUnit.net.
However, one of the things we're really looking to get around is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require recompiling the assembly every single time we make a small change.
Are there any frameworks likely to help with this issue?
Are there any nice UI tools to help manage a massive amount of test cases?
Ideally we want the test case writing process to be simple so that more testers can aid in writing them.
cheers
You can write them in a language like Ruby (e.g., IronRuby) or Python, which doesn't have an explicit compile step of that kind.
If you're using a compiled language, it needs to be compiled. Make the assemblies a reasonable size and a quick Shift+F6 (I rewire it to Shift+Ins) will compile your current project. (Shift+Ctrl+B will typically do lots of redundant stuff.) Then get NUnit to auto-re-run the tests when it detects the assembly change (or go vote on http://xunit.codeplex.com/workitem/8832 and get it into the xunit GUI runner).
You may also find that CR, R# and/or TD.NET have stuff to offer you in speeding up your flow. E.g., I believe CR detects which tests have changed and does stuff around that (at the moment it doesn't support the more advanced xunit.net testing styles, so I don't use it day to day).
You won't get around compiling the test framework if you add new tests.
However there are a few possibilities.
First:
You could develop a custom scripting language, as I did, in XML or a similar format. It would look something like this:
[code]
<action name="OpenProfile">
  <parameter name="Username" value="TestUser"/>
</action>
[/code]
Once you have this, you could simply write an interpreter and deserialize this XML into an object. Then, with reflection, you could call the appropriate function in the corresponding class. After you have a lot of actions implemented, of course in a perfectly modular and carefully designed structure (e.g. every page has its own object, plus a base object that every page inherits from), you will be able to add XML-based tests on your own without needing to rebuild the framework itself.
You see, you have actions like login, go to profile, go to edit profile, change password, save, check email, etc. Then you could have tests like: login and change password; login and edit profile username... and so on and so forth. And you would only be creating new XML files.
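A language-agnostic sketch of that interpreter idea (the original context is .NET, but the same deserialize-then-reflect dispatch works anywhere; the Action/Pages types and the OpenProfile method are made up for illustration), shown here in Go:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"reflect"
)

// Action mirrors one <action> element of the hypothetical test script.
type Action struct {
	Name   string  `xml:"name,attr"`
	Params []Param `xml:"parameter"`
}

// Param mirrors one <parameter> child element.
type Param struct {
	Name  string `xml:"name,attr"`
	Value string `xml:"value,attr"`
}

// Pages holds the page-object methods the interpreter can dispatch to.
type Pages struct{}

// OpenProfile is a stand-in for a real page action (e.g. Selenium calls).
func (Pages) OpenProfile(username string) string {
	return "opened profile for " + username
}

// run deserializes one action and invokes the matching method by name.
func run(doc string) (string, error) {
	var a Action
	if err := xml.Unmarshal([]byte(doc), &a); err != nil {
		return "", err
	}
	m := reflect.ValueOf(Pages{}).MethodByName(a.Name)
	if !m.IsValid() {
		return "", fmt.Errorf("unknown action %q", a.Name)
	}
	args := make([]reflect.Value, len(a.Params))
	for i, p := range a.Params {
		args[i] = reflect.ValueOf(p.Value)
	}
	return m.Call(args)[0].String(), nil
}

func main() {
	out, err := run(`<action name="OpenProfile"><parameter name="Username" value="TestUser"/></action>`)
	fmt.Println(out, err) // prints: opened profile for TestUser <nil>
}
```

Adding a new test is then just a matter of writing a new XML file; adding a new action means adding one method to the page objects, with no change to the interpreter.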
You could look for frameworks supporting similar behavior; there are a few out there. The best of them are Cucumber and FitNesse. These all support high-level test case writing and low-level functionality building.
So basically, once you have your framework ready, all you have to do is write tests.
Hope that helped.
Gergely.