I have created an extra builder for Java projects.
I'm trying to test it automatically via a JUnit test that changes and refreshes a project file and expects the automatic build to kick in. Unfortunately, this doesn't happen.
I can access and modify the workspace from that test, but I don't know how to deal with builders (or GUI elements). I also tried checking "Run in UI thread" in the JUnit test's debug configuration, but with no success.
What's the proper way to do this kind of testing? (I'd like to avoid learning TPTP if possible; it looks too heavyweight.)
How do I initiate the "Project -> Clean" command from my tests? Or how do I execute any UI command in general? I suppose there are some threading issues to take care of.
I think you are looking for the command:
ResourcesPlugin.getWorkspace().build(IncrementalProjectBuilder.FULL_BUILD, null);
and
ResourcesPlugin.getWorkspace().build(IncrementalProjectBuilder.CLEAN_BUILD, null);
The first will initiate a full (from-scratch) build of the entire workspace. The second will initiate only a clean.
Alternatively, if you have access to an IProject object, you can call build on that.
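For reference, here is a minimal sketch of how this might look inside a JUnit plug-in test; the project name "MyProject", the class name, and the final assertion are placeholders for your own setup:

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IncrementalProjectBuilder;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.core.runtime.jobs.Job;
import org.junit.Test;

public class MyBuilderTest {
    @Test
    public void fullBuildRunsMyBuilder() throws Exception {
        IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject("MyProject");

        // Clean, then force a full build of just this project (no UI involved).
        project.build(IncrementalProjectBuilder.CLEAN_BUILD, new NullProgressMonitor());
        project.build(IncrementalProjectBuilder.FULL_BUILD, new NullProgressMonitor());

        // If you modify files and rely on the automatic build instead,
        // wait for it to finish before asserting anything:
        Job.getJobManager().join(ResourcesPlugin.FAMILY_AUTO_BUILD, new NullProgressMonitor());

        // ... assert on the markers/artifacts your builder is expected to produce.
    }
}

Note that this only works when run as a "JUnit Plug-in Test" launch; in a plain JUnit launch the workspace and builders are not available, which is the usual reason the automatic build never kicks in.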
I'm trying to run a custom test suite that includes several test cases. For example, I've written 4 test scripts (test_login_success, test_login_fail, test_register_xxx, test_register_yyy), and I just want to run the test_login_* ones. How do I set up the defaultTestSuite and add test cases to it?
The test cases you create belong to their class. If you want to customise test runs, you should consider updating to the new Xcode 11. The new version of Xcode has a test plans feature that gives you better control over test execution.
Introduction video:
https://developer.apple.com/videos/play/wwdc2019/413/
If you prefer to stay on a previous Xcode version, you can add schemes for your scenarios.
Also, you can pass test names to the xcodebuild shell command (e.g., via its -only-testing option).
I found a really nice set of functionality in IntelliJ but it is very manual...
1) Use Analyze / Analyze Backward Dependencies on an object to find all the tests that eventually reference that class.
2) Create a run configuration using Test Kind "Pattern" and manually enter each of the test classes found into the "Pattern" field.
3) Run the tests with code coverage.
4) Navigate to the original class to view its total test coverage.
This whole process is fairly slow and user-intensive, but it could easily be automated with a single "Find test coverage for class" key press (it would still be pretty slow, but I could go on and do something else). Does anyone know if this is in a key binding or plug-in I haven't found yet? It seems like a pretty obviously useful and easy-to-implement piece of functionality.
If not, can anyone suggest how I might do this with the IDE scripting console or a custom intention? (I've had no success finding really good, usable documentation or examples for the IDE scripting console, and I haven't looked into intentions too much...)
How about the following two flows/options, based on the Windows shortcuts:
1) With the cursor placed on your class name:
CTRL+SHIFT+T (Choose test for launch)
SHIFT+END (Select all)
SHIFT+UP (Unselect Create new test...)
CTRL+SHIFT+F10 (Execute selected tests)
2) With the Group by test/production option selected in the find window and cursor placed on your class name:
ALT+F7 (Find usages)
choose the tests from the list
CTRL+SHIFT+F10 (Execute selected tests)
Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not have any name conflicts if you use more namespaces in your tests. So long as you have only one, it's OK.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke them as such.
So far I am impressed with lein-midje:
$ lein midje :autotest
This starts a Clojure process that watches src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in Vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.
I'm wondering if there is any solution to let Scala tests run automatically when either the test class or the class under test changes (automatically testing Class <---> ClassTest pairs would be a good start).
sbt can help you with this. After you set up your project, just run
~test
~ means continuous execution: sbt will watch for file system changes, and when changes are detected it recompiles the changed classes and runs your tests. ~testQuick can be even more suitable for you, because it runs only the tests affected by the change (including the test class and all of its transitive dependencies). You can read more about this here:
http://code.google.com/p/simple-build-tool/wiki/TriggeredExecution
http://php.jglobal.com/blog/?p=363
By the way, ~ also works with other tasks like ~run.
For our webapp testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks.
We're currently looking at Selenium 2 + Gallio + xUnit.net.
However, one of the things we're really looking to get around is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require recompiling the assembly every single time we make a small change.
Are there any frameworks likely to help with this issue?
Are there any nice UI tools to help manage a massive amount of test cases?
Ideally we want the testcase writing process to be simple so that more testers can aid in writing them.
cheers
You can write them in a language like Ruby (e.g., IronRuby) or Python, which doesn't have an explicit compile step of this kind.
If you're using a compiled language, it needs to be compiled. Make the assemblies a reasonable size and a quick Shift+F6 (I rebind it to Shift+Ins) will compile your current project (Ctrl+Shift+B will typically do lots of redundant work). Then get NUnit to auto-re-run the tests when it detects the assembly change (or go vote on http://xunit.codeplex.com/workitem/8832 and get it into the xunit GUI runner).
You may also find that CR, R# and/or TD.NET have something to offer in speeding up your flow. For example, I believe CR detects which tests have changed and does things around that (at the moment it doesn't support the more advanced xUnit.net testing styles, so I don't use it day to day).
You won't get around compiling the test framework if you add new tests.
However, there are a few possibilities.
First:
You could develop your own little test language, as I did, in XML or a similar format. It would look something like this:
[code]
<action name="OpenProfile">
  <parameter name="Username" value="TestUser"/>
</action>
[/code]
After you have this, you could simply write an interpreter that deserializes the XML into objects. Then, with reflection, you could call the appropriate method in the corresponding class. Once you have a lot of actions implemented, ideally in a well-modularised and carefully designed structure (e.g., every page has its own object, plus a base object that every page inherits from), you will be able to add XML-based tests on your own without rebuilding the framework itself.
You see, you have actions like login, go to profile, go to edit profile, change password, save, check email, and so on. Then you could have tests like: login + change password, login + edit profile username... and so on and so forth. And you would only be creating new XML files.
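For illustration, here is a rough Java sketch of that reflection-based dispatch; the XML shape above and the PageActions/OpenProfile names are invented for this example, not part of any particular framework:
[code]
import java.io.File;
import java.lang.reflect.Method;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XmlTestInterpreter {

    // Example action class: one public method per high-level action, all String parameters.
    public static class PageActions {
        public void OpenProfile(String username) {
            System.out.println("Opening profile for " + username);
        }
    }

    public static void run(String xmlFile, Object actions) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(xmlFile));

        NodeList actionNodes = doc.getElementsByTagName("action");
        for (int i = 0; i < actionNodes.getLength(); i++) {
            Element action = (Element) actionNodes.item(i);

            // Collect the parameter values in document order.
            NodeList params = action.getElementsByTagName("parameter");
            Object[] args = new Object[params.getLength()];
            Class<?>[] types = new Class<?>[params.getLength()];
            for (int p = 0; p < params.getLength(); p++) {
                args[p] = ((Element) params.item(p)).getAttribute("value");
                types[p] = String.class;
            }

            // Reflectively call the method whose name matches the action name.
            Method m = actions.getClass().getMethod(action.getAttribute("name"), types);
            m.invoke(actions, args);
        }
    }

    public static void main(String[] argv) throws Exception {
        run("test_login.xml", new PageActions());
    }
}
[/code]
With something like this in place, testers add new XML files for new scenarios while the compiled action layer stays untouched.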
You could also look for frameworks supporting similar behaviour; there are a few out there. The best of them are Cucumber and FitNesse. These all support high-level test case writing on top of low-level functionality building.
So basically, once you have your framework ready, all you have to do is write tests.
Hope that helped.
Gergely.