How to write tests in the Godot engine

I'm developing a game with the Godot engine and I want to create some tests for the code, to be sure that everything is working. Testing the game by hand with simple actions takes 5-10 minutes per run. Is there a way to write automated tests?

Have a look at GUT, the Godot unit test framework, or WAT.
Differences between the two, quoting WAT's author:
GUT runs in a scene. WAT runs in the editor itself.
GUT (last I checked) cannot create test doubles that take constructor dependencies (i.e. an _init method with arguments).
WAT has parameterized testing (so you can run the same test multiple times, only needing to define a different set of arguments per run).
WAT has a much larger range of asserts (I think).
WAT can intentionally crash a test script if one method fails (this isn't documented yet, though).
WAT cleans up after itself with regard to memory; GUT doesn't. (Note: this was largely thanks to the print_stray_nodes method recently built into Godot, which GUT didn't have during its initial creation.)
GUT allows for inner test classes. WAT doesn't; however, every WAT assert call takes a "context" attribute, so you can add sub-context to the describe() context of the parent method.
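To give a feel for what such a test looks like, here is a minimal GUT-style sketch; the Calculator class and its add method are invented for the example:

```gdscript
# test_calculator.gd -- a minimal GUT test script.
# Calculator and its add() method are hypothetical.
extends "res://addons/gut/test.gd"

func test_add():
    var calc = Calculator.new()
    assert_eq(calc.add(2, 3), 5, "add should return the sum")
```

GUT discovers any script extending its test base class and runs every method whose name starts with test_.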

There is also GdUnit3: https://github.com/MikeSchulze/gdUnit3 ;)
The upcoming version 2.0.0 will be released with C# beta support.
GdUnit3 is fully integrated into the Godot editor, with a UI inspector to navigate your test results.
It also supports automated testing from the command-line tool, e.g. in a GitHub Action.
Feel free to give it a try ;)


Vue 3 and SonarQube, data return says: Not covered by tests

Hi all,
I should say that I have already read the other issues on this problem, but I have not found what works for me; or rather, I believe that SonarQube should not handle my code like this. Let me explain:
It flags, on almost all returns, that it failed to run the unit test, and for this reason the entire application is branded as "failed". But should the Vue 3 data construct, which uses the default return (it doesn't seem complicated or abstruse to me), really have to be tested? How could I fix it?
Thank you all.

How to build a call graph for a function module?

A while ago, while documenting legacy code, I found out there is a tool for displaying the call graph (call stack) of any standard program. Absurdly, I wasn't aware of this tool for years :D
It gives a fancy list/hierarchy of the program's calls; though it is not a call graph in the full sense, it is very helpful in some cases.
The problem is that this tool is linked only to SE93, so it can be used only for transactions.
I tried to search but didn't find any similar tool for reports or function modules. Yes, I can create a tcode for a report, but for a function module this approach doesn't work.
If I put the FM call inside a report and build a graph using this tool, it wraps the call as a single unit and does not analyze deeper. And that's it.
Does anybody know a workaround to build a graph for something besides a transaction?
The cynic in me thinks RS_CALL_HIERARCHY was left to rot. Sandra is right, it definitely used to work. Once OO came to ABAP, interfaces and dynamic/generic code became possible, so a call hierarchy based on static code analysis was pushing the proverbial uphill.
IMO the best way to solve this is a FULL trace, and then to extract the data from the trace.
There are even external tools that do that.
This is, of course, still limited, as running a trace on every execution path can be very time-consuming. Did I hear someone say small classes, please?
Transaction SAT.
Make sure the profile you use isn't aggregating, and measure the blocks you are interested in.
Now wade your way through the trace.
https://help.sap.com/doc/saphelp_ewm93/9.3/en-US/4e/c3e66b6e391014adc9fffe4e204223/content.htm?no_cache=true
Have fun :)
The call hierarchy display also works for programs and function modules.
In my S/4HANA system, for VA01, it displays:
Clicking the hierarchy of function module CJWI_INIT displays:
I get exactly the same result by calling the function module RS_CALL_HIERARCHY this way:
The parameter OBJECT_TYPE may have these values:
P : program
FF : function module
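For illustration, a sketch of such a call; OBJECT_TYPE and its values are as described above, while the other parameter name is an assumption from memory and may differ by release:

```abap
* Display the call hierarchy of function module CJWI_INIT.
* The parameter name OBJECT_NAME is an assumption.
CALL FUNCTION 'RS_CALL_HIERARCHY'
  EXPORTING
    object_type = 'FF'        " FF = function module, P = program
    object_name = 'CJWI_INIT'.
```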
The "call graph" has not been maintained since at least Basis 4.6, and it doesn't work for classes and methods.
The tool is also buggy: in some cases, when a function module contains a PERFORM on its first line, the call may not be displayed, whether the call graph is launched from SE93 or directly from RS_CALL_HIERARCHY.

Clear Cursive REPL state before each test run

I'm new to Cursive and Clojure in general and am having some difficulty getting a decent TDD workflow.
My problem is that subsequent test runs depend on state in the REPL. For example, suppose you have the code below.
(def sayHello "hello")

(deftest test-repl-state
  (testing "testing state in the repl"
    (is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will pass.
If you then refactor the code like this:
(def getGreeting "hello")

(deftest test-repl-state
  (testing "testing state in the repl"
    (is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will still pass (because the def of sayHello still exists in the REPL). However, the tests should fail, because the code is now in a failing state (sayHello is no longer defined anywhere in the source).
I've tried toggling the "locals will be cleared" button in the REPL window but this does not seem to fix the issue.
If there is a way to run the tests outside of the REPL (or in a new REPL for each test run) I'd be fine with that as a solution.
All I want is a 1-to-1 correspondence between the source code under test and the result of the tests.
Thanks in advance for your help.
Yes, it's annoying to have old defs available. I don't even create tests usually (whoops), but this bites me during normal development. If I create a function, then rename it, then change it, then accidentally refer to the first function name, I get odd results since it's referring to the old function. I'm still looking for a good way around this that doesn't involve killing and restarting the REPL.
For your particular case, though, there are a couple of easy (if poor) workarounds:
Open IntelliJ's terminal (button at bottom left of the window) and run lein test. This will execute all the project's tests and report the results.
Similarly to the above, you can, outside of IntelliJ, open a command window in the project directory and run lein test, and it will run all found tests.
You can also specify which namespace to test using lein test <ns here> (such as lein test beings-retry.core-test), or a specific test in a namespace using :only (such as lein test :only beings-retry.core-test/a-test; where a-test is a deftest). Unfortunately, this doesn't happen in the REPL, so it kind of breaks workflow.
The only REPL-based workaround I know of, as mentioned above, is to just kill the REPL:
"Stop REPL" (Ctrl+F2)
"Reconnect" (Ctrl+F5).
Of course though, this is slow, and an awful solution if you're doing this constantly. I'm interested to see if anyone else has any better solutions.
You could use the built-in test-narrowing (test selector) feature of the test-refresh Leiningen plugin. It runs only those tests that have been marked with the ^:test-refresh/focus metadata, every time you save a file.
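Marking a test for focus is just metadata on the deftest; a minimal sketch, where greeting is a hypothetical function:

```clojure
;; While any test carries ^:test-refresh/focus, test-refresh runs
;; only the focused tests on each file save.
(deftest ^:test-refresh/focus test-greeting
  (is (= "hello" (greeting))))
```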
The usual solution for this kind of problem is either stuartsierra/component or tolitius/mount.
A complete description would be out of place here, but the general idea is to have some system that manages state in a way that allows you to cleanly reload the application state. This helps keep the running system close to the code saved in your source files while you work on it interactively.
Thanks to everyone for their suggestions. I'm posting my own answer to this problem because I've found a way forward that works for me and I'm not sure that any of the above were quite what I was looking for.
I have come to the conclusion that the clojure REPL, although useful, is not where I will run tests. This basically came down to a choice between either running a command to clean the repl between each test run (like the very useful refresh function in tools.namespace https://github.com/clojure/tools.namespace) or not running tests in the REPL.
I chose the latter option because:
It is one less step to do (and reloading is not always perfect)
CI tests do not run in a REPL so running them directly in dev is one step closer to the CI environment.
The code in production does not run in a REPL either so running tests outside the repl is closer to the way that production code runs.
It's actually a pretty simple thing to configure a run configuration in IntelliJ to run either a single test or all tests in your application as a normal clojure application. You can even have a REPL running at the same time if you like and use it however you want. The fact that the tooling leans so heavily towards running things in the REPL blinded me to this option to some extent.
I'm pretty inexperienced with Clojure and also a stubborn old goat that is set in his TDD ways but at least some others agree with me about this https://github.com/cursive-ide/cursive/issues/247.
Also if anyone is interested, there is a great talk on how the REPL holds on to state and how this causes all sorts of weird behaviour here https://youtu.be/-RaFcpNiYCo. It turns out that the problem I was seeing with re-defining functions was just the tip of the iceberg.
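For anyone who does want to stay in the REPL, the tools.namespace reloading approach mentioned above looks roughly like this; my.app.core-test is a placeholder namespace:

```clojure
;; Run in the REPL. refresh unloads and reloads every namespace whose
;; source file has changed, discarding stale vars, before tests run.
(require '[clojure.tools.namespace.repl :refer [refresh]])

(refresh)                                    ; reload changed namespaces from disk
(clojure.test/run-tests 'my.app.core-test)   ; placeholder test namespace
```

After the refresh, a var like sayHello that no longer exists in the source is gone from the REPL too, so the test fails as it should.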
One option that may help, especially if you're bundling several assertions or have repeating tests, is let. The name-value binding has a known scope, and it can save you a lot of re-typing.
Here's an example:
(deftest my-bundled-and-scoped-test
  (let [TDD    "My expected result"
        helper (some-function :data)]
    (testing "TDD-1: Testing state in the repl"
      (is (= TDD "My expected result")))
    (testing "TDD-2: Reusing state in the repl"
      (is (= TDD helper)))))
Once my-bundled-and-scoped-test finishes executing, you'll no longer be in the let binding. An added benefit is that the result of some-function is reusable too, which is handy for testing multiple assertions or properties of the same function/input pair.
While on the subject, I'd also recommend using Leiningen to run your tests, as there are plenty of plugins that can help you test more efficiently. I'd check out test-refresh, speclj, and cloverage.

Tool or Eclipse-based plugin available for generating test cases for Salesforce Apex classes

Can anyone please tell me whether there are any tools or Eclipse-based plugins available for generating relevant test cases for Salesforce Apex classes? It seems that with code coverage they are not expecting outcomes like we expect with JUnit; they want to check whether test cases go through the flows of the source classes (i.e. the paths the code takes).
Please don't take this post the wrong way; I don't want anyone to write test cases for my code :). I posted this question because of the way Salesforce expects code coverage to work. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there were a tool that could generate code to get 100% coverage of your class, it wouldn't be able to verify the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
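To illustrate the point about asserting on results rather than just covering lines, a minimal sketch of an Apex test; the Calculator class and its add method are invented for the example:

```apex
@isTest
private class CalculatorTest {
    @isTest
    static void addReturnsSum() {
        // Assert on the behaviour, not merely on the lines executed.
        // Calculator is a hypothetical class under test.
        Integer result = Calculator.add(2, 3);
        System.assertEquals(5, result, 'add should return the sum of its arguments');
    }
}
```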
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from AppExchange and see it working.
This would really help you generate test classes and save a lot of your development time.

TestCase scripting framework

For our web-app testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks.
We're currently looking at Selenium 2 + Gallio + xUnit.net.
However, one of the things we're really looking to get around is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require recompiling the assembly every single time we make a small change.
Are there any frameworks likely to help with this issue?
Are there any nice UI tools to help manage a massive amount of test cases?
Ideally we want the test-case writing process to be simple, so that more testers can aid in writing them.
Cheers
You can write them in a language like Ruby (e.g., IronRuby) or Python, which doesn't have an explicit compile step of that kind.
If you're using a compiled language, it needs to be compiled. Keep the assemblies a reasonable size, and a quick Shift+F6 (I rebind it to Shift+Ins) will compile your current project (Ctrl+Shift+B will typically do lots of redundant work). Then get NUnit to automatically re-run the tests when it detects the assembly change (or go vote on http://xunit.codeplex.com/workitem/8832 and get it into the xunit GUI runner).
You may also find that CR, R# and/or TD.NET have things to offer for speeding up your flow. E.g., I believe CR detects which tests have changed and works from that (at the moment it doesn't support the more advanced xunit.net testing styles, so I don't use it day to day).
You won't get around compiling the test framework if you add new tests.
However, there are a few possibilities.
First:
You could develop a native language, as I did, in XML or a similar format. It would look something like this:
[code]
<action name="OpenProfile">
  <parameter name="Username" value="TestUser"/>
</action>
[/code]
After you have this, you could simply take an interpreter and deserialize the XML into an object. Then, with reflection, you could call the appropriate function in the corresponding class. Once you have a lot of actions implemented, of course in a carefully designed, modular structure (e.g. every page has its own object, plus a base object that every page inherits from), you will be able to add XML-based tests on your own without needing to rebuild the framework itself.
You see, you have actions like login, go to profile, go to edit profile, change password, save, check email, etc. Then you could have tests like: login + change password, login + edit profile username... and so on and so forth. And you would only be creating new XMLs.
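The interpreter-plus-reflection idea can be sketched in a few lines. This example is in Python rather than .NET purely to keep it language-agnostic, and ProfilePage, ACTIONS, and run_test_case are invented names:

```python
import xml.etree.ElementTree as ET

# Hypothetical page object: each XML action name maps to one method.
class ProfilePage:
    def __init__(self):
        self.log = []

    def open_profile(self, username):
        self.log.append("opened profile of " + username)

# Registry mapping XML action names to method names (invented for the sketch).
ACTIONS = {"OpenProfile": "open_profile"}

def run_test_case(xml_text, page):
    """Deserialize the XML test case and dispatch each action via reflection."""
    root = ET.fromstring(xml_text)
    for action in root.iter("action"):
        method = getattr(page, ACTIONS[action.get("name")])
        kwargs = {p.get("name").lower(): p.get("value")
                  for p in action.findall("parameter")}
        method(**kwargs)

case = """<testcase>
  <action name="OpenProfile">
    <parameter name="Username" value="TestUser"/>
  </action>
</testcase>"""

page = ProfilePage()
run_test_case(case, page)
print(page.log)
```

Adding a new test is then just writing a new XML file; the dispatch code never changes.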
You could also look for frameworks supporting similar behavior; there are a few out there. The best known are Cucumber and FitNesse. These all support high-level test-case writing and low-level functionality building.
So basically, once you have your framework ready, all you have to do is write tests.
Hope that helped.
Gergely.