XCUITest is not running when launched from the terminal using the xcodebuild command - xcodebuild

I have more than 50 test cases in my UI test scheme and I am running them via the command below:
xcodebuild test -workspace tribo.xcworkspace -scheme triboUITests -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone X,OS=12.4'
What happens is that the RAM usage of my Mac shoots up and the system hangs. I have attached the Activity Monitor report too.
Can anyone help me run all the XCUITest cases in the scheme from the terminal?

The command itself looks fine.
The RAM problem could also be caused by poor resource management in the test code.
Consider inspecting your code for possible memory leaks and other bugs.

You can run a specific test or tests as part of the xcodebuild invocation.
From the xcodebuild command-line help:
-only-testing:TEST-IDENTIFIER constrains testing by specifying tests to include, and excluding other tests
-skip-testing:TEST-IDENTIFIER constrains testing by specifying tests to exclude, but including other tests
Using these commands may help you understand which tests are causing the issue.
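For example, to bisect the suite you could restrict a run to a single test class. LoginTests below is a hypothetical class name, and the test target is assumed to have the same name as the scheme:
xcodebuild test -workspace tribo.xcworkspace -scheme triboUITests -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone X,OS=12.4' -only-testing:triboUITests/LoginTests
Running the suite in a few such slices should narrow down which tests drive the memory growth.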
You could also use the Xcode Memory Graph Debugger (see this link: https://medium.com/zendesk-engineering/ios-identifying-memory-leaks-using-the-xcode-memory-graph-debugger-e84f097b9d15).
Finally, if you are trying to run all your tests in parallel, you may have too many simulators running on the machine.
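If that is the case, one possible sketch is to cap the parallelism; these flags exist in recent Xcode versions, so check the xcodebuild man page for yours:
xcodebuild test -workspace tribo.xcworkspace -scheme triboUITests -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone X,OS=12.4' -parallel-testing-enabled NO -maximum-concurrent-test-simulator-destinations 1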

Related

Is there any way to run only the tests that are under "Debug\AnyCPU" in NUnit3 Console?

I can run all of my test cases in a solution file (in DEBUG mode) by using the following command:
nunit3-console.exe "%SlnFile%" /config:Debug --skipnontestassemblies --result="%TestResult.xml%"
But this command runs all tests under the different platforms: x86, x64 and AnyCPU.
Is there any way to restrict it so that nunit3-console only runs the DEBUG|AnyCPU tests?

ctest and MPI parallel tests

I'm trying to build some tests with googleTest which should test some MPI parallel code. Ideally I want our CI server to execute them through ctest.
My naive approach was to simply call ctest with MPI:
mpirun -n 3 ctest
However, this results in even trivial tests failing whenever at least two tests are executed.
Example of a trivial test:
TEST(TestSuite, DummyTest1) {
EXPECT_TRUE(true);
}
TEST(TestSuite, DummyTest2) {
EXPECT_TRUE(true);
}
Am I supposed to launch ctest in some other way? Or do I have to approach this completely differently?
Additional info:
The tests are added via gtest_discover_tests().
Launching the test executable directly with MPI (mpirun -n 3 testExe) yields successful tests but I would prefer using ctest.
Version info:
googletest: 1.11.0
MPI: OpenMPI 4.0.3 AND MPICH 3.3.2
According to this CMake Forum post, which links to a merge request and an issue in the CMake GitLab repository, gtest_discover_tests() currently simply doesn't support MPI.
A possible workaround is to abuse the CROSSCOMPILING_EMULATOR property to inject the MPI wrapper into the test command. However, this changes the whole target and not only the tests, so your mileage may vary.
set_property(TARGET TheExe PROPERTY CROSSCOMPILING_EMULATOR "${MPIEXEC_EXECUTABLE};${MPIEXEC_NUMPROC_FLAG};3")
gtest_discover_tests(TheExe)
See the forum post for a full minimal example.
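As a rough usage sketch under the assumptions of the snippet above (target TheExe, 3 MPI ranks; the build directory name is illustrative, and ctest --test-dir needs CMake 3.20+, otherwise cd into the build directory first):
cmake -S . -B build
cmake --build build -j
ctest --test-dir build --output-on-failure
Each discovered Google Test case is then launched through the MPI wrapper rather than directly, while ctest itself runs as a normal serial process.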

How to run the gem5 unit tests?

Gem5 has several tests in the source tree, and there is some documentation at: http://www.gem5.org/Regression_Tests but those docs are not very clear.
What tests are there, and how do I run them?
Unit vs regression tests
gem5 has two kinds of tests:
regression: run some workload (full system or syscall emulation) on the entire simulator
unit: test only a tiny part of the simulator, without running the entire simulator binary
We will cover both in this answer.
Regression tests
2019 regression tests
A new testing framework was added in 2019 and it is documented at: https://gem5.googlesource.com/public/gem5/+/master/TESTING.md
Before sending patches, you basically want to run:
cd tests
./main.py run -j `nproc` -t `nproc`
This will:
build gem5 for the actively supported ISAs (X86, ARM, RISCV) with nproc threads due to -j
download the binaries required to run the tests from gem5.org, e.g. http://www.gem5.org/dist/current/arm/, see also: http://gem5.org/Download. It is not currently possible to download them outside of the source tree, which is bad if you have a bunch of git worktrees lying around.
run the quick tests on nproc threads due to -t, which should finish in a few minutes
You can achieve the same as the previous command without cd by passing the tests/ directory as an argument:
./main.py run -j `nproc` -t `nproc` tests
but I wish neither were necessary: https://gem5.atlassian.net/browse/GEM5-397
This is exactly what the automated upstream precommit tests are running as can be seen from tests/jenkins/presubmit.sh.
Stdout contains clear result output of the form:
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt Passed
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt-MatchStdout Passed
Test: realview-simple-atomic-ARM-opt Failed
Test: realview-simple-atomic-dual-ARM-opt Failed
and details about each test can be found under:
tests/.testing-results/
e.g.:
.testing-results/SuiteUID:tests-gem5-fs-linux-arm-test.py:realview-simple-atomic-ARM-opt/TestUID:tests-gem5-fs-linux-arm-test.py:realview-simple-atomic-ARM-opt:realview-simple-atomic-ARM-opt/
although we only see some minimal stdout/stderr output there, which doesn't even show the gem5 stdout. The stderr file does, however, contain the full command:
CalledProcessError: Command '/path/to/gem5/build/ARM/gem5.opt -d /tmp/gem5outJtSLQ9 -re '/path/to/gem5/tests/gem5/fs/linux/arm/run.py /path/to/gem5/master/tests/configs/realview-simple-atomic.py' returned non-zero exit status 1
so you can remove -d and -re and re-run that to see what is happening, which is potentially slow, but I don't see another way.
If a test gets stuck running forever, you can find its raw command with Linux' ps aux command since processes are forked for each run.
Request to make it easier to get the raw run commands from stdout directly: https://gem5.atlassian.net/browse/GEM5-627
Request to properly save stdout: https://gem5.atlassian.net/browse/GEM5-608
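For example, something as simple as this (illustrative only) lists the forked gem5 invocations together with their full arguments:
ps aux | grep -F gem5.opt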
To further stress test a single ISA, you can run all tests for one ISA with:
cd tests
./main.py run -j `nproc` -t `nproc` --isa ARM --length long --length quick
Each test is classified as either long or quick, and using both --length runs both.
long tests are typically very similar to the default quick ones, but using more detailed and therefore slower models, e.g.
tests/quick/se/10.mcf/ref/arm/linux/simple-atomic/ is quick with a faster atomic CPU
tests/long/se/10.mcf/ref/arm/linux/minor-timing/ is long with a slower Minor CPU
Tested in gem5 69930afa9b63c25baab86ff5fbe632fc02ce5369.
2019 regression tests: run just one test
List all available tests:
./main.py list --length long --length quick
This shows both suites and tests, e.g.:
SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt
TestUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt
TestUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt-MatchStdout
And now you can run just one test with --uid:
./main.py run -j `nproc` -t `nproc` --isa ARM --uid SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_FloatMM-ARM
A bit confusingly, --uid must point to a SuiteUID, not TestUID.
Then, when you run the tests and one of them fails, and you want to re-run just the failing one, the failure report gives you a line like:
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt Passed
and the only way to run just that test is to grep for that string in the output of ./main.py list, since cpu_test_DerivO3CPU_FloatMM-ARM-opt is not a full test ID, which is very annoying.
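For illustration, a grep along these lines recovers the matching SuiteUID, which can then be passed to --uid as shown above (the test name is just the one from the failure output):
./main.py list --length long --length quick | grep cpu_test_DerivO3CPU_FloatMM-ARM-opt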
2019 regression tests: out of tree builds
By default, tests/main.py places the build at gem5/build inside the source tree. Testing an out of tree build is possible with --build-dir:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --build-dir path/to/my/build
which places the build in path/to/my/build/ARM/gem5.opt instead for example.
If your build is already done, save a few scons seconds with --skip-build option as well:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --build-dir path/to/my/build --skip-build
Note however that --skip-build also skips the downloading of test binaries. TODO patch that.
2019 regression tests: custom binary download directory
Since https://gem5-review.googlesource.com/c/public/gem5/+/24525 you can use the --bin-path option to specify where the test binaries are downloaded, otherwise they just go into the source tree.
This allows you to reuse the large binaries such as disk images across tests in multiple worktrees in a single machine, saving time and space.
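For example, a sketch of a quick ARM run with a shared download directory; the path is made up, and this assumes --bin-path is accepted alongside the usual run options:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --bin-path /data/gem5-test-binaries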
Pre-2019 regression tests
This way of running tests is deprecated and will be removed.
The tests are run directly with scons.
But since the test commands are a bit long, there is even an in-tree utility to generate the test commands for you.
For example, to get the command to run X86 and ARM quick tests, run:
./util/regress -n --builds X86,ARM quick
The other options besides quick are long or all to do both long and quick at the same time.
With -n it just prints the test commands; without it, it actually runs them.
This outputs something like:
scons \
--ignore-style \
--no-lto \
build/X86/gem5.debug \
build/ARM/gem5.debug \
build/X86/gem5.fast \
build/ARM/gem5.fast \
build/X86/tests/opt/quick/se \
build/X86/tests/opt/quick/fs \
build/ARM/tests/opt/quick/se \
build/ARM/tests/opt/quick/fs
TODO: why does it build gem5.debug and gem5.fast, but then run an /opt/ test?
So note how this would both:
build the gem5 executables, e.g. build/X86/gem5.debug
run the tests, e.g. build/X86/tests/opt/quick/fs
Or get the command to run all tests for all archs:
./util/regress -n all
Then, if you just want to run one of those types of tests, e.g. the quick X86 ones, you can copy-paste the scons command for just those tests:
scons --ignore-style build/X86/tests/opt/quick/se
Running the tests with an out of tree build works as usual by magically parsing the target path: How to build gem5 out of tree?
scons --ignore-style /any/path/that/you/want/build/X86/tests/opt/quick/se
or you can pass the --build-dir option to util/regress:
./util/regress --build-dir /any/path/that/you/want all
The tests that boot Linux on the other hand require a Linux image with a specific name in the M5_PATH, which is also annoying.
This would however be very slow, not something that you can run after each commit: you are more likely to want to run just the quick tests for your ISA of interest.
Pre-2019 regression tests: Run just one test
If you append a path that exists under tests/ in the source tree to the test command, it runs all the tests under that directory.
For example, we had:
scons --ignore-style build/X86/tests/opt/quick/se
and we notice that the following path exists under tests in the source tree:
quick/se/00.hello/ref/x86/linux/simple-atomic/
so we massage the path by removing ref to get the final command:
scons build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-atomic
Pre-2019 regression tests: Find out the exact gem5 CLI of the command run
When you run the tests, they output to stdout the m5out path.
Inside the m5out path, there is a simout with the emulator stdout, which contains the full gem5 command line used.
For example:
scons --ignore-style build/X86/tests/opt/quick/se
outputs:
Running test in /any/path/that/you/want/build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic.
and the file:
/any/path/that/you/want/build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic/simout
contains:
command line: /path/to/mybuild/build/ARM/gem5.opt \
-d /path/to/mybuild/build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic \
--stats-file 'text://stats.txt?desc=False' \
-re /path/to/mysource/tests/testing/../run.py \
quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
Pre-2019 regression tests: Re-run just one test
If you just run a test twice, e.g. with:
scons build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
scons build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
the second run will not really re-run the test, but rather just compare the stats from the previous run.
To actually re-run the test, you must first clear the stats generated from the previous run before re-running:
rm -rf build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
Pre-2019 regression tests: Get test results
Even this is messy... scons does not return 0 for success and 1 for failure, so you have to parse the logs. One easy way is to grep:
scons --ignore-style build/X86/tests/opt/quick/se |& grep -E '^\*\*\*\*\* '
which contains three types of results: PASSED, CHANGED or FAILED.
CHANGED mostly means that stat comparisons differed significantly; those are generally very hard to maintain and permanently broken, so you should focus on FAILED.
Note that most tests currently rely on SPEC2000 and fail unless you have access to this non-free benchmark...
Unit tests
The unit tests compile to executables separate from gem5 and each tests just a tiny bit of the code.
There are currently two types of tests:
UnitTest: old and deprecated, should be converted to GTest
GTest: new and good. Uses Google Test.
Test files are placed next to the class they test, e.g.:
src/base/cprintf.cc
src/base/cprintf.hh
src/base/cprintftest.cc
Compile and run all the GTest unit tests:
scons build/ARM/unittests.opt
Sample output excerpt:
build/ARM/base/cprintftest.opt --gtest_output=xml:build/ARM/unittests.opt/base/cprintftest.xml
Running main() from gtest_main.cc
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CPrintf
[ RUN ] CPrintf.Misc
[ OK ] CPrintf.Misc (0 ms)
[ RUN ] CPrintf.FloatingPoint
[ OK ] CPrintf.FloatingPoint (0 ms)
[ RUN ] CPrintf.Types
[ OK ] CPrintf.Types (0 ms)
[ RUN ] CPrintf.SpecialFormatting
[ OK ] CPrintf.SpecialFormatting (0 ms)
[----------] 4 tests from CPrintf (0 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (0 ms total)
[ PASSED ] 4 tests.
Compile and run just one test file:
scons build/ARM/base/cprintf.test.opt
./build/ARM/base/cprintf.test.opt
List available test functions from a test file, and run just one of them:
./build/ARM/base/cprintftest.opt --gtest_list_tests
./build/ARM/base/cprintftest.opt --gtest_filter='CPrintf.SpecialFormatting'
Tested on gem5 200281b08ca21f0d2678e23063f088960d3c0819, August 2018.
Unit tests with SimObjects
As of 2019, the unit tests are quite limited, because devs haven't yet found a proper way to test SimObjects in isolation, which make up the bulk of the simulator, and are tightly bound to the rest of the simulator. This unmerged patch attempted to address that: https://gem5-review.googlesource.com/c/public/gem5/+/15315
It might be possible to work around it with Google Mock, which is already present in-tree, but it is not clear if anyone has the patience to mock out enough of SimObject to actually make such tests.
I believe the only practical solution is to embed all tests into gem5.opt, and then have a --test <testname> option that runs the tests instead of running a simulation. This way we get a single binary without duplicating binary sizes, but the tests can still access everything.
Related issue: https://gem5.atlassian.net/browse/GEM5-433
Continuous integration
20.1 Nightlies enabled
As mentioned at: https://www.gem5.org/project/2020/10/01/gem5-20-1.html a Jenkins that runs the long regressions was added at: https://jenkins.gem5.org/job/Nightly/
2019 CI
Around 2019-04, a precommit CI was set up that runs after every pull request once the maintainer gives +1.
It uses a magic semi-internal Google-provided Jenkins setup called Kokoro, which provides little visibility into its configuration.
See this for example: https://gem5-review.googlesource.com/c/public/gem5/+/18108 That server does not currently run nightlies. The entry point is tests/jenkins/presubmit.sh.
Nightlies were simply not enabled to start with.
What is the environment of the 2019 CI?
In-tree Docker images are used: https://askubuntu.com/questions/350475/how-can-i-install-gem5/1275773#1275773
Pre-2019 CI update
There was a server running somewhere that ran the quick tests for all archs nightly and posted the results to the dev mailing list, adding to the endless noise of that enjoyable list :-)
Here is a sample run: https://www.mail-archive.com/gem5-dev@gem5.org/msg26855.html
As of 2019Q1, gem5 devs were trying to set up an automated Google Jenkins to run precommit tests; a link to a prototype can be found at: https://gem5-review.googlesource.com/c/public/gem5/+/17456/1#message-e9dceb1d3196b49f9094a01c54b06335cea4ff88 This new setup uses the new testing system in tests/main.py.
Pre-2019 CI: Why are so many tests CHANGED all the time?
As of August 2018, many tests have been CHANGED for a long time.
This is because stats can vary due to a wide range of complex factors: some changes may make the stats more accurate, for others nobody knows, and others are just bugs. Changes happen so often that devs haven't found the time to properly understand and justify them.
If you really care about why they changed, the best advice I have is to bisect them.
But generally your best bet is to just re-run your old experiments on the newer gem5 version, and compare everything there.
gem5 is not a cycle-accurate system simulator, so absolute values or small variations are not meaningful in general. This also teaches us that results obtained with small margins are generally not meaningful for publication, since the noise is too great. What that error margin is, I don't know.

How to get around memory error with karma & phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah, it appears that error code indicates a native memory failure in PhantomJS.
Upon further investigation, we are consistently seeing phantom crash around 750MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the StackOverflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before we were just running karma start at the command line. This spun up a single instance of phantomjs that crashed when 750MB was reached.
Now we have a gulp command for each one of our sections of tests, e.g. gulp common-tests and gulp admin-tests and gulp customer-tests
Then a single gulp karma that runs each of those groupings. This allows each gulp command to have its own instance of phantom, and therefore stay underneath that threshold.
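A rough command-line sketch of the same idea, with hypothetical per-group Karma config files standing in for the gulp tasks described above:
karma start karma.common.conf.js --single-run
karma start karma.admin.conf.js --single-run
karma start karma.customer.conf.js --single-run
Each invocation gets its own PhantomJS process, so no single instance accumulates enough memory to reach the ~750MB crash point.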
We ran into a similar issue. Your approach is interesting and certainly sidesteps the problem. However, be prepared to face it again later.
I've done some investigation and found the cause of memory growth (at least in our case). Turns out when you use:
beforeEach(inject(function (SomeActualService) { .... }));
the memory taken up by SomeActualService does not get released at the end of the describe block and if you have multiple test files where you inject the same service (or other injectable objects) more memory will be allocated for it again.
I have a couple of ideas on how to avoid this:
1. create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code.
2. Create your own tracker (for tests only) for injectable objects. That way they can be loaded only once and reused between test files.
Forgot to mention: We are using angular 1.3.2, Jasmine 2.0 and hit this problem around 1000 tests.
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. after the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
But you then need to use it in karma.conf.js via:
config.set({
browsers: ['PhantomJS2'],
...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS. According to another source, PhantomJS only runs the garbage collector when the page is closed, and this only happens after your tests run. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the number of memory segments the process can manage:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This will change the session settings. If it solves the problem, you can set it up for all future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
Responding to an old question, but hopefully this helps ...
I have a build process which a CI job runs on a command-line-only Linux box, so it seems that PhantomJS is my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json to run Karma using Chrome, and run that locally for my tests. When pushed up, Jenkins kicks off the regular test command, running PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to package.json
"test": "karma start",
"test:chrome": "karma start --browsers Chrome"

Is it possible to run individual test cases/test classes on the command line with xcode 5?

I can run all unit tests using the following command:
xcodebuild test -workspace Project.xcworkspace -scheme Scheme -sdk iphonesimulator7.0 -destination platform='iOS Simulator',OS=7.0,name='iPhone Retina (4-inch)'
Is there anything I can pass to this to run individual unit tests/classes in the same way that you can using the Xcode UI?
Cheers.
Not that I know of, and it's not on the man page either. But you have such an option in the excellent xctool (-only SomeTestTarget:SomeTestClass/testSomeMethod).
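A sketch of a full invocation, reusing the workspace and scheme from the question; the target, class and method names are placeholders:
xctool -workspace Project.xcworkspace -scheme Scheme test -only SomeTestTarget:SomeTestClass/testSomeMethod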
For those who are using later versions of Xcode (8.0+) and may run across this post in the future (like myself), this is now possible using the -only-testing flag, which is documented in the man page for xcodebuild. The format for using the flag is as follows:
xcodebuild \
-destination "$DEST" \
-workspace "$WRKSP" \
-scheme "$SCHEME" \
-only-testing:Target/Class/Method test
You can also omit trailing elements from the identifier after the -only-testing flag, e.g. -only-testing:TestTarget/Class will run all the tests within that class.
Here is a related Stack Overflow question touching on the new functionality:
How to use xcodebuild with -only-testing and -skip-testing flag?
Another way is to edit the scheme of your test target and disable/enable the tests as you please. Then you can re-run the same command through the CLI and it will only run the tests you enabled.
If you're using Kiwi, you should supposedly be able to do it using the KW_SPEC environment variable. I haven't been able to, though: Running a single KIWI spec with xctool.