Speed up application's launch time when running UI Tests - xctest

When running UI tests, the application's launch time varies between 4 and 10 seconds, sometimes going up to 37 seconds.
Is there any way to make it faster?

Related

React Native becomes slow on mobile: JavaScript code takes 3-4 seconds to execute in the emulator, but when installed as an APK it becomes slow

A React Native application becomes very slow and takes around 2 minutes to run when installed on a mobile device; in the emulator it takes around 3-4 seconds. The code contains a lot of JavaScript, including many for loops.
This might be because the hardware the app runs on differs between the emulator and the physical device, so you can usually experience that difference in loading time. To overcome it, there is the concept of a splash screen, which hides that loading from users (see the sketch below).
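As a hedged illustration of the idea (the component names are hypothetical; a library such as react-native-splash-screen would additionally cover the native part of the launch):

```tsx
// Minimal sketch: render a splash view until the expensive JavaScript
// startup work has finished, then swap in the real application root.
import React, { useEffect, useState } from 'react';
import { ActivityIndicator, StyleSheet, Text, View } from 'react-native';

const styles = StyleSheet.create({
  splash: { flex: 1, alignItems: 'center', justifyContent: 'center' },
});

// Stand-in for the real application root.
function MainApp() {
  return <Text>App content</Text>;
}

export default function App() {
  const [ready, setReady] = useState(false);

  useEffect(() => {
    // Stand-in for the heavy startup work (parsing data, warming caches, ...).
    // Deferring it with a timeout lets the splash view paint first.
    const task = setTimeout(() => setReady(true), 0);
    return () => clearTimeout(task);
  }, []);

  // While the work runs, users see a spinner instead of a frozen screen.
  if (!ready) {
    return (
      <View style={styles.splash}>
        <ActivityIndicator size="large" />
      </View>
    );
  }
  return <MainApp />;
}
```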

Selenium Run Functional Tests Distribute By Test Time

I'm running the Run Functional Tests task distributed by number of machines.
I have around 850 tests and the run takes around 3.5 hours (8 servers).
I added another 8 servers and now it takes around 2.2 hours.
Because the distribution is by machine count, every machine gets about 53 tests, but some tests take 30 seconds and some take 5 minutes.
That situation is annoying, because some machines finish their job after 53 × 30 seconds = 1,590 seconds (about 26 minutes), while other servers finish their work after 2 hours.
It makes my build much slower.
I want to distribute the tests by running time somehow, so that every machine works for the same amount of time and they all start and finish together.
Thanks for helping
The Run Functional Tests task is deprecated in VSTS and TFS 2018; you should use the Visual Studio Test task instead.
The Visual Studio Test task in VSTS/TFS 2018 supports batching based on past running time of tests: it considers past running times to create batches such that each batch has approximately equal total running time. This option should meet your requirement (see the sketch below).
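For reference, a hedged sketch of how this could look in an Azure Pipelines / TFS YAML definition (the assembly pattern is an assumption to adapt to your build output; the batching takes effect when the job runs on multiple agents):

```yaml
# Sketch, not a drop-in pipeline: slice tests across agents by past running time.
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: '**\*test*.dll'   # assumption: adjust to your test assemblies
    distributionBatchType: 'basedOnExecutionTime'
```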

Why does Jest --runInBand speed up tests?

I read that the --runInBand flag speeds up Jest test duration by 50% on CI servers. I can't really find an explanation online of what that flag does, except that it lets tests run sequentially in the same thread.
Why does running the test in the same thread and sequentially make it faster? Intuitively, shouldn't that make it slower?
From your linked page and some other related sources (like this GitHub issue), some users have found that:
...using the --runInBand helps in an environment with limited resources.
and
... --runInBand took our tests from >1.5 hours (actually I don't know how long because Jenkins timed out at 1.5 hours) to around 4 minutes. (Note: we have really poor resources for our build server)
As we can see, those users saw performance improvements even though their machines had limited resources. The docs describe what the --runInBand flag does:
Alias: -i. Run all tests serially in the current process, rather than creating a worker pool of child processes that run tests. This can be useful for debugging.
Therefore, taking these comments and the docs into consideration, I believe the improvement in performance is due to the fact that the tests now run in a single process. This greatly helps a machine with limited resources, because it does not have to spend memory and time creating and managing a pool of worker processes, a task that could prove too expensive for its limited resources.
However, I believe this is the case only if the machine you are using also has limited resources. On a more "powerful" machine (several cores, decent RAM, an SSD, etc.), using multiple workers will probably be faster than running everything in a single process. (A small config sketch follows below.)
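As a hedged illustration of the related knob (the config file name and the maxWorkers option follow the Jest docs; note that --runInBand goes a step further than this config, since it avoids spawning even a single worker process):

```ts
// jest.config.ts — sketch: cap Jest at one worker on a resource-starved CI box.
// --runInBand (alias -i) is stricter still: it runs tests in the current
// process instead of spawning a worker at all.
import type { Config } from 'jest';

const config: Config = {
  maxWorkers: 1, // one worker process; tune upward on beefier machines
};

export default config;
```

On CI you would typically just pass --runInBand on the command line instead of committing a config change.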
When you run tests with multiple workers, Jest creates a cache for every worker. When you run with --runInBand, Jest uses one cache storage for all tests.
I found this after running 20 identical test files. With the --runInBand flag, the first test takes 25 seconds and each subsequent identical test takes 2-3 seconds.
When I run the tests without --runInBand, each identical test file takes 25 seconds.

The "VM Periodic Task Thread" is run every 50 milliseconds, can I tune this?

On normal hardware today this likely never hurts, but on a Raspberry Pi it is a bit annoying that the CPU is woken up every 50 milliseconds even for a Java application that is currently doing absolutely nothing.
I verified with strace that the "VM Periodic Task Thread" is active every 50 milliseconds. A rough answer of what it does is given here, but can I tune the 50 milliseconds somehow?
Try setting -XX:PerfDataSamplingInterval=xxx; the default is 50, and the performance sampling matches the description you linked, so that might be it. (An example invocation is sketched below.)
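If that turns out to be the culprit, the invocation would look something like this sketch (the 500 ms value and app.jar are placeholders; the interval is in milliseconds, and if the jvmstat counters are not needed at all, -XX:-UsePerfData disables the instrumentation entirely):

```
# Sketch: raise the jvmstat sampling interval from the 50 ms default to 500 ms.
java -XX:PerfDataSamplingInterval=500 -jar app.jar
```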

What types of testing do you include in your build process?

I use TFS 2008. We run unit tests as part of our continuous integration build and integration tests nightly.
What other types of testing do you automate and include in your build process? What technologies do you use to do so?
I'm thinking about smoke tests, performance tests, load tests but don't know how realistic it is to integrate these with Team Build.
First, we have check-in (smoke) tests that must pass before code can be checked in. This is done automatically by a job that runs the tests and then makes the check-in to source control upon successful completion.
Second, CruiseControl kicks off build and regression tests. The product is built, then several sets of integration tests are run. The number of tests varies by where we are in the release cycle; more testing is added late in the cycle during ramp-down. CruiseControl takes all submissions within a certain time window (12 minutes), so your changes may be built and tested together with a small number of others.
Third, there is an automated nightly build with quite extensive tests.
We have load or milestone points every 2 or 3 weeks. At a load point, all automated tests are run, plus manual testing is done. Performance testing is also done for each milestone. Performance tests can be kicked off on request, but the available hardware is limited, so people have to queue up for them. Usually people rely on the load performance tests unless they are making changes specifically to improve performance.
Finally, stress tests are also done for each load. These tests focus on making sure the product has no memory leaks or anything else that prevents 24/7 operation, as opposed to raw performance.
All of this is done with Ant, CruiseControl, and Python scripts.
Integrating load testing into your build process is a bad idea; just do your normal unit testing to make sure all your code works as expected. Load and performance testing should be done separately.