Why is the Detox Automation Tool faster than Appium and others? - detox

I was searching for some performance comparison (e.g. interactions with elements, memory usage, etc.) with Appium/WebDriver and others. Unfortunately, I didn't end up with any concrete benchmarks.
Are there any KPIs explaining why google/EarlGrey (used by Detox) claims to be faster than the others?

Detox iOS maintainer here.
Detox is faster because it injects itself inside your app and uses various tricks to monitor when your app is idle or busy, and it only blocks test progression when absolutely necessary (e.g. when your app is busy with network requests).
This is documented here:
https://github.com/wix/Detox/blob/master/docs/Introduction.DesignPrinciples.md

Related

When to use full system FS vs syscall emulation SE with userland programs in gem5?

Since syscall emulation is much easier to set up, I'm wondering what the advantages are of using full system emulation when running a userland program.
Or in other words, what interesting aspects are modeled in full system but not in syscall emulation mode, and when are they significant?
It is mentioned in the docs at: http://gem5.org/Splash_benchmarks that full system is
Realistic: you're getting the actual Linux thread scheduler to schedule your threads
Is this the only advantage, or are there other advantages for users who are optimizing their applications or investigating the micro-architecture?
I also suspect that the MMU simulation is another important feature that is only modeled properly in full system mode, and could affect program performance.
Full system mode should be preferred (when it is possible to use it). There are benefits to using it, primarily the fidelity of the simulation, which is not possible with system call emulation mode. (The kernel's interactions with an application can be important depending on the study that a researcher is trying to conduct.) Also, the user does not need to worry about implementing (or debugging) the system call implementation.
With that said, system call emulation mode can be useful under the right conditions. It is faster to run application code because there is no kernel running in the background. There is also no system noise, which helps if you want to eliminate it entirely. Arguably, it is easier to bootstrap a new device model as well. You can work on the model without driver support and make magic happen through fake interfaces. (It saves you having to model the bare-metal interface perfectly or having to write your own device driver.)
Your comments about dynamic linking and multi-threading support are related. If dynamic linking is fixed, you should be able to use your system's pthreads library and can forget about linking with m5threads entirely. The pthread library support has existed in the simulator for a while now (that is, the system calls necessary for it to work properly are implemented).
However, there's a caveat to the threading implementation. You need to preallocate enough thread contexts at the start of simulation (by invoking the se.py script with the -n option).
To elaborate, there is no operating system running in the background to schedule threads on the processors. (I use the terms threads and processors very loosely here.) To obviate the scheduling problem, you have to preallocate enough processors so that the threads can be created on calls to clone/execve. There is a constraint that you can never have more threads than processors (unlike a real system where the operating system can schedule them as it pleases).
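For example, a minimal pthread workload like the sketch below (illustrative only; the exact se.py flag spelling depends on your gem5 version) would have to be started with enough CPUs for the main thread plus every worker it clones, e.g. -n 5 for four workers.
// Minimal pthread workload sketch. Under SE mode each pthread_create()
// issues a clone() that needs a free thread context/CPU, so this program
// would be run with something like: se.py -n 5 -c ./workload
// (1 main thread + 4 workers).
#include <pthread.h>
#include <cstdio>

static void* Work(void* arg) {
    std::printf("worker %ld running\n", reinterpret_cast<long>(arg));
    return nullptr;
}

int main() {
    const long kWorkers = 4;
    pthread_t tids[kWorkers];
    for (long i = 0; i < kWorkers; ++i)
        pthread_create(&tids[i], nullptr, Work, reinterpret_cast<void*>(i));
    for (long i = 0; i < kWorkers; ++i)
        pthread_join(tids[i], nullptr);
    return 0;
}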
The configuration scripts probably do not behave how a researcher would want them to behave for a multi-threaded workload. The researcher would need to verify that the caches were configured correctly and that they are sharing certain cache levels like a real machine would do. If the application calls clone/execve many times, it may not be possible to cause the generated configuration to behave realistically.
Your last statement about modeling accelerators is incorrect. The AMD GFX8 model does use system call emulation mode. (Also, we developed a NIC model which was never publicly released.) It involves creating a fake driver and manipulating it through the same ioctl interfaces that a real driver would use. Linux treats everything like a file so the driver is opened through the open system call interface and you can capture it there. There are other things which you might need to do (like map mmio ranges in the configuration), but the driver interface is the main piece. The application interacts with the driver and the driver interacts with the accelerator model.
Advantages of SE:
sometimes easier to set up benchmarks, if all the syscalls you need are implemented (see also, see also), and if you have just the right cross compiler, which of course no one has documented properly which one that is.
SE runs Dhrystone about 2x faster: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/00d282d912173b72c63c0a2cc893a97d45498da5#user-mode-vs-full-system-benchmark That benchmark makes no syscalls (except for printing information before/after the actual benchmark runs)
it is easier to get greater visibility and control of what the application is doing since the kernel is not running in parallel. E.g. stats will be just for the application, GDB will be just for the application: thread-aware gdb for the Linux kernel
Disadvantages of SE:
in practice, harder to set up benchmarks, because it is too fragile / has too many restrictions.
If your workload does not work immediately out of the box, it is easier to just create or download a full system image and go for that instead, which is much more reliable.
Here is a sample minimal working Ubuntu setup if you are still interested: How to compile and run an executable in gem5 syscall emulation mode with se.py?
less representative, since no actual OS is running
no dynamic linking for ARM as of June 2018: How to run a dynamically linked executable syscall emulation mode se.py in gem5?
if you want to evaluate an accelerator like a GPU, you will have to create a slightly custom interface for it, since there is no kernel running, and therefore no kernel driver sitting on top of it as usual.
Brandon has pointed out in his answer that this has in fact been done before: https://stackoverflow.com/a/56371006/9160762
So my recommendation is:
try SE first. If it works, great. If it doesn't, try to fix it quickly, since most problems are trivial. Having the SE setup will save you a lot of time over full system, and it is often representative enough.
otherwise, use FS mode. It is just simpler to set up, more representative, and the performance hit is acceptable for most.
You could also use SE first, and then go to FS to further validate only your most important SE results, since FS is slower and you can therefore validate fewer different setups.

Do other apps affect my app's performance on iOS?

Given the multitasking capability of iOS, will other applications that are currently sleeping affect the performance or memory consumption of my app (which is currently active)?
Absolutely. Any application that is running in the background (within the various parameters for when that's legal) will impact CPU availability. Apple's apps can run in many more situations than third-party apps, and they will also compete with you. I've particularly had trouble with Mail.app in the past.
Memory performance is a bit trickier, but yes, other applications are in memory at the same time and you can definitely generate memory warnings sooner with other applications in memory than you would otherwise. In principle, you should eventually be able to get as much memory as you would without other apps running, but that's not completely true. In particular, don't forget that Apple's apps don't always follow the same rules as third-party apps, and if they're eating a lot of memory, they may or may not be killed.
The other performance consideration is network bandwidth, and this is most certainly a way that background apps may compete with you. I don't believe Apple applies any bandwidth limiting to background apps, and downloading large files is a prime background activity. (There is some discussion that the App Store may decline apps that hit the network too hard while in the background, but I'm not aware of an official position on this. In any case, it is certainly legal to use some bandwidth in the background, and that's bandwidth not available to the foreground app.)
No, they won't.
When the active app needs memory that is being used by background apps, iOS automatically kills them off so the active app can use the resources.

Is there any limit to the number of Selenium RCs connected to a Selenium Hub?

Does anybody have experience using 20/50/100 RCs connected to a single Selenium Hub? I'm thinking of a centralized hub that multiple teams could use together.
I heard that after 20 RCs, performance goes down significantly.
My tests use 104 RCs connected to a single hub. The hub is able to handle all the RCs and the system is stable. One thing you should note is how you pass the commands to the hub; this determines how many RCs the hub is going to utilize. What we do in our grid is run multiple builds at the same time, all pointing to the same hub, which means the hub is always kept busy.
I'm not sure where the "20" value is coming from. I run with significantly more than that to power Mogotest and we run many more test sessions across various sites than most users of grid. Granted older versions had some thread problems, but 1.0.8 should be pretty solid and the 2.0 releases have not exhibited any problems.
FWIW, I maintain the grid project and backport any known fix for any known load issues. I also know of several others running large-scale grid installations without incident.
It boils down to how robust your hardware is. I can't give exact statistics, but I would suggest keeping it within 20.
Well, there is no hard limit, but staying close to 20 is best for good performance.

How do you test functionality across browsers and devices?

My Google Fu is failing me today...
I want to expand the number of browsers I support for my application. The amount of time it takes to test my existing functionality on multiple browsers has become a limiting factor for expansion.
Is there a more pragmatic way to test layout and functionality across the various mobile and traditional web browsers?
Welcome to the world of development :)
IOW, no - you need to test each release for compatibility across all the devices that you want to support.

Power Management in Symbian

Are there any "best practices" for writing a power-efficient background application in Symbian?
Specifically, is there any way (i.e. API) for a Symbian app to hint the OS regarding its current state in order to reduce battery consumption?
In Android, for instance, there is the notion of Wake Locks, which prevents the device from going into standby mode - Is there anything similar in Symbian?
EDIT:
Are there any implications when running code as a separate thread with the Open-C library, and not as "native" Symbian C++, using Active Objects etc.? (the Open-C code is blocking on IO most of the time).
You can check user (in)activity with the RTimer::Inactivity() method. This approach is described on a Forum Nokia Wiki page, which also describes how to reset the inactivity timer.
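As a rough sketch (error handling omitted, and in a real service you would drive this from an active object rather than blocking the thread), the inactivity request looks something like this:
RTimer timer;
User::LeaveIfError(timer.CreateLocal());

TRequestStatus status;
timer.Inactivity(status, TTimeIntervalSeconds(30)); // completes after 30 s without user input
User::WaitForRequest(status);                       // blocks; prefer an active object in real code

// the user has been idle for 30 s: throttle the background work here,
// or call User::ResetInactivityTime() to behave as if the user just became active

timer.Close();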
You can check whether the device screen is turned on or off using the HAL API. See the HAL and HALData classes. You may use a call like this:
TInt displayState;
HAL::Get(HALData::EDisplayState, displayState);
displayState will hold 0 if the display is turned off and 1 otherwise.
With these APIs you will know whether the user is currently active, so you'll be able to change the behavior of your background service to reduce its power consumption.
You can also use the Nokia Energy Profiler application to record the handset's power consumption with different power-saving options of your background service. Also, please refer to Nokia's document describing best practices for saving device power. The document is quite straightforward, but useful nonetheless.
Hope this helps.
EDIT: About the separate thread and Open C. As far as I know, Open C is just a plugin and deep down all the implementations are still "native Symbian". So, as long as you avoid periodically polling some resource and just use ordinary blocking IO, your code is about as economical on power as the standard Symbian Active Objects techniques (which use Symbian-specific semaphores to block threads).
I have not come across anything special in Symbian to keep the device out of stand-by mode. Basically the "best practices" would be the same as on all mobile devices:
Don't loop waiting for things; always use whatever signaling services are available on the platform - for Symbian, Active Objects / User::WaitForXxx (see the sketch after this list).
Limit the number of background threads (currently all mobile devices still have only one CPU...).
Don't hang onto system services; close them ASAP (this is normally the main battery drain in my mobile applications - sometimes trying to find which system service causes the most battery drain can be a real pain, and WinMo is very bad for this).
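To illustrate the first point, a bare-bones active object that sleeps until its request completes, instead of spinning in a polling loop, might look like the sketch below (class and member names are illustrative):
// Bare-bones active object: the thread does nothing until the timer fires,
// so there is no busy-waiting. One chunk of background work per completion.
class CBackgroundStep : public CActive
    {
public:
    CBackgroundStep() : CActive(EPriorityIdle)
        {
        CActiveScheduler::Add(this);
        iTimer.CreateLocal();
        }
    ~CBackgroundStep()
        {
        Cancel();
        iTimer.Close();
        }
    void QueueStep(TTimeIntervalMicroSeconds32 aDelay)
        {
        iTimer.After(iStatus, aDelay); // asynchronous wait, not a loop
        SetActive();
        }
private:
    void RunL()
        {
        // runs only when the wait completes: do one chunk of background work,
        // then call QueueStep() again if there is more to do
        }
    void DoCancel()
        {
        iTimer.Cancel();
        }
    RTimer iTimer;
    };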
For me, I find that it mostly comes down to a tradeoff between battery life and performance/responsiveness for the application. Unfortunately the powers that be always seem to side with performance/responsiveness and damn the battery drain...
Give your application low priority (see the RProcess and RThread classes). Your approach will really depend on what your background application does. These things consume the most battery: the radio (GSM/3G/WiFi/Bluetooth), the screen backlight, and file accesses.
Symbian OS will always try to put your application to sleep; you don't need to tell it to do this. Just make sure your approach gives it the opportunity to do so.
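A minimal sketch of dropping your own priority (using the default handles to the current process and thread; the exact priority values to pick depend on your application):
// Lower the scheduling priority of the current process and thread so the
// kernel favours foreground work over this background application.
RProcess().SetPriority(EPriorityBackground); // a TProcessPriority value
RThread().SetPriority(EPriorityLess);        // a TThreadPriority value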
Power management is a very important issue when developing an application.
In Symbian, it depends on what you are using to run background activities - whether you are using a thread or active objects.
For example, if you are developing a browser application and you want it to download something, that download should run in the background, show its progress while it runs, and come back to the foreground when it finishes.
If you are using threads, it depends on how you manage them: you can decide which thread to pause when a long-running activity starts and when to resume it once the background activity has finished execution.
In fact, this is a very good topic you have come across.
There used to be an inactivity timer which could be reset by the application. This would prevent the screen from going into any screen saver mode.
If you use the various asynchronous functions in Symbian, your app will run when appropriate.
One of these methods should work depending on your needs. If you describe what you want to achieve in more detail it would be easier to help you.