How to automate memory allocation measurement (to find memory leaks) for an app using Robotium - robotium

I have to create a test suite that checks for memory leaks that occur after performing some operations on my app. Can anyone guide me on how to implement this using Robotium? I've been able to achieve part of it, but I need more help.
Test script to find memory leak:
// Optionally dump service state for the app (fd must be a valid FileDescriptor).
boolean value = Debug.dumpService("com.apppackage.name", fd, null);

// Look up the pid of the app's process.
ActivityManager manager = (ActivityManager) getActivity().getSystemService(Context.ACTIVITY_SERVICE);
List<ActivityManager.RunningAppProcessInfo> processes = manager.getRunningAppProcesses();
int pid = -1;
for (int i = 0; i < processes.size(); i++) {
    if (processes.get(i).processName.equals("com.apppackage.name")) {
        pid = processes.get(i).pid;
    }
}

// Query the memory usage of that process and log something meaningful,
// e.g. the total proportional set size (PSS) in kB.
android.os.Debug.MemoryInfo[] memInfo = manager.getProcessMemoryInfo(new int[] { pid });
Log.d("meminfo", "totalPss (kB): " + memInfo[0].getTotalPss());

The Solo class that Robotium uses to assist with testing has an assertMemoryNotLow() method. When invoked, it will fail your test if the Android OS considers memory to be low, for example if you create a lot of Bitmaps without recycling them when you're done.
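As a rough illustration only (not an official Robotium sample), a test built around that assertion could look something like the sketch below. MainActivity, the class names, and the test steps are placeholders for your own code, and it assumes a recent Robotium version where Solo lives in com.robotium.solo:

import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

public class MemoryLeakTest extends ActivityInstrumentationTestCase2<MainActivity> {
    private Solo solo;

    public MemoryLeakTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testMemoryAfterOperations() {
        // ... drive the UI through the operations you suspect of leaking ...
        solo.assertMemoryNotLow(); // fails the test if the OS reports low memory
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}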
As for fully implementing your suite with Robotium, that kind of open-ended question is outside the scope of Stack Overflow. You can visit this link, Getting Started With Robotium, to see an example of how Robotium is used. It has a very useful link to download a test project created by the developers, which should be enough to get you going. Once you've started, feel free to come back as you encounter challenges and ask specific questions about the framework; the community will be happy to help. Good luck!

Related

Understand Heap-dump and Thread-dump for Large-scale application

I went through some tutorials on Java profilers (JVisualVM, JProfiler, YourKit) on YouTube as well as Pluralsight. I have a rough idea of how to inspect a heap dump and how to find a memory leak, but all of these tutorials are elementary.
My problem is that when I analyse a heap dump, I see only three types of objects, char[], java.lang.String and java.lang.Object[], which together cover almost all of the memory (always more than 70%), and none of them come from my application.
The same goes for the thread dump: I see the HTTP-8080 request threads (the port I am using), and they lead me to Runnable's run method or the Java concurrent package, and again not to any code specific to my project.
I also discussed the problem with some of my friends and analysed their applications as well (which don't have any memory leak or performance issues), and their results are almost the same.
Could you help me understand how to analyse heap dumps and thread dumps in JVisualVM for a large-scale application? Any video, blog, anything would be helpful.
I am using OpenJDK 11, AWS ECS with Docker, and Tomcat as the web server.
Check out the Eclipse Memory Analyzer (https://www.eclipse.org/mat/). I have used it several times in the past to successfully find memory leaks, but it takes some time to get familiar with it.
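As an aside (my own sketch, not part of the original answer): MAT works on an .hprof heap dump, and on a HotSpot-based JVM such as OpenJDK 11 you can capture one programmatically via the HotSpotDiagnosticMXBean, for example:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes a heap dump to the given path (must end in .hprof);
    // live = true keeps only reachable objects, which is usually what you want for leak hunting.
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("/tmp/app-heap.hprof");
    }
}

Alternatively, jmap or jcmd run against the live process produces the same kind of file.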
Another piece of advice I can give you is to create benchmark tests with Apache JMeter (https://jmeter.apache.org/) or another tool that lets you reproduce the performance/memory issue and identify the execution path that causes you problems.
Be aware that AWS doesn't like it when someone executes performance/penetration tests against their services (https://aws.amazon.com/aup/).

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS and I'm hoping to add a new feature to the app - keeping a Mac awake while in closed-display (clamshell) mode while not having a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake while in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with two possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll be a rewrite of the kext as a DriverKit driver, rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.

Logging in automation testing

Is there any reason to use logging in automated tests? I am asking because my understanding is that a test must be readable and you shouldn't use any logging to bloat the code. Logging is also used to understand what is going on in the app, so if a test fails I know why (from the assert message), and if it doesn't, I don't care what goes on inside the test.
Thank you in advance.
You are right about this statement:
test must be readable
since test code is production code, and its quality should be just as high. But IMHO
shouldn't use any logging to bloat the code.
is not correct. If you run your automation tests on a remote (physical or VM) server, you need some way to understand what happened at each step and where the errors and warnings are.
How do you troubleshoot or reproduce a failed test using only the stack trace and the latest assertion message? Logging helps you avoid a common cause of an Obscure Test, the Mystery Guest. You should be able to see the cause and effect between the fixture and the verification logic without too much effort. Let's take a look at the Log4j home page:
Logging equips the developer with detailed context for application failures. On the other hand, testing provides quality assurance and confidence in the application. Logging and testing should not be confused. They are complementary. When logging is wisely used, it can prove to be an essential tool.
I hope that by now I've managed to convince at least one person that logging is a fundamental part of automated tests.
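To make this concrete, here is a minimal sketch of my own (the logger setup is standard Log4j 2 with JUnit 4; the test steps and the driver calls in the comments are hypothetical placeholders):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class CheckoutTest {
    private static final Logger log = LogManager.getLogger(CheckoutTest.class);

    @Test
    public void userCanCheckOut() {
        log.info("Logging in as the test user");      // each step leaves a trace for remote runs
        // driver.login("user", "password");          // hypothetical page/driver call
        log.info("Adding an item to the cart");
        // driver.addToCart("item-42");
        log.info("Verifying the order confirmation");
        boolean confirmationShown = true;             // stand-in for a real UI check
        assertTrue("Order confirmation should be shown", confirmationShown);
    }
}

When a run fails on a remote agent, the step-level log lines tell you how far the test got before the assertion, which is exactly the Mystery Guest problem described above.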

Help diagnosing crash in Cocoa framework - possible memory leak?

I'm currently migrating the Fragaria framework from a GC-only environment to GC being supported. After the work was done (or what I thought had to be done to make it work), I was able to run the examples that come with the framework without any problems, and Instruments didn't show any major memory leaks.
I included non-GC Fragaria in my non-GC application and it crashes as soon as I place the cursor on it. To be honest the usage pattern is different from the examples as I'm embedding it in an instance of NSViewController instead of NSDocument.
Can you give me some tips on how to debug this? I'm a bit lost on where to proceed now.
The first thing to do is Build and Analyze the code, then fix any problems it finds.
Next, try running with zombie detection enabled (google NSZombie).
Finally, each crash's stack trace should give you a pretty good idea of where things have gone off the rails.

Code Coverage tool for BlackBerry

I'm looking for a code coverage tool that I can use with a BlackBerry application. I'm using J2ME-Unit for Unit Testing and I want to see how much of my code is being covered by my tests.
I've tried using Cobertura for J2ME, but after days of wrestling with it I failed to get any results from it. (I believe the instrumentation is undone by the RAPC compilation.) And despite this message, the project seems to be dead.
I've looked at JInjector, but the project seems very incomplete. There is little (if any) documentation, and although it claims to be able to work with BlackBerry projects, I haven't seen any places where it has been used for that purpose. I've played with the project quite a bit but to no avail.
I've also tried the "Coverage" view in the BlackBerry JDE, even though I use Eclipse for development. The view stays permanently blank, regardless of clicking "Refresh" and running the application from the JDE.
I've looked at most of the tools on this SO thread, but they won't work with J2ME/BlackBerry projects.
Has anyone had any success with any code coverage tools on the BlackBerry? If so, what tools have you used? How have you used them?
If anyone has managed to get JInjector or Cobertura for J2ME to work with a BlackBerry project, what did you have to do to get it working?
I can't speak for Cobertura or JInjector, because I don't know how they collect test coverage probe data.
What is critical is how this data is captured (does it need Java runtime support only available in standard Java VMs?) and how it is exported to the test coverage display/report generation tools.
Our SD Java Test Coverage tool instruments your source code; at runtime this produces an array of native Java booleans representing the coverage data, without the need for any special VM support. Normally, as your application exits, this array is exported directly to a file used by the test coverage display mechanism, via a TCVDump method provided with the test coverage tool.
Java (and the other programming languages used) in embedded systems often requires custom methods to extract the test coverage data. You might need to code a special dump procedure (in Java) to write out that boolean array to an accessible place. Our experience with building such custom dump procedures is that they are generally pretty simple (a few dozen lines); the real trick is deciding how/where to put the data so that it can easily be moved to the target file. Mostly this is just a peculiar pair of copies: the first copies the boolean array to some staging location, and the second writes the staged data into the destination file. (The standard TCVdump method is provided in source form to enable this kind of customization.)
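For illustration only (my own sketch, not the vendor's TCVdump source; coverageFlags is a stand-in for whatever array the instrumentation actually produces), such a dump procedure might look like this:

public final class CoverageDumper {
    // Stages the coverage flags, then writes them to the destination stream
    // (a file, a socket, whatever the device can reach).
    public static void dump(boolean[] coverageFlags, java.io.OutputStream out) throws java.io.IOException {
        // First copy: stage the flags so the running app can keep updating its own array.
        boolean[] staged = new boolean[coverageFlags.length];
        System.arraycopy(coverageFlags, 0, staged, 0, coverageFlags.length);

        // Second copy: write the staged data to the destination.
        java.io.DataOutputStream data = new java.io.DataOutputStream(out);
        data.writeInt(staged.length);
        for (int i = 0; i < staged.length; i++) {
            data.writeBoolean(staged[i]);
        }
        data.flush();
    }
}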
While I haven't specifically looked at BlackBerry, if you can write the data anywhere, you can pretty much be assured you can achieve this. We've had success with other embedded hand-set systems, such as Symbian, doing this.
If you want a complete overview of how to generally instrument code for test coverage following this strategy, see this paper: Branch Coverage for Arbitrary Languages Made Easy
I was actively involved with JInjector while working at Google. We were able to use it to successfully obtain code coverage for BlackBerry applications. The application lifecycle for BlackBerry apps is less predictable than for J2ME, and we found we had to tweak the application code to ensure the coverage data was gathered. I didn't personally work on the BlackBerry apps; several other engineers did. I'd hoped we'd create an example BlackBerry application and make it available on the jinjector site, but events and life got in the way.
If you would be willing to provide a sample BlackBerry app with some unit tests, I'd be willing to spend a few hours trying to help you get the code coverage working. I'm not actively working with either J2ME or BlackBerry (I'm currently working on Android apps when I have time to experiment with mobile), so I'm quite rusty. I have a day job that doesn't involve much mobile test automation; however, I continue to work on ways to improve test automation for mobile apps, e.g. http://code.google.com/p/mwta/downloads/list for Android test automation.
I'm julianharty at gmail.com