Jprofile "direct calls to methods of filtered classes" What's it mean? - jprofiler

I used the CPU view to check the performance.
Can someone explain what the heading 'direct calls to methods of filtered classes' signifies?
I cannot upload a screenshot.
It looks like this:
"91.5% - 60,324 ms - 14 inv. - direct calls to methods of filtered classes"
Any help will be appreciated.

This node contains threads that only make calls into unprofiled classes - starting from the top level run() method of a thread.
To see all method calls, switch to "Sampling" in the profiling settings and deactivate all filters.

Related

Reusing Java classes with procedural-style code?

There's a solid chance I'm misusing classes here, which is why I need your help.
I've started developing with Java EE, and one of the problems I'm facing is that I have a process which I have organised in a class; call it "SendEmail.java".
Now let's say I have two other classes called "Thunderalert.java" and "FloodAlert.java" which will use all the methods that SendEmail.java has within it.
So I want to know the best way of using the SendEmail methods from each of the other classes.
Should I be creating an instance of SendEmail and calling each method individually, error checking along the way (what if an exception is thrown?)? Its methods are just procedural code, so it's not really an 'object' as such.
Or should I just be using one method that runs all the other internal ones from within SendEmail?
Or should SendEmail be redesigned as a helper-class-type design?
I'm still quite new to Java EE, so I'm not sure if there are any options available which I am missing.
I think you should have one public method inside the SendEmail class.
By the way, I would consider changing its name. I think having a method send() when the class is called SendEmail is not the best approach (not to mention names like call(), invoke(), etc.).
There is a great article about this problem in Java (The Kingdom of Nouns).
What about something like: new Email(recipient, body).send()?
Or, if you want to do it in a service style, I'd call it, for example, MailService.
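To make the first suggestion concrete, here is a minimal, hypothetical sketch of the new Email(recipient, body).send() idea; the class and field names are just placeholders, not anything from your project:

// Hypothetical sketch of the "one public method" design suggested above.
public class Email {

    private final String recipient;
    private final String body;

    public Email(String recipient, String body) {
        this.recipient = recipient;
        this.body = body;
    }

    // The single public entry point; the internal steps (building the
    // message, talking to the mail server, handling checked exceptions)
    // stay private so callers cannot invoke them in the wrong order.
    public void send() {
        // build and dispatch the message here, wrapping low-level
        // exceptions in one meaningful exception for callers
    }
}

Thunderalert and FloodAlert would then just call new Email(recipient, body).send() instead of orchestrating several procedural steps themselves.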

Getting lazy instance via kernel (Ninject)

I am using Ninject as a substitute for MEF, and I was wondering if it's possible to get lazy instances via standard kernel methods rather than via [Inject].
I need this because, when building up my application's menu, I have to pass in all the particular view models and then, if the user is enabled for that function, add each one to the menu.
Thanks
Sure thing, you can inject a Lazy<T> and the value will only be instantiated when you access Lazy<T>.Value.
You can also inject a Func<T> and use it to create T whenever you like (with the func, every call creates a new instance).
Of course you can also do IResolutionRoot.Get<Lazy<T>>() or IResolutionRoot.Get<Func<T>>(), but usually that's a sign of bad design (service locator), so use constructor injection when it's feasible.
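For illustration, a rough sketch of the constructor-injection variant (MenuBuilder and ReportsViewModel are made-up names for your menu builder and one of your view models; depending on your Ninject version, Lazy<T>/Func<T> resolution may require the Ninject.Extensions.Factory package):

using System;

// Stand-in for one of the "particular view models" from the question.
public class ReportsViewModel { }

public class MenuBuilder
{
    private readonly Lazy<ReportsViewModel> _reports;

    // Ninject injects the Lazy<T> wrapper; nothing is constructed yet.
    public MenuBuilder(Lazy<ReportsViewModel> reports)
    {
        _reports = reports;
    }

    public void Build(bool userMayUseReports)
    {
        if (userMayUseReports)
        {
            // The view model is instantiated only here, on first access.
            AddMenuEntry(_reports.Value);
        }
    }

    private void AddMenuEntry(object viewModel)
    {
        // add the entry to the menu ...
    }
}

With a Func<ReportsViewModel> instead, every invocation of the func would create a fresh instance, as noted above.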
EDIT: When is the "enabling of the user" happening? Is it a one time thing? What is being displayed before and after?
There might be other/better designs to achieve this but it's hard to say with that little information.

execution vs. call join point

I have two different aspect classes to count the number of non-static method calls for an execution of a test program. The first aspect counts methods on "call" join points:
pointcut methodCalls() : call (!static * test..*(..));
before(): methodCalls() {
counter.methodCallCounter();
}
while the second aspect counts methods on "execution" join points:
pointcut methodCalls() : execution (!static * test..*(..));
before(): methodCalls() {
counter.methodCallCounter();
}
methodCallCounter() is a static method in the counter class.
The number of method calls for a small test program is the same. But when I replace the test program with a larger program, the number of method calls in the second aspect class (with the execution pointcut) is higher than the number of method calls in the aspect class with the call pointcut. This is reasonable, since the call join point does not pick out the calls made with super and therefore does not count them.
However, I encountered a case where, for a specific execution of the program, the number of non-static method calls in the aspect class with the call pointcut was higher than the number of method calls in the aspect class with the execution pointcut. I cannot find any explanation for why this is happening. Any thoughts about the reason for the second situation are appreciated.
Actually the explanation is quite simple if you understand the basic difference between call() and execution() pointcuts: while the former intercepts the callers (i.e. the sources of method calls), the latter intercepts the method executions themselves, no matter where the calls originate from.
So how can the number of interceptions triggered by both pointcuts differ?
If you call JRE/JDK methods from your own code, AspectJ can weave into your calls, but not into the execution joinpoints within the JDK (unless you have created a woven JDK as a preparatory step). Thus, the number of calls will be higher than the number of executions.
Similarly, if you call methods in third party libraries which you have not woven with AspectJ because they were not on the in-path during LTW or CTW, again the executions will not be captured.
Last, but not least, it can happen the other way around if your own woven code is called by third party libs or by JRE/JDK classes. In this case the counted number of executions will be higher than the number of calls because they originate from places outside the control of your AspectJ code.
Generally, in all cases the reason is the difference between overall used code and the subset of woven code. In other words: the difference between code under and beyond your (or the aspects') control.
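As a minimal, hypothetical illustration of the last case (an execution() join point counted with no matching call() join point), assuming the test package from your pointcuts is the weaving scope:

package test;

import java.util.Arrays;
import java.util.Comparator;

public class CallbackDemo {
    public static void main(String[] args) {
        Integer[] data = { 3, 1, 2 };

        // The anonymous comparator is a class in the 'test' package, so its
        // non-static compare(..) method is matched by both pointcuts.
        Comparator<Integer> byValue = new Comparator<Integer>() {
            @Override
            public int compare(Integer a, Integer b) {
                return Integer.compare(a, b);
            }
        };

        // Arrays.sort() lives in the (unwoven) JDK and invokes compare(..)
        // internally: the execution() join point is woven and counted, but
        // the matching call() join point sits inside the JDK and is never
        // seen, so the execution counter ends up higher than the call counter.
        Arrays.sort(data, byValue);
    }
}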

Generate a mock object with a method which raises an event

I am working on a VB.NET project which requires the extensive use of unit tests, but I am having problems mocking one of the classes.
Here is a breakdown of the issue:
Using NUnit and Rhino Mocks 3.6
VS2010 & VB.NET
I have an interface which contains a number of methods and an event.
The class which implements that interface raises the event when one of the methods is called.
When I mock the object in my tests I can stub methods and create/assert expectations on the methods with no problems.
How do I configure the mock object so that when a method is called the event is raised, so that I can assert that it was raised?
I have found numerous posts using C# which suggest code like this:
mockObject.MyEvent += null...
When I try this, 'MyEvent' does not appear in IntelliSense.
I'm obviously not configuring my test/mock correctly, but with so few VB.NET examples out there I'm drawing a blank.
Sorry for my lack of VB syntax; I'm a C# guy. Also, I think you should be congratulated for writing tests at all, regardless of test first or test last.
I think your code needs refactoring. It sounds like you have an interface that requires implementations to contain an event, and then another class (which you're testing) depends on this interface. The code under test then executes the event when certain things happen.
The question in my mind is, "Why is it a publicly exposed event?" Why not just a method that implementations can define? I suppose the event could have multiple delegates being added to it dynamically somewhere, but if that's something you really need, then the implementation should figure out how that works. You could replace the event with a pair of methods: HandleEvent([event parameters]) and AddEventListener(TheDelegateType listener). I think the meaning and usage of those should be obvious enough. If the implementation wants to use events internally, it can, but I feel like that's an implementation detail that users of the interface should not care about. All they should care about is adding their listener and that all the listeners get called. Then you can just assert that HandleEvent or AddEventListener were called. This is probably the simplest way to make this more testable.
If you really need to keep the event, then see here for information on mocking delegates. My advice would be to mock a delegate, add it to the event during set up, and then assert it was called. This might also be useful if you need to test that things are added to the event.
Also, I wouldn't rely on IntelliSense too much. Mocking is done via some crafty IL code, I believe. I wouldn't count on IntelliSense to keep up with the members of its objects, especially when you start getting beyond normal methods.
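For what it's worth, here is a rough C# sketch of that last suggestion (your scenario is VB.NET, but the Rhino Mocks calls are the same; IAlarm and its Triggered event are hypothetical stand-ins for your real interface):

using System;
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical interface with an event, standing in for the real one.
public interface IAlarm
{
    event EventHandler Triggered;
}

[TestFixture]
public class AlarmEventTests
{
    [Test]
    public void Raised_event_invokes_the_subscribed_listener()
    {
        // Mock a delegate and subscribe it to the event, as suggested above.
        var listener = MockRepository.GenerateMock<EventHandler>();
        var alarm = MockRepository.GenerateMock<IAlarm>();
        alarm.Triggered += listener;

        // Make the mock raise its event; the "+= null" form is how
        // Rhino Mocks identifies which event you mean.
        alarm.GetEventRaiser(a => a.Triggered += null)
             .Raise(alarm, EventArgs.Empty);

        // Assert the mocked delegate was actually called.
        listener.AssertWasCalled(
            h => h(Arg<object>.Is.Anything, Arg<EventArgs>.Is.Anything));
    }
}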

Custom performance profiler for Objective-C

I want to create a simple-to-use and lightweight performance profiling framework for Objective-C. My goal is to measure the bottlenecks of my application.
Just to mention that I am not a beginner, and I am aware of Instruments/Time Profiler. This is not what I am looking for. Time Profiler is a great tool, but it is too developer-oriented. I want a framework that can collect performance data from QA or pre-production users, and even be incorporated in a real production environment to gather real data.
The main part of this framework is the ability to measure how much time was spent in an Objective-C message (I am going to profile only Objective-C messages).
The easiest way is to start a timer at the beginning of a message and stop it at the end. It is the simplest way, but its disadvantage is that it is too tedious and error-prone: if a message has more than one return path, it requires adding the "stop timer" code before each return.
I am thinking of using method swizzling (just to note that I am aware that Apple is not happy with method swizzling, but these profiled builds will be used internally only and will not be uploaded to the App Store).
My idea is to mark each message I want to profile and to automatically generate the code for the swizzled method (maybe using macros). When started, the application will swizzle the original selector with the generated one. The generated one will just start a timer, call the original method and then stop the timer. So in general the swizzled method will be just a wrapper around the original one.
One of the problems with the above idea is that I cannot think of an easy way to automatically generate the methods to use for swizzling.
So I will greatly appreciate it if anyone has ideas on how to automate the whole process. The perfect scenario is to write just one line of code anywhere, mentioning the class and the selector I want to profile, and to have the rest generated automatically.
I will also be very thankful if you have any other idea (besides method swizzling) of how to measure the performance.
I came up with a solution that works pretty well for me. First, just to clarify: I was unable to find an easy (and fast) way to automatically generate the appropriate swizzled methods for arbitrary selectors (i.e. with arbitrary arguments and return values) using only the selector name. So I had to specify the argument types and the return value for each selector, not only the selector name. In reality, it should be relatively easy to create a small tool that parses all source files and automatically detects the argument types and return value of each selector we want to profile (and prepares the swizzled methods), but right now I don't need such an automated solution.
So right now my solution includes the above ideas for method swizzling, some C++ code and macros to automate and minimize some of the coding.
First, here is the simple C++ class that measures time:
class PerfTimer
{
public:
    PerfTimer(PerfProfiledDataCounter* perfProfiledDataCounter);
    ~PerfTimer();

private:
    uint64_t _startTime;
    PerfProfiledDataCounter* _perfProfiledDataCounter;
};
I am using C++ so that the destructor is executed when the object exits the current scope. The idea is to create a PerfTimer at the beginning of each swizzled method; it will take care of measuring the elapsed time for that method.
PerfProfiledDataCounter is a simple struct that counts the number of executions and the total elapsed time (so it can work out the average time spent).
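For completeness, a sketch of how the constructor/destructor pair might look (the field names of PerfProfiledDataCounter are hypothetical; mach_absolute_time() is just one cheap way to take timestamps):

#include <cstdint>
#include <mach/mach_time.h>

// Hypothetical layout: one execution counter plus a running sum of ticks.
struct PerfProfiledDataCounter
{
    uint64_t executionCount;
    uint64_t totalElapsedTicks;
};

PerfTimer::PerfTimer(PerfProfiledDataCounter* perfProfiledDataCounter)
    : _startTime(mach_absolute_time())
    , _perfProfiledDataCounter(perfProfiledDataCounter)
{
}

PerfTimer::~PerfTimer()
{
    // Runs automatically when the timer leaves the scope of the swizzled
    // method, so every return path is covered without extra code.
    _perfProfiledDataCounter->executionCount += 1;
    _perfProfiledDataCounter->totalElapsedTicks += mach_absolute_time() - _startTime;
}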
Also, for each class I'd like to profile, I create a category named "__Performance_Profiler_Category" and make the class conform to the "__Performance_Profiler_Marker" protocol. To make this easier, I use some macros that automatically create such categories. I also have a set of macros that take the selector name, return type and argument types and create the profiling selectors for each selector name.
For all of the above tasks, I've created a set of macros to help me. I also have a single file with the .mm extension that registers all classes and all selectors I'd like to profile. On app start, I use the runtime to retrieve all classes that conform to the "__Performance_Profiler_Marker" protocol (i.e. the registered ones) and search for selectors that are marked for profiling (these selectors start with a predefined prefix). Note that this .mm file is the only file that needs the .mm extension; there is no need to change the file extension for each class I want to profile.
Afterwards, the code swizzles the original selectors with the profiled ones. In each profiled one, I just create a PerfTimer and call the swizzled method.
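Conceptually, each generated wrapper looks roughly like this (the selector and counter names are hypothetical; after the implementations have been exchanged, calling the prefixed selector actually invokes the original method):

// Inside the generated "__Performance_Profiler_Category" category (.mm file).
- (void)__performance_profiler_doWork
{
    PerfTimer timer(&doWorkCounter);       // starts timing on construction
    [self __performance_profiler_doWork];  // runs the original -doWork after swizzling
}                                           // timer destructor records the elapsed time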
In brief, that is my idea, and it turned out to work pretty smoothly.