How to profile a Java EE REST endpoint with JProfiler 10 - Jackson

I use JProfiler 10 to profile a call to a Java EE REST endpoint that serializes its response to JSON. My guess is that a lot of time is spent on serialization.
When I record the call tree, the serialization overhead is not included in the metrics, so I only see the pure business logic there (1 second).
When I use the JEE Servlet probe I see the correct total time (4 seconds) but no further detail: no other method calls except the bare call to the resource path.
I have tried disabling all filters, but that did not change anything.
How can I profile everything that is going on during that servlet call?
Any help appreciated. Thank you.

I have found the answer and solution: the call tree view shows "Thread status: Runnable" by default. I had to change this to "Thread status: All states" to see the "correct" times - in my case a lot of time was spent in "Net I/O", which was not obvious because it had been hidden before.

Related

Capture start of long-running POST - VB.NET MVC4

I have a subroutine in my Controller
<HttpPost>
Sub Index(Id, varLotsOfData)
    ' Point B.
    ' By the time execution gets here, all the data has been accepted by the server.
End Sub
What I would like to do is capture the Id of the inbound POST and mark, for example, a database record to say "Id xx is receiving data".
The POST receive can take a long time as there is lots of data.
When execution gets to point B I can mark the record "All data received".
Where can I place this type of "pre-POST completed" code?
I should add - we are receiving the POST data from clients that we do not control; that is, it is most likely a client's server sending the data, not a web browser client that we have served up from our web server.
UPDATE: This is looking more complex than I had imagined.
I'm thinking that a possible solution would be to inspect the worker processes in IIS programmatically. You can do this via the IIS Manager, for example - How to use IIS Manager to get Worker Processes (w3wp.exe) details information?
From your description, you want to show on the client page that the method is executing (you could also show a loading gif), and when execution has completed, show a message to the user that it has finished.
The answer is simple: use SignalR.
Here are some references:
Getting started with SignalR 1.x and MVC4
Creating your first SignalR hub MVC project
Hope this helps.
If I understand your goal correctly, it sounds like HttpRequest.GetBufferlessInputStream might be worth a look. It allows you to begin acting on incoming post data immediately and in "pieces" rather than waiting until the entire post has been received.
An excerpt from Microsoft's documentation:
...provides an alternative to using the InputStream property, which waits until the whole request has been received. In contrast, the GetBufferlessInputStream method returns the Stream object immediately. You can use the method to begin processing the entity body before the complete contents of the body have been received and asynchronously read the request entity in chunks. This method can be useful if the request is uploading a large file and you want to begin accessing the file contents before the upload is finished.
So you could grab the beginning of the post, and provided the client sends the ID towards the beginning of its transmission, you may be able to pull it out. Of course, this means reading raw byte data, which would need to be decoded before you can extract the inbound post's ID. There's also a buffered counterpart that allows the stream to be read in pieces but still builds a complete request object for processing once it has been completely received.
Create a custom action filter.
Action filters are for executing filtering logic either before or after an action method is called. They are custom attributes that provide a declarative means to add pre-action and post-action behavior to a controller's action methods.
Specifically, you'll want to look at:
OnActionExecuted – this method is called after a controller action is executed.
Here are a couple of links:
http://www.infragistics.com/community/blogs/dhananjay_kumar/archive/2016/03/04/how-to-create-a-custom-action-filter-in-asp-net-mvc.aspx
http://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/understanding-action-filters-vb
Here is a lab, but I think it's C#
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-custom-action-filters

WebKitGTK: about webkit_web_view_load_uri

I have a question about WebKitGTK.
These days I am writing a program that analyses web pages for suspicious content.
When the "load-failed" or "load-changed" signal is emitted with WEBKIT_LOAD_FINISHED,
the program analyses the next page by calling webkit_web_view_load_uri again and again.
(http://webkitgtk.org/reference/webkit2gtk/stable/WebKitWebView.html#webkit-web-view-load-uri)
The question I want to ask is about a memory problem.
The more pages the program analyses, the bigger WebKitWebProcess grows.
The return value of webkit_back_forward_list_get_length() also keeps increasing as pages are analysed. Where should I free memory?
Do you know how I can solve this problem, or can you give me any advice on where I could get help?
Thank you very much :-) Have a nice day ^^
In theory, what you're doing is perfectly fine, and you shouldn't need to change your code at all. In practice, WebKit has a lot of memory leaks, and programmatically loading many new URIs in the same web view is eventually going to be problematic, as you've found.
My recommendation is to periodically, every so many page loads, create a new web view that uses a separate web process, and destroy the original web view. (That will also reset the back/forward list to stop it from growing, though I suspect the memory lost to the back/forward list is probably not significant compared to memory leaks when rendering the page.) I filed Bug 151203 - [GTK] Start a new web process when calling webkit_web_view_load functions? to consider having this happen automatically; your issue indicates we may need to bump the priority on that. In the meantime, you'll have to do it manually:
Before doing anything else in your application, set the process model to WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES using webkit_web_context_set_process_model(). (If you are not creating your own web contexts, you'll need to use the default web context webkit_web_context_get_default().)
Periodically destroy your web view with gtk_widget_destroy(), then create a new one using webkit_web_view_new() et al. and attach it somewhere in your widget hierarchy. (Be sure NOT to use webkit_web_view_new_with_related_view(), as that's how you get two web views to use the same web process.)
If you have trouble getting that solution to work, an extreme alternative would be to periodically send SIGTERM to your web process to get a new one. Connect to WebKitWebView::web-process-crashed, and call webkit_web_view_load_uri() from there. That will result in the same web view using a new web process.

Asynchronous requests in Rails with Apache/Phusion Passenger

I've implemented a controller method which makes a couple of requests to a third-party API, which is quite slow. Further, I've utilized one of Thin's asynchronous features:
# This informs Thin that the request will be handled asynchronously
self.response_body = ''
self.status = -1
Thread.new do
  # This will be the response to the client
  env['async.callback'].call('200', {}, "Response body")
end
(blog post about it)
However, I'm curious whether this could be implemented without using Thin, or to be more precise, whether it could be accomplished with Apache/Phusion Passenger.
Any suggestions, pointers, links, comments or answers are appreciated. Thanks
Not sure whether this is possible with Passenger 4. In this article they announced having made a complete redesign to support the evented model. As they also have plans to support Node.js, I would expect the above method to work.
However if you look at this post from them, they clearly say:
... There is another way to support high I/O concurrency though: multi-threading ...
And so this leaves multithreaded servers as the only serious options for handling streaming support in Rails apps....
Rails is just not designed for the evented process model, but it supports the multi-threaded model quite well, and a multithreaded setup can be achieved with Phusion Passenger Enterprise.
Another option might be to extract this problem to another application (see Railscast).
So, for example, instead of directly calling a third-party API in your controller, which will spend most of its time blocking on the I/O call, you process the request in a background job. The user gets an immediate response and then subscribes to a Faye message channel. In your background job, when the third-party call is ready, you publish the response to that Faye channel.
PROFIT.

30 sec periodic task to poll external web service and cache data

I'm after some advice on polling an external web service every 30 seconds from a Domino server-side action.
A quick bit of background...
We track the location of cars through the TomTom API. We now have a requirement to show this in our web app, overlaid onto a map (Google, Bing, etc.) and mashed up with other lat/long data from our application. Think of it as dispatching calls to taxis, where we want to assign those calls to the taxis (it's not actually taxis/calls, but it is a similar process). We refresh the dispatch controllers' screens quite aggressively, so they can see the status of all the objects and assign to the nearest car. If we trigger the pull of data from the refresh of the users' screens, we get into some tricky control logic on the server side, or else we will hit the maximum allowable requests per minute to the TomTom API.
Originally I was going to schedule an agent to poll the web service, write to a cached object in our app, and have the refreshing dispatch controllers' screens pull the data from our cache... great, except the user requirement is that our cache must be updated every 30 seconds. I can create a program document that runs every 1 minute, but that is still not aggressive enough.
So we are currently left with either our .NET guy creating a service that polls TomTom every 30 seconds, which we then retrieve from, or me figuring out a way to do it in Domino. It would be nice to do it in the Domino database, and not in some stand-alone Java app or .NET service, to keep as much of the logic as possible in one system (Domino).
We use backing beans heavily in our system. I hope to test this later today, but does this seem like a sensible route to go down?:
Spawning threads in a JSF managed bean for scheduled tasks using a timer
...or are there limitations I am not aware of? Has anyone tackled this before in Domino, or does anyone have any comments?
Thanks in advance,
Nick
Check out DOTS (Domino OSGi Tasklet Service): http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=OSGI%20Tasklet%20Service%20for%20IBM%20Lotus%20Domino
It allows you to define background Java tasks on a Domino server that have all the advantages of agents (can be scheduled or triggered) with none of the performance or maintenance issues.
Cache the data in a bean (application or session scoped) together with a date object that holds the last-refreshed time. When the data is requested, check the last cached date against the current time; if 30 seconds or more have passed, refresh the data.
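A minimal sketch of that lazy-refresh idea in plain Java (class and method names such as VehicleCache, getPositions() and fetchFromTomTom() are only placeholders, not an existing API):

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Lazy-refresh cache, intended to live in a single application-scoped managed bean.
public class VehicleCache {
    private static final long MAX_AGE_MS = 30 * 1000;   // required freshness: 30 seconds
    private Date lastRefreshed = null;
    private Map<String, Object> positions = new HashMap<String, Object>();

    public synchronized Map<String, Object> getPositions() {
        if (lastRefreshed == null
                || System.currentTimeMillis() - lastRefreshed.getTime() >= MAX_AGE_MS) {
            positions = fetchFromTomTom();   // placeholder for the real web service call
            lastRefreshed = new Date();
        }
        return positions;
    }

    private Map<String, Object> fetchFromTomTom() {
        // ... call the external web service here and map the result ...
        return new HashMap<String, Object>();
    }
}

The drawback of this purely request-driven approach is that the first request after each 30-second window pays the cost of the web service call; the polling-thread approach described next avoids that.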
A way of doing it would be to write a managed bean which is created in the application scope (i.e. there can only be one). In this managed bean you take care of the 30-second polling of the web service with a good old Java web service client and a Java thread that you start when the managed bean is created, something like this:
import java.util.HashMap;
import java.util.Map;

public class ServicePoller {
    // Only one polling thread for the whole application scope
    private static ServicePollThread myThread = null;

    public ServicePoller() {
        if (myThread == null) {
            myThread = new ServicePollThread();
            new Thread(myThread).start();
        }
    }
}

class ServicePollThread implements Runnable {
    private Map<String, Object> yourCache = new HashMap<String, Object>();
    private volatile boolean running = true;

    public void run() {
        while (running) {
            doPoll();                 // call the web service and update yourCache
            try {
                Thread.sleep(30000);  // wait 30 seconds between polls
            } catch (InterruptedException e) {
                running = false;
            }
        }
    }

    private void doPoll() {
        // ... call the web service and store the result in yourCache ...
    }
}
This managed bean will then poll the web service every 30 seconds and save its findings in a HashMap or some other managed bean. This way you don't need to run an agent or anything like that, and the dispatch screen simply retrieves its data from the cache.
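If the poller is registered as an application-scoped managed bean (here assumed to be named servicePoller in faces-config.xml), the dispatch screen's backing code could resolve it and read the cache roughly like this - a sketch, assuming you add a getCache() accessor on ServicePoller that exposes the thread's map:

import java.util.Map;
import javax.faces.context.FacesContext;

public class DispatchScreenBean {
    // Resolve the application-scoped poller bean (JSF 1.x / XPages style lookup).
    public Map<String, Object> getCachedPositions() {
        FacesContext context = FacesContext.getCurrentInstance();
        ServicePoller poller = (ServicePoller) context.getApplication()
                .getVariableResolver().resolveVariable(context, "servicePoller");
        // getCache() is an assumed accessor returning the poller's cached HashMap
        return poller.getCache();
    }
}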
Another option would be to write a servlet (that should be possible with the Extension Library, but I can't find the reference right now) which does the threading and reads the service for you. Then in your database you should be able to read the servlet's cache and use it wherever you need.
As Tim said, use DOTS, or as jjtbsomhorst said, a thread or an Eclipse job.
I've created a video describing DOTS: http://www.youtube.com/watch?v=CRuGeKkddVI&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=4&feature=plcp
Next Monday I'll publish a sample showing how to do threads and Eclipse jobs. Here is a preview video: http://www.youtube.com/watch?v=uYgCfp1Bw8Q&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=1&feature=plcp

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit a max message size problem, so I'm thinking it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch: the user is allowed to export the entire result set at any point. This means that for grid viewing purposes fetching the current page is fine, but when they want to do an export, I still need to make a call for all the data.
This puts me back at the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
In your case, I would probably do something like this:
Create a service with a "paged" GetData() method, where you specify the page index and the page size as additional parameters. This should give you a nice, clean interface for "regular" use, and it should not hit the maxMessageSize limits.
Create a second service (or method) that sends all the data - ideally, you could bundle it up into a ZIP file or something on the server before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out.
The maxMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service would just get flooded with large messages and thus brought to its knees. If you can, always try to keep that in mind and don't just jack up maxMessageSize to 2 GB - it might come back to bite you :-)