Padarn Opennetcf RESTful Services - compact-framework

Hi, I'm evaluating Padarn for my project and I implemented a very simple RESTful example (POST & GET). I need Padarn for my Windows CE 5.0/6.0 web project, and I bought a license.
The RESTful service works well, but its performance is not good enough.
According to Firebug, each request completes in about 80 ms on average, but after roughly 10 requests the time climbs to more than 120 ms, and this pattern repeats every 10-15 requests.
How can I improve performance and decrease response time?
This is my web server config:
<WebServer DefaultPort="80" MaxConnections="256" DocumentRoot="\NANDFlash\Inetpub\" Logging="false" UseSsl="false">
  <DefaultDocuments>
    <Document>index.html</Document>
  </DefaultDocuments>
  <httpHandlers>
    <assembly>WebAgent.dll</assembly>
    <add verb="POST" path="/mngmt" type="WebAgent.ManagmentHandler,WebAgent" />
    <add verb="GET" path="/notif" type="WebAgent.NotifHandler,WebAgent" />
  </httpHandlers>
  <VirtualDirectories />
  <Cookies />
  <Caching />
</WebServer>
And this is my handler class:
namespace WebAgent
{
    class ManagmentHandler : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.Write("OK");
            context.Response.Flush();
        }
    }
}
I need to produce the response in less than 80 ms.
Firebug's timing breakdown shows that the delay is mostly "waiting" time, which relates to the server-side code (the RESTful service).
I would appreciate it if you could help me.

I'm not certain there is much that can be done to improve the speed over what you already have. The path that handles this is pretty straightforward: Padarn is reusing an existing socket and reusing an existing handler class instance, so most of the time you see here is likely the time required to run the code (you've not said what sort of processor you're using) and to push the data out through the network stack.
Licensed builds perform slightly faster because the license check isn't done after the first pass, but I don't think it would gain you a 50% speed improvement.

Related

Spring sleuth Runtime Sampling and Tracing Decision

I am trying to integrate my application with Spring Sleuth.
I was able to do a successful integration and I can see spans getting exported to Zipkin.
I am exporting to Zipkin over HTTP.
Spring Boot version - 1.5.10.RELEASE
Sleuth - 1.3.2.RELEASE
Cloud - Edgware.SR2
But now I need to do this in a more controlled way, as the application is already running in production and people are scared of the overhead Sleuth can add through @NewSpan on the methods.
I need to decide at runtime whether a trace should be added or not (not talking about exporting). For example, for the actuator endpoints no trace is added at all; I assume this has no overhead on the application. Setting X-B3-Sampled = 0 skips exporting but still adds the tracing information. I want something like the skipPattern property, but at runtime.
Also: always export the trace if the service call exceeds a certain threshold or in case of an exception.
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
What about this solution? I guess this will work for sampling specific requests at runtime.
@Bean
public Sampler customSampler() {
    return new Sampler() {
        @Override
        public boolean isSampled(Span span) {
            logger.info("Inside sampling " + span.getTraceId());
            HttpServletRequest httpServletRequest = HttpUtils.getRequest();
            if (httpServletRequest != null && httpServletRequest.getServletPath().startsWith("/test")) {
                return true;
            } else {
                return false;
            }
        }
    };
}
people are scared of the overhead Sleuth can add through @NewSpan on the methods
Do they have any information about the overhead? Have they turned it on and seen the application start to lag significantly? What are they scared of? Is this a high-frequency trading application where every microsecond counts?
I need to decide at runtime whether a trace should be added or not (not talking about exporting). For example, for the actuator endpoints no trace is added at all; I assume this has no overhead on the application. Setting X-B3-Sampled = 0 skips exporting but still adds the tracing information. I want something like the skipPattern property, but at runtime.
I don't think that's possible. The instrumentation is set up by adding interceptors, aspects, etc., and they are registered when the application initializes.
Always export the trace if the service call exceeds a certain threshold or in case of an exception.
With the new Brave tracer instrumentation (Sleuth 2.0.0) you will be able to do this in a much easier way. Prior to that version you would have to implement your own version of a SpanReporter that inspects the tags (e.g., whether the span contains an error tag) and, if so, sends the span to Zipkin, otherwise not.
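In Sleuth 1.x, a rough sketch of that filtering reporter could look like the following (the delegate wiring and the "error" tag key are assumptions to illustrate the idea, not code from the answer):
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.SpanReporter;

// Hedged sketch: forward a span to the real (Zipkin) reporter only when it
// carries an error tag; all other spans are dropped instead of exported.
public class ErrorOnlySpanReporter implements SpanReporter {

    private final SpanReporter delegate; // e.g., the auto-configured Zipkin reporter

    public ErrorOnlySpanReporter(SpanReporter delegate) {
        this.delegate = delegate;
    }

    @Override
    public void report(Span span) {
        // "error" is the conventional tag key; adjust if your setup differs
        if (span.tags().containsKey("error")) {
            delegate.report(span);
        }
    }
}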
If I am not exporting spans to Zipkin, will there be any overhead from the tracing information?
Yes, there is, because the tracing data still needs to be passed around. However, the overhead is small.

How to reduce round trip to database in Search engine using REST

I have thousands of records in an MS SQL Server database table. To search them quickly from a web page, I created a WCF REST service that returns a list of records fetched from the database by keyword, converted into JSON, and displayed in a DIV just below an HTML textbox (like the Google search box).
I used a server-side cache object to avoid database hits to some extent.
But I am forced to hit the REST GET URL on every text change.
Any suggestions to make it faster?
There can be a way to reduce your REST calls. There are client-side caching techniques that allow you to cache the AJAX responses, so that the next time the same request is made the results are served from the cache. But you have to be very careful using such techniques, as they may end up giving wrong results and behavior.
See this answer. It is similar to your question, but the discussion is really interesting and will give you insight into implementing a client-side cache to reduce AJAX round trips.
As you're using REST, you're making an HTTP request to your service, so you can take advantage of the ASP.NET Output Cache.
The call will still hit the server, but it will automatically answer your request without running the code.
You do it like this:
[AspNetCacheProfile("CacheProfileName")]
[WebGet(UriTemplate = "{userName}")]
public String GetData(string userName) // the parameter name must match the UriTemplate variable
{
    // your code
}
If required, enable ASP.NET compatibility in your configuration file:
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
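The profile name passed to AspNetCacheProfile must match an output cache profile declared in web.config. A minimal sketch, assuming a 60-second cache varied by the userName parameter (the name and values here are placeholders, not from the original answer):
<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="CacheProfileName" duration="60" varyByParam="userName" enabled="true" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>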
See more here: https://msdn.microsoft.com/en-us/library/vstudio/ee230443%28v=vs.100%29.aspx
And here: http://blogs.msdn.com/b/endpoint/archive/2010/01/28/integrating-asp-net-output-caching-with-wcf-webhttp-services.aspx
Hope it helps.

30 sec periodic task to poll external web service and cache data

I'm after some advice on polling an external web service every 30 secs from a Domino server side action.
A quick bit of background...
We track the location of cars through the TomTom API. We now have a requirement to show this in our web app, overlaid onto a map (Google, Bing, etc.) and mashed up with other lat/long data from our application. Think of it as dispatching calls to taxis and assigning those calls to the taxis (it's not actually taxis and calls, but it is a similar process). We refresh the dispatch controllers' screens quite aggressively, so they can see the status of all the objects and assign to the nearest car. If we trigger the pull of data from the refresh of the users' screens, we get into some tricky control logic server side, or else we will hit the maximum allowable requests per minute to the TomTom API.
Originally I was going to schedule an agent to poll the web service, write to a cached object in our app, and have the refreshing dispatch controllers' screens pull the data from our cache... great, except the user requirement is that our cache must be updated every 30 seconds. I can create a Program document that runs every 1 minute, but that is still not aggressive enough.
So we are currently left with two options: our .NET guy creates a service that polls TomTom every 30 seconds and we retrieve from his service, or I figure out a way to do it in Domino. It would be nice to do it in the Domino database, and not in some standalone Java app or .NET service, to keep as much of the logic as possible in one system (Domino).
We use backing beans heavily in our system. I hope to test this later today, but does this seem like a sensible route to go down?
Spawning threads in a JSF managed bean for scheduled tasks using a timer
...or are there limitations I am not aware of? Has anyone tackled this before in Domino, or have any comments?
Thanks in advance,
Nick
Check out DOTS (Domino OSGi Tasklet Service): http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=OSGI%20Tasklet%20Service%20for%20IBM%20Lotus%20Domino
It allows you to define background Java tasks on a Domino server that have all the advantages of agents (can be scheduled or triggered) with none of the performance or maintenance issues.
You could cache the data in a bean (application or session scoped) and keep a date object holding the time of the last refresh. When the data is requested, check the last cached time against the current time; if 30 seconds or more have passed, refresh the data, as in the sketch below.
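A minimal sketch of that lazy-refresh idea (the class name, payload type, and fetchFromTomTom helper are illustrative placeholders, not from the answer):
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of an application-scoped caching bean: the data is refreshed
// at most once every 30 seconds, on demand, when a dispatch screen asks for it.
public class CarLocationCache {

    private static final long MAX_AGE_MS = 30 * 1000L;

    private Map<String, Object> cachedData;  // replace with your real payload type
    private long lastRefreshed = 0L;

    public synchronized Map<String, Object> getData() {
        long now = System.currentTimeMillis();
        if (cachedData == null || now - lastRefreshed >= MAX_AGE_MS) {
            cachedData = fetchFromTomTom(); // hypothetical helper: your TomTom call
            lastRefreshed = now;
        }
        return cachedData;
    }

    private Map<String, Object> fetchFromTomTom() {
        // call the TomTom api and map the results here (not shown)
        return new HashMap<String, Object>();
    }
}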
A way of doing it would be to write a managed bean that is created in the application scope (i.e., there can only be one). In this managed bean you take care of the 30-second polling of the web service via a good old Java web service implementation and a Java thread that you start when the managed bean is created, something like:
import java.util.HashMap;
import java.util.Map;

public class ServicePoller {
    private static ServicePollThread myThread = null;

    public ServicePoller() {
        if (myThread == null) {
            myThread = new ServicePollThread();
            new Thread(myThread).start();
        }
    }
}

class ServicePollThread implements Runnable {
    private Map<String, Object> yourCache = new HashMap<String, Object>();
    private volatile boolean running = true;

    public void run() {
        while (running) {
            doPoll();
            try {
                Thread.sleep(30000); // 30 seconds, matching the requirement
            } catch (InterruptedException e) {
                running = false; // treat interrupt as a stop signal
            }
        }
    }

    // ...
}
This managed bean will then poll the web service every 30 seconds and save its findings in a HashMap or some other managed bean's fields. This way you don't need to run an agent or anything like that, and the dispatch screen can retrieve its data straight from the cache.
Another option would be to write a servlet (that should be possible with the Extension Library, but I can't find the information right now) which does the threading and reads the service for you. Then, from your database, you should be able to read the servlet's cache and use it wherever you need; a sketch follows.
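If you go the servlet route, the background thread is typically owned by a context listener rather than the servlet itself. A rough sketch (class names are illustrative; ServicePollThread is the Runnable from the previous answer):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Hedged sketch: start the polling thread when the web application starts
// and stop it on shutdown; the servlet itself (not shown) only reads the cache.
public class PollerLifecycle implements ServletContextListener {

    private Thread pollerThread;

    public void contextInitialized(ServletContextEvent sce) {
        pollerThread = new Thread(new ServicePollThread());
        pollerThread.setDaemon(true); // don't keep the server alive on shutdown
        pollerThread.start();
    }

    public void contextDestroyed(ServletContextEvent sce) {
        if (pollerThread != null) {
            pollerThread.interrupt(); // ServicePollThread treats interrupt as stop
        }
    }
}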
As Tim said, DOTS; or, as jjtbsomhorst said, a thread or an Eclipse job.
I've created a video describing DOTS: http://www.youtube.com/watch?v=CRuGeKkddVI&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=4&feature=plcp
Next Monday I'll publish a sample how to do threads and Eclipse jobs. Here is a preview video: http://www.youtube.com/watch?v=uYgCfp1Bw8Q&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=1&feature=plcp

WSSecurityTokenSerializer ReadToken method performance

I have a Dispatch MessageInspector which is deserializing a SAML Token contained in the SOAP message header.
To do the deserialization I am using a variation of the following code:
List<SecurityToken> tokens = new List<SecurityToken>();
tokens.Add(new X509SecurityToken(CertificateUtility.GetCertificate()));
SecurityTokenResolver outOfBandTokenResolver = SecurityTokenResolver.CreateDefaultSecurityTokenResolver(new ReadOnlyCollection<SecurityToken>(tokens), true);
SecurityToken token = WSSecurityTokenSerializer.DefaultInstance.ReadToken(xr, outOfBandTokenResolver);
The problem I am seeing is that the performance of the ReadToken call varies depending on the account that is running the Windows service (in which the WCF service is hosted).
If the service is running as a Windows domain account, the elapsed time for the ReadToken call is virtually zero. When running as a local machine account, the call takes between 200 and 1000 milliseconds.
Can anyone shed any light on what is going on here and why the account running this bit of code makes a difference as to its performance?
Thanks,
Martin
When the service is running under a local account there is considerably more activity taking place; examples of this are:
Accessing and using C:\WINDOWS\system32\certcli.dll
Accessing and using C:\WINDOWS\system32\atl.dll
Attempting to access registry keys e.g.
HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration
None of this extra activity appears to occur when running under a domain account.
A quick search on the internet for "certcli.dll domain user" brings up Microsoft Knowledge Base article 948080, which sounds similar.
I am unsure how to resolve this, since ultimately a .NET method is being called (WSSecurityTokenSerializer.ReadToken) whose internals you have little to no control over.
This also appears to describe the same problem:
http://groups.google.com/group/microsoft.public.biztalk.general/browse_thread/thread/402a159810661bf6?pli=1

Asynchronous WCF Web Service Load Testing

I see several other questions about load testing web services, but as far as I can tell those all use synchronous load testing tools. (Meaning they send a ton of requests, but they go one at a time.)
I am looking for a tool where I can say, "I want 100 requests to be launched at the exact same time".
Now, I am new to the whole load testing thing, so it is possible that those tools are asynchronous and I am just missing it.
Anyway, in short, my question is: is there a good tool for load testing WCF web services asynchronously (i.e., with lots of threads)?
In general, I recommend you look at soapUI, for anything to do with testing web services. They do have load testing features in the Professional edition (I haven't used these yet).
In addition, they've just entered beta with a loadUI product. If it's anywhere near as good as the parent product, then it's worth a hard look.
You can use the Visual Studio load testing agent components to run on multiple client machines, which will allow you to run as asynchronously as you have machines to generate load.
There is a licence requirement for using this feature.
There are no tools that will allow you to apply a load at exactly the same instant (i.e. within milliseconds), but this is not necessary to load test an application correctly.
For most needs a single load test server running Visual Studio Ultimate edition will be more than enough to get an understand of how your webservice performs under load.
Visual Studio, and I imagine most other tools, will apply load in an asynchronous manner, but it sounds like you want to apply a set load all at once.
This is not really necessary, as in practice load is not applied to a service in this manner.
The best bet for services expecting high load is to load your service until a given number of "requests per second" is reached. Finding what level your application should expect is a bit trickier, but involves figuring out roughly how many users you would expect and the amount they will be using it over a given period.
The other test to do is to setup a load test harness and run the load up until either the webservice starts to perform badly or the test harness runs out of "oomph" and cannot create any more load.
For development-time testing you can use NLoad (http://nload.github.io) to run load tests on your development machine or testing environment. For example:
public class MyTest : ITest
{
    public void Initialize()
    {
        // Initialize your test, e.g., create a WCF client, load files, etc.
    }

    public void Execute()
    {
        // Send an HTTP request, invoke a WCF service, or whatever you want to load test.
    }
}
Then create, configure and run a load test:
var loadTest = NLoad.Test<MyTest>()
    .WithNumberOfThreads(100)
    .WithDurationOf(TimeSpan.FromMinutes(5))
    .WithDeleyBetweenThreadStart(TimeSpan.Zero)
    .OnHeartbeat((s, e) => Console.WriteLine(e.Throughput))
    .Build();

var result = loadTest.Run();