WebLogic: Getting TPS through MBean

Using ThreadPoolRuntime, I can get the Throughput attribute, which is "the mean number of requests completed per second". That's not what I want: I want a realtime figure, not a mean.

Requests per second is by its nature an average, so I'm not too sure what you mean by a realtime figure. Do you want the number of requests completed in the last second?
The ApplicationRuntimes/[appname]/WorkManagerRuntimes/default/CompletedRequests attribute gives the total number of requests completed for one application; you can use it to calculate an RPS figure over whatever timeframe you want.
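For reference, here's a rough sketch of how you could poll that counter over a fixed window through JMX. The host, port, credentials, and the exact ObjectName key properties below are placeholders (browse the runtime MBean tree to find the right name for your server and application), and the WebLogic t3 client jar must be on the classpath:

```java
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class WorkManagerRps {
    public static void main(String[] args) throws Exception {
        // Connect to the WebLogic Runtime MBean server.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");      // placeholder credentials
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Placeholder ObjectName -- adjust the key properties to your setup.
            ObjectName wm = new ObjectName("com.bea:ServerRuntime=AdminServer,"
                    + "ApplicationRuntime=myapp,Type=WorkManagerRuntime,Name=default");

            long before = (Long) conn.getAttribute(wm, "CompletedRequests");
            Thread.sleep(10_000);                             // 10-second sample window
            long after = (Long) conn.getAttribute(wm, "CompletedRequests");
            System.out.printf("~%.1f requests/second%n", (after - before) / 10.0);
        }
    }
}
```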

Unless this is a custom work manager's thread pool, the number you're going to get back isn't going to be terribly meaningful. And even in the case of a custom thread pool assigned to your particular application component (EJB, WAR file, etc.), the number still isn't likely to mean what you're looking for.
The thread pool is used to perform all work for that component (or, in the case of the default thread pool, all work for the server, both internal and client-driven). This means that requests of wildly different 'cost' in terms of CPU and execution time go through the same pool.
What is the problem that you're trying to solve? Is it an understanding of how many requests per second are occurring for particular application components? You might want to look at WLDF as an alternative source for this kind of data, although in either case you'll need to post-process information to get something useful.

Related

ISO-8583 message processing (defining priority of messages)

I need to get an understanding of an ISO-8583 message platform. Let's say I want to perform authorization of a card transaction, and in real time, at a particular instant, I get 100,000 requests from the network (VISA/MasterCard), all for authorization. How do I define the priority of their requests and responses? Can the connection pool handle it (in my case it's HikariCP)? How is this done by banks/financial institutions when authorizing a request? Please give me some insight into how to manage all these requests. Should I go for an MQ?
Tech used: Spring Boot, Hibernate, spring-tcp-starter.
Your question doesn't seem to be very well researched, as there are a ton of switch platforms out there that do this today, and many of their technology guides can be found on the web, including those from major vendors like ACI, FIS, AJB, etc., if you look hard enough.
I have worked with several ISO interface specifications, commercial switches, and home-grown platforms, and the core realtime processing is actually pretty consistent across them.
Prioritization information is generally found in each ISO-8583 message processing specification, and it is made explicitly clear in almost every specification I've ever read that was written by someone familiar with ISO-8583 and not just making up their own variant or copying someone else's.
That said, in general and at a high level, authorization/financial requests (0100, 0200) always have higher priority than force-post (0x20) messages.
Administrative messages in the 05xx, 06xx, and 08xx ranges sometimes also get bumped above other advices, but these are still advices; auths/financials are almost always processed first, as they (a) impact the customer and (b) have much tighter timers than any advice, usually tighter by a factor of two or more.
Most switches I have seen handle the core authorization path entirely in memory, without going to MQ or some other disk-based queue, although there is sometimes some home-grown middleware involved. Non-realtime processes, by contrast, regularly use MQ or disk queuing to feed work to processes that sit outside the approval path, such as store-and-forward (SAF) processing; even many of these keep the front of their queue in memory only.
It is also important to differentiate between 100,000 requests and 100,000 transactions. The various exchanges, both internal and external, make a big difference in the number of actual requests/responses in flight at any given time: a basic transaction can be accomplished in as little as two messages, but some of the more complex ones can easily exceed 20 messages just for a pre-authorization or a completion component.
If you are dealing largely with batch transaction bursts, I can see the queuing challenge, but almost every application I have seen has separate max-in-flight limits for advices and requests, sometimes even with different timers, and the apps pumping the transactions almost always wait for the response before sending more. This works fine for just about everyone, including big posting batches from retailers and card networks. So if your app doesn't have such limits, you probably need to add them.
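To illustrate (this is a hypothetical sketch, not any particular switch's implementation), an in-memory priority scheme along the lines described above could be as simple as a priority queue keyed on the MTI; the Iso8583Message type and the ranking are made up for the example:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical message wrapper; 'mti' is the ISO-8583 message type indicator, e.g. "0200".
record Iso8583Message(String mti, byte[] payload) {}

public class AuthorizationQueue {
    // One possible ordering per the discussion above: auths/financials first,
    // then 05xx/06xx/08xx administrative messages, then other advices (e.g. 0x20 force posts).
    private static int rank(String mti) {
        if (mti.equals("0100") || mti.equals("0200")) return 0;
        char clazz = mti.charAt(1);
        if (clazz == '5' || clazz == '6' || clazz == '8') return 1;
        return 2;
    }

    private final PriorityBlockingQueue<Iso8583Message> queue =
            new PriorityBlockingQueue<>(1024,
                    Comparator.comparingInt((Iso8583Message m) -> rank(m.mti())));

    public void submit(Iso8583Message m) { queue.put(m); }

    public Iso8583Message next() throws InterruptedException { return queue.take(); }
}
```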
In fact, your 100,000 requests should be sorted by (Terminal ID and/or Merchant ID) + (timestamp/local timestamp) + (STAN and/or RRN).
Duplicated transaction requests are expected to be rejected.
If you are simulating multiple requests from a single terminal (or host) with the same test card details, increasing the STAN/RRN would be the way to handle it.
Please refer to previous answers about STAN and RRN ISO 8583 fields.
In ISO message, what's the use of stan and rrn ?
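To make the duplicate rule concrete, here is a minimal hypothetical sketch of such a check; the composite key mirrors the sort key above (terminal ID, local timestamp, STAN), and the class name is made up:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical duplicate filter: a transaction is identified by
// terminal ID + local transaction timestamp + STAN.
public class DuplicateFilter {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    /** Returns false if this transaction key was already processed. */
    public boolean accept(String terminalId, String localTimestamp, String stan) {
        return seen.add(terminalId + "|" + localTimestamp + "|" + stan);
    }
}
```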

How can I run a scheduling process incrementally?

Ideally I want to run a scheduling process, but it needs to run incrementally.
Scheduling:
Given a set of resources R1, R2, ..., Rn, we want to choose a resource based on a set of constraints and assign it to an entity for a given period of time in a day. Once a resource is assigned for a given time period, we cannot use that resource at the same time. Does this look similar to meeting scheduling?
The scheduling process goes something like this:
At the beginning of time, no resources are allotted to any entities.
When a request comes in from a particular entity for a resource, we find a resource for the given time period that fits the criteria and return it in a JSON response.
As more requests come in, we keep the existing resource-entity assignments; only the newer requests are solved. So the current state needs to be stored and made available to future requests.
How can I do this with JSON requests/responses?
Is there any example I can use for reference?
The attached diagram shows that this might be possible.
In the user guide, take a look at Continuous Planning and Real-time planning (including daemon mode).
Note that if you may only assign one resource at a time and you can't reassign existing resources, then the problem is not NP-hard. That means there are no big cost savings to be had and no need to use OptaPlanner (Drools, for example, suffices).
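To illustrate that simpler approach, here is a minimal greedy sketch (all names are hypothetical) that keeps existing assignments between requests and never reassigns them; your API layer would translate the incoming JSON into a Slot and the returned resource back into a JSON response:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical incremental scheduler: keeps existing assignments and greedily
// books the first resource that is free for the requested time slot.
public class IncrementalScheduler {
    record Slot(int day, int startHour, int endHour) {
        boolean overlaps(Slot o) {
            return day == o.day && startHour < o.endHour && o.startHour < endHour;
        }
    }

    record Booking(String entity, Slot slot) {}

    private final List<String> resources;                       // R1..Rn
    private final Map<String, List<Booking>> bookings = new HashMap<>();

    public IncrementalScheduler(List<String> resources) {
        this.resources = resources;
    }

    /** Returns the booked resource id, or null if no resource is free for the slot. */
    public synchronized String book(String entity, Slot slot) {
        for (String r : resources) {
            List<Booking> taken = bookings.computeIfAbsent(r, k -> new ArrayList<>());
            if (taken.stream().noneMatch(b -> b.slot().overlaps(slot))) {
                taken.add(new Booking(entity, slot));  // state kept for future requests
                return r;
            }
        }
        return null;  // no resource satisfies the constraints
    }
}
```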

How does performing processing server-side affect the overall performance of a site?

I'm working on an application that will process data submitted by the user and compare it with past logged data. I don't need to return a response to the POST straight away; I just need to process it. This "processing" involves logging the response (in this case a score from 1 to 10) that's submitted by the user every day, then comparing it against the previous scores they submitted. Then, if something is found, do something (not sure yet, maybe email).
I'm worried, though, about the effectiveness of doing this and how it could affect the site's performance. I'd like to keep it server-side so the calculation script isn't exposed. The site is only dealing with 500-1500 responses (users) per day, so it isn't a massive amount, but I'm interested to know whether this route of processing will work. The server the site will be hosted on won't be anything special, probably one of the small(est) AWS instances.
Also, I will be using Node.js and a SQL/PostgreSQL database.
It depends on how you implement this processing algorithm and how heavy on resources it is.
If your task is completely synchronous, it is obviously going to block any incoming requests to your application until it is finished.
You can make this "processing application" a separate Node process and communicate with it only what you need.
If this is a heavy task and you worry about performance, it's a good idea to make it a separate Node process so it does not impact the serving of users.
I recommend googling "node js asynchronous" to better understand the subject.

Philips Hue command limitation

First of all, I'm developing my own C# library for controlling Philips Hue, which means I'm not using the official SDK. (I'm guessing that the SDK would make sure you don't run into these problems.)
I'm a little confused about the limitation in the Core concepts page in the API, which states:
We can’t send commands to the lights too fast. If you stick to around 10 commands per second to the /lights resource as maximum you should be fine. For /groups commands you should keep to a maximum of 1 per second.
I intend to respect this limitation, but does it still apply when you are performing GET requests on the /lights resource, or only when sending actual commands with PUT requests to /lights/<id>/state that change the state of the light? The same question goes for the /groups resource.
Also, is it even possible to damage anything by sending too many requests, or will it just take longer to get all the responses?
Edit:
My overall question is: How should I understand the API limitation?
A more specific sub-question is: Should I wait 100 ms before sending another /lights command, relative to when I received a response, or relative to when I sent the previous command?
Another sub-question is: should I consider this limitation only when using PUT requests on e.g. /lights/<id>/state, or on all request types (GET/PUT/POST/DELETE)?
I don't know if anything has changed in firmware updates, but I have discovered that the bridge might not be as simple as you would think, and that the API description isn't very clear.
I've done a little testing while running firmware 01009914.
The bridge seems to have some kind of queue of incoming commands. I sent {"bri":254} to a group 9 times, followed by 1 final command of {"bri":1}. From the first command until the light actually dims takes roughly 3-4 seconds, yet each time I sent a command the bridge replied almost instantly with a success token.
I did the same small tests sending other commands, 10 of each JSON object:
{"bri":254} 3-4 seconds
{"on":true, "bri":254} 6-7 seconds
{"on":true, "bri":254, "alert":"none", "effect":"none"} 12-13 seconds
This actually shows that each change of attributes takes roughly 0.3 seconds for the bridge to handle.
I would claim that for each attribute we change, the bridge takes about 300 ms to finish, and the limitation of commands should be understood as: as long as you stick to changing one attribute of a group per second, you should be fine.
Note: I only tried this with one group consisting of three lights. I don't know whether the bridge actually has a queue of incoming commands, and if it does, I don't know its size limit.
Edit:
Now we have some official clarification of the Hue System Performance.
I'm fairly certain that the 10 commands per second is a guideline to prevent failure of the Bridge, and is a technical limitation of the hardware. Any more than that and you're apt to overload the bridge. I believe this applies to commands as well as requests.
Both approaches are reasonable. For laziness' sake, you could simply wait 100 ms between commands, but I would only rely on that method if you don't plan on any other interactions with the bridge.
I consider this limitation on all request types.
You won't damage anything if you send commands too fast. However, if you send commands too fast the bridge might become unresponsive and/or some messages can be ignored.
When it comes to the bridge, the way I think of it is that the bridge is more or less single threaded, so it works best if you make sure you don't send the next command before the previous one has returned.
In practice we've found that this works much better than waiting a fixed time between each request. In fact, you can pretty much send commands as fast as you want as long as you wait for the previous one to finish.
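For illustration, here is a minimal sketch of that pattern, shown in Java rather than C# since the idea is language-agnostic; the bridge address and username are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: serialize state changes so the next command is only sent
// after the bridge has answered the previous one.
public class HueSender {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String base = "http://192.168.1.2/api/myusername"; // placeholders

    public synchronized String setState(int lightId, String jsonBody) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(base + "/lights/" + lightId + "/state"))
                .PUT(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        // Blocking send: returns only once the bridge has replied.
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```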
When you send a command to the bridge, the bridge has to then send it to the lamps through Zigbee. Since it's a mesh network in some cases the message has to make a couple of hops from lamp to lamp before it reaches the target. Depending on how many lamps you have and how many hops the signal needs to take, this can take a while. Also, it's possible that some messages randomly take much longer than others.
In general the system is not designed to handle very fast changes, but if you keep the above in mind you can make many cool effects :)

Work managers threads constraint and page cannot be displayed

We have memory-intensive processing for certain functionality, and we would like to limit the number of parallel requests to this processing. We are able to configure this by using "Work Managers" in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests are queued. In some situations the requests waiting in the queue can take 30-40 minutes to be processed. In simple testing, the browser showed "page cannot be displayed" due to a timeout after 15 minutes, while the actual response arrived after 1 hour.
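(For reference, a constraint like this is declared in WEB-INF/weblogic.xml roughly as below; the names are placeholders, and the servlet is pointed at the Work Manager through a wl-dispatch-policy init-param in web.xml.)

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- Placeholder names: a Work Manager capped at 3 concurrent threads -->
    <work-manager>
        <name>HeavyWorkManager</name>
        <max-threads-constraint>
            <name>HeavyMaxThreads</name>
            <count>3</count>
        </max-threads-constraint>
    </work-manager>
</weblogic-web-app>
```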
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
I'd appreciate any thoughts on this.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
There might be something, but I didn't actually check, since it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for a long-running process like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that the user could refresh the page and queue more and more jobs).
So, the right answer here, in my opinion, is: use asynchronous processing; this is the perfect use case for it. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and return a page with a request ID telling the user to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. The user experience will really be better this way, and you'll face fewer problems than with a browser held open.
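A minimal sketch of the enqueue side (the JNDI names and the requestId property are placeholders; a message-driven bean would consume the queue, do the heavy work, and record the result under the request ID):

```java
import java.util.UUID;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.naming.InitialContext;

// Sketch: enqueue the job and return immediately with a request ID the user can poll.
public class JobSubmitter {
    public String submit(String payload) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // placeholder
        Queue queue = (Queue) ctx.lookup("jms/HeavyJobQueue");                          // placeholder

        String requestId = UUID.randomUUID().toString();
        try (JMSContext jms = cf.createContext()) {
            jms.createProducer()
               .setProperty("requestId", requestId)
               .send(queue, payload);
        }
        return requestId; // show this to the user; they poll a status page with it
    }
}
```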
1) Use some other tool (not a browser), like wget, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans, send them a JMS message, and don't worry about timeouts.
Perhaps Quartz can do what you need? Start a job and check in on it as needed.