I need to order some inventory by sending multiple HTTP requests to a server from different devices at the same time. Can all of the requests complete at exactly the same instant, with no time difference at all between the transactions (not even a second)? Can this be achieved in Windows 8?
Thanks in advance,
No, you cannot control this. HTTP request timing is not something you have fine-grained control over. You make the requests and they either succeed or fail.
Several factors affect the timing, including how many requests are currently outstanding, what the server is processing, and network speed/latency. There is no 'request transaction' enclosing the operation, as you would have with a database.
What are you trying to accomplish though?
I need to get an understanding of the ISO-8583 message platform. Let's say I want to perform an authorization of a card transaction, and in real time, at a particular instant, I get 100,000 requests from the network (VISA/Mastercard), all for authorization. How do I define the priority of the requests and responses? Can the connection pool handle it (in my case it's HikariCP)? How is it done by banks/financial institutions when authorizing a request? Please give me some insight into how to manage all these requests. Should I go for an MQ?
Tech used: Spring Boot, Hibernate, spring-tcp-starter.
Your question doesn't seem to be very well researched, as there are a ton of switch platforms out there that do this today, and many of their technology guides can be found on the web, including those of major vendors like ACI, FIS, AJB, etc., if you look hard enough.
I have worked with several ISO interface specifications, commercial switches, and home-grown platforms, and they are actually pretty consistent in how they do the core real-time processing.
Guidance on prioritization is generally found in each ISO-8583 message processing specification, and it is made explicitly clear in almost every specification I've ever read that was written by someone familiar with ISO-8583, rather than someone making up their own variant or copying someone else's.
That said, in general and at a high level, authorization/financial requests (0100, 0200) always have higher priority than force-post (0x20) messages.
Administrative messages in the 05xx, 06xx, and 08xx ranges sometimes also get bumped above other advices, but these are still advices; auths/financials are almost always processed first, as they (a) impact the customer and (b) have much tighter timers than any advice, usually by a factor of two or more.
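For illustration, here is a minimal Java sketch of that kind of in-memory prioritization. The IsoMessage type and the exact rank rules are hypothetical; adapt the ordering to whatever your interface specification mandates.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical message holder; in a real switch this would be the parsed ISO-8583 message.
record IsoMessage(String mti, long receivedAtNanos) {}

public class AuthPriorityQueue {
    // Lower rank = processed first. These rules mirror the description above:
    // authorizations/financials (0100, 0200) ahead of advices/force posts (x20),
    // with network management (08xx) between them; tune for your own spec.
    static int rank(String mti) {
        if (mti.equals("0100") || mti.equals("0200")) return 0; // auth/financial requests
        if (mti.startsWith("08")) return 1;                     // network management
        if (mti.endsWith("20")) return 2;                       // advices / force posts
        return 3;                                               // everything else
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<IsoMessage> queue = new PriorityBlockingQueue<>(64,
                Comparator.comparingInt((IsoMessage m) -> rank(m.mti()))
                          .thenComparingLong(IsoMessage::receivedAtNanos)); // FIFO within a rank

        queue.add(new IsoMessage("0220", System.nanoTime())); // financial advice
        queue.add(new IsoMessage("0100", System.nanoTime())); // auth request
        queue.add(new IsoMessage("0800", System.nanoTime())); // echo test

        while (!queue.isEmpty()) {
            System.out.println(queue.poll().mti()); // prints 0100, 0800, 0220
        }
    }
}
```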
Most switches I have seen do the core authorization processing entirely in memory, without going to MQ or some other disk-based store, though that's not to say there isn't sometimes some home-grown middleware involved. Non-realtime processes, on the other hand, regularly use an MQ or disk queue to hand transactions off to processes outside the approval path for Store-and-Forward (SAF) processing, but many of these still use memory-only processing for the front of their queue.
It is also important to differentiate between 100,000 requests and 100,000 transactions. The various exchanges, both internal and external, make a big difference in the number of actual requests/responses in flight at any given time: a basic transaction can be accomplished in about two messages, but some of the more complex ones can easily exceed 20 messages just for a pre-authorization or a completion component.
If you are dealing largely with batch transaction bursts, I can see the queuing challenge, but almost every application I have seen has a separate max in flight for advices and for requests, sometimes even with different timers, and the apps pumping the transactions almost always wait for the response back before sending more. This tends to work fine for just about everyone, including big posting batches from retailers and card networks. So if your app doesn't have these limits, you probably need to add them.
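As a rough sketch of separate max-in-flight caps, assuming a blocking send-and-wait transport (the transport call below is a placeholder for your TCP/ISO-8583 client):

```java
import java.util.concurrent.Semaphore;

// Separate in-flight limits for requests and advices, as described above.
public class InFlightLimiter {
    private final Semaphore requestSlots = new Semaphore(16); // illustrative caps
    private final Semaphore adviceSlots  = new Semaphore(4);

    public void sendRequest(byte[] isoMessage) throws InterruptedException {
        requestSlots.acquire();               // blocks when 16 requests are already in flight
        try {
            sendAndAwaitResponse(isoMessage); // wait for the response before freeing the slot
        } finally {
            requestSlots.release();
        }
    }

    public void sendAdvice(byte[] isoMessage) throws InterruptedException {
        adviceSlots.acquire();
        try {
            sendAndAwaitResponse(isoMessage);
        } finally {
            adviceSlots.release();
        }
    }

    private void sendAndAwaitResponse(byte[] isoMessage) {
        // Placeholder: write to the socket and block until the matching response
        // (correlated by STAN/RRN) arrives or a timer expires.
    }
}
```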
In fact, your 100,000 requests should be sorted by (Terminal ID and/or Merchant ID) + (timestamp/local timestamp) + (STAN and/or RRN).
Duplicate transaction requests are expected to be rejected.
If you are simulating multiple requests from a single terminal (or host) with the same test card details, incrementing the STAN/RRN would be the way to do it.
Please refer to previous answers about the STAN and RRN ISO-8583 fields.
In ISO message, what's the use of stan and rrn ?
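To illustrate that sort order plus the duplicate rejection, here is a minimal Java sketch; the AuthRequest fields are hypothetical stand-ins for wherever your parsed ISO-8583 fields live.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical parsed request; field names are illustrative, not from any spec.
record AuthRequest(String terminalId, String merchantId,
                   long localTimestamp, String stan, String rrn) {}

public class RequestSorter {
    // Terminal/Merchant first, then timestamp, then STAN/RRN, as described above.
    static final Comparator<AuthRequest> ORDER =
            Comparator.comparing(AuthRequest::terminalId)
                      .thenComparing(AuthRequest::merchantId)
                      .thenComparingLong(AuthRequest::localTimestamp)
                      .thenComparing(AuthRequest::stan)
                      .thenComparing(AuthRequest::rrn);

    static void process(List<AuthRequest> batch) { // batch must be mutable
        batch.sort(ORDER);
        AuthRequest previous = null;
        for (AuthRequest r : batch) {
            // Adjacent equal keys after sorting indicate a duplicate, which is rejected.
            if (previous != null && ORDER.compare(previous, r) == 0) {
                reject(r);
            } else {
                authorize(r);
            }
            previous = r;
        }
    }

    static void authorize(AuthRequest r) { /* hand off to the auth flow */ }
    static void reject(AuthRequest r)    { /* respond with a duplicate-transaction code */ }
}
```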
First of all, I'm developing my own C# library for controlling Philips Hue, which means I'm not using the official SDK. (I'm guessing the SDK would make sure you don't run into these problems.)
I'm a little confused about the limitation in the Core concepts page in the API, which states:
We can’t send commands to the lights too fast. If you stick to around 10 commands per second to the /lights resource as maximum you should be fine. For /groups commands you should keep to a maximum of 1 per second.
I intend to respect this limitation, but does the limitation still apply when you are performing GET requests on the /lights resource, or is it only for sending actual commands with PUT requests to /lights/<id>/state that change the state of the light? Same question goes for the /groups resource.
Also is it even possible to damage anything by sending too many requests, or will it just take longer to get all responses?
Edit:
My overall question is: How should I understand the API limitation?
A more specific sub-question is: Should I wait 100 ms before sending another /lights command, relative to when I received a response, or relative to when I sent the previous command?
Another sub-question is: should I consider this limitation only when using PUT requests on e.g. /lights/<id>/state, or on all request types (GET/PUT/POST/DELETE)?
I don't know if anything was changed in firmware updates, but I have discovered that the bridge might not be as simple as you would think, and that the API description isn't very clear.
I've done a little testing while running firmware 01009914.
The bridge seems to have some kind of queue of incoming commands. I sent {"bri":254} to a group 9 times, followed by 1 final command of {"bri":1}. From the first command until the light is actually dimmed takes roughly 3-4 seconds. Each time I sent a command, the bridge replied almost instantly with a success token.
I did the same small test with other commands, sending 10 of each JSON object:
{"bri":254} 3-4 seconds
{"on":true, "bri":254} 6-7 seconds
{"on":true, "bri":254, "alert":"none", "effect":"none"} 12-13 seconds
This actually shows that each attribute change takes the bridge roughly 0.3 seconds to handle.
I will claim that for each attribute we change, the bridge takes about 300 ms to finish, and the limitation on commands should be understood as: as long as you stick to changing one attribute of a group each second, you should be fine.
Note: I only tried this with one group consisting of three lights. I don't know whether the bridge actually has a queue of incoming commands, and if it does, I don't know its item limit.
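If you want to reproduce the timing test, here is a minimal Java sketch using java.net.http.HttpClient; the bridge IP, API username, and group id are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HueTimingTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholders: substitute your bridge IP, API username, and group id.
        String url = "http://192.168.1.2/api/your-username/groups/1/action";

        long start = System.nanoTime();
        for (int i = 0; i < 9; i++) {
            send(client, url, "{\"bri\":254}");
        }
        send(client, url, "{\"bri\":1}"); // watch the lamp: it dims noticeably later
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("All responses returned after " + elapsedMs + " ms");
        // The bridge replies almost instantly with a success token for each command,
        // so any visible dimming lag is queuing inside the bridge, not HTTP latency.
    }

    static void send(HttpClient client, String url, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}
```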
Edit:
Now we have some official clarification of the Hue System Performance.
I'm fairly certain that the 10 commands per second figure is a guideline to prevent failure of the bridge, and that it reflects a technical limitation of the hardware. Any more than that and you're apt to overload the bridge. I believe this applies to commands as well as requests.
Both approaches are reasonable. For laziness' sake, you could just wait 100 ms after sending the previous command, but I would only rely on that method if you don't plan on any other interactions with the bridge.
I would apply this limitation to all request types.
You won't damage anything if you send commands too fast. However, if you send commands too fast the bridge might become unresponsive and/or some messages can be ignored.
When it comes to the bridge, the way I think of it is that the bridge is more or less single threaded, so it works best if you make sure you don't send the next command before the previous one has returned.
In practice we've found that this works much better than waiting a fixed time between each request. In fact, you can pretty much send commands as fast as you want as long as you wait for the previous one to finish.
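For what it's worth, a minimal Java sketch of that pattern (the URL layout is the Hue one; everything else is a placeholder): a single worker thread drains the queued commands, and since the send blocks until the bridge answers, each command goes out only after the previous response has returned.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One worker thread drains a queue of commands, sending each only after
// the previous response has come back.
public class SerializedHueSender {
    private final HttpClient client = HttpClient.newHttpClient();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final String baseUrl; // e.g. "http://<bridge-ip>/api/<username>" (placeholder)

    public SerializedHueSender(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public void setLightState(int id, String jsonBody) {
        worker.submit(() -> {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create(baseUrl + "/lights/" + id + "/state"))
                    .PUT(HttpRequest.BodyPublishers.ofString(jsonBody))
                    .build();
            // Blocking send: the next queued command cannot start until this returns.
            return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
        });
    }
}
```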
When you send a command to the bridge, the bridge then has to send it to the lamps over Zigbee. Since it's a mesh network, in some cases the message has to make a couple of hops from lamp to lamp before it reaches the target. Depending on how many lamps you have and how many hops the signal needs to take, this can take a while. Also, it's possible that some messages randomly take much longer than others.
In general the system is not designed to handle very fast changes, but if you keep the above in mind you can make many cool effects :)
Long polling has solved 99% of my problems. There is now just one other problem. Imagine a penny auction site, where people bid. On the frontpage, there are several Auctions.
If the user opens three of these auctions, then, since JavaScript is not multithreaded, how would the other pages ever load? Won't they always get bogged down and fail to load because they are waiting for the long polling to end? In practice I've experienced this, and I can't think of a way around it. Any ideas?
There are two ways JavaScript gets around some of this.
While JavaScript is conceptually single-threaded, it does its I/O on separate threads using completion handlers. This means other JavaScript code can run while you are waiting for your network request to complete.
The JavaScript for each page (or even each frame in each page) is isolated from the JavaScript on other pages/frames. This means each copy of JavaScript can run on its own thread.
A bigger issue for you is likely to be that browsers often limit the number of concurrent connections to a given site, and it sounds like you want to make many concurrent connections to the same site. In this case you will get a lock-up.
If you control both the server and the client, you will need to combine the multiple long-poll requests from the client into a single long-poll request to the server.
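For example, here's a minimal Java sketch of such a combined endpoint using the JDK's built-in HttpServer; the query format and the 25-second timeout are arbitrary choices, and a real implementation would wake only the waiters whose auctions actually changed.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;

// One combined long-poll endpoint: the client asks about several auctions in a
// single request (GET /poll?ids=1,2,3) instead of holding one connection per
// auction, which keeps it under the browser's per-site connection limit.
public class CombinedLongPoll {
    static final Map<String, String> latestBid = new ConcurrentHashMap<>();
    static final Object changed = new Object(); // notified whenever any bid changes

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/poll", CombinedLongPoll::handlePoll);
        server.setExecutor(Executors.newCachedThreadPool()); // one thread per parked poll
        server.start();
    }

    static void handlePoll(HttpExchange ex) throws IOException {
        // Assumes a query of the form "ids=1,2,3"; no validation in this sketch.
        String[] ids = ex.getRequestURI().getQuery().substring("ids=".length()).split(",");
        try {
            synchronized (changed) {
                changed.wait(25_000); // park until an update arrives or we time out
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        StringBuilder body = new StringBuilder();
        for (String id : ids) {
            body.append(id).append('=').append(latestBid.getOrDefault(id, "none")).append('\n');
        }
        byte[] bytes = body.toString().getBytes(StandardCharsets.UTF_8);
        ex.sendResponseHeaders(200, bytes.length);
        try (OutputStream os = ex.getResponseBody()) {
            os.write(bytes);
        }
    }

    // Called by whatever records a new bid.
    static void recordBid(String auctionId, String amount) {
        latestBid.put(auctionId, amount);
        synchronized (changed) {
            changed.notifyAll();
        }
    }
}
```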
Using ThreadPoolRuntime, I can get a throughput attribute that means "the mean number of requests completed per second". That's not what I want: I want a real-time figure, not the mean.
Requests per second is by its nature an average, so I'm not too sure what you mean by a real-time figure. Do you want the number of requests completed in the last second?
The ApplicationRuntimes/[appname]/WorkManagerRuntimes/default/CompletedRequests attribute gives the total number of requests completed for one application; you can use this to calculate an RPS figure over whatever timeframe you want.
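For illustration, a rough JMX sketch of that delta calculation; the service URL and ObjectName pattern below are placeholders, so look up the exact MBean name for your server/application in the console.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Samples CompletedRequests twice and divides by the interval to get requests
// per second over that window, rather than the built-in lifetime mean.
public class WorkManagerRps {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: connecting to WebLogic's runtime MBean server over t3
        // also requires the WebLogic client library on the classpath.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Placeholder pattern: find the WorkManagerRuntime MBean for your app.
            ObjectName pattern = new ObjectName("com.bea:Type=WorkManagerRuntime,Name=default,*");
            ObjectName wm = conn.queryNames(pattern, null).iterator().next();

            long first = ((Number) conn.getAttribute(wm, "CompletedRequests")).longValue();
            Thread.sleep(10_000); // sample window: 10 seconds
            long second = ((Number) conn.getAttribute(wm, "CompletedRequests")).longValue();

            System.out.printf("~%.1f requests/second over the last 10s window%n",
                    (second - first) / 10.0);
        } finally {
            connector.close();
        }
    }
}
```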
Unless this is a custom work manager's thread pool, the number you're going to get back isn't going to be terribly meaningful. And even in the case of a custom thread pool assigned to your particular application component (EJB, WAR file, etc.), the number still isn't likely to mean what you're looking for.
The thread pool is used to perform all work for that component (or, in the case of the default thread pool, all work for the server, both internal and client-driven). This means that requests of wildly different 'cost' in terms of CPU and execution time go through the same pool.
What is the problem that you're trying to solve? Is it an understanding of how many requests per second are occurring for particular application components? You might want to look at WLDF as an alternative source for this kind of data, although in either case you'll need to post-process information to get something useful.
We have memory-intensive processing for certain functionality, and we would like to limit the number of parallel requests to this processing. We are able to configure this by using "Work Managers" in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests wait in the queue. There could be situations where the requests waiting in the queue take up to 30-40 minutes to be processed. In simple testing, we got "page cannot be displayed" due to a timeout after 15 minutes in one run, and only received the message after 1 hour in another.
Does any one know if there is a setting in WebLogic to increase/decrease timeout and avoid page cannot be displayed?
I'd appreciate any thoughts on this.
Does any one know if there is a setting in WebLogic to increase/decrease timeout and avoid page cannot be displayed?
There might be, but I actually didn't check, as it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for a long-running process like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that they could refresh the page and queue more and more jobs).
So, in my opinion, the right answer here is: use asynchronism; this is the perfect use case. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and give the user a page with a request ID, telling them to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. Really, the user experience will be better this way, and you'll face fewer problems than with a browser.
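A rough sketch of that hand-off using the plain JMS API (1.1-style, so no try-with-resources); the JNDI names and queue are placeholders for whatever is configured in your domain.

```java
import java.util.UUID;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Enqueues the heavy job and returns immediately with a request ID that the
// user can poll later; an MDB (or Quartz job) consumes the queue, does the
// 30-40 minute work, and stores the result keyed by this ID.
public class JobSubmitter {

    public String submit(String payload) throws Exception {
        String requestId = UUID.randomUUID().toString();

        InitialContext ctx = new InitialContext();
        // Placeholder JNDI names; use the ones configured in your WebLogic domain.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/HeavyJobQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage msg = session.createTextMessage(payload);
            msg.setStringProperty("requestId", requestId);
            session.createProducer(queue).send(msg);
        } finally {
            conn.close();
        }
        return requestId; // render this on the "come back later" page
    }
}
```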
1) Use some other tool (not a browser), like wget, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans: send a JMS message instead and don't worry about timeouts.
Perhaps Quartz can do what you need? Start a job and check in on it as needed?