Long polling blocking multiple windows?

Long polling has solved 99% of my problems; there is just one left. Imagine a penny auction site where people bid. On the front page there are several auctions.
If the user opens three of these auctions, and JavaScript is not multithreaded, how do you get the other pages to ever load? Won't they always get bogged down waiting for the long polls to end? I've experienced this in practice and can't think of a way around it. Any ideas?

There are two ways that JavaScript gets around some of this.
While JavaScript is conceptually single-threaded, it does its I/O on separate threads using completion handlers. This means other pieces of JavaScript can be running while you are waiting for your network request to complete.
The JavaScript for each page (or even each frame in each page) is isolated from the JavaScript on the other pages/frames. This means that each copy of JavaScript can be running in its own thread.
A bigger issue for you is likely to be that browsers often limit the number of concurrent connections to a given site, and it sounds like you want to make many concurrent connections to the same site. In this case you will get a lock-up.
If you control both the server and the client, you will need to combine the multiple long-poll requests from the client into a single long-poll request to the server.
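A minimal sketch of that multiplexing idea on the client side; the /poll endpoint, its query string, and the response shape are all made up for illustration:

```js
// One shared long-poll covering every auction visible on the page, instead
// of one long-poll per auction, so only one connection to the site is held.
function pollAuctions(auctionIds, onUpdate) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/poll?auctions=' + auctionIds.join(','));
  xhr.onload = function () {
    if (xhr.status === 200) {
      // The server answers once there is an update for any of the auctions.
      JSON.parse(xhr.responseText).forEach(onUpdate);
    }
    pollAuctions(auctionIds, onUpdate); // immediately re-issue the long-poll
  };
  xhr.onerror = function () {
    // Back off briefly on network errors instead of hammering the server.
    setTimeout(function () { pollAuctions(auctionIds, onUpdate); }, 2000);
  };
  xhr.send();
}

pollAuctions([101, 102, 103], function (update) { console.log(update); });
```

However many auctions the page shows, this holds only one connection, which keeps you under the browser's per-host connection limit.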

Related

Would a blocking web server get hung up, to the point that it needs restarting, if many HTTP clients send requests in parallel?

I've read that some web servers are described as blocking, whereas Node.js is said to be non-blocking.
Would a blocking web server get hung up, to the point that it needs restarting, if many HTTP clients send requests in parallel?
To clarify: I don't mean that it needs restarting if it works fine again after the flood of parallel requests has stopped.
I also currently don't understand how request buffers and overflows work for web servers.
Although it is technically possible to make a single-threaded, single-process blocking server that can only handle one request at a time, it rarely makes practical sense. Concurrency is kind of important.
The three main paradigms for parallelism (that I know of) are:
Multi-process/forking
Threading
Using an event loop/reactor pattern
Node falls in the third category, and also a bit in the second category depending on how you look at it.
Most languages can look at a socket, read from it if data is available, and immediately move on if there is nothing to read. Therefore most languages can provide this non-blocking behavior.
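As an illustration of the event-loop category, here is a minimal Node.js sketch; the 5-second timer stands in for slow I/O such as a database call:

```js
const http = require('http');

// A single-threaded server that still handles many clients in parallel:
// while one request "waits" on the timer, the event loop keeps accepting
// and serving other connections.
http.createServer((req, res) => {
  setTimeout(() => res.end('done\n'), 5000); // stand-in for slow I/O
}).listen(8080);
```

Hundreds of clients can hold connections open at once without any of them blocking the JavaScript thread, because the waiting happens in the runtime rather than in user code.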

Why do I get many SSE requests that slow down my webpage in my NUXT.js project?

I have a project implemented with NUXT.js (SSR mode). Every time I refresh a page, I get three or four SSE requests (like _loading/sse) in the network console. Those SSE requests are slow, eventually fail, and make page loading slow on my machine (I run the whole project locally).
Does anyone know what those SSE requests are and how to get rid of them?
What you are referring to is a loading screen, and it looks like something in your app is firing many requests and taking quite a while to render or fail.
You need to check through your app code to find what generates those requests and where they might fail.

How does performing processing server-side affect the overall performance of a site?

I'm working on an application that will process data submitted by the user and compare it with past logged data. I don't need to return or respond to the POST straight away; I just need to process it. This "processing" involves logging the response (in this case a score from 1 to 10) that's submitted by the user every day, then comparing it against the previous scores they submitted. Then, if something is found, take some action (not sure yet; maybe send an email).
However, I'm worried about how efficient this is and how it could affect the site's performance. I'd like to keep it server-side so the script for the calculation isn't exposed. The site is only dealing with 500-1500 responses (users) per day, so it isn't a massive amount, but I'm interested to know whether this route of processing will work. The server the site will be hosted on won't be anything special, probably a small(/est) AWS instance.
Also, I will be using Node.js and a SQL/PostgreSQL database.
It depends on how you implement this processing algorithm and how resource-heavy it is.
If your task is completely synchronous, it is obviously going to block any incoming requests to your application until it is finished.
You can make this "processing application" a separate Node process and send it only what it needs.
If this is a heavy task and you are worried about performance, it is a good idea to make it a separate Node process so it does not impact the serving of users.
I recommend googling "node js asynchronous" to better understand the subject.
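As a concrete illustration, here is a minimal sketch of the separate-process idea using Node's built-in child_process.fork; worker.js and the message shapes are made up for illustration:

```js
// Main process (the web server side): fork the heavy comparison into its
// own Node process so the server's event loop stays free to serve requests.
const { fork } = require('child_process');
const worker = fork('./worker.js');

worker.on('message', (result) => {
  // e.g. send an email here when result.anomaly is true
  console.log('processed:', result);
});

// Send only what the worker needs: the new score plus the user's history.
worker.send({ userId: 42, score: 7, history: [5, 6, 8, 7, 6] });

// worker.js would contain something like:
//   process.on('message', ({ userId, score, history }) => {
//     const avg = history.reduce((a, b) => a + b, 0) / history.length;
//     process.send({ userId, anomaly: Math.abs(score - avg) > 2 });
//   });
```

At 500-1500 submissions per day even a small instance should handle this comfortably; the fork mainly protects the request handling if the comparison ever grows expensive.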

Philips Hue command limitation

First of all I'm developing my own C# library for controlling Philips Hue, which means I'm not using the official SDK. (I'm guessing that the SDK will make sure you won't have any problems)
I'm a little confused about the limitation in the Core concepts page in the API, which states:
We can’t send commands to the lights too fast. If you stick to around 10 commands per second to the /lights resource as maximum you should be fine. For /groups commands you should keep to a maximum of 1 per second.
I intend to respect this limitation, but does the limitation still apply when you are performing GET requests on the /lights resource, or is it only for sending actual commands with PUT requests to /lights/<id>/state that change the state of the light? Same question goes for the /groups resource.
Also, is it even possible to damage anything by sending too many requests, or will it just take longer to get all the responses?
Edit:
My overall question is: How should I understand the API limitation?
A more specific sub-question is: Should I wait 100 ms before sending another /lights command, relative to when I received a response, or relative to when I sent the previous command?
Another sub-question is: Should I consider this limitation only when using PUT requests on e.g. /lights/<id>/state, or on all request types (GET/PUT/POST/DELETE)?
I don't know if anything has changed in firmware updates, but I have discovered that the bridge might not be as simple as you would think, and that the API description isn't very clear.
I've done a little testing while running firmware 01009914.
The bridge seems to have some kind of queue of incoming commands. I sent {"bri":254} to a group 9 times, followed by one final command of {"bri":1}. From the first command until the light is actually dimmed takes roughly 3-4 seconds, yet each time I sent a command the bridge replied almost instantly with a success token.
I did the same small tests sending other commands, 10 of each JSON object:
{"bri":254} 3-4 seconds
{"on":true, "bri":254} 6-7 seconds
{"on":true, "bri":254, "alert":"none", "effect":"none"} 12-13 seconds
This suggests that each attribute change takes the bridge roughly 0.3 seconds to handle (10 commands × 1 attribute ≈ 3-4 s, × 2 attributes ≈ 6-7 s, × 4 attributes ≈ 12-13 s).
I will claim that for each attribute we change, the bridge takes about 300 ms to finish, and that the limitation on commands should be understood as: as long as you stick to changing one attribute of a group each second, you should be fine.
Note: I only tried with one group consisting of three lights, and I don't know if the bridge actually does have a queue of incoming commands, and in case it does have a queue, I don't know what the limit of items in it is.
Edit:
Now we have some official clarification of the Hue System Performance.
I'm fairly certain that the 10 commands per second is a guideline to prevent failure of the Bridge, and is a technical limitation of the hardware. Any more than that and you're apt to overload the bridge. I believe this applies to commands as well as requests.
Both approaches are reasonable. For laziness' sake, you could wait 100 ms before sending the next command, but I would only rely on this method if you don't plan on any other interactions with the Bridge.
I would consider this limitation to apply to all request types.
You won't damage anything if you send commands too fast. However, if you send commands too fast the bridge might become unresponsive and/or some messages can be ignored.
When it comes to the bridge, the way I think of it is that the bridge is more or less single threaded, so it works best if you make sure you don't send the next command before the previous one has returned.
In practice we've found that this works much better than waiting a fixed time between each request. In fact, you can pretty much send commands as fast as you want as long as you wait for the previous one to finish.
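A minimal sketch of that wait-for-the-reply approach, written in JavaScript here (the same idea carries over to C#); the bridge IP, username, and light ID are placeholders, and global fetch assumes Node 18+:

```js
// Serialize commands: issue the next PUT only after the previous reply.
const base = 'http://192.168.1.2/api/your-username';

async function setLightState(id, state) {
  const res = await fetch(`${base}/lights/${id}/state`, {
    method: 'PUT',
    body: JSON.stringify(state),
  });
  return res.json(); // the bridge's success/error tokens
}

(async () => {
  // Each await waits for the bridge's reply, so no fixed 100 ms delay is needed.
  await setLightState(1, { on: true });
  await setLightState(1, { bri: 254 });
  await setLightState(1, { bri: 1 });
})().catch(console.error);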
When you send a command to the bridge, the bridge then has to send it to the lamps over Zigbee. Since it's a mesh network, in some cases the message has to make a couple of hops from lamp to lamp before it reaches the target. Depending on how many lamps you have and how many hops the signal needs to take, this can take a while. Also, some messages may randomly take much longer than others.
In general the system is not designed to handle very fast changes, but if you keep the above in mind you can make many cool effects :)

Work managers threads constraint and page cannot be displayed

We have memory-intensive processing for certain functionality, and we would like to limit the number of parallel requests to this processing. We are able to configure this by using "Work Managers" in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests are queued. There could be situations where the requests waiting in the queue take up to 30-40 minutes to be processed. In simple testing we got a "page cannot be displayed" timeout after 15 minutes, and in another case the message only appeared after 1 hour.
Does any one know if there is a setting in WebLogic to increase/decrease timeout and avoid page cannot be displayed?
Appreciate if any one has any thoughts around this.
Does any one know if there is a setting in WebLogic to increase/decrease timeout and avoid page cannot be displayed?
There might be something, but I didn't actually check, since it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for a long-running process like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that the user could refresh the page and queue more and more jobs).
So the right answer here, in my opinion, is: use asynchronous processing; this is the perfect use case for it. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and return a page with a request ID telling the user to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. Really, the user experience will be better this way, and you'll face fewer problems than with a browser.
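The shape of that submit-and-poll pattern, sketched here in Node.js rather than WebLogic/JMS for brevity; the endpoints, the in-memory job store, and the simulated job are all illustrative:

```js
const http = require('http');
const { randomUUID } = require('crypto');

const jobs = new Map(); // in-memory stand-in for a real queue/status store

function startHeavyJob(id) {
  // Simulate a ~35-minute job; a real system would hand this to JMS/Quartz.
  setTimeout(() => jobs.set(id, 'done'), 35 * 60 * 1000);
}

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/jobs') {
    const id = randomUUID();
    jobs.set(id, 'pending');
    startHeavyJob(id);                 // enqueue; do not wait for it
    res.end(JSON.stringify({ id }));   // the user comes back later with this ID
  } else if (req.url.startsWith('/jobs/')) {
    const status = jobs.get(req.url.slice('/jobs/'.length)) || 'unknown';
    res.end(JSON.stringify({ status }));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
```

The HTTP request itself now finishes in milliseconds, so browser timeouts stop being a concern regardless of how long the processing takes.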
1) Use some other tool (not a browser) like wget, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans and send a JMS message instead; then you don't have to care about timeouts.
Perhaps Quartz can do what you need: start a job and check in on it as needed?