I have a serious problem: I cannot launch any new request by clicking a link on my website until the first request has finished.
I have a music website on which clicking a track generates its waveform. The problem is that generating the waveform takes some time (about 12 seconds) to finish, so if I click on another track, the new request does not start until the first one (the call to the web service) has finished.
I have looked at the front-end calls, but the problem seems to be in the backend: even in the Apache logs, the first query is logged at time x, and only when it finishes does Apache write the second call to the log.
The strange thing is that the time Apache records for the second call is x plus a few microseconds, as if Apache itself had been blocked waiting for the first request to finish.
Please help me.
BTW: the website is password-protected, so I would need to be in contact with someone directly for them to see the problem.
Ask me whatever you want for more details.
Kind regards.
If we fire multiple adapter calls with a gap of 2-3 seconds, how can we stop the first call, which is still running in the background?
Let's say:
I call A-Adapter, which returns some data on success, but 2-3 seconds later I also call B-Adapter, which returns a small amount of data within milliseconds.
Meanwhile the first adapter call is still taking time and responds after 4 seconds, or perhaps times out. So we receive A-Adapter's success or failure after B-Adapter's success.
My questions are:
Can we stop or unsubscribe the first adapter call at some point in time, whenever required?
Is there anything in Worklight for doing this?
The issue we are facing right now is a major one, described below.
Let's say:
I call a login adapter, which returns login success or failure, and it is taking a long time, say 5 minutes. So I close the app and launch the application again.
I click on login again, get a successful login, and am now inside the app doing some work. At this point I receive the failure response from the first login adapter call, the one that was taking so long.
The answer to your direct question is no: there is no API that lets you terminate an in-progress adapter procedure invocation before it finishes on its own. Once an adapter procedure is invoked, it must either succeed, fail, or time out.
Where you discuss the possibility of A-Adapter finishing after B-Adapter, I could not tell whether you intended that simply as an observation about a situation that can occur, or whether you see it as a problem or a bug. If the latter, you should understand that since adapter procedure invocations are completely asynchronous, there is no guarantee that they will finish in the order they were invoked, nor is any such guarantee intended.
To handle the issue you have described, I would suggest using an invocationContext to make sure that, when your success or failure callback fires, it corresponds to an adapter procedure invocation you are actually expecting a response for, and to ignore the result if it does not. For more information, see the section of the Worklight Information Center that describes the options object.
If the usual, "normal" response time of the adapter procedure is small, you could also mitigate this issue by setting the procedure invocation timeout to a small value. For instance, if an adapter procedure normally completes within about 4 seconds, set the timeout to perhaps 15 seconds, on the assumption that if it hasn't finished by then, something is wrong (maybe the back-end system you're retrieving data from has hung or crashed) and it's eventually going to fail anyway, so just let it return a timeout failure and give up. That way you don't need to worry about what happens when it finally fails some minutes later. There was another StackOverflow question asked in the past where it was explained how to change this timeout.
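A minimal sketch of the invocationContext approach, combined with the shorter timeout. The adapter name, procedure name, and the 15-second value are all hypothetical; the staleness check via a counter is just one way to decide which invocation you are still expecting a response for:

// Provided by the Worklight client runtime.
declare const WL: any;

// Monotonically increasing ID; only the most recent invocation counts.
let latestInvocationId = 0;

function invokeLogin(username: string, password: string): void {
    const invocationId = ++latestInvocationId;
    WL.Client.invokeProcedure(
        // Adapter and procedure names are hypothetical.
        { adapter: 'LoginAdapter', procedure: 'login', parameters: [username, password] },
        {
            invocationContext: { id: invocationId },
            timeout: 15000, // fail fast instead of minutes later
            onSuccess: (response: any) => {
                if (response.invocationContext.id !== latestInvocationId) {
                    return; // stale response from an earlier invocation; ignore it
                }
                // ...handle the login success...
            },
            onFailure: (response: any) => {
                if (response.invocationContext.id !== latestInvocationId) {
                    return; // e.g. the old 5-minute login finally timing out
                }
                // ...show the login error...
            }
        }
    );
}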
The Seaside book says: "saving [an image] while processing http requests is a risk you want to avoid".
Why is this? Does it just temporarily slow down serving HTTP requests, or will requests get lost or errors occur?
Before an image is saved, registered shutdown actions are executed. This means source files are closed and web servers are shut down. After the image is saved, the startup actions run, which typically bring the web server up again. Depending on the server implementation, open connections might be closed.
This means that you cannot accept new connections while you save an image, and open connections might be temporarily suspended or closed. For both issues there are (at least) two easy workarounds:
Fork the image using OSProcess before you save it (DabbleDB, CmsBox).
Use multiple images and a load balancer so that you can remove images one at a time from the active servers before saving them.
It seems that it's just a question of slowing things down. There is this quite thorough thread on the Seaside list, the most relevant post of which is this case study of an eCommerce site:
Consequently, currently this is what happens:
1. The image is saved from time to time (usually daily), and copied to a separate "backup" machine.
2. If anything bad happens, the last image is grabbed, and the orders and/or gift certificates that were issued since the last image save are simply re-entered.
And #2 has been done very rarely: maybe two or three times a year, and then it turns out it is usually because I did something stupid.
Also, one of the great things about Smalltalk is that it's so easy to run quick experiments. You can download Seaside and put a halt in a callback of one of the examples. For example:
WACounter>>renderContentOn: html
    ...
    html anchor
        callback: [
            self halt.
            self increase ];
        with: '++'.
    ...
1. Open a browser on the Seaside server (port 8080 by default).
2. Click "Counter" to go to the example app.
3. Click the "++" link.
4. Switch back to Seaside. You'll see the pre-debug window for the halt.
5. Save the image.
6. Click "Proceed".
You'll see that the counter is correctly incremented, with no apparent ill effect from the save.
My app has an API from which users can request data. Sometimes that data takes time to process, and this is breaking my code.
I need a solution for this, and I was thinking of using delayed_job, but I'm not sure how it works. If the user makes a request, I need to give them an answer; even if I process the data in the background, the call still has to wait until the job returns.
What is the solution for this? I am not sure how to do it.
Thanks
Heroku has a 30-second timeout, which is why your requests are failing (probably H12 or H13 in your Heroku logs).
There are three methods to work around this.
Keep the connection open by sending blank data.
You'll need to respond within the first 30 seconds and every 55 seconds after that. Use the time in between to process the data. Sending spaces should not affect the ability of the browser to read the response.
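Here is a minimal sketch of that keep-alive pattern. It uses a hypothetical Node/Express handler rather than the Rails stack from the question; generateReport and the 25-second interval are assumptions (the interval just needs to stay under Heroku's 55-second window):

import express from 'express';

const app = express();

// Hypothetical slow operation standing in for the real processing.
async function generateReport(): Promise<object> {
    return new Promise((resolve) => setTimeout(() => resolve({ ok: true }), 60_000));
}

app.get('/report', async (req, res) => {
    res.setHeader('Content-Type', 'application/json');
    // Emit whitespace periodically so Heroku's router keeps the
    // connection alive past its rolling 55-second idle timeout.
    const keepAlive = setInterval(() => res.write(' '), 25_000);
    try {
        const data = await generateReport();
        res.end(JSON.stringify(data)); // leading spaces are legal before JSON
    } catch {
        res.end(JSON.stringify({ error: 'processing failed' }));
    } finally {
        clearInterval(keepAlive);
    }
});

app.listen(3000);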
Callback
Have the user provide a callback URL in the initial request. When you finish processing the data, hit the callback url with your response.
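A sketch of the callback variant, continuing the same hypothetical Express setup (app and generateReport as above); callbackUrl is whatever URL the consumer supplies, and global fetch assumes Node 18+:

app.post('/reports', express.json(), (req, res) => {
    const { callbackUrl } = req.body; // supplied by the API consumer
    res.status(202).json({ status: 'accepted' }); // answer right away

    // Do the slow work out of band, then push the result to the consumer.
    generateReport()
        .then((data) => fetch(callbackUrl, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(data)
        }))
        .catch((err) => console.error('callback delivery failed', err));
});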
Polling
As suggested by Codeglot, you can provide the user with a key. To check on their request, they can ping your server with that key.
Tell the user that their data is being processed and will be available shortly. YouTube, Vimeo, Facebook, Twitter: they all do this.
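And a sketch of the polling approach in the same hypothetical Express setup, with an in-memory job store standing in for the database or Redis table that delayed_job would actually write to:

import { randomUUID } from 'crypto';

// Hypothetical in-memory store; use the database or Redis in practice.
const jobs = new Map<string, { status: 'processing' | 'done'; data?: unknown }>();

app.post('/jobs', (req, res) => {
    const key = randomUUID();
    jobs.set(key, { status: 'processing' });
    res.status(202).json({ key }); // respond immediately with the key

    // Kick off the slow work out of band.
    generateReport().then((data) => jobs.set(key, { status: 'done', data }));
});

app.get('/jobs/:key', (req, res) => {
    const job = jobs.get(req.params.key);
    if (!job) {
        res.status(404).end();
        return;
    }
    res.json(job); // stays { status: 'processing' } until the data is ready
});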
I'm working on a Silverlight application where a user can create, edit, and delete objects. The changes they make are placed in a queue that is processed every 4 minutes. When it is processed, the updates are sent over an async web method call to be saved in a SQL database, one at a time; when the first update finishes, the next starts.
I'm having a problem when a user makes a change and then exits the browser app before the 4-minute timer has expired: currently those changes are lost.
I've built on what the person working on this before me did, and explored the Dispose and Finalize methods, trying to start the update process when the factory is being shut down, but that isn't working due to the async nature of the web service calls. I get errors saying the needed objects have already been disposed of.
I'm looking for a way to save the data in the update queue using a web method when the user tries to close or refresh the web page. I'm not expecting the queue to be packed full of updates; this is an application that would usually be run for several hours at a time.
You can use JavaScript to stop the user leaving the page. StackOverflow does it (try editing an answer and leaving the page). That works on browser close as well as page navigation. From JavaScript you can also notify the Silverlight app to save any queued data (Silverlight supports exposing methods to JavaScript), as sketched below.
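A minimal sketch of the browser side, assuming the Silverlight app has exposed a scriptable object (via HtmlPage.RegisterScriptableObject) named Bridge with a FlushQueue method; the element ID and both names are hypothetical and must match what the app actually registers:

// Ask Silverlight to flush its pending updates before the page goes away.
window.addEventListener('beforeunload', (event: BeforeUnloadEvent) => {
    const plugin = document.getElementById('silverlightControl') as any;
    plugin?.Content?.Bridge?.FlushQueue?.(); // scriptable member exposed by the app

    // Setting returnValue prompts the browser's "leave this page?" dialog,
    // which also buys the flush call a moment to run.
    event.returnValue = 'You have unsaved changes.';
    return event.returnValue;
});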
Q. Saving every 4 minutes is slightly odd behaviour for a Silverlight app. I am guessing it is only designed to be run by one user at a time. What restricts you from saving more frequently?
We have memory-intensive processing for certain functionality, and we would like to limit the number of parallel requests to it. We can configure this by using Work Managers in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests wait in the queue. There can be situations where the requests waiting in the queue take up to 30-40 minutes to be processed. In simple testing we got "page cannot be displayed" due to a timeout after 15 minutes, and received the message after 1 hour.
Does anyone know if there is a setting in WebLogic to increase/decrease this timeout and avoid "page cannot be displayed"?
I'd appreciate any thoughts on this.
Does anyone know if there is a setting in WebLogic to increase/decrease this timeout and avoid "page cannot be displayed"?
There might be, but I didn't actually check, since it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for a long-running process like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that they could refresh the page and queue more and more jobs).
So, in my opinion, the right answer here is: use asynchronous processing; this is the perfect use case for it. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and return a page with a request ID, telling the user to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. Really, the user experience will be better this way, and you'll face fewer problems than with a long-held browser connection.
1) Use some other tool (not a browser), like WGET, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans, send a JMS message to them, and don't worry about timeouts.
Perhaps Quartz can do what you need? Start a job and check in on it as needed.