jQuery Ajax abort and new request in quick succession - apache

I have a mobile app, which makes a
JqXHR = $.ajax({
    url: 'url',
    data: null,
    async: true,
    cache: false,
    dataType: 'json',
    type: 'GET',
    crossDomain: true,
    timeout: 63000,
    success: function(data, textStatus, jqXHR) {},
    error: function(jqXHR, textStatus, errorThrown) {}
});
request. It waits for 63 seconds (the PHP backend CAN run for ~62 seconds) for user interaction at the other end. Now, if in the meantime I decide to abort this request, I call JqXHR.abort(). In the error handler I already handle/differentiate between real errors and aborts; that works. Right after the abort, I want to send another API call to the server to tie up the loose ends and make sure my cancel request is logged.
And there is the problem. Even though I abort() the first request, the PHP script is still running on the server, which wouldn't be a problem if the server also executed the second request, which would make the first one stop and die(). But that is not what happens: the second request does not start until the first one finishes.
Any ideas?
jQuery 1.8.2, jQuery Mobile 1.2.0, PhoneGap 2.0.0 and 2.1.0, Apache 2, Linux, PHP 5.3

Some information to: Parallel-Ajax vs Apache-Session locking
Session data is usually stored after your script has terminated, but because session data is locked to prevent concurrent writes, only one script may operate on a session at any time.
When e.g. using framesets together with sessions you will experience the frames loading one by one due to this locking. You can reduce the time needed to load all the frames by ending the session as soon as possible.
So you can use sessions in AJAX scripts with
session_start(); (maybe handled automatically) followed immediately (as soon as possible) by session_write_close();
session_write_close(); will "end" the current session and store the session data.
But: session_id() will still deliver the correct (current) PHPSESSID, so you're able to re-obtain write access to the current session by simply calling session_start() again at any time you need it.
I use it this way in all my AJAX scripts to implement session handling and allow parallel requests.
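A minimal sketch of that pattern in a plain PHP AJAX endpoint (the file name and session keys are illustrative, not taken from the question; written to stay compatible with PHP 5.3):
<?php
// ajax_endpoint.php - read what you need, then release the session lock early
session_start();                                   // acquires the lock on the session data
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
session_write_close();                             // stores the data and releases the lock

// ... long-running work here; other requests for the same session can now run in parallel ...

// If you later need to write to the session again:
session_start();                                   // re-attaches to the same PHPSESSID
$_SESSION['last_action'] = 'cancelled';
session_write_close();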

It seems I ran into the good old PHP sessions vs. AJAX requests issue. Actually, my boss found out about this issue by googling some expressions I never thought of. I am using Zend Framework in the back end, and it automatically starts a session namespace, so in my API controller's preDispatch() method I had to put in a session_write_close(); line, and as if by magic, it works like a charm.
Thanks Arun for your quick reply, it is most appreciated.
So, in short: if you use Zend Framework, session.auto_start, or other means of starting sessions automatically, they won't fly with parallel AJAX requests.
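For reference, a hedged sketch of where that call can live in a Zend Framework 1-style action controller (the controller name is illustrative; the relevant line is just the session_write_close() mentioned above):
class Api_IndexController extends Zend_Controller_Action
{
    public function preDispatch()
    {
        // Zend has already opened the session by this point; release the lock
        // so the cancel/cleanup request is not serialized behind the long poll.
        session_write_close();
    }

    // ... long-running actions follow ...
}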

The abort method will not terminate the server process; it will just terminate the client-side wait for the server response.
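For illustration, a sketch of the client-side sequence the question describes: abort the long-running call, ignore the resulting 'abort' in the error handler, then fire the cleanup call (URLs are placeholders; note that the cleanup call still queues behind the session lock discussed above until the server releases it):
var JqXHR = $.ajax({
    url: '/api/wait-for-user',   // placeholder for the 63-second long poll
    dataType: 'json',
    timeout: 63000,
    error: function (jqXHR, textStatus) {
        if (textStatus === 'abort') {
            return;               // deliberate abort, not a real error
        }
        // handle genuine errors here
    }
});

// later, when the user cancels:
JqXHR.abort();                                  // ends the client-side wait only
$.ajax({ url: '/api/cancel', type: 'GET' });    // server-side cleanup request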

HttpContext.Session in Blazor Server Application

I am trying to use HttpContext.Session in my ASP.NET Core Blazor Server application (as described in this MS Doc; I mean everything is correctly set up in startup).
Here is the code part when I try to set a value:
var session = _contextAccessor.HttpContext?.Session;
if (session != null && session.IsAvailable)
{
    session.Set(key, data);
    await session.CommitAsync();
}
When this code is called in a Razor component's OnAfterRenderAsync, session.Set throws the following exception:
The session cannot be established after the response has started.
I (probably) understand the message, but this renders the Session infrastructure pretty unusable: the application needs to access its state in every phase of the execution...
Question
Should I forget completely the DistributedSession infrastructure, and go for Cookies, or Browser SessionStorage? ...or is there a workaround here still utilizing HttpContext.Session? I would not want to just drop the distributed session infra for a way lower level implementation...
(just for the record: Browser's Session Storage is NOT across tabs, which is a pain)
Blazor is fundamentally incompatible with the concept of traditional server-side sessions, especially in the client-side or WebAssembly hosting model where there is no server-side to begin with. Even in the "server-side" hosting model, though, communication with the server is over websockets. There's only one initial request. Server-side sessions require a cookie which must be sent to the client when the session is established, which means the only point you could do that is on the first load. Afterwards, there's no further requests, and thus no opportunity to establish a session.
The docs give guidance on how to maintain state in a Blazor app. For the closest thing to traditional server-side sessions, you're looking at using the browser's sessionStorage.
Note: I know this answer is a little old, but I use sessions with WebSockets just fine, and I wanted to share my findings.
Answer
I think this Session.Set() error that you're describing is a bug, since Session.Get() works just fine even after the response has started, but Session.Set() doesn't. Regardless, the workaround (or "hack" if you will) includes making a throwaway call to Session.Set() to "prime" the session for future writing. Just find a line of code in your application where you KNOW the response hasn't sent, and insert a throwaway call to Session.Set() there. Then you will be able to make subsequent calls to Session.Set() with no error, including ones after the response has started, inside your OnInitializedAsync() method. You can check if the response is started by checking the property HttpContext.Response.HasStarted.
Try adding this app.Use() snippet to your Startup.cs Configure() method. Make sure it is placed somewhere before app.UseRouting():
...
...
app.UseHttpsRedirection();
app.UseStaticFiles();
//begin Set() hack
app.Use(async delegate (HttpContext Context, Func<Task> Next)
{
    //this throwaway session variable will "prime" the Set() method
    //to allow it to be called after the response has started
    var TempKey = Guid.NewGuid().ToString(); //create a random key
    Context.Session.Set(TempKey, Array.Empty<byte>()); //set the throwaway session variable
    Context.Session.Remove(TempKey); //remove the throwaway session variable
    await Next(); //continue on with the request
});
//end Set() hack
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapBlazorHub();
    endpoints.MapFallbackToPage("/_Host");
});
...
...
Background Info
The info I can share here is not Blazor specific, but will help you pinpoint what's happening in your setup, as I've come across the same error myself. The error occurs when BOTH of the following criteria are met simultaneously:
Criteria 1. A request is sent to the server with no session cookie, or the included session cookie is invalid/expired.
Criteria 2. The request in Criteria 1 makes a call to Session.Set() after the response has started. In other words, if the property HttpContext.Response.HasStarted is true, and Session.Set() is called, the exception will be thrown.
Important: If Criteria 1 is not met, then calling Session.Set() after the response has started will NOT cause the error.
That is why the error only seems to happen upon first load of a page--it's because often in first loads, there is no session cookie that the server can use (or the one that was provided is invalid or too old), and the server has to spin up a new session data store (I don't know why it has to spin up a new one for Set(), that's why I say I think this is a bug). If the server has to spin up a new session data store, it does so upon the first call to Session.Set(), and new session data stores cannot be spun up after the response has started. On the other hand, if the session cookie provided was a valid one, then no new data store needs to be spun up, and thus you can call Session.Set() anytime you want, including after the response has started.
What you need to do, is make a preliminary call to Session.Set() before the response gets started, so that the session data store gets spun up, and then your call to Session.Set() won't cause the error.
SessionStorage has more space than cookies.
Syncing the sessionStorage in both directions correctly is not really possible.
You may be thinking: if the data lives in the browser, how can you access it in C#? Please see some examples; the value is actually read from the browser and transferred to (and used on) the server side.
sessionStorage and localStorage in Blazor (via the protected browser storage helpers) are encrypted, so we do not need to do anything extra for encryption. The same applies to serialization.
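For reference, a minimal sketch of the protected browser storage approach in a Blazor Server component (available as ProtectedSessionStorage in .NET 5 and later; the key and counter are illustrative):
@inject Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage.ProtectedSessionStorage SessionStore

@code {
    private int counter;

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            // Values are serialized and encrypted with ASP.NET Core data protection for us.
            var result = await SessionStore.GetAsync<int>("counter");
            counter = result.Success ? result.Value : 0;

            await SessionStore.SetAsync("counter", counter + 1);
            StateHasChanged();
        }
    }
}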

HttpRequest not aborted (cancelled) on browser abort in ASP.NET Core MVC

I wrote the following MVC Controller to test cancellation functionality:
public class MyController : Controller
{
    // logger injected so the action can log whether the request was aborted
    private readonly ILogger<MyController> logger;

    public MyController(ILogger<MyController> logger)
    {
        this.logger = logger;
    }

    [HttpGet("api/CancelTest")]
    public async Task<IActionResult> Get()
    {
        await Task.Delay(1000);
        CancellationToken token = HttpContext.RequestAborted;
        bool cancelled = token.IsCancellationRequested;
        logger.LogDebug(cancelled.ToString());
        return Ok();
    }
}
Say I want to cancel the request, so that the value 'true' is logged in the controller action above. This is possible server-side if the server implements IHttpRequestLifetimeFeature. Luckily Kestrel does, and it can be accomplished in the following way:
var feature = (IHttpRequestLifetimeFeature) HttpContext.Features[typeof(IHttpRequestLifetimeFeature)];
feature.Abort();
The problem, however, is that I want to cancel the request on the client side. For example, in the browser. In pre-Core versions of ASP.NET MVC/WebApi the cancellation token would automatically be cancelled if the browser aborted a request. Example: refresh the page a couple of times in Chrome. In the Network tab of the Chrome dev tools you can now see the previous (unfinished) request being cancelled.
The thing is: in ASP.NET Core running on Kestrel, I can only see the following entry in the log:
Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException:
Error -4081 ECANCELED operation canceled
So the abort request from the browser DOES arrive and is handled by the Kestrel web server. It does not, however, affect the RequestAborted property of the HttpContext in the controller, because the value 'false' is still logged by the method.
Question:
Is there a way to abort/cancel my controller's method, so that the HttpContext.RequestAborted property will be marked as cancelled?
Perhaps I can make something that would subscribe to Kestrel's operation cancelled trigger and call the IHttpRequestLifetimeFeature.Abort() method?
Update:
I did some further testing and it seems the HttpRequest IS in fact aborted, but there seems to be some kind of delay before the cancellation actually takes place. The delay is not time-factored, and seems to come straight from libuv (the library that the Kestrel web server is built on top of). I posted more info at https://github.com/aspnet/KestrelHttpServer/issues/1103
More updates:
Issue has been moved to another one, because the previous one contained multiple problems. https://github.com/aspnet/KestrelHttpServer/issues/1139
Turns out that simply using HttpContext.RequestAborted is indeed the right way, but due to a bug in Kestrel (the order in which FIN/RST packets were handled), the request was not aborted on a browser abort.
The bug should finally be fixed in Kestrel 2.0.
See the updates in my question for more information.
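For completeness, a hedged sketch of how the token is typically consumed once aborts propagate correctly (Kestrel 2.0+); the route and delay are illustrative, and MVC binds a CancellationToken action parameter to HttpContext.RequestAborted:
[HttpGet("api/CancelTest")]
public async Task<IActionResult> Get(CancellationToken cancellationToken)
{
    try
    {
        // The delay is cancelled as soon as the browser aborts the request.
        await Task.Delay(60000, cancellationToken);
    }
    catch (TaskCanceledException)
    {
        // Client disconnected; skip the expensive work and clean up instead.
        return StatusCode(499); // non-standard "client closed request" code, as used by nginx
    }
    return Ok();
}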

can I invoke a procedure synchronously?

This might sound a bit crazy but is there a way to call a procedure synchronously?
I am using MobileFirst Platform Foundation 7.1 and I am writing an app in JavaScript for the browser. I usually call my JavaScript adapter like this:
WL.Client.invokeProcedure({
    adapter: 'MyAdapter',
    procedure: 'myProcedureName',
    parameters: []
}).then(function(res) {
    ...
});
But in this particular case I need to open another window after getting some data from the server. Since browsers block windows opened from async AJAX callbacks, my new window never opens.
A way to solve this would be to make the AJAX request synchronous. Is this possible with the WL.Client APIs? Is there a way to construct the request manually so I can set the synchronous AJAX flag myself?
PS: in my case doing a synchronous AJAX request would work nicely, since I show a "Loading ..." view on top of everything to prevent user interaction while the request is being made.
WL.Client.connect() does not support .then. Additionally, starting with 7.0 you should use the REST API method WLResourceRequest: https://developer.ibm.com/mobilefirstplatform/documentation/getting-started-7-1/foundation/server-side-development-category/
Lastly, you could just put the second request in the onSuccess callback of the first...
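For reference, a hedged sketch of the WLResourceRequest equivalent of the invokeProcedure call above (the adapter path and the "params" query parameter follow the MobileFirst 7.x REST convention for JavaScript adapters; treat those details as assumptions to verify against the linked documentation):
var resourceRequest = new WLResourceRequest(
    '/adapters/MyAdapter/myProcedureName',   // REST path to the JavaScript adapter procedure
    WLResourceRequest.GET
);
// JavaScript adapter parameters are passed as a JSON-array string named "params" (assumption)
resourceRequest.setQueryParameter('params', '[]');
resourceRequest.send().then(function (response) {
    // response.responseJSON holds the adapter result
});

Note that this is still asynchronous, so it does not by itself solve the popup-blocking concern.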

Caching best practice for mobile hybrid/bridge app development

I really need to limit any unnecessary network traffic and server trips. Solution: common sense caching. (I am not going to cache everything under the sun).
However, after reading through the Caching Files documentation and implementing a couple of quick examples: when is the best time to cache an AJAX JSON result? Sure, I can do the usual cache/no-cache check each time my view is displayed. But is there a way to perform an asynchronous load during initial application startup to prefetch remote data that I know the user is going to need? Is using the connectionStateChanged event the only way (or closest way)? Is there a way to "hook" into the splash screen (yes, I know Apple wants the splash screen mostly for transition)? window.onload?
So if I understand you correctly, you're looking for a way to asynchronously fetch remote resources once each time the app starts up, and cache that data away?
Our request module is asynchronous by nature, so you could simply drop in a forge.request.ajax to start fetching an Ajax response, then store it away in the preferences module.
Although it's probably identical in practice, you could even wrap it in a setTimeout to make it even more asynchronous:
setTimeout(function () {
    forge.request.ajax({
        url: 'http://example.com/method.json',
        success: function (data) {
            // cache the response for later use
            forge.prefs.set("method.json-cache", data);
        }
    });
}, 10);
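Reading the cached value back later might look like this (a sketch; the callback signature for forge.prefs.get is assumed from the Trigger.io prefs module, and render() is a hypothetical view function):
forge.prefs.get("method.json-cache", function (cached) {
    if (cached) {
        // use the cached copy immediately; refresh in the background if needed
        render(cached);
    }
}, function (error) {
    forge.logging.log("prefs read failed: " + JSON.stringify(error));
});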

HTML5 Server-Sent Events prototyping - ambiguous error and repeated polling?

I'm trying to get to grips with Server-Sent Events as they fit my requirements perfectly and seem like they should be simple to implement, but I can't get past a vague error and what looks like the connection repeatedly being closed and re-opened. Everything I have tried is based on this and other tutorials.
The PHP is a single script:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
?>
and the JavaScript looks like this (run on body load):
function init() {
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function(e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function(e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function(e) {
            document.getElementById('output').innerHTML += 'error<br />';
        }, false);
    }
    else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
I have searched around a bit but can't find information on
1. If Apache needs any special configuration to support server-sent events, and
2. How I can initiate a push from the server with this kind of setup (e.g. can I simply execute the PHP script from the CLI to give a push to the already-connected browser?)
If I run this JS in Chrome (16.0.912.77) it opens the connection, receives the time, then errors (with no useful information in the error object), then reconnects in 3 seconds and goes through the same process. In Firefox (10.0) I get the same behaviour.
EDIT 1: I thought the issue could be related to the server I was using, so I tested on a vanilla XAMPP install and the same error comes up. Should a basic server configuration be able to handle this without modification / extra configuration?
EDIT 2: The following is an example of output from the browser:
connection opened
server time: 01:47:20
error
connection opened
server time: 01:47:23
error
connection opened
server time: 01:47:26
error
Can anyone tell me where this is going wrong? The tutorials I have seen make it look like SSE is very straightforward. Also any answers to my two numbered questions above would be really helpful.
Thanks.
The problem is your PHP.
With the way your PHP script is written, only one message is sent per execution. That's how it works if you access the PHP file directly, and that's how it works if you access the file with an EventSource. So in order to make your PHP script send multiple messages, you need a loop.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

while(true) {
    $serverTime = time();
    sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
    sleep(1);
}
?>
I have altered your code to include an infinite loop that waits 1 second after every message sent (following an example found here: Using server-sent events).
This type of loop is what I'm currently using, and it eliminated the constant connection drop and reconnect every 3 seconds. However (and I've only tested this in Chrome), the connections are now only kept alive for 30 seconds. I will continue to figure out why this is the case and I'll post a solution when I find one, but until then this should at least get you closer to your goal.
Hope that helps,
Edit:
In order to keep the connection open for ridiculously long times with PHP, you need to set max_execution_time (thanks to tomfumb for this). This can be accomplished in at least three ways (a combined sketch follows the list):
If you can alter your php.ini, change the value for "max_execution_time." This will allow all of your scripts to run for the time you specify though.
In the script you wish to run for a long time, use the function ini_set(key, value), where key is 'max_execution_time' and value is the time in seconds you wish your script to run for.
In the script you wish to run for a long time, use the function set_time_limit(n) where n is the number of seconds that you wish your script to run.
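Putting the loop from the earlier answer together with one of these options, a sketch (the connection_aborted() check is an addition not present in the original answer; it stops the script once the browser disconnects so the PHP worker is not wasted):
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
set_time_limit(0);   // option 3: let this script run indefinitely
// ini_set('max_execution_time', 0); would be option 2 with the same effect

while (true) {
    echo "id: " . time() . PHP_EOL;
    echo "data: server time: " . date("h:i:s") . PHP_EOL . PHP_EOL;
    ob_flush();
    flush();
    if (connection_aborted()) {   // client went away; stop streaming
        exit;
    }
    sleep(1);
}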
Server-Sent Events are easy only when it comes to the JavaScript part. First of all, a lot of tutorials on SSE on the internet close their connections in the server part, be it PHP or Java examples. This is really astonishing, because what you get then is just a different way of implementing an "AJAX polling" system with a strictly defined payload structure (and some minor features like client retry values set by the server side). You can easily implement that with a few lines of jQuery. No need for SSE then.
According to the spec of SSE, I would say that the retry shouldn't be the normal way of implementing a client-side loop. For me, SSE is a one-way streaming method which relies on a server backend that does not close the connection after pushing the first data to the client.
In Java it's useful to use the Servlet 3 async spec in order to free the request thread immediately and do the processing/streaming in a different thread. This works so far, but I still don't like the 30-second connection lifetime for the EventSource request. Even though I am pushing data every 5 seconds, the connection will be terminated after 30 seconds (Chrome, Firefox). Of course SSE will reconnect by default after 3 seconds, but I still don't think this is the way it should be.
One problem is that some Java MVC frameworks don't have the ability to keep the connection open after sending data, so you end up coding to the bare Servlet API. After 24 hours of coding prototypes in Java, I am more or less disappointed, because the gain over a traditional jQuery AJAX loop is not THAT much. And the problem of polyfilling the SSE feature also exists.
The problem is not a server-side issue; this all happens on the client and is part of the spec (I know it sounds weird).
http://dev.w3.org/html5/eventsource/
"When a user agent is to reestablish the connection, the user agent must run the following steps. These steps are run asynchronously, not as part of a task. (The tasks that it queues, of course, are run like normal tasks and not asynchronously.)"
Queue a task to run the following steps:
If the readyState attribute is set to CLOSED, abort the task.
Set the readyState attribute to CONNECTING.
Fire a simple event named error at the EventSource object.
I can't see any need to have an error here, so I have modified your init function to filter out the error event fired while connecting.
function init() {
    var CONNECTING = 0;
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function (e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function (e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function (e) {
            if (source.readyState != CONNECTING) {
                document.getElementById('output').innerHTML += 'error<br />';
            }
        }, false);
    }
    else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
There is no actual issue with the code that I can see. The answer selected as correct is, then, incorrect.
This sums up the behavior mentioned in the question (http://www.w3.org/TR/2009/WD-html5-20090212/comms.html):
"If such a resource (with the correct MIME type) completes loading (i.e. the entire HTTP response body is received or the connection itself closes), the user agent should request the event source resource again after a delay equal to the reconnection time of the event source. This doesn't apply for the error cases that are listed below."
The problem lies with the stream. I've successfully kept a single EventStream open before in Perl; just send the appropriate HTTP headers and start sending stream data, and never shut down the stream server-side. The issue is that it seems most HTTP libraries attempt to close the stream after it's been opened. This will cause the client to attempt to reconnect to the server, which is fully standard-compliant.
This means that it will appear that the problem is solved by running a while loop, for a couple of reasons:
A) The code will continue to send data, as if it were pushing out a large file
B) The code (PHP server) will never have the chance to attempt to close the connection
However, the problem here is obvious: to keep the stream alive, a constant stream of data must be sent. This results in wasteful utilization of resources, and negates any benefits the SSE stream is supposed to provide.
I'm not enough of a PHP guru to know, but I'd imagine that something in the PHP server/later in the code is prematurely closing the stream; I had to manipulate the stream at the socket level with Perl to keep it open, since HTTP::Response was closing the connection and causing the client browser to attempt to re-open the connection. In Mojolicious (another Perl web framework), this can be done by opening a Stream object and setting the timeout to zero, so that the stream never times out.
So, the proper solution here is not to use a while loop; it is to call the appropriate PHP functions for opening, and keeping open, a PHP stream.
I was able to do it by implementing a custom event loop. It seems that this HTML5 feature is not ready at all and has compatibility issues even with the latest version of Google Chrome. Here it is, working on Firefox (I can't get the message sent correctly on Chrome):
var source;

function Body_Load(event) {
    loopEvent();
}

function loopEvent() {
    if (source == undefined) {
        source = new EventSource("event/message.php");
    }
    source.onmessage = function(event) {
        _e("out").value = event.data;
        loopEvent();
    }
}
P.S.: _e is a function that calls document.getElementById(id).
According to the spec, the 3-second reconnection is by design when the connection is closed. PHP with a loop should theoretically stop this, but then the PHP script will be running indefinitely and wasting resources. You should try to avoid using Apache and PHP for SSE because of this issue.
A standard HTTP response should close the connection once the response is sent. You can change this with the header "Connection: keep-alive", which should tell the browser that the connection is meant to stay open, although this can cause problems if you're using proxies.
Node.js or something similar is a better engine to use for SSE than Apache/PHP, and since it's basically JavaScript, it's pretty easy to get to grips with.
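For comparison, a minimal sketch of the same time-pushing endpoint in Node.js, using only the built-in http module (the port is illustrative):
const http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });
    // push the server time once per second, like the PHP loop above
    const timer = setInterval(function () {
        res.write('id: ' + Date.now() + '\n');
        res.write('data: server time: ' + new Date().toLocaleTimeString() + '\n\n');
    }, 1000);
    req.on('close', function () {
        clearInterval(timer);     // stop pushing when the browser disconnects
    });
}).listen(8080);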
With Server-Sent Events, as the name suggests, the data should travel from server to client; if the client has to reconnect every three seconds to retrieve data from the server, then it is no different from other polling mechanisms. The purpose of SSE is to alert the client as soon as there is new data the client is unaware of. Since the server closes the connection even if the header is keep-alive, there is no other way than to run the PHP script in an infinite loop, but with a reasonable sleep to avoid burdening the server. So far I don't see any other way out, and it's better than spamming the server every 3 seconds for new data.
I'm trying the same thing, with varying degrees of success.
I had the same problem with Firefox, running the same JS code as mentioned.
Using the Nginx server and some PHP that exited (i.e. no continual loop), I could get messages back to a request from Firefox only once the PHP exited.
Running the PHP as a script in PHP.exe, all is good on the console; strings are printed when flushed. However, Nginx doesn't send the data until the PHP has completed. Adding extra \r\n\r\n did not help, and neither did flush() or ob_flush().
There is no pushing of data, as shown in Wireshark logs, just a delayed response packet to the GET.
I read that I need a "push" module for Nginx, which requires a rebuild from source.
So this is definitely an Nginx problem.
Using a socket in C I was able to push data to Firefox as expected; the socket was kept open and no messages were missed. However, this has the disadvantage that I need to serve the page.html and the events stream from the same socket, or Firefox will not connect due to cross-site URL problems. There are some ways around this in certain situations, but not for an iframe in a menu system. This approach did prove the point that SSE does work with Firefox and that there are pushed packets in the Wireshark log, whereas option 1 only had request/reply packets.
All this said, I still don't have a solution. I've tried to remove the buffering in PHP and Nginx, but still nothing arrives until PHP finishes. I tried different header options, e.g. chunked, and that didn't help either.
I don't feel like writing a full-blown HTTP server in C, but this seems to be the only option that is working for me at the moment.
I'm about to try Apache, but most write-ups suggest that it is worse than Nginx at this job.