if (isset($_GET['actionid']) && isset($_GET['profileid'])) {
    $actionid  = $_GET['actionid'];
    $profileid = $_GET['profileid'];
    $res = $database->news_poll($profileid, $actionid);
    // keep re-querying until a row shows up
    while (!($NROW = $res->fetch_array())) {
        usleep(50000000); // integer microseconds (50 s), not a string
        $res = $database->news_poll($profileid, $actionid);
    }
    $action = actiontype_encode($NROW, '0', $json, $encode, $database);
    $data['action'] = $action;
    echo json_encode($data);
}
This is my script for polling the server for new data, but the browser then stops responding, and only for my site. My guess is that when a browser subscribes for new data the connection is kept open, so no further requests can be made by that browser to the same server. Please explain if this is the problem.
If there is any way at all you can, I recommend setting yourself up with NodeJS and SocketIO for long polling. Your web server needs to keep a request open for every connected user, and that is more than Apache/PHP can handle for very long.
If that's not possible I recommend short polling, doing a normal ajax request every 3 seconds, as sketched below. That's not perfect but manageable.
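For illustration, here is a minimal short-polling sketch in jQuery; the endpoint URL, the parameters, and the response shape are assumptions, not your actual API:
function poll() {
    $.ajax({
        url: '/news_poll.php',                  // hypothetical endpoint
        data: { actionid: 1, profileid: 2 },    // example ids
        dataType: 'json',
        success: function (data) {
            // handle data.action here
        },
        complete: function () {
            // schedule the next poll whether this request succeeded or failed
            setTimeout(poll, 3000);
        }
    });
}
poll();
Scheduling the next poll with setTimeout from the complete callback, rather than using setInterval, ensures requests never pile up when the server is slow.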
I answered a similar question recently with more details.
Regardless of language, I strongly advise against writing your own long polling server, unless you want that to be your project for a couple of years. I have been in a project that used a home-grown long polling server written in C and then re-written in Java, and it was not pretty.
I figured out that the problem is that Apache serves multiple requests from a single client one at a time. So when a request to the long-polling script at the backend is waiting for new data, it blocks other requests from the same browser to the same server.
To overcome this drawback one needs to use something like node.js or Tornado, for example as sketched below.
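To illustrate the idea, here is a bare-bones long-polling endpoint in plain Node.js; checkForNews is a hypothetical stand-in for the real database query, and the timings are arbitrary:
var http = require('http');

// hypothetical stand-in for the real database lookup; returns null until new data exists
function checkForNews() {
    return null;
}

http.createServer(function (req, res) {
    var start = Date.now();
    // re-check the data source every half second, for at most 30 seconds
    var timer = setInterval(function () {
        var news = checkForNews();
        if (news !== null || Date.now() - start > 30000) {
            clearInterval(timer);
            res.writeHead(200, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify({ action: news }));
        }
    }, 500);
    req.on('close', function () {
        clearInterval(timer); // give up if the client goes away
    });
}).listen(8080);
Because Node handles each connection with events instead of a thread or process per request, thousands of requests can be parked like this cheaply, which is exactly what Apache/PHP struggles with.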
I have a very simple Cosmos DB query that I am making from an asp.net core 3 Razor Pages application. The same query I make in Data Explorer in Azure will return results in 0.02ms. When I run it through the application, setting up stopwatches to see the duration of the calls, it can be anywhere from 400ms to 2000ms.
QueryDefinition queryDefinition = new QueryDefinition("SELECT * FROM Cache where Cache.JoinCode = @jc").WithParameter("@jc", JoinCode);
var query = _container.GetItemQueryIterator<HostCache>(queryDefinition);
List<HostCache> results = new List<HostCache>();
while (query.HasMoreResults)
{
var response = await query.ReadNextAsync();
results.AddRange(response.ToList());
}
return results.FirstOrDefault();
The long running request is the await query.ReadNextAsync();. Is there anything I can do to speed that up? Maybe I'm doing it wrong?
First, I would highly recommend that you (or anyone using the Cosmos DB .NET SDK) watch this video on the Cosmos DB YouTube channel: https://www.youtube.com/watch?v=McZIQhZpvew. It provides really useful information about the best practices to follow when working with this SDK.
This video will explain why the first request takes so much time and how you can speed that up.
To summarize for the purpose of this answer: creating an instance of CosmosClient (with "Direct" connection mode) does not do much by itself. When you make the 1st request with that client, the initialization happens, and at that point the SDK makes a few network requests to get the information necessary to establish the "Direct" (TCP) connection. That's why the 1st request takes so much time. After the 1st request, the information is cached by the SDK, so subsequent requests take much less time.
To do the initialization while creating the Cosmos client, you would need to use the CreateAndInitializeAsync method of the CosmosClient. Here's an example from the documentation page:
using Microsoft.Azure.Cosmos;

// initialize the listed containers up front so the first real request is fast
List<(string, string)> containersToInitialize = new List<(string, string)>
{
    ("DatabaseName1", "ContainerName1"), ("DatabaseName2", "ContainerName2")
};

CosmosClient cosmosClient = await CosmosClient.CreateAndInitializeAsync(
    "connection-string-from-portal", containersToInitialize);
Hi. I want to build a control panel for a web art application that needs to run fullscreen, so this panel, which controls things like colors and speed values, has to live in a different window.
My idea is to have a database storing all these values, and when I make a change in the control panel window the corresponding variable in the application window gets updated too. So it's basically a real-time update that I could do with AJAX by setting an interval to keep checking for changes. BUT my problem is: I can't wait 30 seconds or so for the update to happen, and I guess an every-1-second AJAX request would be impossible.
Final question: is there a way to create a sort of listener for changes in the database and fire the update event in the main application immediately after I change some value in the control panel? Does Angular or another framework have this capability?
(Sorry for the long explanation, but I hope my question is clearer by offering the context [: )
A WebSocket-powered application would give you this. It carries a bit more complexity on the back end, but has the benefit of making your application as close to real-time as can reasonably be expected.
The Mozilla Developer Network has some good documentation on WebSockets.
On the front end, the WebSocket object should work for you on most modern browsers.
I'm not sure what your back end is written in, but Socket.IO for Node.js and Tornado for Python will make your applications WebSocket-capable.
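As a minimal front-end sketch (the URL and the message format here are made up, not a real API of yours):
var socket = new WebSocket('ws://example.com/updates'); // hypothetical endpoint

socket.onopen = function () {
    // let the server know what this window is interested in
    socket.send(JSON.stringify({ subscribe: 'control-panel' }));
};

socket.onmessage = function (event) {
    var update = JSON.parse(event.data);
    // apply update.color, update.speed, etc. to the application here
};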
If one window is opening the other windows via JavaScript, you can keep a reference to the opened window and use otherWindow.postMessage to pass messages across.
"Parent" window looks like
// set up to receive messages
window.addEventListener('message', function (e) {
    if (e.origin !== 'http://my.url')
        return; // ignore unknown sources
    console.log(e.data); // the payload lives in e.data, not e.message
});
// set up to send messages
var otherWindow = window.open('/foo', '_blank');
otherWindow.postMessage('hello world', 'http://my.url');
"Child" windows look similar
// same setup to receive
// ...
// set up to send
var otherWindow = window.opener;
// ... same as before
For the real-time updates I would recommend using a library like socket.io (sketched below) or a database like Firebase.
For the fullscreen part I would recommend using a library like angular-screenfull.
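For example, with socket.io the control panel could emit a change and the fullscreen window could pick it up; the server URL, the event name, and the applySettings function below are made up for illustration, and the socket.io client script is assumed to be loaded in both windows:
// control panel window
var panelSocket = io('http://localhost:3000'); // hypothetical server
document.getElementById('color').addEventListener('change', function (e) {
    panelSocket.emit('settings-changed', { color: e.target.value });
});

// application (fullscreen) window
var appSocket = io('http://localhost:3000');
appSocket.on('settings-changed', function (settings) {
    applySettings(settings); // hypothetical function that repaints the artwork
});
The server just needs to re-broadcast 'settings-changed' to the other connected clients.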
I use https://pushjs.io/. I had exactly the same problem, and this is a really simple solution for it. It is capable of sending and listening to events without any database in between.
How do I optimize an API call in Symfony?
I make the call with the Guzzle bundle, but in some situations it takes a very long time.
The client application calls a function on the server.
The server application extracts the objects from the database and sends them back to the client.
The client then creates the new object with properties from the server response.
One of the ways to improve your API calls is to use caching. In Symfony there are many different ways to achieve this. I can show you one of them (PhpFileCache example):
In services.yml, create a cache service:
your_app.cache_provider:
class: Doctrine\Common\Cache\PhpFileCache
arguments: ["%kernel.cache_dir%/path_to/your_cache_dir", ".your.cached_file_name.php"]
(Remember, you need the Doctrine Cache library (doctrine/cache) in your app for this to work.)
Then pass your caching service your_app.cache_provider to any service where you need caching:
Again in your services.yml:
some_service_of_yours:
class: AppBundle\Services\YourService
arguments: ['@your_app.cache_provider']
Finally, in your service (where you want to perform API caching):
use Doctrine\Common\Cache\CacheProvider;

class YourService
{
    private $cache;

    public function __construct(CacheProvider $cache)
    {
        $this->cache = $cache;
    }

    public function makeApiRequest()
    {
        $key = 'some_unique_identifier_of_your_cache_record';
        if (!$data = $this->cache->fetch($key)) {
            // $provider stands in for whatever client actually performs the call
            $data = $provider->makeActualApiCallHere('http://some_url');
            // 10800 is the number of seconds to keep the data before it is invalidated;
            // change it to your needs (Doctrine serializes the data for you)
            $this->cache->save($key, $data, 10800);
        }
        return $data; // now you can use the data
    }
}
This is quite a generic example; you should adapt it to your exact needs, but the idea is simple: you can cache data and avoid unnecessary API calls to speed things up. Be careful though, because caching has the drawback of presenting stale (obsolete) data. Some things can (and should) be cached, but some things shouldn't be.
If you control the server
You should put a caching reverse proxy like Varnish in front of your PHP server. The PHP app must send HTTP cache headers to tell the proxy how long it should cache the response. Alternatively, you can use a library like FOSHttpCache to set up a cache invalidation strategy (the PHP server purges the cache in the proxy when the data is updated; it's a more advanced and complex scenario).
The PHP server will not even be called if the requested resource is in the reverse proxy's cache.
You should also use a profiler like Blackfire.io or xhprof to find out why some parts of your PHP code (or your SQL queries) take so much time to execute, then optimize.
If you control the client
You can use this HTTP cache middleware for Guzzle to cache every API result according to the HTTP headers sent by the API.
I have a very frustrating problem with a client's network environment, and I'm hoping someone can lend a hand in helping me figure this out...
They have an app that for now is written entirely inside of VBA for Excel. (No laughing.)
Part of my helping them improve their product and user experience involved converting their UI from VBA form elements to a single WebBrowser element that houses a rich web app which communicates between Excel and their servers. It does this primarily via a socket.io server/connection.
When the user logs in, a connection is made to a room on the socket server.
Initial "owner" called:
socket.on('create', function (roomName, userName) {
socket.username = userName;
socket.join(roomName);
});
Followup "participant" called:
socket.on('adduser', function (userName, roomName) {
    var request = require('request'); // better required once at the top of the file
    socket.username = userName;
    socket.join(roomName);

    var servletparam = roomName;
    request(baseURL + servletparam, function (error, response, body) {
        io.sockets.to(roomName).emit('messages', body);
    });

    servletparam = roomName + '|' + userName;
    request(baseURL + servletparam, function (error, response, body) {
        io.sockets.to(roomName).emit('participantList', body);
    });
});
This all worked beautifully until we got to the point where their VBA code would lock everything up, causing the socket connection to get lost. When the client surfaces from its forced, VBA-induced pause (which lasts anywhere from 20 seconds to 3 minutes), I try to join the room again by passing an onclick to an HTML element that triggers a script to rejoin. Oddly, that doesn't work. However, if I wait a few seconds and click the object by hand, it does rejoin the room. Yes, the click is getting received from the Excel file... we see the message reach the socket server, but it doesn't allow that call to rejoin the room.
Here's what makes this really hard to debug: there's no way to see a console in VBA's WebBrowser object, so I use weinre as a remote debugger, but a) it seems not to output logs and errors to the console unless I trigger them from the console, and b) it loses its connection when socket.io does, and I'm dead in the water.
Now, for completeness, if I remove the .join() calls and the .to() calls, it all works like we'd expect it to minus all messages being written into a big non-private room. So it's an issue with rejoining rooms.
As a long-time user of StackOverflow, I know that a long question with very little code is frowned upon, but there is absolutely nothing special about this setup (which is likely part of the problem). It's just simple emits and broadcasts (from the client). I'm happy to fill anything in based on followup questions.
To anyone that might run across this in the future...
The answer is to manage your room reconnection on the server side of things. If your client can't make reliable connections, or gets disconnected a lot, the trick is to keep track of the rooms on the server side and join them again when the client reconnects.
The other piece that was a stumper was that the chat server and the web UI weren't on the same domain, so I couldn't share cookies to know who was connecting. In their case there was no need to host them in two different places, so I merged them, had Express serve the UI, and then, when the client surfaced after a forced disconnect, I'd look at their user ID cookie, match them to the rooms I was tracking for them on the server, and rejoin them. A sketch of the idea follows.
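This is simplified, not the exact production code; the roomsByUser map and the cookie parsing are assumptions for illustration:
var roomsByUser = {}; // userId -> array of room names, maintained on the server

// hypothetical helper: pull the user id out of the cookie header
function getUserIdFromCookie(cookieHeader) {
    var match = /userId=([^;]+)/.exec(cookieHeader || '');
    return match ? match[1] : null;
}

io.on('connection', function (socket) {
    var userId = getUserIdFromCookie(socket.handshake.headers.cookie);

    // rejoin every room this user was in before the forced disconnect
    (roomsByUser[userId] || []).forEach(function (roomName) {
        socket.join(roomName);
    });

    socket.on('create', function (roomName, userName) {
        socket.username = userName;
        socket.join(roomName);
        roomsByUser[userId] = (roomsByUser[userId] || []).concat(roomName);
    });
});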
I'm trying to get to grips with Server-Sent Events as they fit my requirements perfectly and seem like they should be simple to implement; however, I can't get past a vague error and what looks like the connection repeatedly being closed and re-opened. Everything I have tried is based on this and other tutorials.
The PHP is a single script:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
function sendMsg($id, $msg) {
echo "id: $id" . PHP_EOL;
echo "data: $msg" . PHP_EOL;
echo PHP_EOL;
ob_flush();
flush();
}
$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
?>
and the JavaScript looks like this (run on body load):
function init() {
var source;
if (!!window.EventSource) {
source = new EventSource('events.php');
source.addEventListener('message', function(e) {
document.getElementById('output').innerHTML += e.data + '<br />';
}, false);
source.addEventListener('open', function(e) {
document.getElementById('output').innerHTML += 'connection opened<br />';
}, false);
source.addEventListener('error', function(e) {
document.getElementById('output').innerHTML += 'error<br />';
}, false);
}
else {
alert("Browser doesn't support Server-Sent Events");
}
}
I have searched around a bit but can't find information on
If Apache needs any special configuration to support server-sent events, and
How I can initiate a push from the server with this kind of setup (e.g. can I simply execute the PHP script from the CLI to give a push to the already-connected browser?)
If I run this JS in Chrome (16.0.912.77) it opens the connection, receives the time, then errors (with no useful information in the error object), then reconnects in 3 seconds and goes through the same process. In Firefox (10.0) I get the same behaviour.
EDIT 1: I thought the issue could be related to the server I was using, so I tested on a vanilla XAMPP install and the same error comes up. Should a basic server configuration be able to handle this without modification / extra configuration?
EDIT 2: The following is an example of output from the browser:
connection opened
server time: 01:47:20
error
connection opened
server time: 01:47:23
error
connection opened
server time: 01:47:26
error
Can anyone tell me where this is going wrong? The tutorials I have seen make it look like SSE is very straightforward. Also any answers to my two numbered questions above would be really helpful.
Thanks.
The problem is your PHP.
With the way your PHP script is written, only one message is sent per execution. That's how it works if you access the PHP file directly, and that's how it works if you access the file with an EventSource. So in order to make your PHP script send multiple messages, you need a loop.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
function sendMsg($id, $msg) {
echo "id: $id" . PHP_EOL;
echo "data: $msg" . PHP_EOL;
echo PHP_EOL;
ob_flush();
flush();
}
while(true) {
$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
sleep(1);
}
?>
I have altered your code to include an infinite loop that waits 1 second after every message sent (following an example found here: Using server-sent events).
This type of loop is what I'm currently using, and it eliminated the constant connection drop and reconnect every 3 seconds. However (and I've only tested this in Chrome), the connections are now only kept alive for 30 seconds. I will keep trying to figure out why this is the case and will post a solution when I find one, but until then this should at least get you closer to your goal.
Hope that helps,
Edit:
In order to keep the connection open for ridiculously long times with PHP, you need to raise the max_execution_time (thanks to tomfumb for this). This can be accomplished in at least three ways:
If you can alter your php.ini, change the value of max_execution_time. Note that this will allow all of your scripts to run for the time you specify, though.
In the script you wish to run for a long time, use the function ini_set(key, value), where key is 'max_execution_time' and value is the time in seconds you wish your script to run for.
In the script you wish to run for a long time, use the function set_time_limit(n) where n is the number of seconds that you wish your script to run.
Server-Sent Events are easy only when it comes to the JavaScript part. First of all, a lot of tutorials on SSE on the internet close their connections in the server part, be it PHP or Java examples. This is really astonishing, because what you get then is just a different way of implementing an "AJAX polling" system with a strictly defined payload structure (and some minor features like client retry values set by the server side). You can easily implement that with a few lines of jQuery. No need for SSE then.
According to the SSE spec, I would say that the retry shouldn't be the normal way of implementing a client-side loop. For me, SSE is a one-way streaming method that relies on a server backend which does not close the connection after pushing the first data to the client.
In Java it's useful to use the Servlet 3 async spec in order to free the request thread immediately and do the processing/streaming in a different thread. This works so far, but I still don't like the 30-second connection lifetime for the EventSource request. Even when I am pushing data every 5 seconds, the connection is terminated after 30 seconds (Chrome, Firefox). Of course SSE will reconnect by default after 3 seconds, but I still don't think this is the way it should be.
One problem is that some Java MVC frameworks don't have the ability to keep the connection open after sending data, so you end up coding against the bare Servlet API. After 24 hours of coding prototypes in Java, I am more or less disappointed, because the gain over a traditional jQuery-AJAX loop is not THAT big. And the problem of polyfilling the SSE feature also remains.
The problem is not a server-side issue; this all happens on the client and is part of the spec (I know it sounds weird).
http://dev.w3.org/html5/eventsource/
"When a user agent is to reestablish the connection, the user agent must run the following steps. These steps are run asynchronously, not as part of a task. (The tasks that it queues, of course, are run like normal tasks and not asynchronously.)"
Queue a task to run the following steps:
If the readyState attribute is set to CLOSED, abort the task.
Set the readyState attribute to CONNECTING.
Fire a simple event named error at the EventSource object.
I can't see any need to raise an error here, so I have modified your init function to filter out the error event fired while connecting.
function init() {
var CONNECTING = 0;
var source;
if (!!window.EventSource) {
source = new EventSource('events.php');
source.addEventListener('message', function (e) {
document.getElementById('output').innerHTML += e.data + '<br />';
}, false);
source.addEventListener('open', function (e) {
document.getElementById('output').innerHTML += 'connection opened<br />';
}, false);
source.addEventListener('error', function (e) {
if (source.readyState != CONNECTING) {
document.getElementById('output').innerHTML += 'error<br />';
}
}, false);
}
else {
alert("Browser doesn't support Server-Sent Events");
}
}
There is no actual issue with the code that I can see. The answer selected as correct is therefore incorrect.
This sums up the behavior mentioned in the question (http://www.w3.org/TR/2009/WD-html5-20090212/comms.html):
"If such a resource (with the correct MIME type) completes loading (i.e. the entire HTTP response body is received or the connection itself closes), the user agent should request the event source resource again after a delay equal to the reconnection time of the event source. This doesn't apply for the error cases that are listed below."
The problem lies with the stream. I've successfully kept a single EventStream open before in Perl: just send the appropriate HTTP headers and start sending stream data, and never shut the stream down server-side. The issue is that most HTTP libraries seem to attempt to close the stream after it has been opened. This causes the client to attempt to reconnect to the server, which is fully standard-compliant.
This means that the problem will appear to be solved by running a while loop, for a couple of reasons:
A) The code will continue to send data, as if it were pushing out a large file.
B) The code (the PHP server) never gets the chance to attempt to close the connection.
However, the problem here is obvious: to keep the stream alive, a constant stream of data must be sent. This results in wasteful utilization of resources and negates any benefits the SSE stream is supposed to provide.
I'm not enough of a PHP guru to know, but I'd imagine that something in the PHP server, or later in the code, is prematurely closing the stream; I had to manipulate the stream at the socket level in Perl to keep it open, since HTTP::Response was closing the connection and causing the client browser to attempt to re-open it. In Mojolicious (another Perl web framework), this can be done by opening a Stream object and setting the timeout to zero, so that the stream never times out.
So the proper solution here is not to use a while loop; it is to call the appropriate PHP functions for opening, and keeping open, a PHP stream.
I was able to do it by implementing a custom event loop. It seems that this HTML5 feature is not ready at all and has compatibility issues even with the latest version of Google Chrome. Here it is, working on Firefox (I can't get the message delivered correctly on Chrome):
var source;
function Body_Load(event) {
loopEvent();
}
function loopEvent() {
if (source == undefined) {
source = new EventSource("event/message.php");
}
source.onmessage = function(event) {
_e("out").value = event.data;
loopEvent();
}
}
P.S.: _e is a function that wraps document.getElementById(id).
According to the spec, the 3-second reconnection is by design when the connection is closed. PHP with a loop should theoretically stop this, but the PHP script will be running indefinitely and wasting resources. You should try to avoid using Apache and PHP for SSE because of this issue.
A standard HTTP response closes the connection once the response is sent. You can change this with the header "Connection: keep-alive", which tells the browser that the connection is meant to stay open, although this can cause problems if you're using proxies.
node.js or something similar is a better engine to use for SSE than Apache/PHP, and since it's basically JavaScript, it's pretty easy to get to grips with.
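For a sense of how little is involved, here is a minimal SSE endpoint in plain Node.js; the two-second interval and the payload are arbitrary choices for illustration:
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });
    // push the server time every two seconds and never end the response
    var timer = setInterval(function () {
        res.write('data: server time: ' + new Date().toTimeString() + '\n\n');
    }, 2000);
    req.on('close', function () {
        clearInterval(timer); // stop writing once the client goes away
    });
}).listen(8080);
Because the response is never ended, the browser's EventSource stays on one connection instead of reconnecting every 3 seconds.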
Server-Sent Events: as the name suggests, the data should be traveling from server to client. If the client has to reconnect every three seconds to retrieve data from the server, it is no different from other polling mechanisms. The purpose of SSE is to alert the client as soon as there is new data the client is unaware of. Since the server closes the connection even if the header is keep-alive, there is no way around running the PHP script in an infinite loop, but with a considerable sleep to keep the burden on the server down. So far I don't see any other way out, and it's better than spamming the server every 3 seconds for new data.
I'm trying the same thing, with varying degrees of success.
I had the same problem with Firefox, running the same JS code as mentioned.
Using the Nginx server and some PHP that exited (i.e. no continual loop), I could only get messages back to a request from Firefox once the PHP had exited.
Running the PHP as a script in PHP.exe, all is good on the console: strings are printed when flushed. However, Nginx doesn't send the data until the PHP has completed. Adding extra \r\n\r\n and calling flush() or ob_flush() did not help.
There is no pushing of data, as the Wireshark logs show, just a delayed response packet to the GET.
I read that I need a "push" module for Nginx, which requires a rebuild from source.
So this is definitely an Nginx problem.
Using a socket in 'C' I was able to push data to Firefox as expected; the socket was kept open and no messages were missed. However, this has the disadvantage that I need to serve the page.html and the events/stream from the same socket, or Firefox will not connect due to cross-site URL problems. There are some ways around this in certain situations, but not for an iframe in a menu system. This approach did prove the point that SSE does work with Firefox, and there are pushed packets in the Wireshark log, where option 1 only had request/reply packets.
All this said, I still don't have a solution. I've tried to remove the buffering on the PHP and Nginx sides, but still nothing arrives until the PHP finishes. Different header options, e.g. chunked, didn't help either.
I don't feel like writing a full-blown HTTP server in 'C', but this seems to be the only option that is working for me at the moment.
I'm about to try Apache, but most write-ups suggest that it is worse than Nginx at this job.