Printing a JFreeChart in IE - internet-explorer-11

When I print a JFreeChart in Internet Explorer, I get a blank page.
My IE version is 11.0.9600.18697CO, update 11.0.43 (KB4021558).
I didn't have this problem with older versions of IE.
I don't have this problem in Chrome or Firefox.
The chart is generated on the server, displayed on the client by a servlet, and then deleted on the server (it is a one-time chart).
In the IE debug console, when I execute window.print(), requests are sent to the server. I think this is what causes the problem (although the HTTP code is 200).
If I don't delete the chart on the server, I have no problem.
Has anyone hit the same problem? Is there a solution?
Thanks a lot,
best regards

In a servlet context using ChartUtilities, instead of using one of the save… methods, try using the corresponding write… method.
public void doGet(HttpServletRequest request, HttpServletResponse response) throws … {
    OutputStream out = response.getOutputStream();
    …
    //ChartUtilities.saveChartAsPNG(file, chart, …);
    ChartUtilities.writeChartAsPNG(out, chart, …);
}
Could you explain why? I am using ServletUtilities.saveChartAsPNG()
I'm guessing that there's a race condition that allows the file to be deleted prematurely. If you need the ChartRenderingInfo, the corresponding ChartUtilities method would likely be writeChartAsPNG(). If you can't switch, use a DelayQueue<File> to defer deleting the temporary file.
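A rough sketch of that last idea, as a starting point: File itself doesn't implement Delayed, so a wrapper is needed. The class names and the 60-second grace period below are my own choices, not part of JFreeChart.

import java.io.File;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Wraps a File so it can sit in a DelayQueue until a grace period expires.
class ExpiringFile implements Delayed {
    private final File file;
    private final long deadlineMillis;

    ExpiringFile(File file, long graceMillis) {
        this.file = file;
        this.deadlineMillis = System.currentTimeMillis() + graceMillis;
    }

    File file() { return file; }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(deadlineMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

class ChartFileReaper {
    private static final DelayQueue<ExpiringFile> PENDING = new DelayQueue<>();

    static {
        Thread reaper = new Thread(() -> {
            while (true) {
                try {
                    PENDING.take().file().delete(); // blocks until a file's delay has elapsed
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        reaper.setDaemon(true);
        reaper.start();
    }

    // Call this after serving the chart instead of deleting the file immediately,
    // leaving the browser a minute to re-request the image (e.g. for printing).
    static void deleteLater(File chartFile) {
        PENDING.put(new ExpiringFile(chartFile, 60_000));
    }
}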

Related

How to resolve JSF1095: The response was already committed by the time (...), thrown on every request

I've read a lot about how Flash is bugged in Mojarra, but I hadn't had any problems with it until now. I have a Java EE project (1.8) where I have a @ManagedBean (javax.faces.bean) with @ViewScoped (javax.faces.bean) and a function like
public void foo() throws IOException {
    FacesContext.getCurrentInstance().getExternalContext().getFlash().setKeepMessages(keepMessages);
    FacesContext.getCurrentInstance().addMessage("msg", new FacesMessage(FacesMessage.SEVERITY_INFO, message, detail));
    redirectController.redirect("index.html");
}
The redirect function in redirectController is just
public void redirect(String url) throws IOException {
    FacesContext.getCurrentInstance().getExternalContext().redirect(url);
}
I'm calling this function from an XHTML file with
<p:commandLink action="#{bean.foo()}"
               update="@form">
    buttontext
</p:commandLink>
When I press the button, the redirect works and the message appears, but with this and every following request I get this message in server.log:
Warning: JSF1095: The response was already committed by the time we tried to set the outgoing cookie for the flash. Any values stored to the flash will not be available on the next request.
I'm using Glassfish 4.1.1 which runs Mojarra 2.2.12 and Primefaces 5.2.
I'm worried that this could cause side effects other than spamming the log. What am I doing wrong?
PS: Since this is my first post on Stack Overflow, I would like to say thank you for all the good answers you gave others in the past, which saved me a lot of time and trouble. :-)

AzureReader2 not working with querystring

I have images in private block blobs in Azure.
I am using Azure Reader 2 and can access the image like this http://localhost:55328/azure/00001/IMG_0001.JPG - it works fine and redirects to the blob with a Shared Access Signature.
However, if I try to resize the image, e.g. IMG_0001.JPG?width=100&height=100, I just get a 404.
Stepping through the code, I notice this line
if (e.VirtualPath.StartsWith(prefix, StringComparison.OrdinalIgnoreCase) && e.QueryString.Count == 0)
{
    ....
}
So, if there's a QueryString, no processing happens.
Debug output here:
https://gist.github.com/anonymous/28fd112eec194181baae
Thanks in advance
Your debugging misled you. It's true that redirection only happens when there is no querystring. When there are parameters, the blob needs to be modified, which means we must proxy it. A 302 redirect in that scenario is impossible.
AzureReader registers an IVirtualImageProvider, which ImageResizer automatically uses when handling all the proxying, processing, and caching.
The default behavior is to download, modify, and re-serve the data. The 302 redirect is just an optimization for throughput on unmodified files.
Notes:
sharedAccessExpiryTime is ignored; there is no setting by that name.
If you are going to reference code, it's best to link to the line in the file on GitHub; otherwise we can't easily find the context. Press y on any GitHub page to get a permalink, then click a line number (or a range).

Binary data corrupted when hosting ServiceStack in Mono + FastCGI

I have a ServiceStack service with a method to handle a GET request. This method returns binary data.
public object Get(DownloadFile request) {
    return new HttpResult(new FileInfo("some file"), "application/octet-stream", asAttachment: true);
}
When the host is Windows it works fine, but when I run it on Linux with Mono+FastCGI, the data I download is not the same.
I analyzed the returned bytes for a few files and concluded that there is a pattern. The data is getting wrapped in this way:
original data size + \r\n + original data + \r\n\r\n0\r\n\r\n
Why is this happening, and how can I fix it?
Edit:
Turns out this is due to chunked transfer encoding, which is part of HTTP 1.1.
Knocte's answer pointed me in the right direction and I was able to work around the problem by forcing my request to use HTTP 1.0:
var req = (HttpWebRequest)WebRequest.Create(url);
req.ProtocolVersion = new Version("1.0"); // HTTP 1.0 has no chunked transfer encoding
I didn't need to try the patch suggested by knocte, but it looks like the proper way to fix the problem rather than avoiding it like I did.
I think you're being affected by this bug.
If the patch attached to it works for you, then you could clean it up and propose it as a pull request to Mono on GitHub.

WinRT HttpClient blocks splashscreen

I make asynchronous requests in the LoadState method of a certain Page. I use HttpClient to make the request, and I expect the splashscreen to go away while I await the result.
If I am not connected to any network, the splashscreen goes away immediately and I get a blank page, because the request obviously didn't happen.
But if I am connected to a network but have connectivity issues (for example, I set a wrong IP address), it seems to start a request and just block.
My expectation was that the HttpClient would realize that it cannot send a request and either throw an exception or just return something.
I managed to solve the blocking by setting a timeout of around 800 milliseconds, but now it doesn't work properly when the Internet connection is OK. Is this the best solution? Should I be setting a timeout at all? What timeout would let me differentiate between an indefinitely blocking call and a proper call that's just on a slower network?
I could perhaps check for Internet connectivity before each request, but that sounds like an unpredictable solution...
EDIT: Now, it's really interesting. I have tried again, and it blocks at this point:
var rd = await httpClient.SendAsync(requestMsg);
If I use Task.Run() as suggested in the comments and get a new Thread, then it's always fine.
BUT it's also fine without Task.Run() when there is no Internet access but the network access is not "Limited": Windows reports the IPv4 connectivity as "Internet access" even though I cannot open a single website in a browser and no data is returned from the web service; the call just throws a System.Net.Http.HttpRequestException, which is what I was expecting in the first place. It only blocks when the network connection is "Limited".
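For illustration, here is a sketch of how those two pieces could be combined: wrapping SendAsync in Task.Run and cancelling explicitly after a timeout. The helper name and the 10-second value are arbitrary choices, not an established pattern from the docs.

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class HttpHelper // hypothetical helper class
{
    // Sends on a thread-pool thread and gives up after the given timeout,
    // so a "Limited" network surfaces as a handled failure instead of a hang.
    public static async Task<HttpResponseMessage> SendWithTimeoutAsync(
        HttpClient httpClient, HttpRequestMessage requestMsg, TimeSpan timeout)
    {
        using (var cts = new CancellationTokenSource(timeout))
        {
            try
            {
                return await Task.Run(() => httpClient.SendAsync(requestMsg, cts.Token));
            }
            catch (TaskCanceledException)
            {
                return null; // timed out: treat like "no connectivity"
            }
            catch (HttpRequestException)
            {
                return null; // DNS/socket failure: also "no connectivity"
            }
        }
    }
}

// Usage: var rd = await HttpHelper.SendWithTimeoutAsync(httpClient, requestMsg, TimeSpan.FromSeconds(10));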
What if instead of setting a timeout, you checked the connection status using
public static bool IsConnected
{
    get
    {
        return NetworkInformation.GetInternetConnectionProfile() != null;
    }
}
This way if IsConnected, then you make the call; otherwise, ignore it.
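For example (assuming the property above lives in a hypothetical NetworkHelper class):

// Only attempt the request when a connection profile with internet access exists.
if (NetworkHelper.IsConnected)
{
    var rd = await httpClient.SendAsync(requestMsg);
    // ... handle the response ...
}
else
{
    // Offline: skip the call and show a message instead.
}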
Are you running this in App.xaml.cs? I've found that requests made in that class can be fickle, and it may be best to move the functionality to an extended splash screen to ensure the application makes it all the way through the activation process.
http://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh868191(v=win.10).aspx

HTML5 Server-Sent Events prototyping - ambiguous error and repeated polling?

I'm trying to get to grips with Server-Sent Events, as they fit my requirements perfectly and seem like they should be simple to implement. However, I can't get past a vague error and what looks like the connection repeatedly being closed and re-opened. Everything I have tried is based on this and other tutorials.
The PHP is a single script:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
?>
and the JavaScript looks like this (run on body load):
function init() {
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function(e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function(e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function(e) {
            document.getElementById('output').innerHTML += 'error<br />';
        }, false);
    }
    else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
I have searched around a bit but can't find information on:

1. If Apache needs any special configuration to support Server-Sent Events, and
2. How I can initiate a push from the server with this kind of setup (e.g. can I simply execute the PHP script from the CLI to give a push to the already-connected browser?)
If I run this JS in Chrome (16.0.912.77), it opens the connection, receives the time, then errors (with no useful information in the error object), then reconnects after 3 seconds and goes through the same process. In Firefox (10.0) I get the same behaviour.
EDIT 1: I thought the issue could be related to the server I was using, so I tested on a vanilla XAMPP install and the same error comes up. Should a basic server configuration be able to handle this without modification / extra configuration?
EDIT 2: The following is an example of output from the browser:
connection opened
server time: 01:47:20
error
connection opened
server time: 01:47:23
error
connection opened
server time: 01:47:26
error
Can anyone tell me where this is going wrong? The tutorials I have seen make it look like SSE is very straightforward. Also any answers to my two numbered questions above would be really helpful.
Thanks.
The problem is your PHP.
The way your PHP script is written, only one message is sent per execution. That's how it works if you access the PHP file directly, and that's how it works if you access the file with an EventSource. So, in order to make your PHP script send multiple messages, you need a loop.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

while (true) {
    $serverTime = time();
    sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
    sleep(1);
}
?>
I have altered your code to include an infinite loop that waits 1 second after every message sent (following an example found here: Using server-sent events).
This type of loop is what I'm currently using, and it eliminated the constant connection drop and reconnect every 3 seconds. However (and I've only tested this in Chrome), connections are now only kept alive for 30 seconds. I will keep trying to figure out why this is the case and will post a solution when I find one, but until then this should at least get you closer to your goal.
Hope that helps,
Edit:
In order to keep the connection open for very long times with PHP, you need to raise max_execution_time (thanks to tomfumb for this). This can be accomplished in at least three ways (see the sketch after this list):

1. If you can alter your php.ini, change the value of max_execution_time. Note that this will allow all of your scripts to run for the time you specify.
2. In the script you wish to run for a long time, use the function ini_set(key, value), where key is 'max_execution_time' and value is the time in seconds you wish your script to run for.
3. In the script you wish to run for a long time, use the function set_time_limit(n), where n is the number of seconds that you wish your script to run.
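For example, a minimal sketch of option 3 applied to the looping script above (using 0 to lift the limit entirely is my own choice here, not something from the original answer):

<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

set_time_limit(0); // lift max_execution_time for this script only

while (true) {
    echo "id: " . time() . PHP_EOL;
    echo "data: server time: " . date("h:i:s", time()) . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
    sleep(1);
}
?>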
Server-Sent Events are easy only when it comes to the JavaScript part. First of all, a lot of SSE tutorials on the internet close their connections on the server side, be it PHP or Java examples. This is really astonishing, because what you get then is just a different way of implementing an "AJAX polling" system with a strictly defined payload structure (and some minor features like client retry values set by the server side). You could easily implement that with a few lines of jQuery; there would be no need for SSE then.
According to the SSE spec, I would say that the retry shouldn't be the normal way of implementing a client-side loop. To me, SSE is a one-way streaming method which relies on a server backend that does not close the connection after pushing the first data to the client.
In Java it's useful to use the Servlet 3 async spec in order to free the request thread immediately and do the processing/streaming on a different thread. This works so far, but I still don't like the 30-second connection lifetime for the EventSource request. Even if I push data every 5 seconds, the connection is terminated after 30 seconds (Chrome, Firefox). Of course SSE reconnects after 3 seconds by default, but I still don't think this is the way it should be.
One problem is that some Java MVC frameworks don't have the ability to keep the connection open after sending data, so you end up coding against the bare Servlet API. After 24 hours of coding prototypes in Java, I am more or less disappointed, because the gain over a traditional jQuery AJAX loop is not THAT big. And the problem of polyfilling the SSE feature also remains.
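For reference, a bare-bones sketch of that Servlet 3 async approach (the servlet name, URL pattern, and 5-second period are illustrative choices, not from the original answer): the request thread is released immediately and a scheduler pushes events from another thread.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/events", asyncSupported = true)
public class SseServlet extends HttpServlet {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        resp.setHeader("Cache-Control", "no-cache");

        AsyncContext async = req.startAsync();
        async.setTimeout(0); // don't let the container time the request out

        scheduler.scheduleAtFixedRate(() -> {
            try {
                PrintWriter out = async.getResponse().getWriter();
                out.write("data: server time: " + System.currentTimeMillis() + "\n\n");
                out.flush();
            } catch (Exception e) {
                // Client disconnected: finish the request and let the thrown
                // exception cancel this scheduled task.
                async.complete();
                throw new RuntimeException(e);
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}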
The problem is not a server-side issue; this all happens on the client and is part of the spec (I know it sounds weird).
http://dev.w3.org/html5/eventsource/
"When a user agent is to reestablish the connection, the user agent must run the following steps. These steps are run asynchronously, not as part of a task. (The tasks that it queues, of course, are run like normal tasks and not asynchronously.)"
Queue a task to run the following steps:

1. If the readyState attribute is set to CLOSED, abort the task.
2. Set the readyState attribute to CONNECTING.
3. Fire a simple event named error at the EventSource object.
I can't see any need to have an error here, so I have modified your init function to filter out the error event fired while connecting.
function init() {
    var CONNECTING = 0;
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function (e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function (e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function (e) {
            // Ignore the error event that the spec fires while reconnecting.
            if (source.readyState != CONNECTING) {
                document.getElementById('output').innerHTML += 'error<br />';
            }
        }, false);
    }
    else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
There is no actual issue with the code that I can see. The answer selected as correct is, then, incorrect.
This sums up the behavior mentioned in the question (http://www.w3.org/TR/2009/WD-html5-20090212/comms.html):
"If such a resource (with the correct MIME type) completes loading (i.e. the entire HTTP response body is received or the connection itself closes), the user agent should request the event source resource again after a delay equal to the reconnection time of the event source. This doesn't apply for the error cases that are listed below."
The problem lies with the stream. I've successfully kept a single event stream open before in Perl: just send the appropriate HTTP headers and start sending stream data, and never shut down the stream server-side. The issue is that most HTTP libraries seem to attempt to close the stream after it has been opened. This causes the client to attempt to reconnect to the server, which is fully standards-compliant.
This means that it will appear that the problem is solved by running a while loop, for a couple of reasons:
A) The code will continue to send data, as if it were pushing out a large file
B) The code (php server) will never have the chance to attempt to close the connection
However, the problem here is obvious: to keep the stream alive, a constant stream of data must be sent. This results in wasteful utilization of resources and negates any benefits the SSE stream is supposed to provide.
I'm not enough of a PHP guru to know, but I'd imagine that something in the PHP server, or later in the code, is prematurely closing the stream; I had to manipulate the stream at the socket level in Perl to keep it open, since HTTP::Response was closing the connection and causing the client browser to attempt to reopen it. In Mojolicious (another Perl web framework), this can be done by opening a Stream object and setting the timeout to zero, so that the stream never times out.
So the proper solution here is not to use a while loop; it is to call the appropriate PHP functions for opening, and keeping open, a PHP stream.
I was able to do it by implementing a custom event loop. It seems that this HTML5 feature is not ready at all and has compatibility issues even with the latest version of Google Chrome. Here it is, working in Firefox (I can't get the message sent correctly in Chrome):
var source;

function Body_Load(event) {
    loopEvent();
}

function loopEvent() {
    if (source == undefined) {
        source = new EventSource("event/message.php");
    }
    source.onmessage = function(event) {
        _e("out").value = event.data;
        loopEvent();
    };
}
P.S.: _e(id) is a shorthand for document.getElementById(id).
According to the spec, the 3-second reconnection is by design when the connection is closed. PHP with a loop should theoretically stop this, but then the PHP script runs indefinitely and wastes resources. You should try to avoid using Apache and PHP for SSE because of this issue.
A standard HTTP response closes the connection once the response is sent. You can change this with the "Connection: keep-alive" header, which tells the browser that the connection is meant to stay open, although this can cause problems if you're using proxies.
Node.js or something similar is a better engine for SSE than Apache/PHP, and since it's basically JavaScript, it's pretty easy to get to grips with.
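For illustration, a minimal sketch of an SSE endpoint in Node.js (the port and payload are arbitrary; the key point is that res.end() is never called):

var http = require('http');

http.createServer(function (req, res) {
    // SSE headers: keep the connection open and uncached.
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });

    // Push an event every second; the response is never ended.
    var timer = setInterval(function () {
        res.write('data: server time: ' + new Date().toTimeString() + '\n\n');
    }, 1000);

    // Stop pushing when the client disconnects.
    req.on('close', function () {
        clearInterval(timer);
    });
}).listen(8000);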
Server-Sent Events, as the name suggests, mean the data should travel from server to client. If the client has to reconnect every three seconds to retrieve data from the server, then it is no different from other polling mechanisms. The purpose of SSE is to alert the client as soon as there is new data the client is unaware of. Since the server closes the connection even if the header is keep-alive, there is no other way than to run the PHP script in an infinite loop, but with a considerable sleep to avoid burdening the server. So far I don't see any other way out, and it's better than spamming the server every 3 seconds for new data.
I'm trying the same thing, with varying degrees of success.
I had the same problem with Firefox, running the same JS code as mentioned.
Using the Nginx server and some PHP that exited (i.e. no continual loop), I could get messages back to a request from Firefox only once the PHP had exited.
Running the PHP as a script under php.exe, all is good on the console: strings are printed when flushed. However, Nginx doesn't send the data until the PHP has completed. Adding extra \r\n\r\n did not help, and neither did flush() or ob_flush().
There is no pushing of data, as shown in Wireshark logs, just a delayed response packet to the GET.
I read that I need a "push" module for Nginx, which requires a rebuild from source.
So this is definitely an Nginx problem.
Using a socket in C, I was able to push data to Firefox as expected; the socket was kept open and no messages were missed. However, this has the disadvantage that I need to serve the page.html and the events stream from the same socket, or Firefox will not connect due to cross-site URL problems. There are ways around this in certain situations, but not for an iframe in a menu system. This approach did prove the point that SSE does work with Firefox, and there are pushed packets in the Wireshark log, whereas option 1 only had request/reply packets.
All this said, I still don't have a solution. I've tried removing the buffering in PHP and Nginx, but still nothing arrives until the PHP finishes. I tried different header options; chunked encoding didn't help either.
I don't feel like writing a full-blown HTTP server in C, but this seems to be the only option working for me at the moment.
I'm about to try Apache, but most write-ups suggest that it is worse than Nginx at this job.