This has been driving me crazy.
Somehow, whenever Apache serves the following file:
$().ready(function(){
    hideDataSetSelection();
    var selector = "select#interface";
    alert($(selector + "").val());
});
function hideDataSetSelection(){
    $("div#dataBtn").hide();
}
function showDataSetSelection(){
    $("div#dataBtn").show();
}
abc
xyz
123
456
It actually outputs the following:
function hideDataSetSelection(){
    $("div#dataBtn").hide();
}
function showDataSetSelection(){
    $("div#dataBtn").show();
}
$().ready(function(){
    hideDataSetSelection();
    var selector = "select#interface";
    alert($(selector + " option:selected").val());
al
The output contains content from a 'previous' version of the file.
This seems to occur only for files ending in .js or .json. Depending on the contents of the file, the output gets scrambled. I can only imagine that Apache is caching the file incorrectly or something...
This is within a Virtualbox environment.
Any ideas what the cause of this issue might be?
My issue has been solved by turning off sendfile. The same issue seems to be described here:
Vagrant/VirtualBox/Apache2 Strange Cache Behaviour
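For reference, a minimal sketch of the fix in the Apache configuration (the directive can also be scoped to a VirtualHost or Directory block):
# Disable sendfile so Apache reads files through its own buffers
# instead of handing them straight to the kernel; VirtualBox shared
# folders are known to serve stale file contents otherwise.
EnableSendfile Off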
I'm generating an HTML table with data from a GeoJSON file using Leaflet. It works fine, but only if I do not delete the "alert" from the following code; otherwise the data are displayed without the table. How can I solve this?
$.ajax({url:"wind.geojson"}).done(function(data) {
var data = JSON.parse(data);
L.geoJson(data,
{pointToLayer: MarkerStyle1});
});
alert();
function MarkerStyle1 (feature,latlng) {
...
document.writeln ("<td width='40'><div align='center'>" ,feature.properties.title, "</div></td>\n");
...
};
I do not see any way to upload a file here; it is 200 KB. It can be found here: 1 The file works well when I show the objects on a map.
2 shows an older version of the site, where the table is generated with PHP (done by a friend; I do not use PHP).
Perhaps the problem is that I have no "map" and no "map.addLayer()" in this code?
I now realize that the problem is caused by AJAX working asynchronously. I changed the code to
$.ajax({url:"wind.geojson", async: false})
Now it works without the "alert()" line!
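For what it's worth, a sketch that avoids async: false (which blocks the browser and is deprecated in jQuery) would be to keep the request asynchronous and build the table inside the done callback, so that code only runs once the data has arrived:
$.ajax({url: "wind.geojson"}).done(function(data) {
    var parsed = JSON.parse(data);
    L.geoJson(parsed, {pointToLayer: MarkerStyle1});
    // build the table here, after the response has arrived, instead of
    // relying on alert() or async: false to delay execution
});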
I am trying to serve a PDF after adding a watermark to it. I am using the image-watermark package for this:
let option ={'text' : 'hello','color' : 'rgb(154, 50, 46)'};
res.setHeader('Content-Type', 'application/pdf');
res.setHeader('Content-Disposition', 'attachment; filename=test.pdf');
fs.createReadStream(watermark.embedWatermark('/pdf/js_tut.pdf', option)).pipe(res);
I am confused about what I did wrong. I am getting a 500 error. Did I get the path wrong? I have a public folder, inside which I have a pdf folder, inside which I have js_tut.pdf.
Looking at the source code for the function embedWatermark, it doesn't actually return anything. You can see that here.
There is not a single return statement anywhere there.
So essentially what you're doing is this:
fs.createReadStream(undefined).pipe(res)
which is likely why you're getting the 500.
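A sketch of one way around it, with loud caveats: it assumes embedWatermark writes the watermarked PDF somewhere you can locate, and that the operation finishes before you stream (check the package source for both). Also note that '/pdf/js_tut.pdf' is an absolute filesystem path; since your file lives under public/pdf, you likely want a path resolved relative to your project instead:
const fs = require('fs');
const path = require('path');
const watermark = require('image-watermark');

// inside your existing route handler (req, res):
// resolve the real location of the source PDF; '/pdf/js_tut.pdf' points
// at the filesystem root, not at the public folder
const pdfPath = path.join(__dirname, 'public', 'pdf', 'js_tut.pdf');
const option = {'text': 'hello', 'color': 'rgb(154, 50, 46)'};

// embedWatermark returns undefined, so the stream must be created from a
// path we already know. ASSUMPTION: the package writes its output to a
// path we can determine, and completes before we stream -- verify both
// against the package source before relying on this.
watermark.embedWatermark(pdfPath, option);

res.setHeader('Content-Type', 'application/pdf');
res.setHeader('Content-Disposition', 'attachment; filename=test.pdf');
fs.createReadStream(pdfPath).pipe(res);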
I am new to S3 and need to use it for image storage. I found half a dozen versions of an S3 wrapper for CF, but it appears that the only one set up for v4 is one modified by Leigh:
https://gist.github.com/Leigh-/26993ed79c956c9309a9dfe40f1fce29
I dropped it in the com directory and created a "test" page that contains the following code:
s3 = createObject('component','com.S3Wrapper').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
but got the following error:
So I changed line 37 from
variables.Sv4Util = createObject('component', 'Sv4').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
to
variables.Sv4Util = createObject('component', 'Sv4Util').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
Now I am getting:
I feel like going through Leigh's code and changing things is a bad idea, since I have lurked here for years and know Leigh's code is solid.
Does anyone know of any examples of how to use this? If not, what am I doing wrong? If it makes a difference, I am using Lucee 5 and not Adobe's CF engine.
UPDATE:
I followed Leigh's directions and the error is now gone. I added some more code to my test page, which now looks like this:
<cfscript>
    s3 = createObject('component','com.S3v4').init(application.s3.AccessKeyId, application.s3.SecretAccessKey);
    bucket = "imgbkt.domain.com";
    obj = "fake.ping";
    region = "s3-us-west-1";
    test = s3.getObject(bucket, obj, region);
    writeDump(test);
    test2 = s3.getObjectLink(bucket, obj, region);
    writeDump(test2);
    writeDump(s3);
</cfscript>
Regardless of what I put in for bucket, obj, or region, I get:
Just in case, I did go to AWS and get new keys:
Leigh, if you are still around, or anyone who has used one of the S3 wrappers: any suggestions or guidance?
UPDATE #2:
Even after Alex's help I am not able to get this to work. The link I receive from getObjectLink is not valid, and getObject never downloads an object. I thought I would try the putObject method:
test3 = s3.putObject(bucketName=bucket,regionName=region,keyName="favicon.ico");
writeDump(test3);
to see if there was any additional information. I received this:
I did find this article https://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html but it is pretty old, and since S3 specifically suggests using dots in bucket names, I don't think it is relevant any longer. There is obviously something I am doing wrong, but I have spent hours trying to resolve this and I can't figure out what it might be.
I will give you a rundown of what the code does:
getObjectLink returns an HTTP URL for the file fake.ping, found by looking in the bucket imgbkt.domain.com of region s3-us-west-1. This link is temporary and expires after 60 seconds by default.
getObject invokes getObjectLink and immediately requests the URL using HTTP GET. The response is then saved to the directory of S3v4.cfc, with the filename fake.ping by default. Finally, the function returns the full path of the downloaded file: E:\wwwDevRoot\taa\fake.ping
To save the file in a different location, you would invoke:
downloadPath = 'E:\';
test = s3.getObject(bucket,obj,region,downloadPath);
writeDump(test);
The HTTP request is synchronous, meaning the file will have been downloaded completely when the function returns the file path.
If you want to access the actual content of the file, you can do this:
test = s3.getObject(bucket,obj,region);
contentAsString = fileRead(test); // returns the file content as string
// or
contentAsBinary = fileReadBinary(test); // returns the content as binary (byte array)
writeDump(contentAsString);
writeDump(contentAsBinary);
(You might want to stream the content if the file is large, since fileRead/fileReadBinary read the whole file into memory. Use fileOpen to stream the content; see the sketch below.)
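For example, a minimal sketch of chunked reading (the 8 KB buffer size is an arbitrary choice, not something the wrapper prescribes):
test = s3.getObject(bucket, obj, region);
fileObj = fileOpen(test, "readBinary");
while (!fileIsEOF(fileObj)) {
    chunk = fileRead(fileObj, 8192); // read up to 8 KB per iteration
    // process the chunk here instead of holding the whole file in memory
}
fileClose(fileObj);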
Does that help you?
I am trying to upload a file using YUI.
Below is my code, which works fine in Firefox and Chrome but does not work in IE 8.
this.portlet_view_object.delegate('change', function(e) {
    ......
    var fileField = Y.one('#newcase_file_' + context.imageCount);
    var file = fileField._node.files[0];
    if (!context.maxFileSize.call(context, file)) {
        return;
    }
    ....
Here, maxFileSize is the method to which I pass the file object in order to perform file-related operations (e.g. file size, file name).
In Firefox and Chrome, I get the file object via fileField._node.files[0];
but the same thing is not working in IE 8, where I get the error below:
_node.files.0' is null or not an object
Any suggestions are welcome.
Thanks.
This issue isn't related to YUI, just to IE.
.files holds multiple selected files, but IE8 does not support this property and can only select one file; therefore the property isn't recognized.
You can use this workaround:
var file;
if ('files' in fileField._node) {
    file = fileField._node.files[0];
} else {
    file = fileField._node.value;
}
Or skip the whole .files completely if you want to, although I personally wouldn't recommend it.
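One thing to keep in mind with the workaround: maxFileSize now receives a File object in modern browsers but a plain path string in IE8, so it has to handle both shapes. A rough sketch (MAX_BYTES is a hypothetical limit, not something from the original code):
function maxFileSize(file) {
    if (typeof file === 'string') {
        // IE8 path: only the file name/path string is available,
        // so the size cannot be checked on the client side
        return true; // hypothetical choice: defer size validation to the server
    }
    return file.size <= MAX_BYTES;
}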
I'm testing RApache as a back-end for SSE (Server-Sent Events) and similar techniques (long poll, comet, etc.). I seem to be stuck on how to flush my output. Is it possible?
Here is my test R script:
setContentType("text/plain")
repeat {
    cat(format(Sys.time()), "\n")
    #sendBin(paste(format(Sys.time()), "\n"))
    flush(stdout())
    Sys.sleep(1)
}
My Rapache.conf entry is:
<Location /rtest/sse>
    Options -MultiViews
    SetHandler r-handler
    RFileHandler /var/www/local/rtest/sse.r
</Location>
And I test it using either wget or curl:
wget -O - http://localhost/rtest/sse
curl http://localhost/rtest/sse
Both just sit there, meaning nothing is being sent.
Using sendBin() made no change, and neither did using flush().
If I change repeat to for(i in 1:5) then it sits there for 5 seconds and then shows 5 timestamps (spaced one second apart). So, I believe everything else is working fine and this is purely a buffering issue.
UPDATE: Looking at this with fresh eyes after 5 months, I think I could have described the problem more clearly: the problem is that RApache appears to be buffering all the output, and not sending anything until the R script exits. To be useful for streaming it has to send data out of Apache and on to the client each time flush() is called, i.e. while the R script is still running.
So, my question is: is there a way to get RApache to behave like that?
UPDATE 2: I tried adding flush.console() before or after the flush(stdout()), but it made no difference. I also tried setStatus(status=200L) at the top, and SERVER$no_cache=T;SERVER$no_local_copy=T; at the top of the script. Again, no difference. (Yes, none of those should have helped, but it never hurts to try!)
Here is a link to how PHP implements flush when it is running as an Apache module:
http://git.php.net/?p=php-src.git;a=blob;f=sapi/apache2handler/sapi_apache2.c#l290
I think the key point is that there is a call to ap_rflush(r). I'm guessing that RApache is not making the ap_rflush() call.
You are passing the wrong MIME type. Try changing it to:
setContentType("text/event-stream")
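Beyond the MIME type, the SSE protocol also expects each event to be framed as one or more data: lines terminated by a blank line. A minimal sketch of the original test script with both changes applied (still subject to the buffering issue discussed below):
setContentType("text/event-stream")
repeat {
    # each SSE event is one or more "data:" lines followed by a blank line
    cat("data: ", format(Sys.time()), "\n\n", sep = "")
    flush(stdout())
    Sys.sleep(1)
}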
EDIT 1:
This is the (still unsuccessful) attempt I mentioned in the comment below to implement SSE in Rook:
<%
res$header('Content-Type', 'text/event-stream')
res$header('Cache-Control', 'no-cache')
res$header('Connection', 'keep-alive')
A <- 1
sendMessage <- function(){
    while(A <= 4){
        cat("id: ", Sys.time(), "\n", "data: hello\n\n", sep="")
        A <- A + 1
        flush(stdout())
        Sys.sleep(1)
    }
}
-%>
<% sendMessage() %>
The while loop condition was supposed to be always TRUE, but I'm having your same problem, so I had to use a finite loop...
The good news is I DO have data reaching the browser. I can tell by looking, in developer tools, at the Content-Length in the Response Headers section: it says 114 for the above code, and if you change, say, "hello" to "hello!", it will say 118.
The JS code is as follows (you'll need jQuery as well):
$(document).ready(function(){
    $("button").click(function(){
        var source = new EventSource("../R/sse.Rhtml");
        source.onopen = function(event){
            console.log("readyState: " + source.readyState);
        };
        source.onmessage = function(event){
            $("#div").append(event.data);
        };
        source.onerror = function(event){
            console.log(event);
        };
    });
});
So, in essence:
1) The connection is open (readyState 1).
2) Buffering is still there.
3) Data (after buffering) reaches the browser, but an error happens in receiving it properly.
EDIT 2:
It's interesting to note that when brew()ing the above .Rhtml file, the output is not buffered. There must be some configuration in the web server (both the internal R one and Apache) that buffers the data flow.
As a side note, flush is not even needed; cat's output goes to stdout() by default. So the options are:
Web server configuration
An R equivalent of PHP's ob_flush(), which is used in every PHP implementation I've seen; this is an example