I have made an application using Apache Camel that integrates well with AWS S3. Here is the code:
Predicate newFilePred = header(NEW_FILE_RECEIVED_HEADER).isEqualTo(SUCCESS);
from(incomingEndpoint)
    .process((exchange) -> {
        logger.info("Checking S3 bucket.");
        Date newFileUploadDate = (Date) exchange.getIn().getHeaders().get("CamelAwsS3LastModified");
        if (fileIsNew(newFileUploadDate)) {
            exchange.getIn().setHeader(NEW_FILE_RECEIVED_HEADER, SUCCESS);
        } else {
            exchange.getIn().setHeader(NEW_FILE_RECEIVED_HEADER, FAILURE);
        }
    })
    .choice()
        .when(newFilePred)
            .to(outgoingEndpoint)
        .endChoice()
    .end();
The application works well, but it throws a really annoying warning seen below:
Camel Thread #1 - aws-s3://mybucket] | c.a.s.s.i.S3AbortableInputStream Not all bytes were read from the
S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior.
Request only the bytes you need via a ranged GET or drain the input stream after use. request_id=
I know the problem is that I don't direct the output of the file anywhere when the header NEW_FILE_RECEIVED_HEADER is set to FAILURE. This is by design, because I do not want to download the file in that case. Is there any way to tell Camel to abort the connection properly after I have identified the file as "unwanted"? I could create another Camel route directly to a "trash" directory, but those would be useless cycles.
Thank you for your help!
For lack of a better solution, I managed to mute these messages by implementing a garbage/failure route that finishes the HTTP GET/input stream from S3.
If someone comes up with a better method or a different approach entirely, please feel free to "beat" this answer. The underlying rule that was discovered is that a finished route (A -> B) is required, or a warning will be thrown. The warnings are harmless, unless, like me, you have to look at the polluted log file.
Finished route conditionals look like this:
.choice()
    .when(newFilePred)
        .to(successfulOutgoingEndpoint)
    .otherwise()
        .to(failureOutgoingEndpoint);
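For illustration, here is a minimal sketch of what a finished route might look like end to end. The endpoint URIs and header name are assumptions, not from the original post; the point is that the otherwise() branch gives unwanted files somewhere to go, so Camel consumes the S3ObjectInputStream instead of aborting it:

import org.apache.camel.Predicate;
import org.apache.camel.builder.RouteBuilder;

public class S3FileRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Mirrors the predicate used above; the header name/value are illustrative
        Predicate newFilePred = header("NEW_FILE_RECEIVED").isEqualTo("SUCCESS");

        from("aws-s3://mybucket?amazonS3Client=#s3Client") // hypothetical URI
            .choice()
                .when(newFilePred)
                    .to("file://data/incoming")            // hypothetical target
                .otherwise()
                    // Even a log endpoint finishes the route and drains the
                    // S3 input stream, which is what silences the warning
                    .to("log:unwantedS3Files?level=DEBUG");
    }
}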
My ASP.NET Core 3.0 application, in a particular configuration/deployment, logs:
[INF] CORS policy execution failed.
[INF] Request origin https://bla.com does not have permission to access the resource.
How can I log, at that point, the resource that was requested, for debugging?
(Note: this question is not about the actual issue or how to solve it.)
(Note: I am not after globally increasing the log level, etc.)
Well, that middleware is locked down pretty badly, and I haven't found any sensible way to hook into it.
If you want to replace the CorsMiddleware, you can't just create a decorator that calls Invoke() on the middleware, because you'll have no idea what happened.
Another solution might be to replace the ICorsService registration (by default, CorsService) in the service collection with a decorator, and then check the CorsResult after delegating the call to EvaluatePolicy(). That way you could emit an additional log message close to where the original message is emitted.
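A minimal sketch of that decorator approach, assuming the default CorsService remains the inner implementation (LoggingCorsService and its wiring are my own names, not an established API):

using Microsoft.AspNetCore.Cors.Infrastructure;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class LoggingCorsService : ICorsService
{
    private readonly ICorsService _inner;
    private readonly ILogger<LoggingCorsService> _logger;

    public LoggingCorsService(ICorsService inner, ILogger<LoggingCorsService> logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public CorsResult EvaluatePolicy(HttpContext context, CorsPolicy policy)
    {
        var result = _inner.EvaluatePolicy(context, policy);
        if (!result.IsOriginAllowed)
        {
            // Log the requested path, which the built-in message leaves out
            _logger.LogInformation("CORS policy failed for origin {Origin} at resource {Path}",
                context.Request.Headers[CorsConstants.Origin].ToString(),
                context.Request.Path);
        }
        return result;
    }

    public void ApplyResult(CorsResult result, HttpResponse response)
        => _inner.ApplyResult(result, response);
}

Registering it is the fiddly part, since you have to wrap the framework's own CorsService rather than replace it outright; I've left that wiring out.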
But there is another possible solution, both very simple and very crude: check what happened in the request, albeit a bit farther away from the original logged message.
The code below is a delegate added to the pipeline (in Startup/Configure, before .UseCors()) that checks if the request was a preflight request (the same way CorsService does), and if it was successful, i.e. the AccessControlAllowOrigin header is present. If it wasn't successful, it logs a message with the same EventId and source as the CorsService.
app.Use(async (ctx, next) =>
{
    await next();

    // Detect a preflight request the same way CorsService does
    var wasPreflightRequest = HttpMethods.IsOptions(ctx.Request.Method)
        && ctx.Request.Headers.ContainsKey(CorsConstants.AccessControlRequestMethod);
    var isCorsHeaderReturned = ctx.Response.Headers.ContainsKey(HeaderNames.AccessControlAllowOrigin);

    if (wasPreflightRequest && !isCorsHeaderReturned)
    {
        // Log with the same EventId and source as CorsService
        ctx.RequestServices.GetRequiredService<ILoggerFactory>()
            .CreateLogger<CorsService>()
            .LogInformation(new EventId(5, "PolicyFailure"),
                $"CORS preflight failed at resource: {ctx.Request.Path}.");
    }
});
Based on my testing it seems to work. ¯\_(ツ)_/¯
It might not be what you were looking for, but who knows, maybe it will be useful for someone.
(Obviously a good way to deal with these things is to use a structured logging solution, like Serilog, and add enrichers to capture additional request information, or add stuff manually to a diagnostic context. But setting that up is quite a bit more involved.)
My question is similar to this question, but I think the response does not answer it at all.
To elaborate, I have the following piece of code:
Configuration:
BusConfiguration busConfiguration = new BusConfiguration();
busConfiguration.EndpointName("Samples.DataBus.Sender");
busConfiguration.UseSerialization<JsonSerializer>();
busConfiguration.UseDataBus<FileShareDataBus>().BasePath(BasePath);
busConfiguration.UsePersistence<InMemoryPersistence>();
busConfiguration.EnableInstallers();
using (IBus bus = Bus.Create(busConfiguration).Start())
....
Message:
[TimeToBeReceived("00:01:00")]
public class MessageWithLargePayload : ICommand
{
public string SomeProperty { get; set; }
public DataBusProperty<byte[]> LargeBlob { get; set; }
}
This works fine (it creates queues, sends messages in the queue, creates a file for the LargeBlob property and stores it in the base path, receiver takes the message and handles it).
My question is: Is there any way to remove the created files (LargeBlob) after the message has been handled or taken out of the queue, or after it lands in the error queue?
The documentation clearly states that files are not cleaned up, but I think this is rather messy behaviour. Can anyone help?
Is there any way to remove the files after the message has been handled or taken out of the queue or after it lands in the error-queue.
- After the message has been handled
- After it has been taken out of the queue
- After it is in the error queue
I'm not really sure what you're after? You want to remove the files, but you're not sure when?
NServiceBus has no way to figure out when the file should be deleted. Perhaps you're deferring a message for the file to be processed later. Or you're giving the task to another handler. Which means that if the file is removed, there's no way that other handler can process the file. So removing the file depends on your functional needs.
When the message is in the error queue, it is most likely that you want to try and process it again. Why else put the message in an error queue, instead of just removing the message altogether?
Besides that, the file system isn't transactional, so there's no way for any software to tell whether messages were processed correctly and the file should be deleted. And when the outbox has been enabled in NServiceBus, the message is removed from the queuing storage before it has actually been processed; if the file had been removed by then, the message couldn't be processed at all.
As you can tell, there are a large number of scenarios where removing the file can pose a problem. The only one who actually knows when which file could be removed, is you as a developer. You'll have to come up with a strategy to remove the files.
The sample has a class Program with a static field BasePath. Make it public so your handler can access it. Then in the handler you can obtain the file location like this:
public void Handle(MessageWithLargePayload message)
{
    var filename = Path.Combine(Program.BasePath, message.LargeBlob.Key);
    Console.WriteLine(filename);
}
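If you do decide the handler is the right place to delete the blob (see the caveats above about retries and deferred messages), it is one more line; a sketch, not a recommendation:

public void Handle(MessageWithLargePayload message)
{
    var filename = Path.Combine(Program.BasePath, message.LargeBlob.Key);
    // Caution: deleting here breaks any later retry or deferred processing
    File.Delete(filename);
}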
UPDATE
I've added some documentation about a possible cleanup strategy. We have plans for a really good solution, but it'll take time, so for now perhaps this can help.
I solved the file cleanup challenge by creating a service that removes the files after a configurable number of hours. With a large number of databus files, you will be more successful doing a rolling delete than trying to delete everything once a day.
There are two options out on GitHub that are already coded and can be used:

- PowerShell script: https://gist.github.com/smartindale/50674db76bd2e36cc94f
- Windows service: https://github.com/bradjolicoeur/VapoDataBus

The PowerShell script worked at low volumes, but I had issues running it reliably with larger volumes of files. The Windows service is reliable at large volumes and is more efficient.
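For reference, the rolling delete itself is only a few lines. Here is a minimal sketch, assuming the databus files live under a single directory and that anything older than the retention window is safe to remove (the path and window are placeholders):

using System;
using System.IO;

class DataBusCleanup
{
    static void Main()
    {
        var basePath = @"\\share\databus";    // placeholder: your FileShareDataBus base path
        var maxAge = TimeSpan.FromHours(24);  // placeholder: configurable retention window

        foreach (var file in Directory.EnumerateFiles(basePath, "*", SearchOption.AllDirectories))
        {
            if (DateTime.UtcNow - File.GetLastWriteTimeUtc(file) > maxAge)
            {
                try { File.Delete(file); }
                catch (IOException) { /* file may still be in use; skip it this pass */ }
            }
        }
    }
}

Run on a schedule (or in a loop with a sleep), this gives the rolling behaviour described above.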
Let's say we have a P2P multiplayer Flash-based game hosted on a website. Would it be possible to create a browser extension that listens to what is going on within the Flash application? For example, I would like to know when a player connects to a room, gets kicked or banned, or simply leaves. I'm sorry this is not really a specific question, but I need a direction to start. Thanks in advance!
I can see a few ways to communicate between Flash and a browser plugin.
One is to open a socket to a server running on the local machine. Because of the security sandbox, this may not be the easiest approach, but if feasible it is probably the one to go for, because you've already got your socket-handling code written, and listening/writing to an additional socket isn't terribly complicated. For this approach, you just need your plugin to start listening on a socket and get the Flash applet to connect to it.
Another way might be to try something with passing messages in cookies. Pretty sure this would just cause much grief, though.
Another way, and I suspect this may turn out to be the easier path, is to communicate between Flash and JavaScript using the ExternalInterface class, then from JavaScript to the plugin. Adobe's IntrovertIM example should get you started if you can find a copy on the web.
In Flash, create two functions, a jsToSwf(command:String, args:Array<String>):Dynamic function, to handle incoming messages from JS that are sent to that callback, and a swfToJs(command:String, args:Array<String> = null):Dynamic function, which calls flash.external.ExternalInterface.call("swfToJs", command, args);.
To set it up, you need to do something like:
if (flash.external.ExternalInterface.available) {
    flash.external.ExternalInterface.addCallback("jsToSwf", jsToSwf);
    swfToJs("IS JS READY?");
}
(The two parameters to addCallback are the function's name in JS and its name in Flash. They don't have to be the same, but it makes sense for them to match.)
In JS, you need the same functions: function swfToJs(command, params) accepts commands and parameter lists from Flash; and jsToSwf(command, params) calls getSwf("Furcadia").jsToSwf(command, params);.
getSwf("name") should probably be something like:
/** Get a reference to the specified SWF file.
 *  Unfortunately, document.getElementById() doesn't
 *  work well with Flash Player/ExternalInterface. */
function getSwf(movieName) {
    var result = '';
    if (navigator.appName.indexOf("Microsoft") != -1) {
        result = window[movieName];
    } else {
        result = document[movieName];
    }
    return result;
}
The only fiddly bit there is that you need to do a little handshake to make sure everyone's listening. So when you have Flash ready, it calls swfToJs("IS JS READY?"); then the JS side, on getting that command, replies with jsToSwf("JS IS READY!"); then on getting that, Flash confirms receipt with swfToJs("FLASH IS READY!"); and both sides set a flag saying they're now clear to send any commands they like.
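As a sketch, the JS side of that handshake could look like the following; the command strings mirror the ones above, while the ready flag and the dispatch structure are my own assumptions:

var flashReady = false;

// Called by Flash via ExternalInterface.call("swfToJs", command, args)
function swfToJs(command, params) {
    switch (command) {
        case "IS JS READY?":
            jsToSwf("JS IS READY!");  // tell Flash that JS is listening
            break;
        case "FLASH IS READY!":
            flashReady = true;        // both sides are now clear to send commands
            break;
        default:
            // application-level commands (player joined, kicked, banned...)
            break;
    }
}

function jsToSwf(command, params) {
    getSwf("Furcadia").jsToSwf(command, params);
}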
So, you've now got Flash talking with JS. But how does JS talk with a browser extension? And do you mean extension or add-on? There's a difference! That becomes a whole 'nother can of worms, since you didn't specify which browser.
Every browser handles things differently. For example, Mozilla has port.emit()/port.on() and the older postMessage() as APIs for JS to communicate with add-ons.
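For example, with the (now legacy) Firefox Add-on SDK, relaying a message from page JS to the add-on might look roughly like this; the event name "fromPage" and the page-mod wiring are assumptions for illustration:

// content-script.js: relay window messages from the page into the add-on
window.addEventListener("message", function (event) {
    self.port.emit("fromPage", event.data);
}, false);

// main.js: attach the content script and listen for relayed messages
var pageMod = require("sdk/page-mod");
var data = require("sdk/self").data;

pageMod.PageMod({
    include: "*.example.com",  // hypothetical game site
    contentScriptFile: data.url("content-script.js"),
    onAttach: function (worker) {
        worker.port.on("fromPage", function (msg) {
            console.log("page said:", msg);
        });
    }
});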
Still, I think ExternalInterface lets us reduce a hard question (Flash-to-external-code comms) to a much simpler question (Js-to-external-code comms).
I get the following error within Magento CE 1.6.1.0
Warning: session_start() [<a href='function.session-start'>function.session-start</a>]: Cannot send session cookie - headers already sent by (output started at /home/dev/env/var/www/user/dev/wdcastaging/lib/Zend/Controller/Response/Abstract.php:586) in /home/dev/env/var/www/user/dev/wdcastaging/app/code/core/Mage/Core/Model/Session/Abstract/Varien.php on line 119
when accessing /api/soap/?wsdl
Apparently, a session_start() is being attempted after the entire contents of the WSDL file have already been output, resulting in the error.
Why is Magento attempting to start a session after outputting all the datums? I'm glad you asked. It looks like controller_front_send_response_after is being hooked by Mage_Persistent in order to call synchronizePersistentInfo(), which in turn ends up causing that session_start() to fire.
The interesting thing is that this wasn't always happening. Initially the WSDL loaded just fine for me, and I racked my brains trying to see what customization might have been made to our install to cause this, but the tracing I've done seems to indicate that this is all happening entirely inside core.
We have also experienced a tiny bit of (completely unrelated) strangeness with Mage_Persistent which makes me a little more willing to throw my hands up at this point and SO it.
I've done a bit of searching on SO and have found some questions related to the whole "headers already sent" thing in general, but not this specific case.
Any thoughts?
Oh, and the temporary workaround I have in place is simply disabling Mage_Persistent via the persistent/options/enable config data. I also did a little digging into whether it might be possible to observe an event in order to disable this module only for the WSDL controller (since that seems to be the only one having problems), but it looks like the module relies exclusively on this config flag to determine its enabled status.
UPDATE: Bug has been reported: http://www.magentocommerce.com/bug-tracking/issue?issue=13370
I'd report this as a bug to the Magento team. The Magento API controllers all route through standard Magento action controller objects, and all these objects inherit from the Mage_Api_Controller_Action class. This class has a preDispatch method:
class Mage_Api_Controller_Action extends Mage_Core_Controller_Front_Action
{
    public function preDispatch()
    {
        $this->getLayout()->setArea('adminhtml');
        Mage::app()->setCurrentStore('admin');
        $this->setFlag('', self::FLAG_NO_START_SESSION, 1); // Do not start standard session
        parent::preDispatch();
        return $this;
    }
    //...
}
which includes setting a flag to ensure normal session handling doesn't start for API methods.
$this->setFlag('', self::FLAG_NO_START_SESSION, 1);
So it sounds like there's code in synchronizePersistentInfo that assumes the existence of a session object, and when it uses it, the session is initialized, resulting in the error you've seen. Normally this isn't a problem, as every other controller has initialized a session at this point, but the API controllers explicitly turn it off.
As far as fixes go, your best bet (and probably the quick answer you'll get from Magento support) will be to disable the persistent cart feature for the default configuration setting, but then enable it for the specific stores that need it. This will let carts persist where they're needed while leaving the API alone.
Coming up with a fix on your own is going to be uncharted territory, and I can't think of a way to do it that isn't terribly hacky/unstable. The most straightforward way would be a class rewrite that overrides synchronizePersistentInfo to call its parent method unless you've detected that this is an API request.
This answer is not meant to replace the existing answer. But I wanted to drop some code in here in case someone runs into this issue, and comments don't really allow for code formatting.
I went with a simple local code pool override of Mage_Persistent_Model_Observer_Session to exit out of the function for any URL routes that are within /api/*.
I'm not expecting this fix to need to be very long-lived or upgrade-friendly, because I expect them to fix this in the next release or so.
public function synchronizePersistentInfo(Varien_Event_Observer $observer)
{
    ...
    if ($request->getRouteName() == 'api') {
        return;
    }
    ...
}
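(The elided lines include obtaining $request; one way to get it, and an assumption about how the surrounding code does it, would be:)

$request = Mage::app()->getRequest();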
I'm trying to get to grips with Server-Sent Events, as they fit my requirements perfectly and seem like they should be simple to implement, but I can't get past a vague error and what looks like the connection repeatedly being closed and re-opened. Everything I have tried is based on this and other tutorials.
The PHP is a single script:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}
$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
?>
and the JavaScript looks like this (run on body load):
function init() {
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function(e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function(e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function(e) {
            document.getElementById('output').innerHTML += 'error<br />';
        }, false);
    } else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
I have searched around a bit but can't find information on:
1. Whether Apache needs any special configuration to support server-sent events, and
2. How I can initiate a push from the server with this kind of setup (e.g. can I simply execute the PHP script from the CLI to push to the already-connected browser?)
If I run this JS in Chrome (16.0.912.77), it opens the connection, receives the time, then errors (with no useful information in the error object), then reconnects after 3 seconds and goes through the same process. In Firefox (10.0) I get the same behaviour.
EDIT 1: I thought the issue could be related to the server I was using, so I tested on a vanilla XAMPP install and the same error comes up. Should a basic server configuration be able to handle this without modification / extra configuration?
EDIT 2: The following is an example of output from the browser:
connection opened
server time: 01:47:20
error
connection opened
server time: 01:47:23
error
connection opened
server time: 01:47:26
error
Can anyone tell me where this is going wrong? The tutorials I have seen make it look like SSE is very straightforward. Also any answers to my two numbered questions above would be really helpful.
Thanks.
The problem is your PHP.
With the way your PHP script is written, only one message is sent per execution. That's how it works if you access the PHP file directly, and that's how it works if you access it with an EventSource. So in order to make your script send multiple messages, you need a loop.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

while (true) {
    $serverTime = time();
    sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
    sleep(1);
}
?>
I have altered your code to include an infinite loop that waits 1 second after every message sent (following an example found here: Using server-sent events).
This type of loop is what I'm currently using, and it eliminated the constant connection drop and reconnect every 3 seconds. However (and I've only tested this in Chrome), the connections are now only kept alive for 30 seconds. I will keep trying to figure out why this is the case and will post a solution when I find one, but until then this should at least get you closer to your goal.
Hope that helps,
Edit:
In order to keep the connection open for ridiculously long times with PHP, you need to raise max_execution_time (thanks to tomfumb for this). This can be accomplished in at least three ways (see the snippet after this list):
1. If you can alter your php.ini, change the value of max_execution_time. Note that this will allow all of your scripts to run longer.
2. In the script you wish to run for a long time, call ini_set('max_execution_time', $seconds), where $seconds is the time you wish your script to run for.
3. In the script you wish to run for a long time, call set_time_limit($n), where $n is the number of seconds you wish your script to run.
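For instance, either of these lines near the top of the SSE script should lift the limit for that script only (a sketch; a value of 0 means no limit):

ini_set('max_execution_time', 0); // 0 = unlimited, affects this script only
set_time_limit(0);                // equivalent per-script override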
Server-Sent Events are easy only when it comes to the JavaScript part. First of all, a lot of tutorials on SSE on the internet close their connections in the server part, be it PHP or Java examples. This is really astonishing, because what you get then is just a different way of implementing an "Ajax polling" system with a strictly defined payload structure (and some minor features like client retry values set by the server side). You can easily implement that with a few lines of jQuery. No need for SSE then.
According to the SSE spec, I would say that the retry shouldn't be the normal way of implementing a client-side loop. For me, SSE is a one-way streaming method which relies on a server backend that does not close the connection after pushing the first data to the client.
In Java it's useful to use the Servlet 3 async spec in order to free the request thread immediately and do the processing/streaming in a different thread. This works so far, but I still don't like the 30-second connection lifetime for the EventSource request. Even if I am pushing data every 5 seconds, the connection will be terminated after 30 seconds (Chrome, Firefox). Of course SSE will reconnect by default after 3 seconds, but I still don't think this is the way it should be.
One problem is that some Java MVC frameworks don't have the ability to keep the connection open after sending data, so you end up coding against the bare Servlet API. After 24 hours of coding prototypes in Java, I am more or less disappointed, because the gain over a traditional jQuery Ajax loop is not THAT big. And the problem of polyfilling the SSE feature also exists.
The problem is not a server-side issue; this all happens on the client and is part of the spec (I know it sounds weird).
http://dev.w3.org/html5/eventsource/
"When a user agent is to reestablish the connection, the user agent must run the following steps. These steps are run asynchronously, not as part of a task. (The tasks that it queues, of course, are run like normal tasks and not asynchronously.)"
Queue a task to run the following steps:
If the readyState attribute is set to CLOSED, abort the task.
Set the readyState attribute to CONNECTING.
Fire a simple event named error at the EventSource object.
I can't see any need to have an error here, so I have modified your Init function to filter out the error event fired whilst connecting.
function init() {
    var CONNECTING = 0;
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function (e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function (e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function (e) {
            // Ignore the error event fired while the source is reconnecting
            if (source.readyState != CONNECTING) {
                document.getElementById('output').innerHTML += 'error<br />';
            }
        }, false);
    } else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
There is no actual issue with the code that I can see. The answer selected as correct is, then, incorrect.
This sums up the behavior mentioned in the question (http://www.w3.org/TR/2009/WD-html5-20090212/comms.html):
"If such a resource (with the correct MIME type) completes loading (i.e. the entire HTTP response body is received or the connection itself closes), the user agent should request the event source resource again after a delay equal to the reconnection time of the event source. This doesn't apply for the error cases that are listed below."
The problem lies with the stream. I've successfully kept a single EventStream open before in Perl: just send the appropriate HTTP headers and start sending stream data, and never shut the stream down server-side. The issue is that most HTTP libraries seem to attempt to close the stream after it has been opened, which causes the client to attempt to reconnect to the server; that reconnect is fully standard-compliant.
This means that it will appear that the problem is solved by running a while loop, for a couple of reasons:
A) The code will continue to send data, as if it were pushing out a large file.
B) The code (the PHP server) will never have the chance to attempt to close the connection.
However, the problem here is obvious: to keep the stream alive, a constant stream of data must be sent. This results in wasteful utilization of resources, and negates any benefits the SSE stream is supposed to provide.
I'm not enough of a PHP guru to know, but I'd imagine that something in the PHP server, or later in the code, is prematurely closing the stream. I had to manipulate the stream at the socket level in Perl to keep it open, since HTTP::Response was closing the connection and causing the client browser to attempt to re-open it. In Mojolicious (another Perl web framework), this can be done by opening a Stream object and setting the timeout to zero, so that the stream never times out.
So the proper solution here is not to use a while loop; it is to call the appropriate PHP functions for opening, and keeping open, a PHP stream.
I was able to do it by implementing a custom event loop. It seems that this HTML5 feature is not ready at all and has compatibility issues even with the latest version of Google Chrome. Here it is, working on Firefox (I can't get the message sent correctly on Chrome):
var source;

function Body_Load(event) {
    loopEvent();
}

function loopEvent() {
    if (source == undefined) {
        source = new EventSource("event/message.php");
    }
    source.onmessage = function(event) {
        _e("out").value = event.data;
        loopEvent();
    };
}
P.S.: _e is a function that calls document.getElementById(id).
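For completeness, that helper is just:

function _e(id) { return document.getElementById(id); }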
According to the spec, the 3-second reconnection is by design when the connection is closed. PHP with a loop should theoretically stop this, but then the PHP script runs indefinitely and wastes resources. You should try to avoid using Apache and PHP for SSE because of this issue.
A standard HTTP response should close the connection once the response is sent. You can change this with the header "Connection: keep-alive", which should tell the browser that the connection is meant to stay open, although this can cause problems if you're using proxies.
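In PHP that is a single header call before any output; a sketch, with the caveat above that proxies may not honour it:

header('Connection: keep-alive'); // ask the client and proxies to keep the socket open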
node.js or something similar is a better engine to use for SSE than Apache/PHP, and since it's basically JavaScript, it's pretty easy to get to grips with.
As the name suggests, with Server-Sent Events the data should be traveling from server to client. If the client has to reconnect every three seconds to retrieve data from the server, then it is no different from other polling mechanisms. The purpose of SSE is to alert the client as soon as there is new data the client is unaware of. Since the server closes the connection even if the header is keep-alive, there is no other way than to run the PHP script in an infinite loop, but with a considerable thread sleep to ease the burden on the server. So far I don't see any other way out, and it's better than spamming the server every 3 seconds for new data.
I'm trying the same thing, with varying degrees of success.
I had the same problem with Firefox, running the same JS code as mentioned.
Using the Nginx server and some PHP that exited (i.e. no continual loop), I could get messages back to a request from Firefox only once the PHP had exited.
Running the PHP as a script in PHP.exe, all is good on the console: strings are printed when flushed. However, Nginx doesn't send the data until the PHP has completed. Adding extra \r\n\r\n and calling flush() or ob_flush() did not help.
There is no pushing of data, as shown in the Wireshark logs, just a delayed response packet to the GET.
I read that I need a "push" module for Nginx that requires a rebuild from source.
So this is definitely an Nginx problem.
Using a socket in C, I was able to push data to Firefox as expected, the socket was kept open, and no messages were missed. However, this has the disadvantage that I need to serve the page.html and the events/stream from the same socket, or Firefox will not connect due to cross-site URL problems. There are some ways around this in certain situations, but not for an iframe in a menu system. This approach did prove the point that SSE does work with Firefox, and there were pushed packets in the Wireshark log, where option 1 only had request/reply packets.
All this said, I still don't have a solution. I've tried removing the buffering on the PHP and Nginx, but still nothing arrives until the PHP finishes. Different header options, e.g. chunked, didn't help either.
I don't feel like writing a full-blown HTTP server in C, but that seems to be the only option working for me at the moment.
I'm about to try Apache, but most write-ups suggest that it is worse than Nginx at this job.