Log4Net Backup Appender

I have a program that uses log4net with both text and SMTP appenders.
When the program runs, a number of log events are captured by the SMTP appender and buffered, waiting for the program to end before the mail is sent.
When the program is nearly complete I might discover that I do not need to send any email after all, and I actually log something like "Nothing to do".
I know that it is possible to manipulate the appender configuration in code, so I could suppress the mail by setting the Threshold to Off.
I would like to know whether the same result can be achieved using just the log4net configuration: the SMTP appender should not send an email if a specific string is logged at any time. If that string is never logged, it should behave normally and send all the lines that match the defined filters.

TL;DR:
create a custom ITriggeringEventEvaluator
configure your SMTP appender with this evaluator, and decide what must happen when the buffer is full (discard the messages, or send them)
use the auto-flush of your appender to send the log events onward
There are two properties on BufferingAppenderSkeleton in log4net that may interest you here. When you configure such an appender to be Lossy, the Evaluator and LossyEvaluator properties are used to determine whether messages should be sent on to the next appenders.
The Evaluator property lets you supply a class implementing ITriggeringEventEvaluator; when a message is logged and the buffer is full, the evaluator is called, and if its IsTriggeringEvent method returns true the event buffer is processed.
The LossyEvaluator works in the same way, except that it is called to decide whether the oldest event, which is about to be dropped from the full buffer, should be logged anyway.
Since your SmtpAppender is a BufferingAppenderSkeleton, you can use the Evaluator property to define what does or does not trigger the email being sent (i.e. a logging event in which you say "no log", or whatever).
However, if you expect your appender to decide on its own whether to send the buffered events when it is closed (i.e. whether it should auto-flush), it is the LossyEvaluator that is used.
Now for the bad news: there is only one ITriggeringEventEvaluator implementation shipped with log4net, and it evaluates log events based on their level. So you will have to write your own evaluator to recognize the special messages your application sends to the appender.
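A minimal sketch of such a custom evaluator is shown below. The class name and the "Nothing to do" marker are assumptions taken from the question, not anything that ships with log4net:

```csharp
using log4net.Core;

// Hypothetical evaluator: once the "Nothing to do" marker is seen,
// stop triggering the SmtpAppender's buffer flush entirely.
public class NoMailEvaluator : ITriggeringEventEvaluator
{
    private volatile bool _suppress;

    public bool IsTriggeringEvent(LoggingEvent loggingEvent)
    {
        if (loggingEvent.RenderedMessage != null &&
            loggingEvent.RenderedMessage.Contains("Nothing to do"))
        {
            _suppress = true;  // remember that the mail should be suppressed
        }
        return !_suppress;     // trigger sending only while not suppressed
    }
}
```

It would then be referenced from the appender configuration with something like <evaluator type="MyApp.NoMailEvaluator, MyApp" />, where the namespace and assembly names are placeholders for your own.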

Related

RabbitMQ/Spring: Will another exclusive consumer register itself, if the current exclusive deregisters?

I have a spring application which runs in multiple instances on cloudfoundry.
These instances share a database. They have a RabbitListener configured like so:
@RabbitListener(queues = "${items.updated.queue}", exclusive = true)
The queue gets a message if a reimport of items from a certain source is required.
I only want one instance to perform the import. To my understanding this can be accomplished by the exclusive flag.
Now, what would happen if the current exclusive consumer crashes?
Would another currently running instance register itself as the new exclusive consumer? Or does the registration only take place when the application starts up?
Yes, another consumer will be granted access.
Consumers will re-attempt to consume every recoveryInterval milliseconds (default 5000, i.e. 5 seconds).
You can change this by setting the interval, or a recoveryBackOff, on the listener container.
Note that you will get a WARN log from the container about the failure and an INFO log from the connection factory saying the channel was closed due to a failure.
You can either adjust the log levels to reduce these logs, or inject custom ConditionalExceptionLoggers into both the container and the connection factory.
See the documentation.
If a consumer fails because one of its queues is being used exclusively, by default, as well as publishing the event, a WARN log is issued. To change this logging behavior, provide a custom ConditionalExceptionLogger in the SimpleMessageListenerContainer's exclusiveConsumerExceptionLogger property. See also the section called "Logging Channel Close Events".
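As a sketch of how those knobs are set, assuming a Spring AMQP SimpleMessageListenerContainer (the queue name and intervals below are illustrative, not from the question):

```java
// Illustrative container wiring; queue name and intervals are placeholders.
SimpleMessageListenerContainer container =
        new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("items.updated.queue");
container.setExclusive(true);           // only one active consumer at a time
container.setRecoveryInterval(10000);   // retry every 10s instead of the default 5s
// or, with a back-off policy instead of a fixed interval:
// container.setRecoveryBackOff(new FixedBackOff(10000, FixedBackOff.UNLIMITED_ATTEMPTS));
```

The standby instances keep re-attempting at that interval, so after the exclusive consumer crashes, one of them acquires the queue within roughly one retry period.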

NServicebus handler with custom sqlconnection

I have an NServiceBus handler that creates a new sql connection and new sql command.
However, the command that is executed is not being committed to the database until after the whole process is finished.
It's like there is a hidden sql transaction in the handler itself.
I moved my code into a plain console application without NServiceBus, and there the SQL command executed and saved immediately, unlike in NServiceBus where nothing is saved until the end of the handler.
Indeed, every handler is wrapped in a transaction; the default transaction guarantee relies on the DTC. That is intentional :)
If you disable it you might get duplicate messages or lose some data, so that must be done carefully. You can disable transactions using the endpoint configuration API instead of using options in the connection string.
Here you can find more information about configuration and available guarantees http://docs.particular.net/nservicebus/transports/transactions.
Unit of work
Messages should be processed as a single unit of work. Either everything succeeds or everything fails.
If you want multiple units of work to be executed, then
create multiple endpoints
or send multiple messages
This also has the benefit that these can potentially be processed in parallel.
Please note that creating multiple handlers WILL NOT have this effect. All handlers on the same endpoint are part of the same unit of work and thus the same transaction.
Immediate dispatch
If you really want to send a specific message whose dispatch must not be part of the unit of work, you can send it immediately like this:
using (new TransactionScope(TransactionScopeOption.Suppress))
{
    var myMessage = new MyMessage();
    bus.Send(myMessage);
}
This is valid for V5; for other versions it is best to check the documentation:
http://docs.particular.net/nservicebus/messaging/send-a-message#dispatching-a-message-immediately
Enlist=false
This is a workaround that MUST NOT be used to circumvent a specific transactional configuration, as Tomasz explains very well.
It can result in data corruption, because the same message can be processed multiple times during error recovery, with the same database action then being performed again.
Found the solution.
In my connection string I had to add Enlist=False
As mentioned by @wlabaj, setting Enlist=False will indeed make sure that a transaction opened in the handler is separate from the transaction used by the transport to receive/send messages.
It is, however, important to note that this changes the message-processing semantics. By default, when the DTC is used, the receive/send operations and any transactional operations inside a handler are committed or rolled back atomically. With Enlist=False that is no longer the case, so it is possible for the handler transaction to be committed more than once for the same message. Consider the following scenario as a sample case of when that can happen:
message is received (transport transaction gets started)
message is successfully processed inside the handler (handler transaction committed successfully)
transport transaction fails and message is moved back to the input queue
message is received second time
message is successfully processed inside the handler
...
The behavior with the Enlist=False setting might well be desirable in your case. That being said, it is worth being clear about the consequences in terms of message-processing semantics.
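For reference, the flag goes into the ADO.NET connection string like this (the server and database names below are placeholders):

```
Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=SSPI;Enlist=False
```

With Enlist=False, connections opened inside the handler no longer enlist in the ambient (DTC) transaction, which is exactly what produces the semantics described above.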

RabbitMQ def callback(ch, method, properties, body)

Just want to know the meaning of the parameters in worker.py file:
def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
What do ch, method, and properties mean?
ch
"ch" is the "channel" over which the communication is happening.
Think of a RabbitMQ connection in two parts:
the TCP/IP connection
channels within the connection
The actual TCP/IP connection is expensive to create, so you generally want only one connection per process instance.
A channel is where the work is done with RabbitMQ. A channel exists within a connection, and you need the channel reference so you can ack/nack messages, etc.
method
i think "method" is meta information regarding the message delivery
when you want to acknowledge the message - tell RabbitMQ that you are done processing it - you need both the channel and the delivery tag. the delivery tag comes from the method parameter.
i'm not sure why this is called "method" - perhaps it is related to the AMQP spec, where the "method" is meta-data about which AMQP method was executed?
properties
the "properties" of the message are user-defined properties on the message. you can set any arbitrary key / value pair that you want in these properties, and possibly get things like routing key used (though this may come from "method")
properties are often uses for bits of data that your code needs to have, but aren't part of the actual message body.
for example, if you had a re-sequencer process to make sure messages are processed in order, the "properties" would probably contain the message's sequence number.
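To make the three parameters concrete, here is a self-contained sketch that uses hypothetical stand-in objects in place of pika's real ones (the names Method, Properties, and FakeChannel are illustrative test doubles, not pika classes):

```python
from collections import namedtuple

# Minimal stand-ins for the objects pika passes to the callback
# (hypothetical doubles -- in real code they come from pika itself).
Method = namedtuple("Method", ["delivery_tag", "routing_key"])
Properties = namedtuple("Properties", ["headers"])

class FakeChannel:
    """Records acks the way a pika channel's basic_ack would receive them."""
    def __init__(self):
        self.acked = []

    def basic_ack(self, delivery_tag=0):
        self.acked.append(delivery_tag)

def callback(ch, method, properties, body):
    print(" [x] Received %r" % (body,))
    # "method" carries delivery metadata: the delivery tag needed for acking
    # and the routing key the message was published with.
    sequence = (properties.headers or {}).get("sequence")  # user-defined property
    ch.basic_ack(delivery_tag=method.delivery_tag)  # needs both ch and method
    return sequence

ch = FakeChannel()
seq = callback(
    ch,
    Method(delivery_tag=1, routing_key="task_queue"),
    Properties(headers={"sequence": 42}),
    b"hello",
)
print(seq, ch.acked)  # -> 42 [1]
```

The same callback body works unchanged when pika supplies the real channel, Basic.Deliver method frame, and BasicProperties objects.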


How do I get a list of worker threads of nservicebus

How do I get a list of the worker threads of NServiceBus? I need to register worker-thread IDs in a database and then bind certain types of messages to an exact worker thread. The real goal is handling poison messages: I want to block all threads except specified ones from handling poison messages. A separate service will manage the threads through the database.
I would not try to do that. It is almost sure to run into problems.
Of course, in order to get some sort of "identity" for each thread, you could place something like this in your message handler:
// Note: [ThreadStatic] with an inline initializer would only run on the
// first thread; ThreadLocal<T> runs the factory once per thread instead.
private static readonly ThreadLocal<Guid> ThreadId =
    new ThreadLocal<Guid>(() => Guid.NewGuid());
But again, I wouldn't do that! The guids would change every time the endpoint was restarted, for one.
You could also query the list of threads direct from .NET and try to determine which ones were the message handling threads, but that sounds so scary I don't even want to go into it.
The real issue: Poison Message Handling
As your comment states, the real problem is that a poison message is REALLY poison. Not only is it failing, but it's taking so long to do so that it's really screwing up all the other threads!
Since you are able to identify these messages based on certain properties of the message, I would detect them and throw an exception before the operation that times out - every time.
If you want to be able to test periodically to see if the issue has been fixed, you have a few options:
Test via other means, and return the messages to the source queue when it has been fixed.
Add an appSetting so that the quick-throw behavior is skipped when the setting is enabled. Then you can periodically edit the config, restart the endpoint, see if the problem is fixed, and switch it back if it isn't.
Create another message handler that maintains a thread-safe counter starting at zero. Send it a control message to say "Hey, try one now." Your quick-throw behavior can then decrement that counter and, if a slot was available, allow one message through to see what happens. This is also dangerous, of course. Make sure your locking is tight, since you are now sharing this state between different message-processing threads.
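A rough sketch of that gate using Interlocked (all names are hypothetical; this is only one possible wiring of the idea, not an NServiceBus API):

```csharp
using System.Threading;

// Hypothetical shared gate between the control-message handler
// and the quick-throw check in the poison-message path.
public static class PoisonProbeGate
{
    private static int _allowed;  // number of probe messages allowed through

    // Called by the control-message handler: "Hey, try one now."
    public static void AllowOne() => Interlocked.Increment(ref _allowed);

    // Called from the quick-throw path; true means let this message through.
    public static bool TryTakeSlot()
    {
        if (Interlocked.Decrement(ref _allowed) >= 0)
            return true;
        Interlocked.Increment(ref _allowed);  // undo: no slot was available
        return false;
    }
}
```

The decrement-then-undo pattern keeps the counter non-negative even when several message-processing threads race on it, which is the "tight locking" concern mentioned above.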