NServiceBus Audit Service - nservicebus

I am trying to create an audit log service using NServiceBus. Since I need to add auditing to the application without recompiling it, I found (http://tech.dir.groups.yahoo.com/group/nservicebus/message/9416) that it is good to hook the OnTransportMessageReceived event. I also need to store the message body of the incoming message.
Could you please let me know how I can achieve this?
For now I have tried the following: create a handler which handles IMessage.
public class AuditLogMessagehandler : IHandleMessages<IMessage>
{
    public IBus Bus { get; set; }
    public ITransport Transport { get; set; }

    public void Handle(IMessage message)
    {
        string returnAddress = Bus.CurrentMessageContext.ReturnAddress;
        string id = Bus.CurrentMessageContext.Id;
        string messageType = message.GetType().Name;

        IMessage[] messages = new IMessage[1];
        messages[0] = message;

        MessageSerializer ser = new MessageSerializer();
        Stream memoryStream = new MemoryStream();
        ser.Serialize(messages, memoryStream);
        memoryStream.Flush();
        memoryStream.Close();
    }
}
This requires the DLL to be copied to the bin folder, but I am not getting the message body there. Also, please let me know at what point, or how, I can hook into OnTransportMessageReceived.
Thanks in advance,
Ajai

NServiceBus has auditing out of the box. See the following link:
http://docs.particular.net/nservicebus/operations/auditing
This will take a copy of the message and move it to an audit queue. From there you can read the audit queue and copy the message to a file, move it into a DB, whatever you want.
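In recent versions this is a one-line configuration switch. A minimal sketch, assuming a v5-style code-first setup (older versions enable the same thing purely through an AuditConfig section in app.config, which avoids recompiling anything):

var busConfiguration = new BusConfiguration();
// Copy every processed message, headers and body included, to the audit queue.
busConfiguration.AuditProcessedMessagesTo("audit");

The audited copy keeps the full message body, which covers the other half of the question.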
Does that make sense?
Dave

Related

Streaming objects from S3 using Spring AWS Integration

I am working on a use case where I am supposed to poll S3, read the stream for the content, do some processing, and upload it to another bucket rather than writing the file to my server.
I know I can achieve this using S3StreamingMessageSource in Spring AWS Integration, but the problem I am facing is that I do not know how to process the message stream received by polling.
public class S3PollerConfigurationUsingStreaming {

    @Value("${amazonProperties.bucketName}")
    private String bucketName;

    @Value("${amazonProperties.newBucket}")
    private String newBucket;

    @Autowired
    private AmazonClientService amazonClient;

    @Bean
    @InboundChannelAdapter(value = "s3Channel", poller = @Poller(fixedDelay = "100"))
    public MessageSource<InputStream> s3InboundStreamingMessageSource() {
        S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template());
        messageSource.setRemoteDirectory(bucketName);
        messageSource.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
                "streaming"));
        return messageSource;
    }

    @Bean
    @Transformer(inputChannel = "s3Channel", outputChannel = "data")
    public org.springframework.integration.transformer.Transformer transformer() {
        return new StreamTransformer();
    }

    @Bean
    public S3RemoteFileTemplate template() {
        return new S3RemoteFileTemplate(new S3SessionFactory(amazonClient.getS3Client()));
    }

    @Bean
    public PollableChannel s3Channel() {
        return new QueueChannel();
    }

    @Bean
    IntegrationFlow fileStreamingFlow() {
        return IntegrationFlows
                .from(s3InboundStreamingMessageSource(),
                        e -> e.poller(p -> p.fixedDelay(30, TimeUnit.SECONDS)))
                .handle(streamFile())
                .get();
    }
}
Can someone please help me with the code to process the stream?
I'm not sure what your problem is, but I see that you have a mix of concerns. If you use messaging annotations (see @InboundChannelAdapter in your config), what is the point of using the same s3InboundStreamingMessageSource in the IntegrationFlow definition?
Anyway, it looks like you have already discovered StreamTransformer for yourself. It has a charset property to convert your InputStream from the remote S3 resource to a String; otherwise it returns a byte[]. Everything else, what to do with this converted content and how, is up to you.
Also, I don't see a reason to make s3Channel a QueueChannel, since the start of your flow is pollable anyway via the @InboundChannelAdapter.
At a high level, I would say we have more questions for you than vice versa...
UPDATE
It's not clear what your idea for the InputStream processing is, but it is a fact that after S3StreamingMessageSource you are going to have exactly an InputStream as the payload in the next handler.
I'm also not sure what your streamFile() is, but it really must expect an InputStream as input from the payload of the request message.
You also can use the mentioned StreamTransformer over there:
@Bean
IntegrationFlow fileStreamingFlow() {
    return IntegrationFlows
            .from(s3InboundStreamingMessageSource(),
                    e -> e.poller(p -> p.fixedDelay(30, TimeUnit.SECONDS)))
            .transform(Transformers.fromStream("UTF-8"))
            .get();
}
And the next .handle() will be ready for String as a payload.
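To make that concrete, here is a minimal sketch of such a handler (the uppercasing and the commented-out upload are placeholders for your own processing, and the object key passed to putObject is hypothetical):

@Bean
IntegrationFlow fileStreamingFlow() {
    return IntegrationFlows
            .from(s3InboundStreamingMessageSource(),
                    e -> e.poller(p -> p.fixedDelay(30, TimeUnit.SECONDS)))
            .transform(Transformers.fromStream("UTF-8"))
            // The payload is now a String thanks to the transformer above.
            .handle(String.class, (payload, headers) -> {
                String processed = payload.toUpperCase(); // placeholder processing
                // amazonClient.getS3Client().putObject(newBucket, "some-key", processed);
                return null; // returning null ends the flow (one-way handler)
            })
            .get();
}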

Saga error in NServiceBus using RavenDB persistence

I have two messages: ClientChangeMessage (responsible for creating the client) and ClientContractChangeMessage (responsible for the booking details of the client). Now in my database a client cannot be created until it has the client contract, and vice versa. On my local system everything is working fine, i.e. if I get a client change message first I store it in the saga and wait for the client contract message, and when that arrives the saga executes both messages. But on my tester's machine, when the client change message comes it gets stored in the saga, but when a client contract change comes the saga does not find the client change saga and hence creates another saga. I have tried it with the exact same messages that my tester has tried; it works on my machine, and I am unable to figure out what might be going wrong. I am using RavenDB persistence. (Sorry, I could not think of pasting any code for this.)
ClientSagaState
public class ClientSagaState : IContainSagaData
{
    #region NServiceBus
    public Guid Id { get; set; }
    public string Originator { get; set; }
    public string OriginalMessageId { get; set; }
    #endregion

    public Guid ClientRef { get; set; }
    public ClientMessage ClientChangeMessage { get; set; }
    public ClientContractChangeMessage ClientContractChange { get; set; }
}
public class ClientSaga : Saga<ClientSagaState>,
    IAmStartedByMessages<ClientChangeMessage>,
    IAmStartedByMessages<ClientContractChangeMessage>
{
    public override void ConfigureHowToFindSaga()
    {
        ConfigureMapping<ClientChangeMessage>(s => s.ClientRef, m => m.EntityRef);
        ConfigureMapping<ClientContractChangeMessage>(s => s.ClientRef, m => m.PrimaryEntityRef);
    }

    public void Handle(ClientChangeMessage message)
    {
        if (BusRefTranslator.GetLocalRef(EntityTranslationNames.ClientChange, message.EntityRef.Value) != null)
        {
            GetHandler<ClientChangeMessage>().Handle(message);
            CompleteTheSaga();
            return;
        }

        HandleServiceUserChangeAndDependencies(message);
        //MarkAsComplete();
        CompleteTheSaga();
    }

    public void Handle(ClientContractChangeMessage message)
    {
        var state = this.Data;
        //Some handling logic
        //Check if client is not in database then store the state
        state.ClientContractChange = message;
        state.ClientRef = message.PrimaryEntityRef;
        //if client is in the database then
        MarkAsComplete();
    }
}
Thanks,
Because you are mapping to the saga data via the ClientRef property, you need to tell the persistence (Raven in this case) that this property is unique. What is probably happening is that, in some cases (it comes down to a race condition), the query done on the Raven index by the second message retrieves stale data, assumes there is no saga data, and creates a new saga.
This should fix your issue:
[Unique]
public Guid ClientRef { get; set; }
With this information, the Raven saga persister will create an additional document based on this property (because loading by Id in Raven is fully atomic) so that the second message will be sure to find it.
If you were using another persistence medium like NHibernate, the same attribute would be used to construct a unique index on that column.
Edit based on comment
The unique constraint document and your saga data will be fully consistent, so depending on the timing of incoming messages, one of three things will happen:
1. The message is truly the first message to arrive and be processed, so no saga data is found and it is created.
2. The message is the second to arrive, so it looks for the saga data, finds it, and processes successfully.
3. The second message arrives very close to the first, so they are both processed on separate threads at the same time. Both threads look for the saga data and find nothing, so they both begin to process. The one that finishes first commits successfully and saves its saga data. The one that finishes second attempts to save its saga data, but finds that while it's been working the other thread has moved its cheese, so Raven throws a concurrency exception. Your message goes back on the queue and is retried, and now that the saga data exists, the retry acts like scenario #2.

Configuring Fault Contract Exception Handlers in Enterprise Library 6 for WCF

How do you map additional properties of an exception to your custom fault contract when using Enterprise Library 6's Exception Handling Application Block?
This article describes the FaultContractPropertyMapping the same way this one does. If you have a fault contract like so:
[DataContract]
public class SalaryCalculationFault
{
    [DataMember]
    public Guid FaultID { get; set; }

    [DataMember]
    public string FaultMessage { get; set; }
}
How do you add another property and map it to the original exception? Let's say I want to show the stored procedure name to the client using a new property:
[DataMember]
public string StoredProcedureName { get; set; }
I tried editing the mapping shown on page 90 of the "Developer's Guide to Microsoft Enterprise Library-Preview.pdf", which can be found here, but it does not seem to work. My new mapping looks like this:
var mappings = new NameValueCollection();
mappings.Add("FaultID", "{Guid}");
mappings.Add("FaultMessage", "{Message}");
mappings.Add("StoredProcedureName", "{Procedure}"); //SqlException has a Procedure property
And here is the policy.
var testPolicy = new List<ExceptionPolicyEntry>
{
    new ExceptionPolicyEntry(
        typeof(SqlException),
        PostHandlingAction.ThrowNewException,
        new IExceptionHandler[]
        {
            new FaultContractExceptionHandler(typeof(SalaryCalculationFault), mappings)
        })
};

var policies = new List<ExceptionPolicyDefinition>();
policies.Add(new ExceptionPolicyDefinition("TestPolicy", testPolicy));
exManager = new ExceptionManager(policies);
ExceptionPolicy.Reset();
ExceptionPolicy.SetExceptionManager(exManager);
When I do this and catch the FaultException on the client and inspect it, the StoredProcedureName is always empty. Why doesn't it map from the SqlException to the new property in my fault exception?
It turns out you shouldn't actually place the code you expect an exception from inside the ExceptionManager.Process() method. I was doing this before:
exManager.Process(() => wimDAL.Execute_NonQueryNoReturn(sc), "TestPolicy");
Instead, just execute the code as normal:
wimDAL.Execute_NonQueryNoReturn(sc);
This does not follow what the "Developer's Guide to Microsoft Enterprise Library-Preview.pdf" says, but I guess the documentation is still a work in progress. I hope this helps someone else.
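For completeness, this is how the mapped property would surface on the client once the mapping works (a sketch; the client proxy variable and the CalculateSalary operation are hypothetical names):

try
{
    client.CalculateSalary(employeeId);
}
catch (FaultException<SalaryCalculationFault> fault)
{
    Console.WriteLine(fault.Detail.FaultMessage);
    Console.WriteLine(fault.Detail.StoredProcedureName); // populated once the mapping is applied
}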

'Newtonsoft.Json.JsonSerializationException' occurred in Newtonsoft.Json.DLL in Windows Phone 7

I created a WCF service and published it in IIS. I tried to access this service from Windows Phone 7, implementing it by installing Json.NET from the NuGet package. Serialization to JSON produced the correct format, but deserialization of the JSON fails in the webClient_OpenReadCompleted method. I have given my code template here:
private void webClient_OpenReadCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    string s = e.Result.ToString();
    Customer deserCustomers = JsonConvert.DeserializeObject<Customer>(s);
    int id = deserCustomers.CustomerId;
    string n = deserCustomers.CustomerName;
    lstCustomer.ItemsSource = deserCustomers.ToString();
}
On reaching the code below, I got the following exception:
Customer deserializedCustomers = JsonConvert.DeserializeObject<Customer>(s);
An exception of type 'Newtonsoft.Json.JsonSerializationException' occurred in Newtonsoft.Json.DLL but was not handled in user code.
Please give me suggestions to solve this error.
Actually it's quite simple: you should make your class inherit from a list, because your JSON is an array. Something like:
public class Customer : List<object>
{
    public int CustomerId { get; set; }
    public string CustomerName { get; set; }
}
Then everything is pretty basic:
var deserCustomers = JsonConvert.DeserializeObject<Customer>(s);
foreach (var cust in deserCustomers)
{
    ....
}
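A closely related variant, assuming the service really does return a plain JSON array of customer objects, is to keep Customer as a simple class and deserialize into a list of them instead (a sketch under that assumption):

public class Customer
{
    public int CustomerId { get; set; }
    public string CustomerName { get; set; }
}

// Inside webClient_OpenReadCompleted, after reading the response string:
List<Customer> customers = JsonConvert.DeserializeObject<List<Customer>>(s);
lstCustomer.ItemsSource = customers; // bind the parsed objects, not a string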
Hope it works (:

Can the KnowledgeAgent be used to automatically write the KnowledgeBase to a file so it can be used externally?

I'm working on a little Drools project and I have the following problem: when I read the knowledge packages from Drools via the KnowledgeAgent, it takes a long time to load (I know that building the KnowledgeBase is intense in general, especially when loading packages from Guvnor).
So I'm trying to serialize the KnowledgeBase to a file located locally on the system, on the one hand because loading the kBase from a local file is much, much faster, and on the other so that I can use the KnowledgeBase in other applications. The problem with this is that when I use the KnowledgeAgent to load the KnowledgeBase the first time, the base will be updated by the agent automatically,
BUT: while the base is updated, my local file will not be updated too.
So I'm wondering how to handle/get the change notification from my KnowledgeAgent so I can call a method to serialize my KnowledgeBase.
Is this somehow possible? Basically I just want to update my local KnowledgeBase file every time someone edits a rule in Guvnor, so that my local file is always up to date.
If it isn't possible, or a really bad solution to begin with, what is the recommended/best way to go about it?
Please excuse my English, and the question itself if you can't really make out what I want to accomplish, or if my request is actually not a good solution, or the question itself is redundant; I'm rather new to Java and a total noob when it comes to Drools.
Down below is the code:
public class DroolsConnection {

    private static KnowledgeAgent kAgent;
    private static KnowledgeBase kAgentBase;

    public DroolsConnection() {
        ResourceFactory.getResourceChangeNotifierService().start();
        ResourceFactory.getResourceChangeScannerService().start();
    }

    public KnowledgeBase readKnowledgeBase() throws Exception {
        kAgent = KnowledgeAgentFactory.newKnowledgeAgent("guvnorAgent");
        kAgent.applyChangeSet(ResourceFactory.newFileResource(CHANGESET_PATH));
        kAgent.monitorResourceChangeEvents(true);
        kAgentBase = kAgent.getKnowledgeBase();
        serializeKnowledgeBase(kAgentBase);
        return kAgentBase;
    }

    public List<EvaluationObject> runAgainstRules(List<EvaluationObject> objectsToEvaluate,
            KnowledgeBase kBase) throws Exception {
        StatefulKnowledgeSession knowSession = kBase.newStatefulKnowledgeSession();
        KnowledgeRuntimeLogger knowLogger = KnowledgeRuntimeLoggerFactory.newFileLogger(knowSession, "logger");
        for (EvaluationObject o : objectsToEvaluate) {
            knowSession.insert(o);
        }
        knowSession.fireAllRules();
        knowLogger.close();
        knowSession.dispose();
        return objectsToEvaluate;
    }

    public KnowledgeBase serializeKnowledgeBase(KnowledgeBase kBase) throws IOException {
        OutputStream outStream = new FileOutputStream(SERIALIZE_BASE_PATH);
        ObjectOutputStream oos = new ObjectOutputStream(outStream);
        oos.writeObject(kBase);
        oos.close();
        return kBase;
    }

    public KnowledgeBase loadFromSerializedKnowledgeBase() throws Exception {
        InputStream is = new FileInputStream(SERIALIZE_BASE_PATH);
        ObjectInputStream ois = new ObjectInputStream(is);
        KnowledgeBase kBase = (KnowledgeBase) ois.readObject();
        ois.close();
        return kBase;
    }
}
thanks for your help in advance!
best regards,
Marenko
In order to keep your local kbase updated you could use a KnowledgeAgentEventListener to know when its internal kbase gets updated:
kagent.addEventListener(new KnowledgeAgentEventListener() {
    public void beforeChangeSetApplied(BeforeChangeSetAppliedEvent event) {
    }

    public synchronized void afterChangeSetApplied(AfterChangeSetAppliedEvent event) {
    }

    public void beforeChangeSetProcessed(BeforeChangeSetProcessedEvent event) {
    }

    public void afterChangeSetProcessed(AfterChangeSetProcessedEvent event) {
    }

    public void beforeResourceProcessed(BeforeResourceProcessedEvent event) {
    }

    public void afterResourceProcessed(AfterResourceProcessedEvent event) {
    }

    public void knowledgeBaseUpdated(KnowledgeBaseUpdatedEvent event) {
        //THIS IS THE EVENT YOU ARE INTERESTED IN
    }

    public void resourceCompilationFailed(ResourceCompilationFailedEvent event) {
    }
});
You still need to handle concurrent accesses on your local kbase, though.
By the way, since you are not using the 'newInstance' configuration option, the agent will create a new instance of the kbase each time a change-set is applied. So make sure you serialize the kagent's internal kbase (kagent.getKnowledgeBase()) instead of the reference you have in your app.
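Tying both points together, a sketch of what the interesting callback could do if the listener is registered inside readKnowledgeBase() of the DroolsConnection class from the question (the other listener methods stay empty as above, and the synchronized block is only the minimal guard against the concurrent access mentioned earlier, not a complete concurrency story):

kAgent.addEventListener(new KnowledgeAgentEventListener() {
    // ... all other callbacks implemented empty, as in the snippet above ...
    public void knowledgeBaseUpdated(KnowledgeBaseUpdatedEvent event) {
        try {
            // Re-fetch the agent's current kbase: with the default newInstance
            // behavior the agent swaps in a brand new KnowledgeBase on every
            // change-set, so serialize the agent's own instance, not an old reference.
            synchronized (DroolsConnection.this) {
                serializeKnowledgeBase(kAgent.getKnowledgeBase());
            }
        } catch (IOException e) {
            e.printStackTrace(); // replace with real logging
        }
    }
});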
Hope it helps,