Intercepting outgoing Exchange Server email and modifying it - notifications

I want to be able to intercept outgoing email on a specific domain in Exchange Server and modify the headers before it is actually delivered.
Basically, my company has been bought by another. Where we were using MDaemon and signing all our emails with DKIM and DomainKeys, the new company uses Exchange Server, which cannot and will not do this. That seems like a major oversight to me, so I think I will need to do it myself. I have already written a COM component that can sign given message files, which I use on my personal mail server running hMailServer, so I wanted to do a similar thing for Exchange.
Is this possible, and if so how would you do it?
I have looked but could not find an obvious way of doing this. Some of the things I looked at included:
Transport Agents
Event Sinks
Store Events
Any help would be appreciated. Thanks.

For Exchange 2007 and later: It seems that a TransportAgent is the right way of doing it.
A very basic sample:
using Microsoft.Exchange.Data.Transport;
using Microsoft.Exchange.Data.Transport.Smtp;

public class TestAgent : SmtpReceiveAgent
{
    public TestAgent()
    {
        this.OnEndOfData += new EndOfDataEventHandler(MyEndOfDataHandler);
    }

    private void MyEndOfDataHandler(ReceiveMessageEventSource source, EndOfDataEventArgs e)
    {
        // The following line appends text to the subject of the message that caused the event.
        e.MailItem.Message.Subject += " - this text appended by MyAgent";
    }
}
You can change the actual message body via GetContentWriteStream() and just append to or replace the existing content.
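Since the original question is about modifying headers rather than the subject, here is a rough sketch (not verified against a live Exchange server) of the factory class Exchange needs in order to load the agent, plus a handler that appends a custom header. The header name is purely illustrative; a real DKIM signer would compute and add a DKIM-Signature header instead, and the exact MIME API calls should be checked against the Exchange SDK for your version:

using Microsoft.Exchange.Data.Mime;
using Microsoft.Exchange.Data.Transport;
using Microsoft.Exchange.Data.Transport.Smtp;

// Exchange loads transport agents through a factory class, registered with Install-TransportAgent.
public class TestAgentFactory : SmtpReceiveAgentFactory
{
    public override SmtpReceiveAgent CreateAgent(SmtpServer server)
    {
        return new TestAgent();
    }
}

public class TestAgent : SmtpReceiveAgent
{
    public TestAgent()
    {
        this.OnEndOfData += MyEndOfDataHandler;
    }

    private void MyEndOfDataHandler(ReceiveMessageEventSource source, EndOfDataEventArgs e)
    {
        // Append a custom header to the message (illustrative name only).
        MimeDocument mime = e.MailItem.Message.MimeDocument;
        mime.RootPart.Headers.AppendChild(new TextHeader("X-Example-Signed", "true"));
    }
}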
More samples can be found here.
I know... it's a late answer, but I stumbled over this question and just want to leave some helpful links that I found.

Maybe you can use the Generic Exchange Transport Agent (open source, link goes to GitHub) for this. It provides an abstraction layer above the Exchange transport agent and is specifically designed to handle events for incoming/outgoing email. You can call custom batch scripts to rewrite the whole e-mail, e.g. for adding custom headers and so on.

Related

DDD read text messages from file

I have my text validation messages in a properties file. How would you go about getting the message from its key? A domain service interface implemented by the infrastructure? Or would the implementation live in the domain too?
In the red book, text messages are literals, for example when passed to exceptions. They belong to the domain.
But what if we deal with keys from a messages.properties file? How would you do it?
Thank you.
It is unlikely that the messages are part of the domain itself. Think about who will drive the changes to the messages. Will it be domain experts, or marketing/UX people indicating that a particular message should be altered to convey the meaning better to customers?
If you had to create, for example, a new B2B client, do you anticipate that some messages would have to be changed? Does it mean that the domain has changed?
I would advise keeping the exceptions in the domain as independent from the external representation as possible. So, instead of saying throw new PaymentProcessingException("Insufficient funds") consider throw new InsufficientFundsPaymentProcessingException(). Then you could translate the specific exception into a proper message using the infrastructure service at the representational edge of your application.
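To make that concrete, here is a minimal sketch of the idea (the names, including IMessageSource, are invented for illustration): the domain throws a specific, message-free exception, and a handler at the representational edge maps the exception type to a key in messages.properties:

using System;

// Domain: the exception carries intent, not presentation text.
public class InsufficientFundsPaymentProcessingException : Exception
{
}

// Infrastructure abstraction over messages.properties / resource files.
public interface IMessageSource
{
    string Get(string key);
}

// Edge of the application (e.g. a controller or API error handler):
// translate the exception type into a user-facing message via the message source.
public class PaymentErrorPresenter
{
    private readonly IMessageSource messages;

    public PaymentErrorPresenter(IMessageSource messages)
    {
        this.messages = messages;
    }

    public string ToUserMessage(Exception ex)
    {
        if (ex is InsufficientFundsPaymentProcessingException)
            return messages.Get("payment.insufficientFunds");

        return messages.Get("payment.genericError");
    }
}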
I had the same problem: in the end I used a mixture of Oleg's answer and message keys.
What I did was create custom exceptions (DomainException) with an extra field holding the message key (they are domain exceptions after all, so they can and should be customized). Every exception has its own value. Where I handle the exception I just add the key to the response. Later, where the response is handled, I ask a service to translate the key. A sketch of this is shown below.
Suggestions are welcome if this turns out not to be as good as it looks (to me).
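A rough sketch of that variant (the names and key values are illustrative): the domain exception carries only a message key, and the key is resolved to text by an infrastructure service at the boundary:

using System;

public class DomainException : Exception
{
    public string MessageKey { get; }

    protected DomainException(string messageKey)
    {
        MessageKey = messageKey;
    }
}

public class InsufficientFundsException : DomainException
{
    public InsufficientFundsException() : base("payment.insufficientFunds") { }
}

// At the boundary: catch DomainException, put MessageKey into the response,
// and let a translation service resolve it from messages.properties later.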

Cleanup files when using NServiceBus FileShareDataBus

My question is kind of similar to this question, but I think the response there does not really answer it.
To elaborate,
I have the following code piece:
Configuration:
BusConfiguration busConfiguration = new BusConfiguration();
busConfiguration.EndpointName("Samples.DataBus.Sender");
busConfiguration.UseSerialization<JsonSerializer>();
busConfiguration.UseDataBus<FileShareDataBus>().BasePath(BasePath);
busConfiguration.UsePersistence<InMemoryPersistence>();
busConfiguration.EnableInstallers();
using (IBus bus = Bus.Create(busConfiguration).Start())
....
Message:
[TimeToBeReceived("00:01:00")]
public class MessageWithLargePayload : ICommand
{
    public string SomeProperty { get; set; }
    public DataBusProperty<byte[]> LargeBlob { get; set; }
}
This works fine (it creates queues, sends messages in the queue, creates a file for the LargeBlob property and stores it in the base path, receiver takes the message and handles it).
My question is: is there any way to remove the created files (LargeBlob) after the message has been handled, taken out of the queue, or after it lands in the error queue?
The documentation clearly states that the files are not cleaned up, but I think this is rather messy behaviour. Can anyone help?
Is there any way to remove the files after the message has been handled or taken out of the queue or after it lands in the error-queue.
After message handled
After taken out of the queue
After it is in error queue
I'm not really sure what you're after. You want to remove the files, but you're not sure when?
NServiceBus has no way to figure out when the file should be deleted. Perhaps you're deferring a message so the file can be processed later, or you're handing the task to another handler, which means that if the file were removed, there would be no way for that other handler to process it. So when to remove the file depends on your functional needs.
When the message is in the error queue, it is most likely that you want to try and process it again. Why else put the message in an error queue, instead of just removing the message altogether?
Besides that, the file system isn't transactional, so there's no way for any software to tell whether a message was processed correctly and the file should be deleted. And when the outbox is enabled in NServiceBus, the message is removed from the queuing storage before it has actually been processed. If the file were removed by then, it couldn't be processed anymore either.
As you can tell, there are a large number of scenarios where removing the file can pose a problem. The only one who actually knows when which file could be removed, is you as a developer. You'll have to come up with a strategy to remove the files.
The sample has a class Program with a static field BasePath. Make it public so your handler can access it. Then in the handler you can obtain the file location like this:
public void Handle(MessageWithLargePayload message)
{
    // Combine the data bus base path with the blob's key to get the file on disk.
    var filename = Path.Combine(Program.BasePath, message.LargeBlob.Key);
    Console.WriteLine(filename);
    // Once you know the file is no longer needed, you can delete it here.
}
UPDATE
Added some documentation about a possible cleanup strategy. We have some plans for a really good solution, but it'll take time. So for now perhaps this can help.
I solved the file cleanup challenge by creating a service that removes the files after a configurable number of hours. I will say that with a large number of bus files, you will be more successful if you do a rolling delete vs trying to do it once a day.
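For illustration, a minimal sketch of such a rolling delete (the base path and retention window are assumptions; in a real service you would run this on a timer and add error handling):

using System;
using System.IO;

public static class DataBusCleanup
{
    // Deletes data bus files older than the given age from the shared base path.
    public static void DeleteOldFiles(string basePath, TimeSpan maxAge)
    {
        var cutoff = DateTime.UtcNow - maxAge;
        foreach (var file in Directory.EnumerateFiles(basePath, "*", SearchOption.AllDirectories))
        {
            if (File.GetLastWriteTimeUtc(file) < cutoff)
            {
                File.Delete(file);
            }
        }
    }
}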
There are two ready-made options on GitHub that can be used:
PowerShell script: https://gist.github.com/smartindale/50674db76bd2e36cc94f
Windows service: https://github.com/bradjolicoeur/VapoDataBus
The PowerShell script worked with low volumes, but I had issues running it reliably with larger volumes of files.
The Windows Service is reliable on large volumes and is more efficient.

Is it possible to write a plugin for Glimpse's existing SQL tab?

I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This helper takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab, but unfortunately it means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult and, once done, should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up - if you are interested here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here you will see how to access the Broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check if it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker will change. For what you currently need to do this is OK.
Sometimes null
As you have found, this really depends on where you are in the page lifecycle. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items. If you haven't used it before, this is a store which ASP.NET provides out of the box and which lasts for the lifetime of a given request. You could keep a list in there: still create the message objects as you do now, but add them to that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessed the same way you have been).
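A rough sketch of what that module might look like (the Items key and module name are my own; the messages in the list are assumed to be the same Glimpse message objects you would otherwise have published directly, and the broker is accessed via the GlimpseConfiguration call mentioned above):

using System;
using System.Collections.Generic;
using System.Web;
// using Glimpse.Core.Framework; // namespace for GlimpseConfiguration may differ by version

public class DeferredGlimpseMessagesModule : IHttpModule
{
    // Key your DAL uses when buffering messages in HttpContext.Current.Items.
    private const string ItemsKey = "DeferredGlimpseMessages";

    public void Init(HttpApplication context)
    {
        context.PostLogRequest += OnPostLogRequest;
    }

    private void OnPostLogRequest(object sender, EventArgs e)
    {
        var messages = HttpContext.Current.Items[ItemsKey] as List<object>;
        var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
        if (messages == null || broker == null)
        {
            return; // nothing buffered, or Glimpse is turned off
        }

        foreach (var message in messages)
        {
            // Publish with the runtime type so Glimpse's typed listeners pick it up.
            broker.Publish((dynamic)message);
        }
    }

    public void Dispose() { }
}

The module still needs to be registered in web.config like any other HttpModule.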

Where is the right place to add the user and time a command was issued in NServiceBus?

Some of my NServiceBus commands will need to track who issued the command and when. I'm very unsure as to the recommended way to implement this:
Should I create a base class MessageBase, add public Dictionary<string, string> Headers;, and implement IMutateOutgoingMessages?
Should it be added to the MessageContext? If so, how do I ensure the Bus adds it before every message (which needs the headers) is sent?
Is it already done and I just don't know how to access it? (It looks like the user is in the raw MSMQ message...)
NServiceBus already gives you the time the message was sent using the "NServiceBus.TimeSent" header.
Use the built-in NServiceBus headers dictionary and skip the MessageBase.
Attaching the user id is best done in an outgoing message mutator. Just grab the ID from e.g. the HttpContext and add it as a header.
http://support.nservicebus.com/customer/portal/articles/860492
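A rough sketch of such a mutator (this assumes the NServiceBus 5-style IMutateOutgoingTransportMessages interface; the method signature and registration differ between versions, and the header name and identity source are just examples):

using System.Threading;
using NServiceBus;
using NServiceBus.MessageMutator;
using NServiceBus.Unicast.Messages;

public class UserInfoMutator : IMutateOutgoingTransportMessages
{
    public void MutateOutgoing(LogicalMessage logicalMessage, TransportMessage transportMessage)
    {
        // Stamp every outgoing message with the user who issued it.
        transportMessage.Headers["UserName"] = Thread.CurrentPrincipal.Identity.Name;
    }
}

// Registration, e.g. next to the rest of the endpoint configuration:
// busConfiguration.RegisterComponents(c =>
//     c.ConfigureComponent<UserInfoMutator>(DependencyLifecycle.InstancePerCall));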
To get the time (in your handler/saga):
Bus.CurrentMessageContext.TimeSent

WorkflowCreationEndpoint ResumeBookmark with a filled response

I've spent days trying to find a solution to the problem I'm going to describe. I've googled a lot and even looked at the .NET 4 reference source for the SendReply and InternalSendReply activities, but so far I'm stuck.
To make life simpler for our end customers, I want to replace the Receive and SendReply activities with custom activities and use bookmarks instead.
I'm implementing a central web service which can route to the correct workflow instance; that workflow modifies the bookmark value and finally creates a new bookmark while returning the modified bookmark value. It's already rather complex, with a WorkflowServiceHostFactory which adds behaviours and attaches a DataContractResolver to the endpoint.
The endpoint is derived from WorkflowHostingEndpoint, which resolves a bookmark created in a custom activity (instead of a Receive). I also want another activity instead of a SendReply. Those two should correlate, and the custom SendReply should send a response on the open channel through the endpoint while creating a new bookmark.
The problem is that I haven't yet found a way to access the endpoint's responseContext from within my custom send activity. On the other side, at the workflow-creating endpoint, it seems I can't be notified whenever the workflow becomes idle, and I also don't seem to be able to access the workflow extensions from the host. Am I missing something?
A possible solution I have in mind is not to use a WorkflowServiceHost, but then I lose a lot of AppFabric functionality.
The WorkflowApplication in Platform Update 1 has some extension methods called RunEpisode, with an overload that takes a Func called idleEventCallback. There it's possible to hook into the idle event and get a workflow extension holding the object to send back as the response.
To answer my own question, I ended up with a workaround using the Service Broker functionality of SQL Server: the SqlDependency class, where the host listens for an event that is fired whenever the workflow reaches the activity that creates a new bookmark in another state.
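For reference, a minimal sketch of the SqlDependency part of that workaround (the connection string, table and columns are placeholders for whatever the workflow writes when it creates the new bookmark; Service Broker must be enabled on the database):

using System;
using System.Data.SqlClient;

public class BookmarkChangeListener
{
    private readonly string connectionString;

    public BookmarkChangeListener(string connectionString)
    {
        this.connectionString = connectionString;
        // Requires Service Broker to be enabled on the database.
        SqlDependency.Start(connectionString);
    }

    public void Listen()
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT BookmarkName, InstanceId FROM dbo.WorkflowBookmarks", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Fired when the workflow inserts the row for the newly created bookmark;
                // at this point the host can pick up the value and send the response.
                Listen(); // a SqlDependency fires only once, so re-subscribe
            };

            connection.Open();
            // Executing the command registers the notification with SQL Server.
            command.ExecuteReader().Dispose();
        }
    }
}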