I have an Outlook COM add-in (C#, Visual Studio 2012) that extends the standard form with additional message properties. The add-in works with Outlook 2010, 2013 and 2016.
The requirements for the add-in dictate that it supports reading and writing a set of properties in namespace PS_INTERNET_HEADERS. I follow the guidelines in http://blogs.technet.com/b/exchange/archive/2009/04/06/3407221.aspx to have such headers on incoming messages promoted to MAPI properties.
But as I understand it, all such headers become MAPI string properties, right? Several of these headers actually have more natural types. One of the headers is an RFC 5322 date-time header and will have a value like 'Wed, 28 Sep 2016 06:27:00 GMT'. Having such a header mapped to a MAPI property of type PT_UNICODE is not optimal: you cannot sort messages on it, you cannot really use it in searches, etc.
Is there a good solution to this problem?
The only idea I have is to do some kind of mapping from properties in namespace PS_INTERNET_HEADERS to properties in namespace PS_PUBLIC_STRINGS. That would also have the nice side effect that the properties are included when printing messages. But if I have to go down that road, I need some kind of hook for doing the mapping. I can of course loop through all messages in a message store, listen for new messages coming in, listen for changed messages, etc. - but it does not feel like a good solution. I guess I could also write an Exchange transport agent, but I would really like to keep the logic on the client side.
Any suggestions?
Edit after Dmitry's comment:
For outgoing messages, I have to use properties in namespace PS_INTERNET_HEADERS since such messages are eventually transported by SMTP (outside Exchange) to other systems. In detail, I have to adhere to https://www.rfc-editor.org/rfc/rfc6477. As a side effect, for incoming messages Exchange will promote such headers to properties in namespace PS_INTERNET_HEADERS. And that is all working fine.
But even in that case, I would like to follow your suggestion to extract properties explicitly in my code and write some new ones in namespace PS_PUBLIC_STRINGS. The challenge as I see it is which hook to use for running that code. Users should be able to use the mapped properties as columns in views, for sorting, for filtering, for search, for inbox rules, etc. I can sweep entire message stores to do the mapping, and I can listen for various Outlook object model events, but in the end I have a hard time seeing how to prevent users from temporarily seeing messages that my code hasn't processed yet.
I have an old add-in written in C++ using Extended MAPI with a similar challenge. On start-up, for each inbox in each IMsgStore it sweeps the entire inbox (potentially a quite expensive operation) and then subscribes to changes using IMAPIFolder::GetContentsTable and then IMAPITable::Advise. But my experience is that I get TABLE_ERROR or TABLE_RELOAD table notifications now and then and have to do another sweep. I guess similar challenges are present for IMsgStore::Advise? In a C# context, I could use the events in the Redemption class RDOStore (e.g. OnMessageModified), but I assume that class uses IMsgStore::Advise as well?
No, the property type will never be converted. It always stays as a string. Why not read the internet headers from the PR_TRANSPORT_MESSAGE_HEADERS property (DASL name http://schemas.microsoft.com/mapi/proptag/0x007D001F) and extract the properties explicitly in your code? You will have full control over which properties are extracted and how they are converted.
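To sketch that suggestion: the code below extracts a single header from a raw PR_TRANSPORT_MESSAGE_HEADERS blob (unfolding RFC 5322 continuation lines first) and parses an RFC 5322 date-time value into a DateTime. The header name `X-Audit-Date` and the two DASL namespace prefixes shown in the constants are for illustration; in the add-in itself you would read the raw headers via `MailItem.PropertyAccessor.GetProperty` and write the typed result back with `SetProperty` under a PS_PUBLIC_STRINGS name.

```csharp
using System;
using System.Globalization;
using System.Text.RegularExpressions;

public static class HeaderMapper
{
    // DASL namespace prefixes for named properties in PS_INTERNET_HEADERS
    // and PS_PUBLIC_STRINGS, for use with PropertyAccessor.
    public const string InternetHeaders =
        "http://schemas.microsoft.com/mapi/string/{00020386-0000-0000-C000-000000000046}/";
    public const string PublicStrings =
        "http://schemas.microsoft.com/mapi/string/{00020329-0000-0000-C000-000000000046}/";

    // Extract a single header value from the raw transport-headers blob.
    public static string GetHeader(string rawHeaders, string name)
    {
        // Unfold continuation lines (RFC 5322 folding) before matching.
        string unfolded = Regex.Replace(rawHeaders, @"\r?\n[ \t]+", " ");
        var m = Regex.Match(unfolded, "^" + Regex.Escape(name) + @":\s*(.*)$",
                            RegexOptions.Multiline | RegexOptions.IgnoreCase);
        return m.Success ? m.Groups[1].Value.Trim() : null;
    }

    // Parse an RFC 5322 date-time value such as
    // "Wed, 28 Sep 2016 06:27:00 GMT" into a UTC DateTime,
    // suitable for writing as a typed (PT_SYSTIME) property.
    public static DateTime ParseRfc5322(string value)
    {
        return DateTime.Parse(value, CultureInfo.InvariantCulture,
                              DateTimeStyles.AdjustToUniversal);
    }
}
```

Note that `DateTime.Parse` handles the common "GMT" form; if your headers use numeric zone offsets you may need an explicit format string.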
Background
We have very strict auditing requirements and want to be able to correlate every action our system takes on behalf of the user to a specific authentication operation (sign-on). In addition to these strict auditing requirements, we also have some complex authorization requirements unsolvable by simple claims based authorization.
Considering both of these together led me to wonder about the feasibility of an 'envelope' type design, where messages stemming from a user request are wrapped in an envelope containing the necessary information, such as the auth token and info about the sending machine. Now, it would be simple enough to add a property for this token to every message, but that seems tacky, and since it's a rather cross-cutting concern, I would rather it not pollute every protocol in the system - which is why I think the envelope idea is worth considering. Adding a property would also require the cooperation of every actor in the system, and my goal is to have this be transparent to actors that don't need any of this information, while still making the information available to actors that do. In the case of actors needing it, it's also OK if they just accept the envelope type directly.
Imagined Solution
Overview
- Wrap each Tell operation in an envelope used to transport the required contextual information
  - Perhaps implemented w/ a custom actor ref provider and actor ref wrapping the ones configured
- Unwrap the envelope, if it exists, on each receive operation
  - Custom mailbox
  - Would also handle sending a message to the auditing service
- How to make the contextual information available to the actor?
  - Can we add to the actor's Context object somehow?
  - Also acceptable for the actor to accept the envelope type / not use a custom mailbox in this case
Discussion
In order to make this all transparent, my initial thinking is to 'intercept' the send/receive operations. I understand enough akka.net to implement a custom mailbox and I think this would be the way to go for this kind of approach, but I'm open to other ideas. The mailbox would perform the unwrap and make the contextual information available to the actor in case it's required (99% of the time it's not, likely better to just accept the envelope directly when it is required to be explicit). The mailbox would also handle fulfilling the auditing requirement by sending a message to the auditing service w/ the required information, which not only includes the contextual information from the request, but also local machine information to know where/who did the processing.
The part I'm second guessing myself on is intercepting the send operation (Tell). Since IActorRef instances are created via a configured IActorRefProvider, and since this guy handles the Tell operation (via its created IActorRef instances), I think it makes sense to write a custom IActorRefProvider and a custom IActorRef. Both would wrap the implementations that are configured (decorator pattern), and the custom IActorRef would provide the required behavior in its Tell method. For Web API apps (the only entry point for users), it would pull the required contextual information from HttpContext (one custom ref provider), and for backend apps (another custom ref provider), it would pull it from the current message's context. I'm not sure how to add data to the actor's Context property, but I'm assuming it is possible.
With these two pieces in place, the contextual information would effectively be passed along, from actor to actor, and service to service. So even if a message is 20 actors down the line, if it was initially instigated by a user via the REST API, it would still have that contextual information, thereby allowing a full and complete audit and tracing of each action our system takes back to a specific sign on.
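To make the idea concrete, here is a minimal sketch of the envelope type itself, independent of the Akka.NET plumbing. The `AuditEnvelope` name and its fields are assumptions; the custom IActorRef decorator would call `Wrap` inside its Tell, and the custom mailbox would call `Unwrap` before delivering the payload to the actor (and forward the context to the auditing service).

```csharp
using System;

// Hypothetical envelope carrying the cross-cutting audit context.
public sealed class AuditEnvelope
{
    public string AuthToken { get; }
    public string SendingMachine { get; }
    public DateTime SentUtc { get; }
    public object Payload { get; }

    public AuditEnvelope(string authToken, string sendingMachine, object payload)
    {
        AuthToken = authToken;
        SendingMachine = sendingMachine;
        SentUtc = DateTime.UtcNow;
        Payload = payload;
    }

    // Idempotent wrap: never double-wrap an already-enveloped message,
    // so the original context survives hops from actor to actor.
    public static object Wrap(object message, string authToken, string machine) =>
        message is AuditEnvelope ? message : new AuditEnvelope(authToken, machine, message);

    // Unwrap returns the inner payload and exposes the context, if any.
    public static object Unwrap(object message, out AuditEnvelope context)
    {
        context = message as AuditEnvelope;
        return context != null ? context.Payload : message;
    }
}
```

Keeping Wrap idempotent is what lets the context propagate 20 actors down the line: each decorated Tell re-wraps only plain messages, never an envelope that already carries the originating sign-on.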
What I'm Hoping For
The primary thing I'm hoping for with this post is validation that this is a reasonable approach to take, and if not, why not and alternate suggestions for achieving the desired behavior. Also very welcomed would be any code samples for custom mailboxes/actor ref/actor ref providers and extra cookies if they're doing something similar to what I'm trying to accomplish here. Another welcomed tidbit is how to do the mailbox configuration so I don't need to manually update all of my Props with the custom mailbox implementation. Akka.net configuration is definitely a weaker point of mine, particularly the deployments section, so any core knowledge/articles/advice here is greatly appreciated!
Thanks for taking the time to read this! Any and all help is much appreciated!
Other StackOverflow Issues:
The answers provided in these issues require the cooperation of every actor. Ideally this is all transparent and actors that don't need to use this contextual information can be written as if it didn't exist.
Passing Contextual Information
How to elegantly log contextual information along with every message
There were a couple others I viewed [can't find them right now for some reason], but they all either required cooperation or global shared state [isn't that what akka avoids? :p]
Phobos, a proprietary observability library for Akka.NET, wraps all messages inside a distributed tracing context - which can be aggregated back together again in an off-the-shelf tracing system that supports OpenTracing, such as Jaeger / Zipkin / Azure Application Insights.
You can append custom data to each of the traces captured inside your actors via the Context.GetInstrumentation() method - custom data can include tags such as a unique userId, a transaction Id, and so on. That's all part of the OpenTracing specification.
Disclosure: I work for Petabridge, the makers of Phobos. It's proprietary and costs money to use, but it's purpose built to offer this type of decentralized, but complete tracing out of the box.
Alternatively, if you didn't want to use Phobos you might be able to accomplish this using a custom messaging protocol for context propagation and structured logging with the Akka.Logger.Serilog library.
In our BizTalk application we would like all internal messages to have the same structure, with a Header element with routing and logging information (this header is the same for all messages), all properties of which are promoted, and a Body element which is different for each specific message. When I create a new message based on the above (by setting the schema's DataStructure or BaseType), I would like the promotions to be kept as well.
I tried getting this to work by creating a Header message with the required fields and promotions, and also by creating a "complete" BaseMessage with a Header and Body element (again with all properties in the header promoted), but either way in a schema using this DataStructure the property promotions are not kept (which I guess makes sense; the XPaths indicated in the PropertySchema are different, because the BaseMessage namespace is different from the derived message).
Is there a way to have a shared schema including property promotions? Or can you copy the structure in a derived message, but you always have to redo the promotions?
Thanks for any insights!
We have a similar header structure that is imported and always have to redo the promotions.
My recommendation would be to solve this problem by not doing what you're describing. While it sounds good in theory, you will find eventually that it's over-engineering with little practical benefit.
What will matter is the routing information, meaning, the Properties, not the Header section. So, it's fine to have shared Property Schemas (deployed separately) but don't try to shoehorn the messages into a 'common' wrapper.
I have inherited the responsibility for maintaining our Sabre client, and have a need to update our use of the TravelItineraryReadRQ (and, maybe TravelItineraryReadLLSRQ) Actions. I am still very new to the Sabre APIs (and relatively inexperienced with WCF and SOAP), and there is one detail that I am seeing in our codebase that concerns me.
Generated from the API's WSDL, our existing code contains the classes TravelItineraryReadService, TravelItineraryReadRQ and TravelItineraryReadRS (and, of course, many others). That's fine.
My predecessor, however, extended TravelItineraryReadService by adding a constructor, in which he sets the MessageHeader property. I cannot find any code which consumes this property (and it is not an override of a virtual or abstract property defined in SoapHttpClientProtocol, the base class). I might ignore this code, therefore, (a) if I didn't suspect that somehow a SOAP wrapper used the values set in the message header and (b) if my predecessor hadn't set it as follows:
MessageHeaderValue = MessageHeader.Create(connection, "TravelItineraryReadLLS", "TravelItineraryReadLLSRQ");
You'll see that he is using the 'LLS' variant of the API and Action Code, yet the TravelItineraryReadService methods consume / return the 'non-LLS' request and result objects.
Our code logs the RQ and RS packets it sends and receives, and we're sending / receiving the 'non-LLS' variants - so perhaps I am worrying over nothing. But, the deadline is looming and I am in the dark about how this code might be influencing things.
If you have any information that would help me understand how MessageHeaderValue is used (and, its equivalent is present on many other Sabre XxxService WSDL-Generated classes) that would be very helpful.
If, at the same time, you have similar information about the SecurityValue property, that would be good, too.
I'm about to begin writing a suite of WCF services for a variety of business applications. This SOA will be very immature to begin with and eventually evolve into a strong middleware layer.
Unfortunately I do not have the luxury of writing a full set of services and then refactoring applications to use them; it will be an iterative process done over time. The question I have is around evolving (changing, adding, removing properties of) business objects.
For example: say you have a SOA exposing a service that returns obj1, and that service is consumed by app1, app2 and app3. Imagine that object is changed for app1; I don't want to have to update app2 and app3 for changes made for app1. If the change is adding a property it will work fine - it will simply not be mapped - but what happens when you remove a property? Or change a property from a string to an int? How do you manage the change?
Thanks in advance for your help!
PS: I did do a little picture but apparently I need a reputation of 10 so you will have to use your imagination...
The goal is to limit the changes you force your clients to have to make immediately. They may eventually have to make some changes, but hopefully it is only under unavoidable circumstances like they are multiple versions behind and you are phasing it out altogether.
Non-breaking changes can be:
Adding optional properties
Adding new operations
Breaking changes include:
Adding required properties
Removing properties
Changing data types of properties
Changing name of properties
Removing operations
Renaming operations
Changing the order of the properties if explicitly specified
Changing bindings
Changing service namespace
Changing the meaning of the operation. What I mean by this, for example, is if the operation always returned all records but was changed to only return certain records. This would break the client's expected response.
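The "adding optional properties" case above can be sketched with DataContractSerializer. The contract names below (`Customer`, the namespace URI) are made up for the demo: a payload written with the v2 contract, which adds an optional member, still deserializes cleanly against the v1 contract, because unknown elements are skipped.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

// Version 1 of the contract, as deployed to existing clients.
[DataContract(Name = "Customer", Namespace = "http://example.com/orders")]
public class CustomerV1
{
    [DataMember] public string Name { get; set; }
}

// Version 2 adds an optional member; same contract name and namespace.
[DataContract(Name = "Customer", Namespace = "http://example.com/orders")]
public class CustomerV2
{
    [DataMember] public string Name { get; set; }
    [DataMember(IsRequired = false)] public string Email { get; set; }
}

public static class VersioningDemo
{
    // Serialize a v2 payload and deserialize it with the v1 contract:
    // the unknown Email element is ignored by the old reader.
    public static CustomerV1 RoundTripToV1(CustomerV2 newer)
    {
        var stream = new MemoryStream();
        new DataContractSerializer(typeof(CustomerV2)).WriteObject(stream, newer);
        stream.Position = 0;
        return (CustomerV1)new DataContractSerializer(typeof(CustomerV1)).ReadObject(stream);
    }
}
```

Had Email been marked `IsRequired = true`, the reverse direction (v1 payload read by a v2 client) would throw instead - which is exactly why required properties land in the breaking-changes list.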
Options:
Add a new operation to handle the updated properties and logic. Modify code behind original operation to set new properties and refactor service logic if you can. Just remember to not change the meaning of the operation.
If you want to remove an operation that you no longer support, you are forcing the client to change at some point. You could add documentation in the WSDL to let the client know that it is being deprecated. If you are letting the client use your contract DLL, you could use the [Obsolete] attribute (it is not generated in the final WSDL, which is why you can't just use it for everything).
If it is a big change altogether, a new version of the service and/or interface and endpoint can easily be created, i.e. v2, v3, etc. Then you can have the clients upgrade to the new version when the time is right.
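Side-by-side versioning usually boils down to a second endpoint in config. The service and contract names here are placeholders; the point is that v1 clients keep their address while v2 clients opt in to the new one:

```xml
<system.serviceModel>
  <services>
    <service name="MyCompany.OrderService">
      <!-- existing clients keep using the v1 contract at the old address -->
      <endpoint address="v1" binding="basicHttpBinding"
                contract="MyCompany.IOrderServiceV1" />
      <!-- new clients opt in to the v2 contract at a new address -->
      <endpoint address="v2" binding="basicHttpBinding"
                contract="MyCompany.IOrderServiceV2" />
    </service>
  </services>
</system.serviceModel>
```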
Here is also a good flowchart from “Apress - Pro WCF4: Practical Microsoft SOA Implementation” that may help.
I am designing an application that creates, uses and deletes MSMQ message queues. Each queue has custom properties which I am currently storing in a file.
I find this messy, however, and the whole system could go down if this file were to disappear.
Is there a way I can bind custom properties (e.g. a property xml string) to the actual message queues which I am using?
Cheers,
Shane
While I don't know if that is possible, you may not want your configuration to go down with the queue either. I would suggest some other kind of external storage mechanism. You could use another queue that holds a message for each queue's configuration (just make sure it's a durable one). You could also look into using a database to hold your configuration and make sure that is backed up.
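A sketch of that configuration-queue idea, assuming a made-up `QueueConfig` shape: serialize each queue's settings to XML, then store the XML as a recoverable message in a dedicated durable queue (the System.Messaging part is shown as a comment, since it is Windows/.NET Framework specific).

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical per-queue configuration, persisted outside the queue itself.
public class QueueConfig
{
    public string QueuePath { get; set; }
    public int RetryLimit { get; set; }
    public string Owner { get; set; }

    // Serialize this configuration to an XML string.
    public string ToXml()
    {
        var serializer = new XmlSerializer(typeof(QueueConfig));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, this);
            return writer.ToString();
        }
    }

    // Rehydrate a configuration from its XML form.
    public static QueueConfig FromXml(string xml)
    {
        var serializer = new XmlSerializer(typeof(QueueConfig));
        using (var reader = new StringReader(xml))
            return (QueueConfig)serializer.Deserialize(reader);
    }
}

// With System.Messaging, the XML could then be sent as a recoverable
// message to a dedicated, durable configuration queue, e.g.:
//
//   var msg = new System.Messaging.Message(config.ToXml()) { Recoverable = true };
//   configQueue.Send(msg, config.QueuePath);  // label = path of the queue it describes
```

Using the described queue's path as the message label makes it cheap to look up or replace one queue's configuration by peeking the config queue.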
The queues are either defined in Active Directory or as text files (in the system32\msmq\storage\LQS folder), for public and private respectively.
In theory you may be able to add custom properties to the public queue object in AD.
Similarly, you may be able to add text to the private queue text file (although it may get stripped out should the queue properties be changed).