I have to build an app that can modify mail content items on an email server running Microsoft Exchange Server.
I'm fine with the PST file approach, as I already know how to deal with those.
The problem is that I don't know how Microsoft Exchange Server deals with mail content and PST files. As far as I know, PST files are only a way of backing up mail content and structure, something like an SQL dump file. I have heard that Microsoft Exchange Server internally uses an actual SQL database to store these content items. As I need to make permanent modifications to email content from the client's perspective, I need to know the following:
- How does Microsoft Exchange Server store its mailbox content items? In a database, in PST files, or both? If both, how does it do the syncing? (Here I am partially referring to the concept called "Cached Exchange Mode".)
- What does the data flow look like from the server's side, in the context of Microsoft Exchange Server?
- How can I communicate with Microsoft Exchange Server as a client in order to manipulate content?
Any info on these topics is welcome, as I'm having no luck finding any documentation on them.
Thanks in advance, guys!
Thanks Dmitry! I need to develop a solution that makes sure permanently deleted mailbox items can't be restored by dedicated server or client tools (like scanpst on the client host, which works on PST files). From the client's perspective I've already got close to a solution by modifying the PST files: by wiping the free memory blocks, the files don't get corrupted and that information really is lost. But on the server side, mainly the Exchange Server side (because Exchange is somewhat more special than other servers), I have no data on how to make this data truly unrecoverable. I should mention that I'm starting from the hypothesis that I have access to the server host, both from the server itself and from a client. My hunch is that Exchange could store its mailbox items in the database but "permanently delete" those items only by setting logical flags on the respective records. Or it could use a server-side PST-like file that holds the permanently deleted items much like Windows's Recycle Bin, providing the means to recover some of the deleted items (in which case the database could really delete those records, since the recovery mechanism would live in that file). Maybe Exchange uses both methods. In any case, I need a solution that gives 100% confirmation that those items are really gone. That is why I need specific documentation or confirmed "hunches". Have I described the context of my question more clearly, Dmitry? Thanks!
I have now read some info on how Exchange manages mailbox item deletion. It moves soft-deleted items into the Dumpster at each deletion stage (first into the "Deletions" folder, then into the "Purges" folder, and with litigation hold activated it additionally preserves the original mailbox messages in the "Versions" folder). I then found a way to use Exchange PowerShell to delete those items, and even read about using Remote PowerShell to access the Exchange cmdlets programmatically. This is as far as I have got. Would this be a solution for what I have to do? Would it assure that those items aren't recoverable by any means? Do you know another solution for this, or is there something I am missing?
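For reference, the Remote PowerShell route I read about would look roughly like this; the connection URI, mailbox and search query below are made-up placeholders, not something from my actual environment:

```
# Rough sketch only: connect to Exchange via Remote PowerShell and hard-delete
# matching items. Connection URI, mailbox and query are placeholders.
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://exchange01.example.local/PowerShell/ `
    -Authentication Kerberos
Import-PSSession $session

# Search-Mailbox -DeleteContent hard-deletes the matching items; it requires
# the "Mailbox Import Export" RBAC role. Note that with single item recovery
# or litigation hold enabled, copies can still be retained in the Recoverable
# Items (Dumpster) folders until the retention period expires.
Search-Mailbox -Identity "someuser@example.com" `
    -SearchQuery 'Subject:"confidential"' `
    -DeleteContent

Remove-PSSession $session
```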
Exchange stores the data in its own internal database. Its format is not documented.
On the client side (cached mode) the data is stored in an OST file (you can think of it as a glorified PST file). Outlook periodically syncs the OST data with the online version of the mailbox.
What are you trying to do? From the client point of view, if your code works with a PST store, it should work just fine with an Exchange store, either cached or online.
Can you be more specific?
I am developing a library to edit contacts on a CardDAV server, and I wonder what the proper way to sync contacts is.
So when I find that the ETag for a specific contact has changed: how do I sync both sides?
Do I just combine the changed data, e.g. phone numbers? Or must one side (server or client) win? And how do I detect if a number was changed or added?
The Building a CardDAV client document explains all this very well.
But to address your questions:
So when I find that the ETag for a specific contact has changed: how do I sync both sides?
You load the vCard from the server. Then it depends on the logic of your client. Do you want to auto merge? Do you want to prompt the user whether he wants to merge? Etc.
Usually you want to auto-merge, so do that. After you have the merged vCard, PUT it back to the server, but make sure to use the If-Match header to ensure that it didn't change again on the server side in the meantime.
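As a rough sketch of that guarded PUT (the URL, ETag value and file name are just placeholders, not tied to any particular library):

```
# Upload the merged vCard, but only if the server-side copy still matches the
# ETag we last recorded. URL, ETag and file are illustrative placeholders.
$etag   = '"abc123"'                       # ETag saved when the card was fetched
$merged = Get-Content -Raw "merged.vcf"    # result of your merge step

try {
    Invoke-WebRequest -Uri "https://dav.example.com/contacts/abc.vcf" `
        -Method Put -Body $merged -ContentType "text/vcard" `
        -Headers @{ "If-Match" = $etag }
}
catch {
    # A 412 Precondition Failed means the card changed again on the server:
    # re-fetch, re-merge and retry instead of overwriting blindly.
    Write-Warning "Update rejected, refetch and merge again: $_"
}
```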
Do I just combine the changed data, e.g. phone numbers?
What you consider useful is entirely up to your application. But just combining fields may not be what you want. For example you wouldn't be able to detect deletes.
So in most cases this is going to be a three-way merge:
old version of the server (stored locally)
new version of the server (that you just fetched)
current version of the local application
Or must one side (Server or client) win?
Some clients do it like that, but it is not required. However, if you modify items in response to a change, you need to be VERY careful not to create sync cycles!
And how do I detect if a number was changed or added?
You store the old copy you know and diff.
In general it is a good idea to store the (last known) opaque server copy locally and just pick out the fields your client cares about. Then, when uploading the item again, you patch only those fields (and preserve the rest of what the server sent you).
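For example, a field-level diff of phone numbers between the cached server copy and the freshly fetched one might look like this (the values are obviously made up, and a real vCard needs proper parsing first):

```
# Sketch of detecting added/removed phone numbers; assumes the TEL values have
# already been extracted from the old and new vCards into plain arrays.
$oldPhones = @("+1 555 0100", "+1 555 0101")   # from the locally cached server copy
$newPhones = @("+1 555 0101", "+1 555 0199")   # from the copy just fetched

$added   = $newPhones | Where-Object { $oldPhones -notcontains $_ }
$removed = $oldPhones | Where-Object { $newPhones -notcontains $_ }

"Added on server:   $($added -join ', ')"
"Removed on server: $($removed -join ', ')"
```

Doing the same comparison against the current local copy gives you the third leg of the three-way merge.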
Summary: a proper vCard diff and local cache is non-trivial. Many clients fail at this and lose or duplicate user data.
So unless you plan to put the necessary work and testing into this, an easier way is to detect the changes and ask the user what to do (let the server win, force the user's copy, or merge).
On my BizTalk server I use several different credentials to connect to internal and external systems. There is an upcoming task to change the passwords for a lot of systems and I'm searching for a solution to simplify this task on my BizTalk server.
Is there a way I could adjust the File/FTP adapters to pull this information from an XML file, so that I only have to change it in the XML file and everything gets updated? Or is there an alternative I could use, such as PowerShell?
Has anyone else had this task as well?
I'd rather not create a custom adapter, but if there is no alternative I will go for that. Using dynamic credentials for the send port can be solved with an orchestration, but I need this for the receive port as well.
You can export the bindings of all your applications. All the passwords for the FTP and File adapters will be masked out with a series of * (asterisks).
You could then edit the binding file down to just those ports you want to update, replace the masked-out passwords with the correct passwords, and import it when you want the passwords changed.
Unfortunately, unless you have already prepared tokenised binding files, the above is a manual effort.
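If you do decide to script part of that round trip, a rough sketch with BTSTask could look like the following; the application name, paths and password value are placeholders, and a real script should only touch the Password elements of the ports you actually want to change:

```
# Rough sketch: export an application's bindings, patch in the real password,
# re-import. Application name, paths and password are placeholders only.
$btstask  = "C:\Program Files (x86)\Microsoft BizTalk Server 2013\BTSTask.exe"
$bindings = "C:\Temp\MyApp.BindingInfo.xml"

& $btstask ExportBindings /Destination:"$bindings" /ApplicationName:"MyApp"

# Exported passwords are masked as a run of asterisks; replace them with the
# real value before importing (and keep this file out of source control).
(Get-Content $bindings -Raw) -replace '\*{4,}', 'N3wP@ssw0rd!' |
    Set-Content $bindings

& $btstask ImportBindings /Source:"$bindings" /ApplicationName:"MyApp"
```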
I was going to recommend that you take a look at Enterprise Single Sign-On, but on second thoughts, I think you probably just need to 'bite the bullet' and make the change in the various Adapters.
ESSO would be beneficial if you had a single adapter with multiple endpoints/credentials, but I infer from your question that this isn't the case (i.e. you're not just using a single adapter). I also don't think re-writing the adapters to read usernames/passwords from a file is feasible, IMHO - just changing the passwords would be much faster, by an order of weeks or months ;-)
One option that is available to you, however, depending on the direction in which the adapter is used: if you need to change credentials on send adapters, you could consider setting usernames/passwords at runtime via the various adapter property schemas (see http://msdn.microsoft.com/en-us/library/aa560564.aspx for the FTP adapter properties, for example). You could then create an encoding send pipeline component that reads an XML file containing credentials and updates the message context properties accordingly; the message would then be sent with the appropriate credentials to the required endpoint.
There is also the option of using ESSO as your (encrypted) config store instead of XML files / a database etc. Richard Seroter has a really good post on this from way back in 2007 (it's still perfectly valid, though).
I logged the following question on Server Fault, and it was suggested that I log a related dev question, so here it is.
I have a Lotus Domino DB being archived using the LotusScript method CopyToDatabase. I am about to implement DAOS on the database and need to ensure that attachments are preserved when copied to the archive.
The person who answered the first question suggested that this would not work and that I would lose the attachments. Can anyone advise how to code the archiving (I can only think of using CopyToDatabase) to ensure that the attachment is not lost?
I had assumed Domino would:
move the attachment data from DAOS into Domino when CopyToDatabase is run,
then move the attachment data back into DAOS if the DB it is copied to also has DAOS enabled.
Thanks,
A
It really is an admin question, but the reasoning does involve understanding things from a developer's perspective, so it's pretty reasonable to ask here.
DAOS is 100% invisible to Notes code at all levels. It doesn't matter whether it is LotusScript, Java, or the Notes C API. The code is 100% unaware of DAOS. You actually cannot write special code that deals directly with DAOS objects.
So, your assumption is basically correct. You just didn't mention the actual part where the attachment is deleted, and a couple of other miscellaneous details.
I.e., if the archive database that you are copying to exists on the same server as the source database, and both are enabled for DAOS, then the attachment will remain in DAOS even after you delete it from the source database.
But if the archive database that you are copying to is on a different server, or if it is on a user's local hard drive, and if the attachment does not also exist in some other DAOS-enabled database on the server, then the attachment will be removed from DAOS. This will occur at the next DAOS purge after the time that you delete the document (or just the attachment) from the source database.
Currently we have a lot of mailfiles in different directories on the same server; some are located in data\mail and others in data\mail\DK... or data\mail\USA...
These mailfiles are also replicated to other servers, and we have noticed that on those servers the mailfiles have a different directory structure.
This makes administration very difficult, so we would like to move all our mailfiles into the data\mail... directories on all servers.
(Some clients have local replicas.)
What is the best practice for doing this?
Can the admin process do this: move the file, update the person record and update the clients?
AdminPs "Move to Another Server" functionality works fine for that job (watch out for the delete requests, though).
My guess is that the original administrator set up the system so that each user's mailfile on the home mail server sits in the root of the \mail directory, and that the subdirectories contain replicas of mailfiles from other servers as a means of cheap backup.
I'd suggest looking in the NAB to see whether this is indeed the case; if it is, then you are in luck. All you need to do in that case is bring the server down, move all the mailfiles from the subdirectories into the main mail directory and restart the server. Once the server comes back up it will continue to replicate these mailfiles with the other servers.
I would also check the replication connection documents to see whether any special replication schedules have been set up for those subdirectories; if so, you'll have to adjust them to ensure proper replication.
If the users' home mail server is not using the root mail directory as the mailfile storage area, then it is a longer process. You can use AdminP to do it, but it COULD cause you issues if you accidentally come back at a later date and approve the deletion requests, or if the server doesn't have enough disk space to hold all the mailfiles twice; having two replicas of a mailfile on a single server is not a good idea either.
If you need to do the long process, I'd look at doing it manually: bring the server down, move the mailfiles, bring the server back up, edit each person document to set the correct mailfile location, and then visit each user's machine to edit their location document to point at the correct location. It is the only safe way to do it.
The last option is to buy a new server and use AdminP to move all the users to it, making sure the mailfiles are stored in the \mail directory. There is no risk of duplicate replicas on a single server, AdminP looks after adjusting the settings on all the users' machines, and you end up with a nice, clean, new server (on which you could implement things like transaction logging and DAOS).
As for the safe way to go on this one:
Correcting the physical mailfile location should be quite easy (bring the server down, move the mailfiles, start the server again), but modifying all those person documents could be quite complicated if you go one by one.
I would encourage you to use Ytria's ScanEZ software, so that you can mass-modify all the person docs in the NAB with a simple replace-substring formula to correct the mailfile path information in one go.
This is an incredibly fast process; it should not take more than 10-20 seconds.
This is more of a strategy question instead of a 'help with code' question.
I have a content server and need to supply content to various shared hosts, and I would love to know how you would set this up. We would like the content to be 'cached' on those servers so that it appears static (and to reduce the load on the content server).
Any help would be appreciated. The basic question is how you would deliver, and then handle, the content on the shared hosting.
The usual way is to use rsync to keep the "slave" servers in sync with the "master" server. The slaves are then configured to simply serve the local files. rsync will make sure that files are copied efficiently and correctly (including timestamps and permissions). It will also make sure that clients don't see partial files, it returns useful error codes, and it handles many more issues that only someone who has been doing this for decades would think of.
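For illustration, the pull that each slave runs (from cron or any other scheduler) can be as simple as the following; the host name and paths are made up:

```
# Mirror the master's content directory onto this slave.
# -a preserves permissions and timestamps, -z compresses during transfer,
# --delete removes local files that no longer exist on the master.
rsync -az --delete content-master:/var/www/content/ /var/www/content/
```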