How do I detect changes in vCards?

I am developing a library to edit contacts on a CardDAV Server and I wonder what is the proper way to sync contacts.
So when I find that the etag for a specific contact has changed: how do I sync both sides?
Do I just combine the changed data, e.g. phone numbers? Or must one side (server or client) win? And how do I detect whether a number changed or was added?

The Building a CardDAV client document explains all this very well.
But to address your questions:
So when I find that the etag for a specific contact has changed: how do I sync both sides?
You load the vCard from the server. Then it depends on the logic of your client: do you want to auto-merge, do you want to prompt the user about whether to merge, etc.?
Usually you want to auto-merge, so do that. After you have the merged vCard, PUT it back to the server, but make sure to use the If-Match header so the request fails if the card changed yet again on the server side.
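As a sketch of that conditional upload: the If-Match header and the 412 status code are standard HTTP/WebDAV, but the helper functions and the retry policy here are hypothetical, not part of any CardDAV library.

```python
# Sketch of a conditional PUT for a merged vCard. The header names and
# status codes are standard HTTP; the helpers are illustrative only.

def build_put_headers(etag):
    """Headers for uploading a merged vCard without clobbering
    concurrent server-side edits."""
    return {
        "Content-Type": "text/vcard; charset=utf-8",
        # Only accept the PUT if the server still holds the exact
        # version we merged against.
        "If-Match": etag,
    }

def handle_put_status(status):
    """Decide what to do based on the PUT response code."""
    if status in (200, 201, 204):
        return "stored"             # merge accepted; remember the new etag
    if status == 412:
        return "refetch-and-merge"  # etag mismatch: it changed again
    return "error"
```

On a 412 Precondition Failed you refetch the card, re-merge against the fresh server copy, and retry the PUT with the new etag.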
Do I just combine the changed data, e.g. phone numbers?
What you consider useful is entirely up to your application, but just combining fields may not be what you want. For example, you wouldn't be able to detect deletes.
So in most cases this is going to be a three-way merge:
old version of the server (stored locally)
new version of the server (that you just fetched)
current version of the local application
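The three-way merge of those versions can be sketched for a single multi-valued field such as phone numbers, assuming each version is reduced to a plain set of strings (real vCard properties carry parameters like TYPE=CELL, so this is a simplification):

```python
# Minimal three-way merge for one multi-valued vCard field.
# base:   old version of the server (stored locally)
# server: new version of the server (just fetched)
# local:  current version of the local application

def merge_three_way(base, server, local):
    base, server, local = set(base), set(server), set(local)
    added = (server - base) | (local - base)    # added on either side
    deleted = (base - server) | (base - local)  # deleted on either side
    return (base | added) - deleted
```

Unlike a naive union of server and local, this correctly drops a number that one side deleted, because the base version tells you the number used to exist.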
Or must one side (Server or client) win?
Some clients do it like that, but it is not required. However, if your client modifies an item in response to a change it just synced, be VERY careful not to create sync cycles!
And how do I detect if a number changed or was added?
You store the old copy you know about and diff against it.
In general it is a good idea to store the (last known) opaque server copy locally and just pick out the fields your client cares about. Then, when uploading the item again, you patch just those fields and preserve the rest of what the server sent you.
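A minimal sketch of that diff-and-patch idea, assuming numbers are plain strings and ignoring vCard line folding, grouped names, and property parameters (real code should use a proper vCard parser):

```python
# Diff the last known copy against the fresh one, then patch only the
# properties the client cares about back into the opaque server copy.

def diff_values(old, new):
    """Report which values were added or removed since the last sync."""
    old, new = set(old), set(new)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

def patch_tel(raw_vcard, new_numbers):
    """Replace all TEL lines; preserve every other line verbatim."""
    kept = [l for l in raw_vcard.splitlines() if not l.startswith("TEL")]
    tel_lines = ["TEL:%s" % n for n in new_numbers]
    # Keep END:VCARD as the last line.
    end = kept.pop() if kept and kept[-1] == "END:VCARD" else None
    out = kept + tel_lines + ([end] if end else [])
    return "\n".join(out)
```

Properties your client doesn't understand (photos, custom X- fields, etc.) pass through untouched, which is exactly why storing the opaque server copy matters.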
Summary: a proper vCard diff and local cache is non-trivial. Many clients fail at this and lose or duplicate user data.
So unless you plan to put the necessary work and testing into this, an easier way is to detect the changes and ask the user what they want to do (let the server win, force the local copy, or merge).

Related

Best way to export data from other company's SAP

I need to extract some data from my client's SAP ECC (SUIM -> Users by Complex Selection Criteria, i.e. program RSUSR002).
Normally I give them a table of values, and they have to fill in some fields to extract what I need.
They have to make 63 different extractions (with different object values, for example, but inside the same transaction, as you can see in the screenshot) from their SAP system, and then send me all the extracted files.
Do you know if there is an automated way to extract that, so they don't have to make 63 extractions?
My biggest problem is that they make mistakes every time; it's a lot of data to fill in.
Can I create a variant and send it to them? Is it possible to export my variant so they can import it without having to fill in the data 63 times?
Thank you.
When this is a task which takes considerable effort by multiple people each year, it might be worth automating.
First you need to find out where that transaction gets its data from. If you spend some time analyzing and debugging the program behind the transaction, you will surely find which SELECTs on which database table(s) provide the data. If you are lucky, there might even be a function module for it.
Then you just need to write your own ABAP program which performs the same selections.
Now about the interesting part: How to get that data to you. There are several approaches here. The best one depends on your requirements and your technical infrastructure. Some possibilities are:
Let users run the program in the foreground, use the method cl_gui_frontend_services=>gui_download to save the data to a file on the user's PC, and ask them to send it to you via email.
Run the program in background and save the file on the application server. Then ask your sysadmins how to get that file from their application server to you. The simplest way would be to just map a network fileserver so they all write to the same place, but there might be some organizational hurdles in the way which prevent that. (Our security people would call me crazy if I proposed to allow access to SMB shares from outside of our network, but your mileage may vary)
Have the program send the data to you directly via email. You can send emails from an SAP system using the function module SO_NEW_DOCUMENT_ATT_SEND_API1. This of course requires that the system was configured to be able to send emails (which you can do with transaction code SCOT). Again, security considerations apply. When it's PII or other confidential data, then you should not send it in an unencrypted email.
Use an RFC call to send the data to your own SAP system which aggregates the data
Use a webservice call to send the data to your own non-SAP system which aggregates the data
You can create a recording in transaction SM35.
There you enter a transaction code (SUIM), start recording, make some input in transaction SUIM, and press 'Execute'. Then you go back to the recording (F3 several times) and the system generates a table of commands (structure BDCDATA). You can delete unnecessary parts (e.g. the BACK button click) and save it to use as a 'macro'. When you replay this recording, it will do exactly what you did.
It's also possible to export/import the recording to a text file, so you can explore its structure, write a VBA script that creates such a recording from your parameters, and send it to your users. But keep in mind that blanks are meaningful.
These are standard tools, so no custom coding in the system is required.
You can save the selection as a variant.
Fill in the selection criteria and press Save.
It can be reused.
You can also transport variants if they have a special name.

Send very large file (>> 2gb) via browser

I have a task to do: I need to build a WCF service that allows a client to import a file into a database using the server backend. To do this, I need to communicate to the server the settings, the events needed to start and configure the import, and most importantly the file to import. The problem is that these files can be extremely large (much bigger than 2 GB), so it's not possible to send them via the browser as they are. The only thing that comes to my mind is to split these files and send the parts one by one to the server.
I also have another requirement: I need to be 100% sure that these files are not corrupted, so I also need to implement some sort of policy for error detection and, possibly, recovery.
Do you know if there is an API or DLL that can help me achieve my goals, or is it better to write the code myself? And in that case, what would be the optimal size of the packets?

CoreApplication.Id vs. ASHWID vs Package.Current.Id

Can anyone tell me the difference between these 3 IDs?
I want to identify a particular installation (per purchase/device and not counting minor updates) of my win8 app, which of them will best fit my need?
You'll probably need two of them:
The App Specific Hardware ID (ASHWID) contains information about the device (it does not necessarily match the purchase, since the latter is attached to a user account). You will probably also need to account for hardware drift, as explained here.
Package.Current.Id identifies your application, including its complete version, so you'll need to ignore the parts which you allow to change with minor updates.
EDIT:
I don't think CoreApplication.Id will be of any use to you. It is only unique within a package, to differentiate between multiple applications within a single package. I don't think you can edit it through the VS designer, but you can see the value if you open the appxmanifest file in an editor, e.g.:
<Package>
  <Applications>
    <Application Id="App" />
  </Applications>
</Package>
It's probably best documented here.
If you want to identify a device, ASHWID is the way to go (you might want to use it in combination with a Device Guid generated on the user local storage to ease server lookup of the known device ASHWID)
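That locally stored device GUID can be sketched like this, with a plain dict standing in for the app's local settings container (the key name is hypothetical):

```python
import uuid

def get_device_guid(local_settings):
    """Generate a random GUID once and reuse it on later runs, so the
    server can look the device up cheaply instead of comparing the
    drifting ASHWID on every request."""
    if "device_guid" not in local_settings:
        local_settings["device_guid"] = str(uuid.uuid4())
    return local_settings["device_guid"]
```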
If you want to identify a purchase, you definitely want to take a look at CurrentApp.GetAppReceiptAsync and similar methods: http://msdn.microsoft.com/fr-fr/library/windows/apps/jj649137.aspx
You can also use a combination of the two, depending on your scenario.

Dealing with Microsoft Exchange Server

I have to make an app that is able to modify some mail content items on an email server implemented with Microsoft Exchange Server.
I'm OK with the PST file approach, as I know how to deal with that.
The problem is that I don't know how Microsoft Exchange Server deals with mail content and PST files. As far as I know, PST files are only a way of backing up mail content and structure, something like an SQL dump file. I heard that Microsoft Exchange Server might internally use an SQL database for storing these content items. As I need to make permanent modifications to email content from the client's perspective, I need to know the following:
- How does Microsoft Exchange Server store its mailbox content items? In a database, in PST files, or both? If both, how does it sync them? (Here I am partially referring to the concept called "Cached Exchange Mode".)
- What is the data flow on the server side in the context of Microsoft Exchange Server?
- How can I communicate with Microsoft Exchange Server as a client for content manipulation?
Any info about these topics is welcome, as I'm stuck finding any documentation on this.
Thanks in advance, guys!
Thanks Dmitry! I need to develop a solution that makes sure permanently deleted mailbox items can't be restored by dedicated server or client tools (like scanpst on the client host, which works on PST files). On the client side I got close to a solution by modifying the PST files so that wiping the free memory blocks doesn't corrupt the files, and that information is then really lost. But on the server side, mainly the Exchange Server side (because Exchange is somewhat more special than other servers), I don't have any data on how to make this data truly unrecoverable. I should mention that I'm starting from the hypothesis that I have access to the server host, both from the server host itself and from a client. My hunch is that Exchange could store its mailbox items in the database but "permanently delete" those items only via logical flags in the respective records. Or it could use a server-side PST-like file that keeps permanently deleted items, much like Windows's Recycle Bin, providing the means to recover some of them (in which case the database could really delete those records, since the recovery mechanism would live in the PST-like construct). Maybe Exchange uses both methods. In any case, I need a solution that provides 100% confirmation that those items are really gone. This is why I need specific documentation or confirmed "hunches". Did I describe the context of my question more clearly, Dmitry? Thanks!
I have now read some info about the way Exchange manages mailbox item deletion. It moves soft-deleted items to the Dumpster at each deletion stage (first into the "Deletions" folder, then into the "Purges" folder; with litigation hold activated, it additionally preserves original mailbox messages in the "Versions" folder). I then found a way to use the Exchange Management Shell to delete those items, and even read about using Remote PowerShell to access it programmatically. This is as far as I've gotten. Would this be a solution for what I have to do? Would it ensure those items aren't recoverable by any means? Do you know another solution, or is there something I'm missing?
Exchange stores the data in its internal database. Its format is not documented.
On the client side (cached mode) the data is stored in an OST file (you can think of it as a glorified PST file). Outlook periodically syncs the OST data with the online version of the mailbox.
What are you trying to do? From the client point of view, if your code works with a PST store, it should work just fine with an Exchange store, either cached or online.
Can you be more specific?

How to store configuration data so that to not copy it during database copy?

There are parameters that I would not want to be transferred from the production environment to the QA system, stuff like network paths and URLs. The problem is that in ABAP everything is in the database, and when the database is copied to the QA system you have to change those parameters manually. This is prone to errors.
Is there a way to store configuration information so that it doesn't get transferred with the database?
Thanks.
In short: no - at least that would be very unusual in a SAP environment.
If your QA system is set up as a system copy of your production environment (which is the usual path), there are quite a few steps required to make the system work correctly. This includes some configuration, which can be as simple as file paths such as the ones you mention, but also the addresses and names of "partner systems". For example, one of my customers is a bank, so when copying their production system, they make triply sure that no activity on the QA side accidentally trickles over to the production side. Some other changes are made as well, for example obscuring people's names and addresses so no mail gets sent accidentally, etc.
There are a few ways to make applying these changes as easy as possible (look for SAP documentation or books on SAP transport and change management; I had one by Sue McFarland Metzger that was quite good). From what I've seen, there is usually a set of transports that changes the configuration and customizing on the QA system to the appropriate values.
Hope that helps.
You cannot prevent the configuration stored in the database from being copied to the cloned instance. However, you can design the configuration storage in a way that will prevent the copied entries from being used. You should check with your basis administrators if they can guarantee that the cloned system will get a new system ID (SID). If this is the case, then you can simply use the SID as key field in your configuration table. After the system copy, the SID will be changed and the cloned system will no longer access the original entries.
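The SID-keyed table idea can be sketched like this (the table contents, parameter names, and paths below are purely hypothetical):

```python
# Each configuration row carries the system ID as part of its key, so
# rows copied over from production simply never match on the QA system.
CONFIG = {
    ("PRD", "EXPORT_PATH"): r"\\prodserver\share\out",
    ("QAS", "EXPORT_PATH"): r"\\qaserver\share\out",
}

def get_config(current_sid, param):
    """Look up a parameter for the current system only; entries keyed
    to another SID are invisible, even if they sit in the same table."""
    return CONFIG.get((current_sid, param))
```

After the copy, the cloned system reads only the rows keyed to its own (new) SID, so stale production values are ignored until someone maintains QA-specific entries.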
Your question is not clear. Are you talking about standard or custom config?
Greetings. Assuming you are storing these paths in a Z table, some shops put sy-sysid (the system ID) in one of the columns. You maintain entries for all systems in your dev system and transport them to production. This becomes painful after a while, so I would only suggest it for information that does not change a lot (file paths might be a good fit).
T.