BizTalk 2010 - Using external source for credentials - dynamic

On my BizTalk server I use several different credentials to connect to internal and external systems. There is an upcoming task to change the passwords for a lot of systems and I'm searching for a solution to simplify this task on my BizTalk server.
Is there a way to adjust the File/FTP adapters to read this information from an XML file, so that I only have to change the credentials in the XML file and everything gets updated? Or is there an alternative I could use, such as PowerShell?
Has someone else had this task as well?
I would rather not create a custom adapter, but if there is no alternative I will go for that. Using dynamic credentials for a send port can be solved with an orchestration, but I need this for the receive ports as well.

You can export the bindings of all your applications. All the passwords for the FTP and File adapters will be masked out with a series of * (asterisks).
You could then edit your binding down to just those ports you want to update, replace the masked out passwords with the correct passwords, and when you want the passwords changed, import them.
Unfortunately, unless you have already prepared tokenised binding files, the above is a manual effort.
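If you would rather script this than hand-edit binding files (which also covers the PowerShell angle, since the same assembly can be loaded from a PowerShell script), the ExplorerOM management API can rewrite a port's transport configuration in place. This is only a rough C# sketch under assumptions: the port name, the XPath to the password element and the new password are placeholders, and the exact element name inside TransportTypeData differs per adapter, so inspect an exported binding first.

using System;
using System.Xml;
using Microsoft.BizTalk.ExplorerOM; // reference Microsoft.BizTalk.ExplorerOM.dll

class PortPasswordUpdater
{
    static void Main()
    {
        var explorer = new BtsCatalogExplorer
        {
            ConnectionString = "Server=.;Database=BizTalkMgmtDb;Integrated Security=SSPI;"
        };

        // Hypothetical port name - look it up in the BizTalk Administration Console.
        SendPort port = explorer.SendPorts["MyFtpSendPort"];
        if (port == null) throw new InvalidOperationException("Send port not found.");

        // TransportTypeData is an XML blob; the element holding the password
        // varies per adapter, so the XPath below is only an assumption.
        var props = new XmlDocument();
        props.LoadXml(port.PrimaryTransport.TransportTypeData);

        XmlNode password = props.SelectSingleNode("/CustomProps/Password");
        if (password != null)
        {
            password.InnerText = "NewP@ssw0rd"; // the new credential
            port.PrimaryTransport.TransportTypeData = props.OuterXml;
            explorer.SaveChanges();
        }
    }
}

Receive locations expose a similar TransportTypeData property, so the same pattern can cover the inbound side too.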

I was going to recommend that you take a look at Enterprise Single Sign-On, but on second thoughts, I think you probably just need to 'bite the bullet' and make the change in the various Adapters.
ESSO would be beneficial if you had a single adapter with multiple endpoints/credentials, but I infer from your question that this isn't the case (i.e. you're not just using a single adapter). I also don't think re-writing the adapters to include functionality to read usernames/passwords from a file is feasible IMHO - just changing the passwords would be much faster, by an order of weeks or months ;-)
One option that is available to you, however, depending on which direction the adapter is being used: if you need to change credentials on send adapters, you should consider setting usernames/passwords at runtime via the various adapter property schemas (see http://msdn.microsoft.com/en-us/library/aa560564.aspx for the FTP adapter properties, for example). You could then easily create an encoding send pipeline component that reads an XML file containing the credentials and updates the message context properties accordingly; the message would then be sent with the appropriate credentials to the required endpoint.
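To make the pipeline-component idea concrete, here is a minimal sketch of just the Execute method, assuming a hypothetical credentials file at C:\BizTalkConfig\credentials.xml with a made-up layout. A real component would also implement IBaseComponent, IComponentUI and IPersistPropertyBag, and should cache the file rather than re-reading it for every message.

using System.Xml;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class CredentialStamper : IComponent
{
    // Context property namespace for the FTP adapter.
    private const string FtpPropertyNamespace =
        "http://schemas.microsoft.com/BizTalk/2003/ftp-properties";

    public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
    {
        var doc = new XmlDocument();
        doc.Load(@"C:\BizTalkConfig\credentials.xml"); // assumed location

        // Assumed layout: <credentials><ftp user="..." password="..."/></credentials>
        var ftp = (XmlElement)doc.SelectSingleNode("/credentials/ftp");

        message.Context.Write("UserName", FtpPropertyNamespace, ftp.GetAttribute("user"));
        message.Context.Write("Password", FtpPropertyNamespace, ftp.GetAttribute("password"));

        return message;
    }
}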
There is also the option of using ESSO as your (encrypted) config store instead of XML files / a database etc. Richard Seroter has a really good post on this from way back in 2007 (it's still perfectly valid though).

Related

How to connect multiple Parse servers to the same MongoDB?

I would like to have two separate Parse servers (configured with a different app ID) connect to the same mongodb, so they can see the same set of users, so that I can create 2 different apps that share the same userbase.
Is this something Parse would support? Are there any expected conflicts or config caveats? I was unable to find info about this on Parse's GitHub.
Thanks.
There's nothing special to do, besides setting the database URL option to the same value on both servers and making sure your database is accessible from both servers.
I'm not sure why you would need two different application IDs, as you want the same data and, most likely, the same logic running in both apps.
No, Parse Server does not support sharing classes between applications.
What you could do is have one of the instances, or maybe a third one, handle authentication and store your user information. I am pretty sure this would mean you will have to manually set user info on your requests and on the objects you save on the other two instances.
Another option is for each of the instances to have an afterSave hook on the user class that saves and updates the info on the other instance. This seems easier to do and maintain.
I would choose the second option.

Send very large file (>> 2 GB) via browser

I have a task to do. I need to build a WCF service that allows a client to import a file into a database using the server back end. In order to do this, I need to communicate to the server the settings, the events needed to start and configure the import, and most importantly the file to import. Now the problem is that these files can be extremely large (much bigger than 2 GB), so it's not possible to send them via the browser as they are. The only thing that comes to mind is to split these files and send them to the server one piece at a time.
I also have another requirement: I need to be 100% sure that these files are not corrupted, so I also need to implement some sort of policy for error detection and, possibly, recovery.
Do you know if there is some API or DLL that can help me achieve this, or is it better to write the code myself? And in that case, what would be the optimal size of the chunks?
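For illustration only, here is a minimal C# sketch of the split-and-verify idea described above: read the file in fixed-size blocks and compute a per-chunk SHA-256 so the receiving service can check each piece on arrival. The chunk size and the commented-out upload call are assumptions; the actual WCF service contract is not shown.

using System.IO;
using System.Security.Cryptography;

class ChunkedUploader
{
    const int ChunkSize = 4 * 1024 * 1024; // 4 MB per chunk - tune to your network

    static void Upload(string path)
    {
        using (var stream = File.OpenRead(path))
        using (var sha = SHA256.Create())
        {
            var buffer = new byte[ChunkSize];
            long offset = 0;
            int read;

            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Hash only the bytes actually read (the last chunk is usually shorter).
                byte[] hash = sha.ComputeHash(buffer, 0, read);

                // Hypothetical service call: the server recomputes the hash and
                // requests a resend of this offset if the two do not match.
                // client.UploadChunk(fileId, offset, buffer, read, hash);

                offset += read;
            }
        }
    }
}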

Merge two Endeca Servers (Endeca 3.1) into one, including their current data

Let me explain in more detail:
First: I'm running Endeca 3.1, so "Endeca Server" here refers to 3.0's Data Domain.
I'm required to use an Endeca Server currently present on Endeca (I downloaded a demo VM). All the info on it, including groups, attributes and data, must be merged into our Endeca Server. (It could also be the other way around: I could merge my Endeca Server into this one.)
So far, I've tried to do the following:
1) Clone the Endeca Server
2) Use the putCollection sconfig operation to create a collection on it with the same name I have on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1. I point to the new collection in the Configuration.xls file.
This is where I encounter an issue. The LoadAttributes graph gets a timeout message from the server's web service. Then the config WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file, and then import from that file. This would enable you to add 2 datastores into one server.
If you want to combine 2 datastores then that is a different question.
The simplest approach in 3.1, if the data collections are small: extract them as CSV (via a data table), convert to XLS and add them via self-provisioning into separate collections within a single data store. If you are running in the VM this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call against the conversation web service to extract data and then load it using 'bulk-load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call against the conversation web service, then again extract as CSV and load using Integrator.

Need suggestions on which option will be more efficient for storing data on an iPad

This is my first time working on a big project for a client, so I was not sure how to solve this problem. However, I have come up with two different ideas, but I need a professional's opinion about which one is better :)
Situation :
There is an application which runs on different clients' iPads. Application data is stored in a giant XML file. This XML file is shared among all clients by a server, so the server has a centralised copy and each client has their own copy. Once a client has made changes to their XML copy, they update the server copy, and the other clients then update their copies from the updated server copy.
Only one client can make changes at a time. To enforce this I have logic by which, before a client starts editing the XML, they need to get ownership from the server, and the server will only allow one client to edit at a time.
Now on the client side I have to come up with logic by which I will update my client copy and upload it to the server. There are two options.
Option 1:
In option 1, I can directly manipulate the XML file using the GDataXML parser and upload that copy to the server. For persistence I can save the client copy on my iPad in the Documents directory.
Option 2:
In option 2, I can read the XML file and create a Core Data representation of it for local storage. Whenever I update the data inside Core Data, I will change the XML file too and then upload that file to the server. Double the work, but I guess better persistence.
Now, which one is more robust and advisable? Personally I was planning to go with option 2 because it seems more robust, as I am persisting the application data in Core Data. Option 1 seems like less work, but I don't know how good the persistence will be.
Sorry for the lengthy question.
Thanks for any input given.
There are a number of factors which would influence selecting the second option over the first.
How big is the XML file? If you need to work with very large documents, you may need to incrementally parse the XML (SAX) into Core Data. This will allow you to access the document's contents without loading it all into memory at once.
Do you need to run complex queries on the data? If so, you may be better off using Core Data fetch predicates rather than XPath or XSLT.
Are you already using Core Data? Depending on how the XML data is structured, it might be simpler overall to import the data into your existing persistent store.
Otherwise, you can probably make do with parsing the entire document and either traversing the resulting tree or querying it with XPath.
If you need to create an object graph based on what you get from the server and show it to the user (which you most probably need to do), you should stick with the second option, since it allows easy and robust data persistence.
If you do not need to present the user with any data from the XML file, you can, of course, just store it in the Documents directory.
So, if this is a client application and it has at least some visual representation of the data from the XML file, you should use Core Data.
If you want regular updates of the data, then use Core Data.

How can I access and manipulate an .mdb file available online (on the web) using VB?

I have an .mdb file hosted on my site at http://www.simplyfy.co.in/db/dbfile.mdb. I am developing an application which will be running on multiple machines and will access the .mdb file via the internet. I am not sure how to go about building the connection string for an online connection. Any help?
You do not - not at all, not even a little bit - want to expose a .mdb file directly over the internet. You really, really do not want to do this.
There are two reasons, and I'll start with the second: even if it works - and since it needs to be able to create a .ldb file if it's not read-only, I'm not sure it will - it is liable to be horribly slow. Multi-user MDB access can be bad enough over a local network.
The other reason is security: assuming it works at all, you're going to really struggle to make this even vaguely safe.
Broadly speaking, what you need to do is create a web service that runs on your site and provides a secured API that your client applications can use to access your database. This gives you two benefits: 1) it's much more secure (you're not exposing webspace with write permissions) and 2) it gives you the ability to change the back-end data store, if required, without affecting the clients. There are various possibilities for implementing this, but it will depend on the tools you have / are comfortable with.
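As a rough illustration of that pattern (the service name, table and query are made up, and it is shown as a C# WCF contract only because the rest of the stack here is .NET), the service talks to the .mdb on the server's local disk and the clients only ever see the HTTP endpoint:

using System.Collections.Generic;
using System.Data.OleDb;
using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    List<string> GetCustomerNames();
}

public class CustomerService : ICustomerService
{
    // Local path on the web server - never exposed to clients.
    private const string ConnectionString =
        @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\data\dbfile.mdb;";

    public List<string> GetCustomerNames()
    {
        var names = new List<string>();

        using (var connection = new OleDbConnection(ConnectionString))
        using (var command = new OleDbCommand("SELECT Name FROM Customers", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0));
            }
        }

        return names;
    }
}

The same approach can be written in VB.NET; the important part is that the database path stays server-side and authentication/authorisation happens in the service layer.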
I think it is possible to access it the same way you would access a local file, simply by using the URL as the Data Source. That is, the connection string would look something like:
Provider=Microsoft.Jet.OLEDB.4.0;User ID=...;Data Source=http://www.simplyfy.co.in/db/dbfile.mdb;Mode=..., etc
HTH