In BizTalk 2016, we are planning a setup with two send ports: one goes to our backup location and the other to an archive. The backup send port is already created, and its file name has to be “ExampleText%MessageID%.xml” (without quotes). Unfortunately, this won’t give us the same name in both locations, because %MessageID% is a randomly generated value.
We are going to write code that compares the file names in the backup folder against those in the archive folder to check that everything is there. Is there any way to go this route of using two send ports and still get the same file name in both locations?
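For reference, here is a rough sketch (in Python, with placeholder folder paths) of the comparison we are planning; it just diffs the file names present in the two locations:

    import os

    # Placeholder paths - replace with the actual backup and archive locations.
    BACKUP_DIR = r"\\server\backup"
    ARCHIVE_DIR = r"\\server\archive"

    backup_files = set(os.listdir(BACKUP_DIR))
    archive_files = set(os.listdir(ARCHIVE_DIR))

    # Anything present in one location but not the other.
    print("Missing from archive:", sorted(backup_files - archive_files))
    print("Missing from backup:", sorted(archive_files - backup_files))

Of course, this check only works if both ports produce identical file names, which is exactly the problem with %MessageID%.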
Along with that, we have another set where the backup send port has to use the file name “%SourceFileName%_%datetime%” (without quotes). My fear is that one send port will be delayed, so the %datetime% value will end up different. My boss wants to go this direction instead of creating an automated script to move the files over. Any advice would be very helpful.
You need to set the desired filename before it hits the Send Ports, via the BTS.ReceivedFileName context property; then you can use the %SourceFileName% macro on both ports.
You can set this property either in an Orchestration or in a pipeline component in the Receive Location. I would tend to use the BRE Pipeline Framework for this, but you can also write your own custom component.
Sorry if this seems stupid, but I wonder if it's possible to add a database entry after an FTP upload.
To be clearer: thanks to WinSCP, I have several folders that automatically send everything I put in them to my server.
However, I would like to create a MySQL entry for each uploaded file, again automatically. Is it possible to do that? How?
To give the full details of what I need to do, read the following.
I have several folders with pictures, and each folder is uploaded automatically.
Each of those folders belongs to one user, and the goal is to give each user an account and allow them to see and download those files through a web interface. Since one account = one folder, that's fairly easy.
And I think a simple .htaccess file can secure things so that each user can only see and download the files in their own directory, no?
However, if I want them to be able to see what's new (i.e. something they haven't downloaded or marked as read), I think I need a table to manage those files.
Something like id | file (string) | read (bool).
If you think this approach is bad, then I'm open to changing how things are done, but to be clear, uploading the files needs to work this way, not through any kind of upload form.
Thanks for reading, and sorry for my English.
Your problem contains three steps:
Folders/files are automatically uploaded to your server directory; as you say, this is already handled efficiently by WinSCP.
You need to update your database with all the files and folders present in your server directory.
You need to track whether or not each file has been read/downloaded by the user.
Since your first step is in place, nothing is needed there. For the second step, you should write a script and schedule it to run at a fixed interval using cron (on Linux/Unix) or Task Scheduler (on Windows). The script would be responsible for building a list of the files present in the directory and inserting the information for any files that are not yet in your database.
EDIT:
This edit describes how your script should work. As I explained, the cron job simply runs your script at a fixed interval (every minute, every hour, every day, and so on). Let's say your database table has the following columns:
fileid (varchar[20])
filepath (varchar[20])
status (boolean)
Your script file should do the following (a sketch follows below):
Create a list of existing filepaths in your server directory
Run a SELECT query and build a list of the filepaths already present in the database table.
Compare list 1 with list 2 and find the entries that don't exist in list 2 (this gives you the list of filepaths that need to be inserted into the table).
Insert the filepaths you found above, and set their status to false (meaning the file has not been read/downloaded yet).
NOTE: Please keep in mind that I am not prescribing how your database table should look. It can be what you have proposed, or it can differ depending on your requirements.
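A minimal sketch of such a script in Python, assuming the columns above plus a table name (files) of my own choosing, and using PyMySQL as one possible MySQL driver; the connection details and upload directory are placeholders:

    import os
    import pymysql

    UPLOAD_DIR = "/var/www/uploads"  # placeholder: the directory WinSCP uploads into

    # Placeholder connection details.
    conn = pymysql.connect(host="localhost", user="dbuser",
                           password="secret", database="files_db")
    try:
        with conn.cursor() as cur:
            # List 1: filepaths currently on disk (relative to the upload directory).
            on_disk = set()
            for root, _dirs, names in os.walk(UPLOAD_DIR):
                for name in names:
                    full = os.path.join(root, name)
                    on_disk.add(os.path.relpath(full, UPLOAD_DIR))

            # List 2: filepaths already recorded in the database.
            cur.execute("SELECT filepath FROM files")
            in_db = {row[0] for row in cur.fetchall()}

            # Insert anything on disk that is not yet in the table, status = unread.
            # (Assumes fileid is generated by the database, e.g. AUTO_INCREMENT.)
            for path in sorted(on_disk - in_db):
                cur.execute("INSERT INTO files (filepath, status) VALUES (%s, %s)",
                            (path, False))
        conn.commit()
    finally:
        conn.close()

You would then schedule it with a crontab entry such as * * * * * /usr/bin/python3 /path/to/sync_files.py to run it every minute.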
For the third step, simply set the status of each file to unread when creating entries in the second step; then, when the user clicks the file link in your application to view or download it, send a POST request to your server that updates the file's status to read.
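The handler for that POST request then boils down to a single UPDATE; a hedged sketch with the same assumed table and driver (the function name is just illustrative):

    import pymysql

    def mark_as_read(fileid):
        # Called by whatever endpoint receives the POST request for a file link.
        conn = pymysql.connect(host="localhost", user="dbuser",
                               password="secret", database="files_db")
        try:
            with conn.cursor() as cur:
                # Flip the status to true (read/downloaded) for the given file.
                cur.execute("UPDATE files SET status = %s WHERE fileid = %s",
                            (True, fileid))
            conn.commit()
        finally:
            conn.close()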
Let me know if this helps!
I'm pretty new to both Taverna and Abaqus, but I am trying to run an Abaqus model using a "Tool" in Taverna remotely on an HPC. This works fine if I already have my model file and inputs on the HPC, but I need a way of uploading the files dynamically in Taverna (I'm trying to generically wrap Abaqus models).
I've tried adding an input port that takes a file list, but I don't know how I can copy it to the "location" I've set for the tool. Could a Beanshell service be the answer, or can I iterate through the file list and copy the files up before executing the Abaqus model?
Thanks
When you say that you created an input port that takes a file list, I guess you mean an input to the tool service.
Assuming the input port is called my_file_list, when the tool service is run, it will take a list of data values on port my_file_list. As an example, say "hello", "hi" and "hola" are the three values in the list.
Wherever the tool service is run, it executes in a temporary directory - a different directory for each execution of the service. It is normally something like /tmp/usecase-2029778474741087696
Three files will be created in the temporary directory; those files contain the (in this example) three values the tool service received on port my_file_list. The files could be called
/tmp/usecase-2029778474741087696/tempfile.0.tmp containing hello
/tmp/usecase-2029778474741087696/tempfile.1.tmp containing hi
/tmp/usecase-2029778474741087696/tempfile.2.tmp containing hola
There will also be a file called my_file_list. That file will contain
/tmp/usecase-2029778474741087696/tempfile.0.tmp
/tmp/usecase-2029778474741087696/tempfile.1.tmp
/tmp/usecase-2029778474741087696/tempfile.2.tmp
The script of your tool service would normally read the contents of my_file_list line by line and do something with the contents of the listed file(s).
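For example, assuming Python is available on the execution node, the tool script could process the list file along these lines (the per-file processing is only a placeholder):

    # Read the list file written by Taverna: one temporary file path per line.
    with open("my_file_list") as listing:
        paths = [line.strip() for line in listing if line.strip()]

    for path in paths:
        with open(path) as handle:
            value = handle.read()
        # Placeholder: do something useful with each value,
        # e.g. write it into the Abaqus model's working files.
        print(path, "->", value)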
I have also seen some scripts that 'cheat' and iterate directly over tempfile*.tmp, but that would be "a bad thing". The problem with that trick is that if you add a second list of files to the tool service, then the file my_file_list could contain
/tmp/usecase7932018053449784034/tempfile.4.tmp
/tmp/usecase7932018053449784034/tempfile.5.tmp
/tmp/usecase7932018053449784034/tempfile.6.tmp
as other temporary files were used for the other file list port.
I hope that helps
The tool service allows you to upload files - but if you are using the HPC through a job submission node, then you would have to modify your command-line tool to use the job's file-staging command to push the files further as part of the job. The files will be available in the current (temporary) directory of the specified tool script.
I would try to do it through the Tool service and not involve a Beanshell - that keeps your workflow simpler.
A good thing to remember is that you can write multiple shell commands in the box.
Similarly, you would probably want to retrieve the results so that you can process them further in the workflow (unless they are massive - in which case you should just output their remote filenames and pass them on to the next HPC job).
The exact commands to use for staging files and retrieving them depends on the HPC job submission system. Which one are you using?
Thanks for the input guys.
It was my misunderstanding of how Taverna uses the File list. All the files in the list are copied to the temp "sandbox" and are therefore available for use.
Another nice, easy way is to zip the directory and pass the zipped file into an input port for the service, then just unzip it inside the command.
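If it helps, the zip file can be produced before the workflow runs with something as small as this (the directory name is a placeholder):

    import shutil

    # Create model_dir.zip from the directory holding the Abaqus model and inputs.
    archive_path = shutil.make_archive("model_dir", "zip", root_dir="model_dir")
    print("Pass this file into the tool's input port:", archive_path)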
Thanks again
When we enter our credentials, the mailbox opens; the page is sent from the server. But we are able to manipulate it, for example by deleting messages or moving messages from one folder to another.
And the next time we log in, it is in the same state, even though the page again came from the server.
It means that the changes we made were saved on the server.
Then why do we say that we can't make changes to dynamic web pages sent by the server to the client?
I'm not sure if I understood your question right.
When you move e-mails into another folder or do something else on the website, your changes take effect on the server, not on your client. So you don't really change the dynamic web page; you change data on the server. For example, a message could have an ID, and there could be a table that maps messages by ID to folders. When you move a message to another folder, database entries on the server are updated.
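As a purely illustrative sketch of that idea (SQLite is used only to keep the example self-contained; a real webmail backend has its own storage and schema):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE folders (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, subject TEXT, folder_id INTEGER)")
    conn.execute("INSERT INTO folders VALUES (1, 'Inbox'), (2, 'Archive')")
    conn.execute("INSERT INTO messages VALUES (42, 'Hello', 1)")

    # "Moving" message 42 to the Archive folder is just an update to the server's data...
    conn.execute("UPDATE messages SET folder_id = 2 WHERE id = 42")
    conn.commit()

    # ...so the next time the page is generated, it reflects the new state.
    print(conn.execute("SELECT subject, folder_id FROM messages").fetchall())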
On my BizTalk server I use several different credentials to connect to internal and external systems. There is an upcoming task to change the passwords for a lot of systems and I'm searching for a solution to simplify this task on my BizTalk server.
Is there a way I could adjust the File/FTP adapters to read this information from an XML file, so that I only have to change the XML file and everything gets updated, or is there an alternative I could use, such as PowerShell?
Has someone else had this task as well?
I would rather not create a custom adapter, but if there is no alternative I will go for that. Using dynamic credentials for a send port can be solved with an Orchestration, but I need this for the receive ports as well.
You can export the bindings of all your applications. All the passwords for the FTP and File adapters will be masked out with a series of * (asterisks).
You could then edit the binding file down to just those ports you want to update, replace the masked-out passwords with the correct ones, and when you want the passwords changed, import it.
Unfortunately, unless you have already prepared tokenised binding files, the above is a manual effort.
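If you do go the tokenised route, the substitution step itself can be trivial. A hedged sketch in Python - the token names, file names, and the JSON credentials file are all assumptions for illustration:

    import json

    # credentials.json maps placeholder tokens to real passwords, e.g.
    # { "__FTP_PASSWORD__": "NewFtpPass1", "__SHARE_PASSWORD__": "NewSharePass1" }
    with open("credentials.json") as f:
        tokens = json.load(f)

    with open("Bindings.template.xml", encoding="utf-8") as f:
        binding = f.read()

    # Replace each placeholder token with the real credential.
    for token, value in tokens.items():
        binding = binding.replace(token, value)

    with open("Bindings.xml", "w", encoding="utf-8") as f:
        f.write(binding)

The resulting Bindings.xml can then be imported through the BizTalk Administration Console or BTSTask as usual.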
I was going to recommend that you take a look at Enterprise Single Sign-On, but on second thoughts, I think you probably just need to 'bite the bullet' and make the change in the various Adapters.
ESSO would be beneficial if you had a single adapter with multiple endpoints/credentials, but I infer from your question that this isn't the case (i.e. you're not just using a single adapter). I also don't think re-writing the adapters to include functionality to read usernames/passwords from a file is feasible, IMHO - just changing the passwords would be much faster, by an order of weeks or months ;-)
One option that is available to you, however, depending on the direction in which the adapter is used: if you need to change credentials on Send Adapters, you should consider setting usernames/passwords at runtime via the various Adapter Property Schemas (see http://msdn.microsoft.com/en-us/library/aa560564.aspx for the FTP Adapter properties, for example). You could then create an encoding Send Pipeline Component that reads an XML file containing credentials and updates the message context properties accordingly; the message would then be sent with the appropriate credentials to the required endpoint.
There is also the option of using ESSO as your (encrypted) config store instead of XML files, a database, etc. Richard Seroter has a really good post on this from way back in 2007 (it's still perfectly valid, though).
Currently we have a lot of mailfiles in different directories on the same server: some are located in data\mail and others in data\mail\DK... or data\mail\USA...
These mailfiles are also replicated to other servers, and we have noticed that on the other servers the mailfiles have a different directory structure.
This makes administration very difficult, so we would like to move all our mailfiles to the data\mail... directories on all servers.
(Some clients have local replicas)
What is the best practice for doing this?
Can the admin process do this: move the file, update the person record, and update the clients?
AdminPs "Move to Another Server" functionality works fine for that job (watch out for the delete requests, though).
My guess is that the original administrator set up the system so that each user's mailfile on their home mail server is in the root of the \mail directory, and that the subdirectories contain replicas of mailfiles from other servers as a means of cheap backup.
I'd suggest looking at the NAB to see if this is indeed the case; if it is, then you are in luck. All you will need to do is bring the server down, move all the mailfiles in the subdirectories into the main mail directory, and restart the server. Once the server comes back up, it will continue to replicate these mailfiles with the other servers.
I would check the replication connection documents to see if any special replication schedules have been set up for those subdirectories; if so, you'll have to adjust them to ensure proper replication.
If the users' home mail server is not using the root mail directory as the mailfile storage area, then it is a longer process. You can use AdminP to do it, but it COULD cause you issues if you accidentally come back in at a later date and approve the deletion requests, or if the server doesn't have enough disk space to hold two copies of all the mailfiles; having two replicas of a mailfile on a single server is not a good idea either.
If you need to do the long process, I'd look at doing it manually: bring the server down, move the mailfiles, bring the server up, edit each person doc to set the correct location for the mailfile, and then visit each user's machine to edit their location document to point to the correct location. It is the only safe way to do it.
The last option is to buy a new server and then use AdminP to move all the users to that server, making sure the mailfiles are stored in the data\mail directory. There is no risk of duplicate replicas on a single server, AdminP looks after adjusting the settings on all the users' machines, and you end up with a nice, clean, new server (on which you could implement things like transaction logging and DAOS).
As for the safe way to go on this one:
Correcting the physical mailfile location should be quite easy (bring the server down, move the mailfiles, start the server again), but modifying all those person documents could be quite complicated if you go one by one.
I would encourage you to use Ytria's ScanEZ software, so that you can mass-modify all person docs in the NAB using a simple Replace Substring formula to correct the Mailfile path information at once.
This is an incredibly fast process; it should not take more than 10-20 seconds.