Hot folder - How to check the status of files ingested into Hybris?

In our current production system, we have several files that are processed by the Hybris hot folder, arriving from an external system on a daily/hourly basis. What is the best way to check the status of each file being processed by the hot folder? Is there any OOTB dashboard functionality available for the hot folder, or is it a custom development?
So far, I've been checking the Backoffice cronjob logs, but that is a very cumbersome process - monitoring logs, finding the unique cron job ID, and so on. Are there any better approaches?
I'm looking for something similar to the Jenkins job status view.
Appreciate your inputs.

There is a workaround. Please check this link:
https://help.sap.com/viewer/d0224eca81e249cb821f2cdf45a82ace/1808/en-US/b8004ccfcbc048faa9558ae40ea7b188.html?q=CronJobProgressTracker
First, you need to implement the CronJobProgressTracker class in your current cronjob. Then you can see the progress of the cronjob in either the HAC or the Backoffice:
HAC: execute a FlexibleSearch query (see the sketch below).
Backoffice: add a setting for the CronJobHistory menu, then just click the refresh button to see the latest state of the progress.
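For the HAC route, a minimal FlexibleSearch sketch along these lines lists each job with its current state (the '%hotfolder%' filter on the code is only an illustrative assumption; adjust it to your own job naming):

    SELECT {code}, {status}, {result}, {startTime}, {endTime}
    FROM {CronJob}
    WHERE {code} LIKE '%hotfolder%'
    ORDER BY {startTime} DESC

Ordering by startTime descending puts the most recent runs first.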
As far as I know, it is not possible to track a file's progress state in the OOTB hot folder. You could also write custom code in your upload process. To be honest, though, that last suggestion is not very meaningful on its own; I would need to know your hot folder XML context to give more concrete hints.

The hot folder ingests a file in a series of steps specified by the beans in hot-folder-spring.xml. Add loggers in each of the beans, e.g. batchFilesHeader and batchExternalTaxConverterMapping.
Then you can see the status in the console logs.
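As a sketch of the same idea without editing each bean, you can also add a Spring Integration wire-tap on the channel the files flow through. The channel id batchFilesProc below is an assumption based on typical accelerator setups; use the id from your own hot-folder-spring.xml:

    <!-- Add the interceptor to the existing channel definition in
         hot-folder-spring.xml; "batchFilesProc" is a placeholder id. -->
    <int:channel id="batchFilesProc">
        <int:interceptors>
            <int:wire-tap channel="hotFolderLogChannel"/>
        </int:interceptors>
    </int:channel>

    <!-- Logs every payload passing through the tapped channel. -->
    <int:logging-channel-adapter id="hotFolderLogChannel" level="INFO"
        expression="'Hot folder processing: ' + payload"/>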

Related

Multiple users executing the same workflow

Are there guidelines regarding how to share a Snakemake workflow among multiple users on the same data under Linux, or is the whole thing considered bad practice?
Let me explain in case it's not clear:
Suppose user A executes a workflow in directory dir/. Assume the workflow terminates successfully, and he/she then properly sets file/directory permissions recursively on all output and intermediate files and the .snakemake/ subdirectory for other users to read/write, of course.
User B subsequently navigates to dir/, adds input files to the workflow, then executes it. Can anything go wrong?
TL;DR: I'm asking about non-concurrent execution of the same workflow by distinct users on the same system, and on the same data on disk. Is Snakemake designed for such use cases?
It's possible to run snakemake --nolock, which will prevent locking of the directory, so multiple runs can be made from inside the same directory. However, without the lock there is now an opening for errors caused by concurrent runs trying to modify the same files. It's probably OK if you are certain this will be avoided, e.g. if you are in constant communication with the other user about which files will be modified.
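For example, assuming a local run with four cores:

    snakemake --nolock --cores 4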
An alternative option is to create a third directory/path and put all the data there. This way you can work from separate directories/paths and avoid costly recomputes.
I would say that from the point of view of snakemake, and workflow management in general, it's ok for user B to add or update input files and re-run the pipeline. After all, one of the advantages of a workflow management system is to update results according to new input. The problem is that user A could find her results updated without being aware of it.
Off the top of my head, and without more detail, this is what I would suggest: make snakemake read the list of input files from a table (pandas comes in handy for this) or from some configuration file. Keep this sample sheet under version control (with git/github) together with the Snakefile and other source code.
When users update the working directory with new files, they will also need to update the sample sheet for snakemake to "see" the new input, and other users will learn about it via version control. I prefer this setup over dumping files in a directory and letting snakemake process whatever is in there.
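A minimal sketch of that setup, assuming a tab-separated samples.tsv with a single "sample" column (both the file name and the column name are placeholders):

    # Snakefile
    import pandas as pd

    # The sample sheet is the single source of truth for inputs;
    # keep it under version control next to this Snakefile.
    samples = pd.read_csv("samples.tsv", sep="\t")["sample"].tolist()

    rule all:
        input:
            expand("results/{sample}.out", sample=samples)

    rule count_lines:
        input:
            "data/{sample}.txt"
        output:
            "results/{sample}.out"
        shell:
            "wc -l {input} > {output}"

User B then adds a row to samples.tsv in the same commit that introduces the new data file, so user A sees the change through version control before re-running.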

How to separate the latest file from multiple files in Mule

I have 5000 files in a folder, and new files keep being loaded into the same folder on a daily basis. I need to pick out only the latest file each day from among all the files.
Is it possible to achieve this scenario in Mule out of the box?
I tried keeping a File component inside a Poll component (to make use of watermarks), but it is not working.
Is there any way we can achieve this? If not, please suggest the best approach (any helpful links).
Mule Studio 5.3, Runtime 3.7.2.
Thanks in advance.
Short answer: there isn't really any extremely quick out-of-the-box solution, but there are other ways. I'm not saying this is the right or only way of solving it, but I've implemented a similar scenario like this:
a normal File inbound endpoint with a database table as a file log. Each time a new file is processed, a component checks whether its name already appears in the table. A choice router or filter only lets the file through if it isn't in there already, and after processing, the file name is added to the table.
This is quite a "heavy" solution, though. A simpler approach would be to use an idempotent filter with an object store, for example a Redis server: https://github.com/mulesoft/redis-connector/blob/master/src/test/resources/redis-objectstore-tests-config.xml
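A minimal Mule 3 sketch of that variant (the flow name and path are placeholders; the default in-memory object store is used here, and could be swapped for the Redis store from the link above):

    <!-- Files whose names have been seen before are silently dropped. -->
    <flow name="newFilesOnlyFlow">
        <file:inbound-endpoint path="/data/incoming"/>
        <idempotent-message-filter
            idExpression="#[message.inboundProperties.originalFilename]"/>
        <logger level="INFO"
            message="Processing new file: #[message.inboundProperties.originalFilename]"/>
    </flow>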
It is actually very simple if your incoming file name contains a timestamp: you can configure the file inbound connector by setting file:filename-regex-filter pattern="myfilename_#[function:timestamp].csv". I hope this helps.
Maybe you can use a Quartz scheduler (specify the time in a cron expression), followed by a Groovy script in which you start the file connector. Keep the file connector in another flow.

Where is the action log file physically saved in microsoft test manager?

I want to know where these log files are actually saved in Test Manager.
As per my understanding, they are saved in SQL Server, but I don't know how to retrieve them from there (maybe using the SharePoint reporting service). If you look at the logs, you'll find a link like mtm://<tfsServerName>:8080/tfs/defaultcollection/p:trunk/Testing/testrun/open?id=118.
I'm trying to find more info; if I get it, I'll post it here.
In the meantime, I have created my own customized logs within the test case, so that I can use them for some debugging.

How to track a process's status in TIBCO?

I hope you can help me with my case.
When I define many processes, how do I get status tracking data for those processes? In other words, I want to get a process's history. My purpose is to show it to my client for checking.
I have defined a process that communicates with 3 applications and deployed it to the client. Unfortunately, my client would like to add one more application (up to 4 apps) in the future. I wonder how to do that? Perhaps I have to open the process again and edit it. Is there a way to create a dynamic process?
Thanks very much.
PVA.
You get a very limited "history" in TIBCO Administrator (more or less which process instances completed with success/failure; in case of failure it also provides the exception and where in the process it failed). However, that doesn't show you any tracking of the individual steps/activities that the process passed through. For this, you'd either have to put lots of logging steps into your process (and build something that parses this information from the log files), or you could use BusinessWorks ProcessMonitoring, which gives you a full history trail for each process automatically. However, it is not included with BW and you'll probably need a separate license.
Change the process in TIBCO Designer, build a new EAR file, and re-deploy that EAR file in TIBCO Administrator.

How to prevent Trac from showing some commits in the timeline?

I'm trying to configure a Trac server we are using in my team, in order to avoid some undesired behaviour. We mainly develop free and open-source software in the team, but we sometimes need to be able to build our early prototypes completely privately.
Because of the first constraint, we want our timeline to be visible to anonymous users. But because of the second constraint, we want some commits to be completely hidden from the external world, i.e. we don't want anybody other than us to be able to read the message and content of certain commits in the timeline.
Unfortunately, so far I've been unable to configure Trac to achieve this behaviour. I can't find a configuration that would let me manage the timeline content with enough accuracy.
Consequently, I would like to know if such a configuration is possible with Trac.
For information, I'm using Trac 0.12.2. The installed plugins are:
Trac 0.12.2
TracAccountManager 0.2.1dev-r7731
TracNav 4.1
The only permission I can see that is related to the timeline is TIMELINE_VIEW.
EDIT:
I forgot to mention something: we don't want to lose the private commits, and we want them to be displayed for registered users. Consequently, removing them from the database is not a solution for us.
EDIT 2:
Ideally, we would like the commit messages to be displayed according to the right to read the content of our Subversion repository. The idea is that if a commit is made on a part someone can't access, that person should not be able to read the commit message either.
EDIT 3:
If we look in the Trac configuration file, we can already find:
permission_policies = AuthzSourcePolicy, DefaultPermissionPolicy, LegacyAttachmentPolicy
and the authz_file variable is properly set too. Moreover, the private folders of the svn repositories cannot be accessed by anonymous users over svn.
You should set up authz checking for both your Subversion repository and your Trac installation. You can use the same permission file for both. For Subversion, see Path-based authorization in the SVN book. For Trac, enable and configure the trac.versioncontrol.svn_authz.AuthzSourcePolicy component.
This will allow you to have very fine-grained control over who can access which part of the repository. Note that the implementation of AuthzSourcePolicy in Trac 0.12.2 has a few bugs that will be fixed in 0.12.3.
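For illustration, a shared permission file could look like this (group and path names are placeholders), referenced by Subversion's path-based authorization setup and by Trac's authz_file option alike:

    # Shared authz file for Subversion and Trac.
    [groups]
    devteam = alice, bob

    # The public part of the repository is readable by everyone.
    [/]
    * = r
    @devteam = rw

    # The private prototypes are invisible to anonymous users, so
    # AuthzSourcePolicy hides their commits from the timeline as well.
    [/prototypes/private]
    * =
    @devteam = rw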
There are two ways of going about this:
1) You can directly edit the plugins that are running in Trac and add a module that helps you filter these out at the code level (i.e. you can edit the behavior of the script to, say, only include commits that exclude certain keywords). The timeline script lives here (for a Python 2.4 installation): /usr/local/lib/python2.4/site-packages/trac/Timeline.py (here is an online diff snapshot of the source code: http://trac.edgewall.org/attachment/ticket/890/Timeline.py.diff)
2) You can remove the commits entirely - Trac commits are derived from the SQLite database (the schema is here: http://trac.edgewall.org/wiki/TracDev/DatabaseSchema).
Of course, there also might be some fancy tools out there that provide a nice interface for editing the way the timeline looks.
Finally, as a temporary measure, you can remove the timeline/roadmap entirely via the trac.ini file: http://www.gossamer-threads.com/lists/trac/users/28079
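For that last option, disabling the component in trac.ini is enough, though it removes the timeline for everyone, not only for anonymous users:

    [components]
    trac.timeline.* = disabled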
I confess that I have virtually no experience with the repository part of Trac, even less with using a repository with a variety of permissions across its contents.
On the subject: I had suggested that configuration alone was not enough, but see rblank's answer. While I've never seen the code for that functionality, I was wrong to suggest it doesn't exist. Because it is in a central place and developed/supported in Trac core, this is definitely the way to go.