Pentaho Server: Uploaded KTR files not syncing up in browser

I started my Pentaho Server
Created a Folder
Uploaded some .ktr files to that folder
Problem 1: When I refresh the browser, it says there are no files, but when I view the same folder from Kettle, it does show me the files.
Problem 2: Also, when I delete a .ktr file from the Kettle view, the browser still says it's present, and the browser still lets me delete those files (meaning Kettle is not actually deleting the files?).
Problem 3: From the browser I can create folders, but from Kettle I cannot.

Related

How to use the "Copy Files to:" task twice in the same pipeline in Azure DevOps

I have created an Azure pipeline in which I have added two "Copy Files to" tasks to copy reports from two different folders. But the second one is not working; it only copies the files from the previous task.
First, you need to check from the build log whether the report files are actually generated and which directory they are located in.
If the Copy Files task failed to copy the files, most likely the Source Folder or the file patterns in Contents are specified incorrectly, which causes the files not to be found.
So you need to check that the files you want to copy reside in the Source Folder you specified, and make sure the file paths match the patterns you specified in the Contents field.
See the documentation to learn more about the usage of the Copy Files task.
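If you want to verify locally which files a Contents-style pattern would actually match before running the pipeline, here is a minimal Python sketch of that check; the folder and pattern are hypothetical placeholders, not values from the pipeline above, and the task's own minimatch rules may differ slightly from Python's glob.

# Sanity-check which files a glob-style pattern matches under a source folder.
# The folder and pattern below are placeholders; substitute your own values.
from pathlib import Path

source_folder = Path("reports/folder-two")  # hypothetical Source Folder
pattern = "**/*.html"                       # hypothetical Contents pattern

matches = sorted(source_folder.glob(pattern))
if not matches:
    print(f"No files match {pattern!r} under {source_folder} - nothing would be copied.")
else:
    for match in matches:
        print(match)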

How to prevent files from being accessed if any file failed to write to a required folder

I have 3 XML files to be written to a folder for a client. While writing, 2 of the files got written perfectly but the 3rd file failed. What are the ways by which I can prevent the client from opening any file, or have all the files deleted or locked, if anything failed?
If your application has delete privileges on the system, keep a record of the filenames of the files you're writing. If a file fails for whatever reason, go through the list of filenames and delete those files from the directory. A simple string list with a for loop should do it.
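A minimal Python sketch of that rollback idea (the file names and contents here are illustrative, not from the question):

# Write a batch of files; if any write fails, delete everything written so far
# so the client never sees a partial set. File names are illustrative.
import os

files_to_write = {
    "report1.xml": "<report>1</report>",
    "report2.xml": "<report>2</report>",
    "report3.xml": "<report>3</report>",
}

written = []  # record of files successfully written so far
try:
    for name, content in files_to_write.items():
        with open(name, "w", encoding="utf-8") as fh:
            fh.write(content)
        written.append(name)
except OSError:
    # Roll back: remove every file that was already written.
    for name in written:
        try:
            os.remove(name)
        except OSError:
            pass
    raise

Writing everything to a temporary folder first and moving the files into the client's folder only after all of them succeed is another common way to avoid exposing a partial set.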

Downloading only new files in GoodData

How can I use the "Download File" component to only download new files or files that have been updated remotely?
Consider a graph where a File Download component reads *.csv files from ${S3_OR_DATA_DIR_LOCATION} (I have many *.csv files there, and I'm adding one every day).
How can I make sure GoodData only downloads new files AND files that have been updated? Would setting the option "Overwrite existing files" to False do it? Or would that only download new files and not update existing files that have been updated?
The File Download CloudConnect component by itself does not support downloading only the new files that have appeared in the source folder, because it has no mechanism for remembering its previous state. But since it has an input port, you can implement such a mechanism yourself using the File List CloudConnect component, with a little help from Reformat, a Joiner, and a CSV Writer component. This way you can determine the contents of the source folder and write them out to a plain-text state file. The next run then reads the state file produced by the previous run, determines which files are new, and sends the list of new files to the File Download component's input port.
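CloudConnect graphs are assembled from components rather than written as code, but the bookkeeping described above can be sketched in Python to make the idea concrete; the folder and state-file name below are illustrative assumptions.

# Sketch of the "state file" idea: compare the current folder listing with the
# listing saved by the previous run, and treat the difference as new files.
# The paths and state-file name are illustrative assumptions.
from pathlib import Path

source_dir = Path("s3_or_data_dir")       # stands in for ${S3_OR_DATA_DIR_LOCATION}
state_file = Path("processed_files.txt")  # plain-text list written by the previous run

previously_seen = set()
if state_file.exists():
    previously_seen = set(state_file.read_text().splitlines())

current = {p.name for p in source_dir.glob("*.csv")}
new_files = sorted(current - previously_seen)  # this list would go to File Download

print("New files to download:", new_files)

# Persist the full current listing so the next run can diff against it.
state_file.write_text("\n".join(sorted(current)) + "\n")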
Another way to process only new files, which is much simpler than the process described above and therefore commonly used, takes advantage of folder structure in the source location: there is one dedicated folder for new files and another dedicated folder for already-processed files. The CloudConnect ETL process reads new files from the dedicated source folder, and the last stage of the process uses the File Copy/Move CloudConnect component to transfer the processed files from the new-files folder to the folder containing all already-processed files.
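A comparable Python sketch of the folder-based convention (the folder names are assumptions; in the real graph the transfer is done by the File Copy/Move component, not by code):

# Sketch of the "new" / "processed" folder convention: process everything in the
# new-files folder, then move each file to the processed-files folder.
# Folder names are illustrative assumptions.
import shutil
from pathlib import Path

new_dir = Path("incoming/new")
processed_dir = Path("incoming/processed")
processed_dir.mkdir(parents=True, exist_ok=True)

for csv_file in sorted(new_dir.glob("*.csv")):
    # ... run the actual ETL step for csv_file here ...
    shutil.move(str(csv_file), str(processed_dir / csv_file.name))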

Publish Selenium reports from Jenkins

I run Selenium scripts through Maven. Please let me know how to publish Selenium reports under post-build actions in Jenkins. I tried several options:
Publish selenium report
publish selenium html report
publish HTML report
publish test ng reports
I tried giving the full path from the C drive where the surefire reports exist:
C:\proj1\target\surefire-reports*.html
It says the file *.html doesn't exist,
and for the HTML report it says no report exists.
I tried giving a workspace-relative path as well, but it produces an empty report.
In the test results folder, in an Excel sheet, I do get the status of each test case, but it is not published in Jenkins.
Could anyone please send me the exact steps to publish reports from Jenkins?
Publishing reports with publishHTML works relative to the repository; you should not type the full file path.
For example, on my projects I have the following as the path for reports:
WebContent/build/Selenium/resultsTests/
where resultsTests is, of course, the directory where I publish my reports.
You need to create a folder inside your job/workspace folder.
For example, my job folder is C:\Program Files (x86)\Jenkins\jobs\Yahoo\workspace.
I created a folder named results:
C:\Program Files (x86)\Jenkins\jobs\Yahoo\workspace\results.
I set Test Report HTMLs to results/.
You can check on the link right below the folder field:
"Basedir of the fileset is the workspace root." <--- "the workspace root" is a link to your workspace, so you can see your new folder there.
Follow the steps below:
Install the HTML Publisher plugin in your Jenkins and restart once the installation is done.
From your framework, copy the path of the folder where the reports are saved.
For example, my path is: D:\workspace\rule\CurrentTestResult
In your Jenkins job, navigate to Post-build Actions.
In the "HTML directory to archive" field, give the above path,
for example: "D:\workspace\rule\CurrentTestResult"
In "Index page[s]", mention your file name, or simply add "*.html".
In "Report title", give any desired name.
Apply and check.

Empty files on S3 prevent downloading using s3cmd and s3sync

I am trying to set up a backup/restore using S3. The upload sync worked well using s3sync. However, next to each folder there is an empty file with a matching name. I read somewhere that this is created to define the folder structure, but I am not sure about that, as it doesn't happen if I create a folder using a different method (s3fox, etc.).
These empty files prevent me from restoring the directories/files. When I do s3cmd sync, I get the error message "can not make directory: File exists", because it first creates that empty file, and creating the directory then fails. Any ideas how I can solve this problem?