I can't see my transformation files (PDI) in the Pentaho User Console

I have CSV files in the local file system that contain the output of a program. I want to use Pentaho CDE to visualize that data in graphs (pie charts, etc.).
I can do that if I upload my CSV files directly as a datasource, but I would like to read from the file in real time as data arrives.
I saw that I have to use PDI, so I created a transformation from the input CSV file to an output file. I can visualize the output.
I saved the .ktr transformation file in the pentaho-solutions directory, but I can't see it in the Pentaho User Console. I think I have to use a "kettle over kettleTransFromFile" datasource, but I can't see my transformation file to load it. I refreshed the cache, but I still can't see it.
Am I doing this wrong?
Thank you for your time.

Related

How to add the date to the file name while uploading a file to an S3 bucket using Alteryx

I have a workflow in Alteryx where I download two files from two different URLs. After making the required modifications, I want to upload them to an S3 bucket as well as save a copy locally. I want to add the current date to the file name in both cases. I was able to use a Formula tool to rename the file saved locally, but I am unable to do the same for the copy being uploaded to S3. Can anyone help me with this? PS: Since it's company data, I can't share a screenshot of the workflow.
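To illustrate the idea outside the workflow itself (e.g. from Alteryx's Python tool, or a script run after the workflow finishes), you can build the date into the S3 key yourself. A minimal sketch using boto3, assuming AWS credentials are already configured; the bucket, key prefix, and file names below are placeholders:

```python
# A minimal sketch: upload a local file to S3 with the current date baked
# into the object key. Bucket, key prefix, and file names are placeholders.
from datetime import date

import boto3

today = date.today().strftime("%Y-%m-%d")
local_file = f"output_{today}.csv"      # the locally renamed copy
s3 = boto3.client("s3")                 # assumes configured credentials

s3.upload_file(
    Filename=local_file,
    Bucket="my-company-bucket",
    Key=f"reports/output_{today}.csv",  # date included in the S3 key
)
```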

Is it possible to automate updating a Tableau extract for Tableau Reader?

Situation now:
I have a data warehouse job that publishes a .txt file to a Data folder every morning. I open a Tableau workbook, which automatically updates the data visualisations because of a union I made. I save this workbook as an extract, and colleagues without Tableau Desktop can view it via Tableau Reader.
What I need:
This reporting process is heavily dependent on me, and I need to automate it.
Is this even possible without Tableau Server?
Since Tableau Reader can only use packaged workbooks with extracted data, you may not be able to achieve this directly.
However, you can automate the packaging process using Tableau's command-line parameters, so the process will no longer depend on anyone.
You may check the .PDF file at the link below. Using that help document, you can create a .BAT file and have Task Scheduler on your computer run it periodically. Users can then open the packaged file from the network location where you saved it. Alternatively (if all user computers have Tableau Desktop installed), you can put a file-opening command at the end of the .BAT file, so users can run the .BAT whenever they want to see the report.
https://community.tableau.com/docs/DOC-5209
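As an alternative to a plain .BAT, the scheduled step could be a small Python wrapper around the Tableau Desktop command line. A rough sketch, where the install path, the refreshextract arguments, and the datasource name are assumptions to verify against the linked document:

```python
# A rough sketch of the step Task Scheduler would run. The tableau.exe
# path, the refreshextract arguments, and the datasource name are
# assumptions -- check them against the linked help document.
import subprocess

TABLEAU_EXE = r"C:\Program Files\Tableau\Tableau 2021.4\bin\tableau.exe"

subprocess.run(
    [TABLEAU_EXE, "refreshextract", "--datasource", "DailyReport"],
    check=True,  # fail loudly so a broken refresh shows up in the task log
)
```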
Bernardo was correct in saying the Extract API can be used to programmatically create extracts, and thus "refresh" an extract by simply recreating it (the point about Tableau Server is only relevant if you want to publish the extract that you create with the Extract API).
Where you might have trouble is that there is currently no supported way to programmatically replace an extract within a .twbx file. That said, it should be possible by simply renaming the .twbx to .zip (it is, after all, just an archive) and then using something like Python's zipfile module to manipulate the archive and replace the extract with your new one.
NB: The Extract API can only be used to create .hyper files. If you want to work with .tde files, you'll need to use the Tableau SDK instead.
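A minimal sketch of that archive manipulation in Python, assuming the fresh extract has already been created with the Extract API; the workbook name, the new extract's file name, and the internal path inside the .twbx are placeholders:

```python
# A minimal sketch: swap the extract inside a packaged workbook by
# rewriting the archive. All file names below are placeholders; the
# internal path must match what is actually inside your .twbx.
import shutil
import zipfile

workbook = "report.twbx"                      # a .twbx is just a zip archive
new_extract = "fresh.hyper"                   # extract rebuilt via the Extract API
extract_in_twbx = "Data/Extracts/data.hyper"  # hypothetical internal path

with zipfile.ZipFile(workbook) as src, \
        zipfile.ZipFile("rebuilt.twbx", "w", zipfile.ZIP_DEFLATED) as dst:
    for item in src.infolist():
        if item.filename == extract_in_twbx:
            dst.write(new_extract, arcname=extract_in_twbx)  # swap in new extract
        else:
            dst.writestr(item, src.read(item.filename))      # copy everything else

shutil.move("rebuilt.twbx", workbook)  # replace the original workbook
```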

Getting some extra files without any extension on Azure Data Lake Store

I am using Azure Data Lake Store for file storage. I am using operations like:
Creating a main file
Creating part files
Appending these part files to the main file (for concurrent append)
Example:
There is a main log file (which will eventually contain logs from all programs)
There are part log files that each program creates on its own and then appends to the main log file
The workflow runs fine, but I have noticed some unknown files getting uploaded to the store directory. These files are named with a GUID and have no extension; moreover, they are empty.
Does anyone know what might be the reason for these extra files?
Thanks for reformatting your question. These look like processing artefacts that will probably disappear shortly. How did you upload/create your files?
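For reference, a typical create-then-append pattern with the azure-datalake-store Python SDK might look like the sketch below (the store name, credentials, and paths are placeholders, and plain append is shown rather than the ConcurrentAppend REST operation); comparing it against your own upload path may help narrow down where the GUID files come from:

```python
# A minimal sketch of creating a main file and appending part files with
# the azure-datalake-store package (ADLS Gen1). The store name, service
# principal credentials, and paths are all placeholders.
from azure.datalake.store import core, lib

token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<client-id>",
                 client_secret="<client-secret>")
adl = core.AzureDLFileSystem(token, store_name="mystore")

# Create the main log file once.
with adl.open("/logs/main.log", "wb") as f:
    f.write(b"=== main log ===\n")

# Each program writes its own part file, then appends it to the main log.
with adl.open("/logs/part-0001.log", "rb") as part, \
        adl.open("/logs/main.log", "ab") as main:
    main.write(part.read())
```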

What is the need for uploading a CSV file for performance testing in BlazeMeter?

I am new to the testing world and have just started working on performance and load testing. I want to know: when we record a script in BlazeMeter and upload a JMX file for a test, why do we upload a CSV file with it, and what data do we need to enter in that CSV file? Please help.
Thank you
You can generate data for testing (e.g. user names and passwords), save it in CSV format, and then read it from the CSV file and use it in your test scenarios as needed. Please refer to the BlazeMeter documentation: Using CSV Data Set Config.
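For example, such a data file is often just generated with a short script. A minimal sketch, where the column names are placeholders (with an empty Variable Names field, CSV Data Set Config can read the header row, so the columns become ${user} and ${password} in the test plan):

```python
# A minimal sketch: generate test credentials as a CSV file that a JMeter
# CSV Data Set Config element can read. Column names are placeholders;
# the header row doubles as the JMeter variable names.
import csv

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user", "password"])            # header = variable names
    for i in range(1, 101):
        writer.writerow([f"user{i}", f"secret{i}"])  # dummy credentials
```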

Load multiple CSV files from a directory using DSE Graph Loader

I have data in multiple CSV files (with the same header) in a directory. I want to create vertices from those CSV files. How can I load all the files in a single load using DSE Graph Loader? I have more than 600 CSV files.
You are correct that Graph Loader is the tool for the job. The docs have a good example here.
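If the directory-based mapping in those docs doesn't cover your case, one workaround (a substitute for loader-native directory loading, not the loader's own mechanism) is to pre-merge the files before loading. A sketch, assuming every CSV shares an identical header; the directory and output file names are placeholders:

```python
# A minimal workaround sketch: merge many same-header CSV files into one
# file that a single loader mapping can consume. Paths are placeholders.
import csv
import glob

writer = None
with open("merged.csv", "w", newline="") as out:
    for path in sorted(glob.glob("data/*.csv")):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            header = next(reader)          # identical header in every file
            if writer is None:
                writer = csv.writer(out)
                writer.writerow(header)    # write the header only once
            writer.writerows(reader)       # append the data rows
```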