I could not see any reference to Pentaho in the Skybot documentation. Is there a way to schedule Pentaho transformations and jobs with Skybot? I have tried creating agents and referring to the file path, but nothing is working. Any pointers?
To execute or schedule anything in Pentaho, you need Pentaho installed on the system. If you are using a Linux system, first install Pentaho DI. Once that is done, use the Skybot scheduler: point it at the Kitchen.sh or Pan.sh script of Pentaho DI and at the job/transformation files you need to schedule/execute. This link may help:
How to schedule Pentaho Kettle transformations?
Once all of that is done you can execute a transformation. Skybot needs the OS and Pentaho in place to execute/schedule a job; the same goes for the Windows scheduler or any other scheduling tool.
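For example, a Skybot job (like any external scheduler) ultimately just has to invoke Kitchen with the job file. A minimal sketch, assuming a typical Linux install; the paths and the job file name are placeholders for your own setup:

    # call PDI's Kitchen with the job to run; adjust paths to your install and job
    /opt/pentaho/data-integration/kitchen.sh -file=/opt/etl/jobs/daily_load.kjb -level=Basic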
Hope it helps :)
This should be pretty simple. Just use the CLI tools to start your job: Kitchen runs jobs, and Pan runs transformations.
Here's the documentation for Kitchen. It's very straightforward.
http://wiki.pentaho.com/display/EAI/Kitchen+User+Documentation
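As a rough sketch (the paths and transformation name are just examples), running a single transformation with Pan looks like this:

    # run one transformation with Pan; the .ktr path is an example
    /opt/pentaho/data-integration/pan.sh -file=/opt/etl/load_customers.ktr -level=Basic
    # Pan and Kitchen return a non-zero exit code on failure, which a scheduler can check
    echo "exit code: $?"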
Related
I constantly have to download .rdl files manually from my Git repository and then upload them manually to Power BI. I don't have much knowledge of Power BI, but I am responsible for these deployments. This is a time-consuming process, and I feel it could be automated through a DevOps pipeline in Bamboo or similar. I researched a bit but couldn't find anything relevant on this topic.
If anyone could help me out by suggesting potential solutions to the problem, that would be awesome! Thank you!
I'm looking at a project to migrate a client from Datamanager to Datastage. I can see that IBM have helpfully added in the Migration Assistant tool, but less helpfully, I cannot find any details on how to actually use it.
I did use it some years ago, and I'm aware that you need to run it from a command-line interface; it works by taking an extract file and creating a Datastage job out of it, which is then reinstalled. However, I no longer have my notes from that process.
If there is a user guide out there for this tool, I'd love to see it.
Cheers
JK
Your starting point would be the IBM support page and then see "How to download the tool" and ensure you have the required version (10.2.1 + Fix Pack 13 + Interim Fix) installed. The user guide PDF is part of the install in the sub-folder "datamanager/migration".
I am trying to use Pentaho, which I downloaded from SourceForge (Pentaho files). I can run the schema-workbench shell correctly and a window opens with the interface, but I still haven't been able to connect to the admin console on http://localhost:8080/pentaho.
Any ideas on why this doesn't seem to work for me?
Best regards
You have a start-pentaho.sh which launches the Pentaho server on port 8080 (it takes a long time the first time).
That is, if you have downloaded the correct package, because Pentaho ships as several packages: one is the server, another is the client tools, which contain the schema-workbench as well as PDI (Pentaho Data Integration) and PRD (Pentaho Report Designer), along with a few others.
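As a sketch, assuming you unzipped the server package into a folder named pentaho-server (the directory name depends on the package/version you downloaded), starting it looks something like:

    # from the server package directory, start the Pentaho server
    cd pentaho-server
    ./start-pentaho.sh
    # once startup finishes, the console is at http://localhost:8080/pentaho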
You are running the wrong file. To open the Pentaho console, you need to download the PENTAHO SERVER package and run 'start-pentaho.sh'.
Pentaho will by default start the PUC (Pentaho User Console) on http://localhost:8080/pentaho once the server is up and running. To get the data integration (i.e. Spoon) interface, go to:
For Windows: Pentaho install directory >> design-tools >> data-integration >> spoon.bat
For Linux/Mac: Pentaho install directory >> design-tools >> data-integration >> spoon.sh
I hope this helps.
I am using Pentaho Community Edition 7. I want to schedule a job with two sub-transformations in it.
I want to schedule it to run every Monday. Can anyone please guide me on saving the files with the correct file paths and scheduling them from the BI server?
You can find guidance for your question in the accepted answer here:
How to deploy scheduled Kettle jobs on Pentaho BI server v6 CE
Basically:
Add the job file and an xaction file (which triggers the job file) to the Pentaho server
Add the transformation files to the server's filesystem
Schedule the xaction file in Pentaho using the "Schedule..." file action (Weekly, Monday, and the preferred time can be set up in the pop-up dialog)
I have some Pig batch jobs in .pig files I'd love to automatically run on EMR once every hour or so. I found a tutorial for doing that here, but it requires using Amazon's GUI for every job I set up, which I'd really rather avoid. Is there a good way to do this using Whirr? Or the Ruby elastic-mapreduce client? I have all my files in S3, along with a couple of Pig jars with functions I need to use.
Though I don't know how to run Pig scripts with the tools that you mention, I know of two possible ways:
To run files locally: you can use cron (see the crontab sketch below)
To run files on the cluster: you can use Oozie
That being said, most tools with a GUI can be controlled via the command line as well (though setup may be easier if you have the GUI available).
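As a minimal sketch of the cron option, assuming Pig is installed locally and using example paths, a crontab entry to run a script at the top of every hour could look like:

    # crontab entry: run the Pig script hourly, appending output to a log (paths are examples)
    0 * * * * /usr/local/pig/bin/pig -f /home/user/jobs/hourly.pig >> /var/log/pig-hourly.log 2>&1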