Creating a trigger file using Pentaho Kettle

I have a log table that captures a log entry each and every time the main table is loaded. What I need is to create a trigger file using Pentaho Kettle each and every time the log table gets updated. The log table is in Teradata.
Any examples or approach to proceed with would be very helpful.
Thanks
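One possible approach (a minimal sketch, not taken from the question): schedule a Kettle job that polls the log table and, whenever the latest log timestamp has advanced past the last one processed, creates an empty trigger file. The Python below only illustrates that polling logic; in Kettle it would typically be a Table Input step feeding a comparison, followed by a "Create a file" job entry. The table name, timestamp column, file paths, and connection details are placeholders, and the teradatasql driver is assumed.

```python
# Sketch of the polling logic a scheduled Kettle job could implement
# (Table Input + comparison + "Create a file" job entry).
# Table/column names, paths, and credentials are placeholders.
import os
import teradatasql  # Teradata's Python DB-API driver (assumed available)

STATE_FILE = "last_log_ts.txt"                  # last log timestamp already handled
TRIGGER_FILE = "/data/triggers/load_ready.trg"  # empty file downstream jobs wait for

last_seen = ""
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        last_seen = f.read().strip()

con = teradatasql.connect(host="tdhost", user="etl_user", password="secret")
cur = con.cursor()
cur.execute("SELECT MAX(load_ts) FROM etl_log")   # hypothetical log table/column
row = cur.fetchone()[0]
latest = "" if row is None else str(row)
con.close()

if latest and latest != last_seen:
    # the log table has a new entry since the last run: drop the trigger file
    open(TRIGGER_FILE, "w").close()
    with open(STATE_FILE, "w") as f:
        f.write(latest)
```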

Related

Create a Trigger in Task Scheduler based on the addition of a row in a remote SQL DB

I am looking to trigger the execution of a sync program based on the addition of a new row in a remote SQL database. This will start a project .ispac file that syncs our databases across our company. Is there a way to create this as a custom event? Right now we are triggering this on a timer, which is not sufficient. Thank you.

Is There A Way To Append Deleted and Updated Data To History Table

Alright, so I am working on a project at work and I need to append data to a new history table every time the data in our other table is updated or deleted. However, we get access to our SQL tables from another company; they only gave us read-only privileges, and we can only view them through Microsoft Power BI and Excel.
So I wanted to see if there was any way of creating a trigger of some sort.
Thank You
From your question, you are trying to do an incremental load of data so you can append new data to a table. You are also looking for some sort of archive process to a history table, via a trigger. Incremental loads are a Power BI Premium-only feature, and moving data based on a trigger is not supported in Power BI at all.
I would recommend trying to get better access to the SQL database, or using Excel to get the data: dump it into Excel/CSV files, create a process that loads the new file(s) and works out the changes using some other database/ETL process, then output the results to a file/table that Power BI can read from.
Hope that helps
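As a rough illustration of the "work out the changes" step (not part of the answer above): assuming pandas, two dated CSV dumps with the same columns, and a hypothetical key column named id, something like this could flag deleted and updated rows and append them to a history file:

```python
# Compare two consecutive CSV dumps and append deleted/updated rows to a
# history file Power BI can read. File names and the "id" key are placeholders.
import pandas as pd

old = pd.read_csv("dump_previous.csv")   # earlier read-only export
new = pd.read_csv("dump_latest.csv")     # latest read-only export

# rows whose key is missing from the new dump were deleted
deleted = old[~old["id"].isin(new["id"])].assign(change_type="deleted")

# rows present in both dumps but with different values were updated
both = old.merge(new, on="id", suffixes=("_old", "_new"))
value_cols = [c[:-4] for c in both.columns if c.endswith("_old")]
changed = (both[[c + "_old" for c in value_cols]].to_numpy()
           != both[[c + "_new" for c in value_cols]].to_numpy()).any(axis=1)
updated = old[old["id"].isin(both.loc[changed, "id"])].assign(change_type="updated")

# append the archived rows to the running history file
pd.concat([deleted, updated]).to_csv("history.csv", mode="a", header=False, index=False)
```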

Creating a View on table create in SQL Server

I have been looking into how to create a predefined view on a table every time it is created within SQL Server.
A nightly job truncates the database and recreates the schema, so I would be looking to create the view then. I know the table name and the structure, so I thought this should be possible.
The way I have been trying to achieve this is through DDL triggers, but I can't seem to get it working. Any help or steer on what to look at next would be greatly appreciated.
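For what it's worth, here is a minimal sketch of the DDL-trigger idea (table/view/column names and the connection string are placeholders): a database-scoped trigger fires on CREATE_TABLE, checks which table was just created, and creates the view. A common stumbling block is that CREATE VIEW must be the first statement in its batch, so inside the trigger it has to be wrapped in EXEC.

```python
# Create a database-scoped DDL trigger that rebuilds a view whenever the
# expected table is (re)created. Names and connection details are placeholders.
import pyodbc

DDL_TRIGGER = """
CREATE TRIGGER trg_view_on_create_table
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @table sysname =
        EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname');
    IF @table = N'MyTable'
        -- CREATE VIEW must be the first statement in a batch, hence EXEC
        EXEC(N'CREATE VIEW dbo.vw_MyTable AS SELECT col1, col2 FROM dbo.MyTable');
END
"""

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                      "DATABASE=mydb;Trusted_Connection=yes;", autocommit=True)
conn.execute(DDL_TRIGGER)
conn.close()
```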

Loading data regularly from ServiceNow to Pentaho Kettle

I'm working on a BI project and I want to retrieve data from ServiceNow and load it into Pentaho Data Integration so I can record it in my data warehouse. I want to do this regularly; in other words, I want to retrieve only the new records from ServiceNow, the ones that haven't been loaded into the data warehouse yet. Does anyone know how I can achieve my goal? Please help.
The question is too vague.
You need to set up an ETL job that incrementally loads data. That will require you to define a timestamp or incremental key to identify which records are more recent than the ones already loaded.
You will need to schedule that job, e.g., using crontab and calling kitchen from the command line.
Your question pretty much translates to "please develop my ETL project". Too wide in scope.
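That said, as a rough starting point (not part of the original answer), the incremental pull could look like this against the ServiceNow REST Table API, using sys_updated_on as the timestamp key and a watermark persisted from the previous run. The instance URL, credentials, table name, and date-filter format are placeholders to adapt; inside Kettle this would typically be a REST Client or Table Input step instead.

```python
# Incremental extract from ServiceNow: fetch only records updated since the
# last successful load, via the REST Table API. All names, credentials, and
# the watermark handling are placeholders for illustration.
import requests

INSTANCE = "https://yourinstance.service-now.com"
TABLE = "incident"
last_loaded = "2024-01-01 00:00:00"   # watermark persisted by the previous run

resp = requests.get(
    f"{INSTANCE}/api/now/table/{TABLE}",
    auth=("etl_user", "secret"),
    params={
        # encoded query: only rows updated after the watermark, oldest first
        # (the exact date-filter syntax may need adjusting for your instance)
        "sysparm_query": f"sys_updated_on>{last_loaded}^ORDERBYsys_updated_on",
        "sysparm_limit": "1000",
    },
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
new_records = resp.json()["result"]

# load new_records into the warehouse here, then persist the new watermark
if new_records:
    last_loaded = new_records[-1]["sys_updated_on"]
```

Scheduling is then just a crontab entry (or Windows Task Scheduler task) that runs the Kettle job via kitchen, as the answer suggests.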

SSIS 2012 package to import Excel data to database table

I want to make an SSIS package that will load data from an Excel file into a database table.
I have already made a package that completes the task, but the table needs to be recreated every time the Excel data is loaded, because the Excel data and its column definitions change every month. If the table is not recreated with every execution there will be errors, and my task will not be complete because the Excel data will be loaded under the wrong column definitions.
Is there any way to dynamically drop and create the table every time?
SSIS generates a lot of metadata behind the scenes describing your source and destination fields and their mappings. If it detects that it isn't talking to the same data source/destination, it will often throw a validation error and refuse to start.
To an extent you can mitigate this by setting DelayValidation to True in the connection manager properties. However, this is unlikely to help in your case, as your data spec genuinely is changing.
A further option to get around the tight controls on data format is to write your own custom source/transformation/destination logic in script tasks. You can write a script task that reads the format of the incoming Excel file, saves those details to variables, and passes them into an Execute SQL Task that creates the table. Then you can write a script task to dynamically map your data to some generic intermediate columns that exist within your package. Finally, you can write a script task destination to load that data into the newly created table.
Basically you're bypassing all of the out-of-the-box SSIS functionality and writing your own integration solution from the ground up, but it seems you don't have much choice.
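Inside SSIS those script tasks would be written in C# or VB.NET; purely as a language-neutral illustration of the read-the-header-then-create-the-table idea (the file name, table name, and NVARCHAR-everything typing are assumptions), the flow looks roughly like this:

```python
# Illustration of the dynamic approach: inspect whatever columns arrived this
# month, (re)create a matching table, then load the rows. In SSIS the same
# steps would live in a Script Task plus an Execute SQL Task.
import pandas as pd
import pyodbc

df = pd.read_excel("monthly_extract.xlsx")   # column set changes every month

# build a CREATE TABLE statement from the incoming columns
# (everything typed NVARCHAR(255) to keep the sketch simple)
cols = ", ".join(f"[{c}] NVARCHAR(255)" for c in df.columns)
create_sql = ("IF OBJECT_ID('dbo.MonthlyImport') IS NOT NULL DROP TABLE dbo.MonthlyImport; "
              f"CREATE TABLE dbo.MonthlyImport ({cols});")

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                      "DATABASE=mydb;Trusted_Connection=yes;", autocommit=True)
cursor = conn.cursor()
cursor.execute(create_sql)

# load the data row by row (fine for a sketch; use a bulk load for real volume)
placeholders = ", ".join("?" for _ in df.columns)
cursor.executemany(f"INSERT INTO dbo.MonthlyImport VALUES ({placeholders})",
                   df.astype(str).values.tolist())
conn.close()
```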