I have a scenario where I would like to run an update script after a Table Input and Table Output job. Can anyone assist? I have tried these four but I can't seem to figure out how to make them work.
My Current Transformation
Here's the scenario...
Table Input: MySQL Database Table1 (*Select * from Table1*)
Table Output: Oracle Database (Create Table 1)
(this runs well to completion but then I have to execute the update script manually. I am looking for a way to automate this)
The update query I would like to run:
update odb.table1 set column1='New Value1' where column1='Old Value1'
update odb.table1 set column1='New Value2' where column1='Old Value2'
Thank you in advance.
I used the Execute SQL Script step. I just added the two update queries, separated by a semicolon.
I created two transformations: one for the Table Input and Table Output, and another for the Execute SQL Script step.
I then created a Kettle job and placed the SQL script transformation after the Table Input/Output transformation, so the updates run once the load completes.
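For reference, this is roughly what the combined script entered into the Execute SQL Script step looks like (schema and column names as in the question, with the spelling corrected):

update odb.table1 set column1 = 'New Value1' where column1 = 'Old Value1';
update odb.table1 set column1 = 'New Value2' where column1 = 'Old Value2';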
Related
I am trying to parameterize the pre-copy script to DROP existing tables of the same schema and table name from a SQL Server sink. I have tried variations of the above; what's the correct syntax for using the dataset properties in the pre-copy script?
Additionally, is there a good resource on using the dynamic content in ADF?
Reviewing other questions with parameterized query steps, the parameters were incorrectly prefixed. The following executed successfully:
DROP TABLE IF EXISTS @{item().TABLE_SCHEMA}.@{item().table_name}
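In case it helps, here is a minimal sketch of the kind of Lookup query that could feed the ForEach whose item() properties are used above. The use of INFORMATION_SCHEMA.TABLES is only an assumption about how the iteration is driven; adjust the filter to your own table list:

-- Hypothetical Lookup query driving the ForEach; column aliases match item().TABLE_SCHEMA and item().table_name.
SELECT TABLE_SCHEMA, TABLE_NAME AS table_name
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';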
If I have 2 SQL queries in BigQuery and I want them to run one after another, how should I build this data pipeline and automate it?
SELECT
  a,
  b
INTO Table2
FROM Table1;

SELECT
  a,
  b
INTO Table3
FROM Table2;
You can simply use BigQuery DDL statements to create table2 and then use it in the next query to create table3:
CREATE OR REPLACE TABLE `YOUR_PROJECT.YOUR_DATASET.table2` AS
SELECT a, b FROM `YOUR_PROJECT.YOUR_DATASET.table1`;
CREATE OR REPLACE TABLE `YOUR_PROJECT.YOUR_DATASET.table3` AS
SELECT a, b FROM `YOUR_PROJECT.YOUR_DATASET.table2`;
NOTE: Change YOUR_PROJECT and YOUR_DATASET to what you are using.
It depends on the kind of automation needed. For example, you may create the tables using multiple CREATE TABLE statements and then schedule them to run at a certain frequency.
A quick route to scheduling queries is to use the Google Cloud console: select your project, open the BigQuery editor, type in the multiple SQL statements (each ending with a semicolon), and use the scheduled query option to schedule them.
More at:
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#creating_a_new_table
https://cloud.google.com/bigquery/docs/scheduling-queries
First, it would probably be better to describe your use case, since there are a lot of automation tools that could be used. Assuming you want to automate this, you can:
Check whether the table already exists first, and create it only if necessary
Do a CREATE OR REPLACE.
The first option usually works if you want to do updates, for example a daily update that keeps appending data to your table. The second option works if you only want to keep the latest state of your table. Both options are sketched below.
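As a rough sketch of both options against the example tables above (YOUR_PROJECT and YOUR_DATASET are placeholders, as before):

-- Option 1: create the table only if it does not exist, then keep appending rows
CREATE TABLE IF NOT EXISTS `YOUR_PROJECT.YOUR_DATASET.table2` AS
SELECT a, b FROM `YOUR_PROJECT.YOUR_DATASET.table1` WHERE 1 = 0;  -- empty shell with the right schema

INSERT INTO `YOUR_PROJECT.YOUR_DATASET.table2`
SELECT a, b FROM `YOUR_PROJECT.YOUR_DATASET.table1`;

-- Option 2: keep only the latest state of the table
CREATE OR REPLACE TABLE `YOUR_PROJECT.YOUR_DATASET.table3` AS
SELECT a, b FROM `YOUR_PROJECT.YOUR_DATASET.table2`;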
I have a table containing multiple SQL scripts. I would like to build an ETL that runs these scripts on another database, line by line. We built a SQL editor that is used by multiple users; what we need now is for the scripts to be run at a certain time of day by the ETL.
You can use a Table Input step in a transformation to read the table holding these scripts. You can then hop the Table Input step to an Execute Row SQL Script step to execute the SQL scripts from the "script" column row by row.
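A minimal sketch of the Table Input query, assuming the scripts live in a table named etl_scripts with an id column for ordering and a script column holding one statement per row (these names are placeholders):

-- Query for the Table Input step; the Execute Row SQL Script step is then
-- pointed at the "script" field so each row's statement is executed in turn.
SELECT script
FROM etl_scripts
ORDER BY id;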
I am writing an ETL to extract data from a HANA table and load it into SQL Server using BODS.
The job has to create a new table in SQL Server every time it runs, named with that day's date. I know we can do that for flat files by using a global variable, but I am not sure how to declare a similar variable for a template table to get the desired result.
Why do you want to use template tables? You can do the same as below:
Load the data into a standard staging table using BODS
Using the DS scripting mechanism, generate a query to create the table
Execute that query using a SQL transform
Generate another query to copy the data from the staging table to the table created above
There are several other ways as well; for example, you can write a DB procedure that creates a table with the desired name and copies the data over from the staging table. That procedure can then be called from DS.
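For illustration, the queries generated in steps 2-4 above might look like the following on SQL Server, assuming a staging table named dbo.STG_HANA_DATA and a run date of 2024-01-15 (both names are placeholders, not anything BODS produces by default):

-- Create the dated table with the staging table's structure, then copy the rows.
SELECT * INTO dbo.TBL_20240115 FROM dbo.STG_HANA_DATA WHERE 1 = 0;
INSERT INTO dbo.TBL_20240115 SELECT * FROM dbo.STG_HANA_DATA;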
Hope this helps.
Cheers.
Shaz
I have this scenario: I have a staging table that contains all the records imported from an XML file. Now I want to move this data based on a check: if the record already exists in the other table, update it; otherwise, insert a new record. I want to create a job or scheduler in SQL Server that does this for me every night, without using any SSIS packages.
Have you tried using the MERGE statement?
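A minimal MERGE sketch; the table and column names (dbo.TargetTable, dbo.StagingTable, Id, Col1) are placeholders for illustration only:

MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Col1 = s.Col1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1) VALUES (s.Id, s.Col1);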
SSIS really is an easy way to go with something like this, but if necessary, you can set up a SQL Server Agent job. Take a look at this MSDN article. Basically, write your validation code in a stored procedure, then create a job with a T-SQL job step that calls that stored procedure.
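As a rough sketch of that approach, assuming the upsert logic is wrapped in a stored procedure named dbo.usp_MergeStagingIntoTarget in a database called YourDb (both names are hypothetical), the nightly Agent job could be created like this:

USE msdb;
EXEC dbo.sp_add_job @job_name = N'NightlyStagingMerge';
EXEC dbo.sp_add_jobstep
    @job_name = N'NightlyStagingMerge',
    @step_name = N'Run merge procedure',
    @subsystem = N'TSQL',
    @database_name = N'YourDb',
    @command = N'EXEC dbo.usp_MergeStagingIntoTarget;';
EXEC dbo.sp_add_schedule
    @schedule_name = N'Nightly_2AM',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 020000;
EXEC dbo.sp_attach_schedule
    @job_name = N'NightlyStagingMerge',
    @schedule_name = N'Nightly_2AM';
EXEC dbo.sp_add_jobserver @job_name = N'NightlyStagingMerge';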