How to Replicate DDL Changes from Salesforce to Snowflake using IICS (Dynamic Schema Change Handling)

We need to pull data from Salesforce into the Snowflake ODS layer, and we are using IICS for this.
Since we need to replicate the Salesforce tables into Snowflake as-is, we are trying to use the "Dynamic Schema Change Handling" option in IICS, set to "Alter and Apply DDL changes". However, the job tries to ALTER the target table and create all the fields from the source table, even the ones that are already present in the target.
All we want is this: if the source adds a new column, it should be added automatically to the target, and if the source drops a column, the same column should be dropped from the target table.
Can someone please help, if there is an option to achieve this?
Thank you in advance!
Regards,
Edu

Related

BigQuery - Updating the GCS path of BigQuery external tables

I have a requirement where I might have to update BigQuery external tables on a periodic basis.
The GCS location has a timestamp folder for every incremental run, and I would like to update the external table's path to point to the latest timestamp folder.
The only way I can see is dropping the table and creating it again, pointing it to the latest folder. But is there any other way to update it without dropping the table?
As suggested by @Samuel, you can use the SQL statement CREATE OR REPLACE EXTERNAL TABLE for your requirement. Scheduled queries support DML and DDL statements, which can be used to create the new tables. You can use the below-mentioned query parameter to create the table according to your schedule:
My_database_name.my_table_name.my_results_{run_date}
For more information you can refer to this documentation.
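For illustration, here is a minimal sketch of what such a scheduled statement could look like; the dataset name, table name, file format, and GCS path below are all hypothetical:

-- Re-point the external table at the newest timestamp folder without dropping it.
CREATE OR REPLACE EXTERNAL TABLE my_dataset.my_external_table
OPTIONS (
  format = 'PARQUET',  -- match the format of the files in GCS
  uris = ['gs://my-bucket/exports/20230101T000000/*.parquet']
);

Because CREATE OR REPLACE swaps the definition in place, the table name stays stable for downstream queries while the uris list moves to the newest folder.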

How to create a database and table, insert data into it, and use it as a source in another data flow in SSIS?

I need to create a SQL Server database and a table, and insert data into the table from another SQL Server database. I also need to use this newly created database as an OLE DB source in another data flow in the same SSIS package. The table and database names are fixed.
I tried using a Script Task to create the database and tables. But when I have to insert data, I am not able to pick the database in the connection manager, because the database is created only at runtime.
I have also tried setting ValidateExternalMetadata to false, but that doesn't seem to help either.
Any ideas or suggestions on how to accomplish this would be of great help. Thanks
I think you just need two things to make this work:
While developing the package, the database and table will need to exist.
Set DelayValidation to true on the connection manager and data flow tasks, to avoid validation failures before the objects are created.
Use a variable to hold the new table name, create and populate the table using that variable, and then use the variable as the table name in the source object (see the sketch below).
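As a rough illustration, the runtime creation step could be a T-SQL batch like the following, run from an Execute SQL Task before the data flow; all object names here are hypothetical:

-- Create the database and table only if they do not already exist.
IF DB_ID('StagingDB') IS NULL
    CREATE DATABASE StagingDB;

IF OBJECT_ID('StagingDB.dbo.MyTable', 'U') IS NULL
    CREATE TABLE StagingDB.dbo.MyTable (
        Id      INT           NOT NULL,
        Payload NVARCHAR(255) NULL
    );

With DelayValidation set to true, the connection manager and data flow are validated only when they run, which is after this task has created the objects.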

How to stop BigQuery from using the old schema when creating a new table with the same name as a deleted one from Google Sheets

I am using a Google Sheet as the source of a table in BigQuery. Since I am unable to rename field names in the schema of an existing table, I deleted the table and attempted to re-create it after amending the column names in the source Google Sheet. I need to keep the table name the same, as I already have analysis files connecting to the table. However, when I create the new table and ask BigQuery to auto-detect the schema, it uses the schema of the previous table. Even if I enter the new schema as text when creating the table, it ignores what I enter and uses the schema from the old table.
Any ideas on how I can get BigQuery to detect the new schema from the Google Sheet whilst using the same table name as the deleted table?
Thanks in advance!
After trying this multiple times without success, with several tables, it randomly worked and let me create a table with the new schema (entered manually). I'm not sure why this didn't work before, as I'm fairly sure I didn't do anything differently. If anyone has any insight into what might have caused the initial errors, I'd love to hear it for future reference, but my current problem is solved.
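If the UI keeps reverting to the cached schema, one way to side-step it is to create the Sheets-backed table with an explicit schema in DDL instead. A minimal sketch, assuming hypothetical dataset, column, and sheet names:

-- Force the new column names instead of relying on auto-detect.
CREATE OR REPLACE EXTERNAL TABLE my_dataset.my_sheet_table (
  new_col_a STRING,
  new_col_b INT64
)
OPTIONS (
  format = 'GOOGLE_SHEETS',
  uris = ['https://docs.google.com/spreadsheets/d/your-sheet-id'],
  skip_leading_rows = 1  -- skip the header row in the sheet
);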

How to get history of table structure change in SQL Server

I have a table in SQL Server 2012 with a few columns. I need to find out which DDL scripts have been executed against this table and which columns they affected, or at least what the latest ALTER statement executed on the table was.
Thanks in advance.
You cannot get this history unless you already have a historical or archive table that stores it (populated by a DDL trigger) or you keep your schema under source control.
Alternatively, you can use a third-party log reader (provided the transaction log has not been shrunk), such as ApexSQL Log.
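For reference, a minimal sketch of such a DDL trigger; the audit table and trigger names are hypothetical:

-- Audit table that the trigger writes into.
CREATE TABLE dbo.DdlAuditLog (
    EventTime  DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    LoginName  SYSNAME       NOT NULL,
    EventType  NVARCHAR(100) NOT NULL,
    ObjectName NVARCHAR(256) NOT NULL,
    TsqlText   NVARCHAR(MAX) NOT NULL
);
GO
-- Database-scoped trigger that records every ALTER TABLE / DROP TABLE statement.
CREATE TRIGGER trg_LogTableDdl
ON DATABASE
FOR ALTER_TABLE, DROP_TABLE
AS
BEGIN
    DECLARE @e XML = EVENTDATA();
    INSERT INTO dbo.DdlAuditLog (LoginName, EventType, ObjectName, TsqlText)
    VALUES (
        ORIGINAL_LOGIN(),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',               'NVARCHAR(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]',              'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
    );
END;

Note that the trigger only captures changes made after it is created; it cannot recover history retroactively.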

Change a table's database in Hive or HCatalog

Is there a way to change the database a table belongs to in Hive or HCatalog?
For instance, I have the table foo in the database default, and I want to move this table to the database bar. I tried this, but it doesn't work:
ALTER TABLE foo RENAME TO bar.foo
Thanks in advance
AFAIK there is no way in HiveQL to do this. A ticket was raised for it long ago, but the issue is still open.
An alternative could be to use the EXPORT/IMPORT feature provided by Hive. With this feature you can export a table's data to an HDFS location, along with its metadata, using the EXPORT command (the metadata is stored in JSON format). Data exported this way can then be imported into another database (even another Hive instance) using the IMPORT command, as sketched below.
More on this can be found in the Hive IMPORT/EXPORT manual.
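A minimal sketch of that sequence, assuming a hypothetical HDFS path /tmp/foo_export:

-- Export the data and metadata of default.foo to an HDFS directory.
EXPORT TABLE default.foo TO '/tmp/foo_export';
-- Re-create the table in the target database from that export.
USE bar;
IMPORT TABLE foo FROM '/tmp/foo_export';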
HTH
Thanks for your response. I found another way to change the database:
USE db1; CREATE TABLE db2.foo LIKE foo;
Note that CREATE TABLE ... LIKE copies only the table definition, not the rows, so the data still has to be copied separately (see below).
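A small sketch of the full sequence, including the data copy:

USE db1;
-- Copy only the table definition into db2.
CREATE TABLE db2.foo LIKE foo;
-- Copy the rows as well.
INSERT INTO TABLE db2.foo SELECT * FROM foo;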