How can I use a source column to dynamically partition the data in the target data lake in an Azure Data Factory Copy activity? - azure-data-factory-2

I am using the ADF Copy activity and the source is SQL Server.
I may decide to pull either the whole data set or incremental data based on a date field - so when I do that, I want to create/overwrite the folders in my data lake based on this column.
Source data:
Col1
Col2
Col3
FilterColumn (Date)
Target Lake:
E.g., if I pull one year of data, the folder structure in the lake should be created as below (based on FilterColumn):
entity/2020/03/01/abc.csv
entity/2020/03/02/abc.csv
entity/2020/03/03/abc.csv
entity/2020/03/04/abc.csv
..
..
entity/2021/02/28/abc.csv
where folders are created dynamically based on the source filter column, which also comes in as part of the select query.
Any suggestions on how I can achieve this within the same Copy activity?

I don't think it can be achieved with a single Copy activity alone. You can do this with a Lookup activity and a For Each activity.
Steps:
Use a Lookup activity to get the distinct FilterColumn values, formatted as yyyy/MM/dd.
Loop over the Lookup activity's output with a For Each activity.
Inside the For Each activity, copy each date's data to the corresponding CSV file.
Source:
Sink:
Sink dataset:
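As a rough sketch, the Lookup + For Each pattern above could look like the following pipeline fragment. The table, dataset, and column names (dbo.Entity, DatePath, SqlSourceDataset, LakeCsvDataset) are illustrative, not from the original post, and the sink dataset is assumed to expose a folderPath parameter:

```json
{
    "activities": [
        {
            "name": "LookupDistinctDates",
            "type": "Lookup",
            "typeProperties": {
                "source": {
                    "type": "SqlServerSource",
                    "sqlReaderQuery": "SELECT DISTINCT FORMAT(FilterColumn, 'yyyy/MM/dd') AS DatePath FROM dbo.Entity"
                },
                "firstRowOnly": false
            }
        },
        {
            "name": "ForEachDate",
            "type": "ForEach",
            "dependsOn": [ { "activity": "LookupDistinctDates", "dependencyConditions": [ "Succeeded" ] } ],
            "typeProperties": {
                "items": { "value": "@activity('LookupDistinctDates').output.value", "type": "Expression" },
                "activities": [
                    {
                        "name": "CopyOneDay",
                        "type": "Copy",
                        "inputs": [ { "referenceName": "SqlSourceDataset", "type": "DatasetReference" } ],
                        "outputs": [
                            {
                                "referenceName": "LakeCsvDataset",
                                "type": "DatasetReference",
                                "parameters": { "folderPath": "@concat('entity/', item().DatePath)" }
                            }
                        ],
                        "typeProperties": {
                            "source": {
                                "type": "SqlServerSource",
                                "sqlReaderQuery": {
                                    "value": "@concat('SELECT * FROM dbo.Entity WHERE FORMAT(FilterColumn, ''yyyy/MM/dd'') = ''', item().DatePath, '''')",
                                    "type": "Expression"
                                }
                            },
                            "sink": { "type": "DelimitedTextSink" }
                        }
                    }
                ]
            }
        }
    ]
}
```

Inside the sink dataset itself, the folder path would then be bound to the parameter with @dataset().folderPath, so each iteration writes to entity/yyyy/MM/dd.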

Related

Azure Data Factory Copy Activity for JSON to Table in Azure SQL DB

I have a copy activity that takes a bunch of JSON files and merges them into a single JSON file.
I would now like to copy the merged JSON to Azure SQL DB. Is that possible?
OK, it appears to be working; however, the output in SQL contains only countryCode and CompanyId.
I need to retrieve all the financial information in the JSON as well.
I reproduced the same scenario and below are the steps.
Two JSON files are taken as the source.
Those files are merged into a single file using a copy activity.
The merged JSON data is then taken as the source dataset in another copy activity.
In the sink, a dataset for Azure SQL DB is created and the Auto create table option is selected.
In the sink dataset, the Edit checkbox is selected and the sink table name is given.
Once the pipeline is run, the data is copied to the table.
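The sink configuration described above corresponds roughly to this fragment of the second copy activity's JSON (the dataset names are placeholders):

```json
{
    "name": "CopyMergedJsonToSql",
    "type": "Copy",
    "inputs": [ { "referenceName": "MergedJsonDataset", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "AzureSqlTableDataset", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": { "type": "JsonSource" },
        "sink": {
            "type": "AzureSqlSink",
            "tableOption": "autoCreate"
        }
    }
}
```

If nested properties (such as the financial fields) are missing from the output, they usually need to be added explicitly in the copy activity's Mapping tab, since a hierarchical source is flattened according to that mapping.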

Add created on while copying data from SQL to Azure Data Lake Gen2

I want to copy data from a CSV file in ADLS Gen2 to SQL. In the SQL table, there is a column called created on, but the CSV file doesn't have that column. How can I copy the current date into created on along with the other columns?
You can add a column in the source settings of the copy activity and give the dynamic value as @utcnow()
or
add a derived column transformation in a dataflow, add the new column, and set the expression to currentUTC().
Method:1 [Using Copy Activity]
A copy activity is taken and, in the source settings, the source dataset is selected.
Then, under Additional columns, +New is clicked. created_on is given as the Name and @utcnow() is given as the dynamic content.
After adding the new column, the preview data of the source dataset looks as in the image below.
After this, the file can be copied to the sink.
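In pipeline JSON, the additional column from Method 1 appears on the copy activity's source, roughly like this (the source type is assumed to be delimited text here):

```json
{
    "source": {
        "type": "DelimitedTextSource",
        "additionalColumns": [
            {
                "name": "created_on",
                "value": { "value": "@utcnow()", "type": "Expression" }
            }
        ]
    }
}
```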
Method:2 [Using Dataflow]
Source data is taken in the dataflow as in the image below.
A derived column transformation is added and +Add is selected in Columns. currentUTC() is given as the expression.
In this way, you can add the column dynamically whenever the data is copied to SQL.

Azure Data Factory Incremental Load data by using Copy Activity

I would like to load incremental data from the data lake into on-premises SQL, so I created a data flow to do the necessary transformation and cleaning of the data.
After that, I copied the final data to a staging data lake, stored in CSV format.
I am facing two kinds of issues here.
Whenever I trigger/debug the pipeline to load my dataset (the full data flow activity), the data is loaded into the CSV the first time; if I run the same pipeline a second time, the CSV file in the target data lake comes out empty, meaning the column headers are loaded but I cannot see any values inside the file.
Coming to the copy activity, which is connected to the on-premises SQL Server: I am trying to load the data, but if we trigger this pipeline again and again, duplicate data is loaded. I want to load only incremental or updated data coming from the data lake CSV file. How do we handle this?
Kindly suggest.
When we want to incrementally load our data into a database table, we need to use the Upsert option in the copy data tool.
Upsert helps you incrementally load the source data based on a key column (or columns). If the key column value is already present in the target table, it updates the rest of the column values; otherwise, it inserts a new row with that key and the other values.
Look at the following demonstration to understand how upsert works. I used an Azure SQL database as an example.
My initial table data:
create table player(id int, gname varchar(20), team varchar(10))
My source csv data (data I want to incrementally load):
I have taken an id which already exists in target table (id=1) and another which is new (id=4).
My copy data sink configuration:
Create/select dataset for the target table. Check the Upsert option as your write behavior and select a key column based on which upsert should happen.
Table after upsert using Copy data:
Now, after the upsert using Copy data, the id=1 row should be updated and the id=4 row should be inserted. The following is the final output, which is in line with the expected output.
You can use the primary key of your target table (which is also present in your source CSV) as the key column in the Copy data sink configuration. Any other configuration (like filtering the source by last modified date) should not affect the process.
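In the copy activity JSON, a sink configured this way looks roughly like the following; the key column matches the id column of the demo player table, and the exact upsertSettings property names should be verified against what the authoring UI generates:

```json
{
    "sink": {
        "type": "AzureSqlSink",
        "writeBehavior": "upsert",
        "upsertSettings": {
            "useTempDB": true,
            "keys": [ "id" ]
        }
    }
}
```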

Table name is getting appended to column names in the resultant file in Azure Data Factory

I was trying to get data from an on-prem Hive source to Azure Data Lake Gen2 using Azure Data Factory.
As I need to get data for multiple tables, I created a file (e.g. tnames.txt) with all my table names and stored it in Data Lake Gen2.
In Azure Data Factory I created a Lookup activity and passed the tnames.txt file to it.
Then I added a ForEach activity after that Lookup activity, and inside the ForEach activity I added a copy activity.
In the copy activity source, I supplied a query to extract the data.
The sink is Data Lake Gen2.
Example code:
select * from tableName
Here the table name is dynamically passed from tnames.txt.
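A per-table query like the one described is typically built as dynamic content on the copy activity's source, along these lines. HiveSource is assumed as the source type, and the item property name (tablename) depends on the header of tnames.txt:

```json
{
    "source": {
        "type": "HiveSource",
        "query": {
            "value": "@concat('select * from ', item().tablename)",
            "type": "Expression"
        }
    }
}
```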
But after the data is copied into the data lake, the headers in the copied data look like:
"tablename.columnname".
For example: the table name is Employee and a few columns are ID, Name, Gender, ...
The columns in my resultant file are like Employee.ID, Employee.Name, Employee.Gender, but my requirement is just the column name.
Basically, the table name is appended to the column name.
How do I solve this issue? Is there any other way to get data for multiple tables in a single pipeline/copy activity?
Check the mapping tab of your copy activity. If the mapping is enabled, clear it and use auto-create table. It will auto-generate the schema according to the source schema; there is no need to explicitly create the table with a defined schema. Leave it as auto-create table and it will generate the required mapping automatically.
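If you do end up needing an explicit mapping instead, a translator that strips the table prefix would look roughly like this (column names are from the Employee example above; a fixed mapping like this only works when all tables share a schema, so for the multi-table loop clearing the mapping as described is simpler):

```json
{
    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "employee.id" },     "sink": { "name": "ID" } },
            { "source": { "name": "employee.name" },   "sink": { "name": "Name" } },
            { "source": { "name": "employee.gender" }, "sink": { "name": "Gender" } }
        ]
    }
}
```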

Using ADF Data Flow Derived Column transform against nested Delta structures

I'm trying to use a derived column transform within an ADF (Gen 2) Data Flow where I've ingested a Delta table with nested structures. I'm struggling with the syntax needed to flatten out these structures, and no column info is displayed despite my being able to preview the data.
Such a structure would be:
{
    "ContactId": "1002657",
    "Name": {
        "FirstName": "Donna",
        "FullName": "Donna Brittain",
        "LastName": "Brittain"
    }
}
Data Preview working OK:
Data Preview
The structure of my Delta table:
Delta Table Struct
The error I'm getting trying to reference a nested column:
Derived Column Task
How can I reference a nested column such as Name.FirstName to flatten it out to FirstName and why is it not showing up in any of the mappings?
There is an easy way to flatten the nested structures. We can use a Copy activity in ADF first; it will automatically flatten the nested columns.
Copy the data into Azure Storage such as a data lake (here I used Azure Data Lake Storage Gen2), then we can use it as the data source in the Data Flow.
We can create a txt or csv file with headers in the data lake.
Then we can define a Copy activity in ADF and set the mapping.
After running a debug, we can see the result. We can use it as the data source in a data flow.
Update:
In the sink, we can set the value of the Max rows per file option as follows:
ADF will divide the output into several files.
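In the copy activity JSON, the Max rows per file setting lives in the sink's delimited-text format settings, roughly as below (the row limit and file name prefix are illustrative values):

```json
{
    "sink": {
        "type": "DelimitedTextSink",
        "formatSettings": {
            "type": "DelimitedTextWriteSettings",
            "fileExtension": ".csv",
            "maxRowsPerFile": 1000000,
            "fileNamePrefix": "part"
        }
    }
}
```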