How can we pass a static value in Azure Data Factory Copy activity mapping? - azure-data-factory-2

I want to pass a hard-coded value into a column while using the Copy activity - how can this be done?

You can add an additional column in the Copy activity and pass your hard-coded value to it. Then use that column in the mapping.
Reference:
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
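As a minimal sketch of what that looks like in the Copy activity source JSON, per the additionalColumns option in the linked doc (the column name "Environment", its value, and the source type here are made-up placeholders):

"source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
        {
            "name": "Environment",
            "value": "Production"
        }
    ]
}

The added column then shows up in the mapping tab like any other source column.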

Related

SSIS Mapping and Transformation

I'm new to building SSIS packages; in fact, this is my first package. I need to pull data from a DB view on an Azure managed instance into an on-prem SQL Server. I have built out the data flow and all. I'm moving data from a database view into another database table, but the destination table has a column that the source doesn't have, so my destination mapping view looks like the attached image. How do I fix this, or what are my options?
If this column can stay empty and you don't have it in the source, your best and only option is to leave it like this. The mapping will simply ignore the column, so no data will be fed into it. That will work.
If you need a value such as the current date, you can add a Derived Column transformation between your source and destination in the Data Flow, where you can add the current date or further columns that come from variables, for example.
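As an illustrative sketch of that Derived Column entry (the column name LoadDate is made up; GETDATE() is a standard SSIS expression function):

Derived Column Name: LoadDate
Derived Column:      <add as new column>
Expression:          GETDATE()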
"Ignore (optional)" is self-explanatory: the mapping for those columns can simply be skipped. If you want such columns to be mapped to a calculated value, you can do that with the Derived Column SSIS component. Reference
For your use case, try to use the OLE DB component instead of the ADO.NET component to optimize performance for a relatively large data set.

How to update existing data in Apache Druid

The problem was to add a new field to an existing datasource and fill it with some default value.
I have tried to do so via this article.
But the actual result is that the new column was added, yet it is filled with null values.
Where did I go wrong, and can I fix it the same way?
It would be hard to tell without looking at how you have added the new column in your ingestion spec.
I suggest using the Druid unified console's data loader UI: parse your input data, then define the additional column under the Transform section. The advantage of the data loader UI is that you can preview the transformed result immediately, and once the workflow is completed you get an ingestion spec and can submit it from there.
E.g.: transformation example (screenshot in the original answer).
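As a minimal sketch of what such a transform might look like in the resulting ingestion spec (my illustration; the field name new_field and its default value are made up):

"transformSpec": {
    "transforms": [
        {
            "type": "expression",
            "name": "new_field",
            "expression": "'some_default'"
        }
    ]
}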
From the documentation here: https://druid.apache.org/docs/latest/design/segments.html#different-schemas-among-segments
Maybe the result is related to the fact that existing segments don't have the new field and therefore show null.
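If that is the cause and reindexing the older segments isn't an option, one workaround (my own suggestion, not from the answers above) is to substitute a default at query time with Druid SQL's COALESCE; the datasource and field names here are placeholders:

SELECT COALESCE(new_field, 'some_default') AS new_field
FROM my_datasource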

Adding a dynamic column in Copy activity of Azure Data Factory

I am using a Data Flow in my Azure Data Factory pipeline in order to copy data from one Cosmos DB collection to another. I am using the Cosmos DB SQL API as the source and sink datasets.
The problem is that, when copying the documents from one collection to the other, I would like to add an additional column whose value is the same as one of the existing keys in the JSON. I am trying the Additional columns option in the source settings, but I am not able to figure out how to add an existing column's value there. Can anyone help with this?
In the case of the Copy activity, you can assign an existing column's value to a new column under Additional columns by specifying the value as $$COLUMN and adding the name of the source column to duplicate.
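A minimal sketch of what that looks like in the Copy activity source JSON (newColumn and existingColumn are placeholder names; I am assuming the $$COLUMN:<source column> form from the Copy activity documentation):

"additionalColumns": [
    {
        "name": "newColumn",
        "value": "$$COLUMN:existingColumn"
    }
]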
If you are adding the new column in a data flow instead, you can achieve this using a Derived Column transformation.

How to link a DB2 table column to another object?

I am new to DB2 and I'm working with an IBM i series system. I have an object with a special column "X". I want to store the data for X in another object whenever some criteria are satisfied, and still be able to get data from the original object for both the special records (whose data have been stored in the other object) and the normal records (whose values are stored in the original object).
How can I link this column to another object? Can the DB2 DATALINK data type help me in this situation? If yes, how can I implement that? I can't find a complete tutorial on how to do this.
I would appreciate any help.

Adding two extra columns to input data - Pentaho Kettle

I am working on a transformation step for Pentaho Kettle. It selects several input columns and, based on those, adds two new columns during transformation. I am unable to understand (based on code from other plugins) how I can add the two new columns so that 1) steps downstream are aware of these columns and 2) I can push the transformed data into these columns.
Thanks in advance.
You might need to override meta.getStepFields() to add new ValueMetaInterface objects to the RowMetaInterface passed in. This is the standard way to add columns at runtime; however, the row's metadata (i.e. list of ValueMetaInterface objects) must be the same from row to row or else the next step in your transformation will complain.
Often when doing data-driven custom plugins, you consume as many rows as you need (using getRow()) in order to figure out what the outgoing row format/metadata will be, then you can construct a RowMetaInterface (usually using meta.getStepFields()) that will be passed into the putRow() call. If you intend to pass through the incoming fields, do something like:
RowMetaInterface outputRowMeta = getInputRowMeta().clone();
If you're creating new rows, use this:
RowMetaInterface outputRowMeta = new RowMeta();
Either way when you call meta.getStepFields(outputRowMeta, ...) it should populate outputRowMeta with the appropriate fields, by adding/changing/removing ValueMetaInterface objects from outputRowMeta.
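As a minimal sketch under those assumptions (this assumes the classic Kettle 4-era getFields() method on the step's meta class, which TransMeta.getStepFields() delegates to; exact signatures vary across PDI versions, and the field names newColumnA/newColumnB are made up):

import org.pentaho.di.core.exception.KettleStepException;
import org.pentaho.di.core.row.RowMetaInterface;
import org.pentaho.di.core.row.ValueMeta;
import org.pentaho.di.core.row.ValueMetaInterface;
import org.pentaho.di.core.variables.VariableSpace;
import org.pentaho.di.trans.step.StepMeta;

public void getFields( RowMetaInterface rowMeta, String origin, RowMetaInterface[] info,
    StepMeta nextStep, VariableSpace space ) throws KettleStepException {
  // Append the two new columns to the outgoing row metadata,
  // so downstream steps know about them.
  ValueMetaInterface colA = new ValueMeta( "newColumnA", ValueMetaInterface.TYPE_STRING );
  colA.setOrigin( origin );
  rowMeta.addValueMeta( colA );
  ValueMetaInterface colB = new ValueMeta( "newColumnB", ValueMetaInterface.TYPE_NUMBER );
  colB.setOrigin( origin );
  rowMeta.addValueMeta( colB );
}

Then, in the step's processRow(), grow each incoming row and fill the two new slots before calling putRow() (the values here are made up):

// row is the Object[] returned by getRow(); outputRowMeta was built as above.
Object[] outputRow = org.pentaho.di.core.row.RowDataUtil.resizeArray( row, outputRowMeta.size() );
outputRow[ outputRowMeta.size() - 2 ] = "computed value A";
outputRow[ outputRowMeta.size() - 1 ] = 42.0;
putRow( outputRowMeta, outputRow );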
I've got a blog post using Groovy to add/replace fields in the incoming rows here:
http://funpdi.blogspot.com/2014/10/flatten-json-to-key-value-pairs-in-pdi.html
Not sure if that is similar to your use case or not. If you have more questions, feel free to find me on IRC at ##pentaho (my nick is usually mburgess_pdi)
If I have understood your question correctly, I think you are trying to create an output file with dynamic columns. You can do this by checking the "fast dumping" option in the Text File Output step. While doing so, do not define any column names in the "Fields" tab.
(Screenshot of this setting in the original answer.)
Hope it helps :)