How to split values in Azure Data Factory: CSV to SQL

Is it possible to split column values in Azure Data Factory? I want to split a value in a column from a CSV while loading it into a SQL table. The values are separated by |. I want to keep the second value, "Training Programmes Manager", in the same column, delete the 1st and 3rd values, and move the 4th value, "Education", into an already-made column in SQL.
Thanks, James

Since you need to work with a particular column value, you'll need to use a Data Flow.
Source: create a dataset for your CSV file.
In the Data Flow, use a Derived Column transformation to parse the |-delimited column into new columns, as sketched below.
Sink to SQL, referencing the new column names.
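For example, a minimal Derived Column sketch. The incoming column name Column_1 and the output names jobTitle and education are assumptions; note that Data Flow arrays are 1-based:

    // ADF Data Flow expression language, in a Derived Column transformation
    // jobTitle keeps the 2nd |-delimited value in its original column;
    // education carries the 4th value to the pre-created SQL column.
    jobTitle  : split(Column_1, '|')[2]
    education : split(Column_1, '|')[4]

In the sink mapping, point jobTitle at the original column and education at the existing SQL column.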

Related

How to split a column value using Azure Data Factory

I have a source CSV file with one column that contains multiple values (data separated by commas), and I want to extract that particular column using Data Factory and store those multiple values in a table (in a database) under different column names.
Could you please suggest how I should design that Azure Data Factory pipeline?
You can use the split function in the Data Flow Derived Column transformation to split the column into multiple columns and load them into the sink database, as below.
Source transformation:
Derived Column transformation:
Using the split() function, split the column on the delimiter; the function returns an array.
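For instance, a sketch using the source column name from this example; the two new column names are assumptions, and Data Flow arrays are 1-based:

    // Derived Column expressions (ADF Data Flow); output names are assumptions
    Column1 : split(name, ',')[1]
    Column2 : split(name, ',')[2]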
Derived Column data preview:
Here, 2 new columns are added in the Derived Column transformation, storing the split data from the source column (name).
Select transformation (optional):
In the Select transformation, we can remove the columns that are not needed in the sink and keep only the required ones.
Sink:
Connect the sink to the database and map the columns to load the data.

How do I load an entire file's content as text into a single column of an Azure SQL DW table?

I have some files in Azure Data Lake Gen2 and I want to load each of them as a single nvarchar(max) column value in Azure SQL DW. The table in Azure SQL DW is a heap. I couldn't find any way to do it; all I see are delimited loads that split the content into multiple rows instead of one row in a single column. How do I achieve this?
I don't guarantee this will work, but try using COPY INTO and define row and column delimiters that are not present in the data. Make your target a single-column table, along these lines:
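A minimal T-SQL sketch, assuming a Synapse (SQL DW) table dbo.FileContent and an ADLS path; the 0x01/0x02 terminators are arbitrary bytes assumed not to occur in the file:

    -- Target: a single-column heap (table and storage names are assumptions)
    CREATE TABLE dbo.FileContent (FileText nvarchar(max)) WITH (HEAP);

    COPY INTO dbo.FileContent (FileText)
    FROM 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myfile.txt'
    WITH (
        FILE_TYPE       = 'CSV',
        FIELDTERMINATOR = '0x01',  -- byte assumed absent from the data
        ROWTERMINATOR   = '0x02'   -- byte assumed absent from the data
    );

You may also need a CREDENTIAL clause in the WITH block, depending on how the storage account is secured.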
I would create a source dataset with a single column. You do this by specifying "No delimiter" as the column delimiter.
Next, go to the "Schema" tab and import the schema, which should create a single column called "Prop_0".
Now the data should come through as a single string instead of delimited columns.
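If you author the dataset in JSON, the UI's "No delimiter" option corresponds, as far as I can tell, to an empty columnDelimiter on a DelimitedText dataset. A sketch, where every name is a placeholder:

    {
      "name": "SingleColumnText",
      "properties": {
        "type": "DelimitedText",
        "linkedServiceName": { "referenceName": "MyDataLake", "type": "LinkedServiceReference" },
        "typeProperties": {
          "location": { "type": "AzureBlobFSLocation", "fileSystem": "mycontainer", "fileName": "myfile.txt" },
          "columnDelimiter": ""
        }
      }
    }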

Trouble uploading a single-column CSV to BigQuery with split columns

I'm trying to upload a dataset to BigQuery so that I can query the data. The dataset is currently in a CSV, with all the data for each row in one column, split by commas. I want the data split into columns, using the comma as a delimiter.
When I try to upload with schema auto-detect, the 10 columns are detected, but they are called 'string_0, string_1, string_2', etc., and the rows still have all the data in the first column.
When I try to upload by manually entering the schema, I get these errors:
CSV table encountered too many errors, giving up. Rows: 1; errors: 1.
CSV table references column position 9, but line starting at position:117 contains only 1 columns.
On both occasions I set header rows to skip = 1.
Any help would be really appreciated!
I see three potential reasons for the error you're hitting:
A structural problem in the source CSV file: the file does not conform to the RFC 4180 specification, e.g. it uses atypical line breaks (row delimiters);
A BigQuery sink table schema mismatch, i.e. a dedicated column is missing for some of the input data;
A BigQuery schema type mismatch: a table column is parsed with a type that differs from the input one.
Please also look at the documentation on BigQuery's schema auto-detection and on loading CSV data, which can help you solve the above issue.
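As a quick check, a bq CLI sketch that states the delimiter and schema explicitly, so a delimiter mismatch fails fast; the dataset, table, bucket path, and column names are all placeholders:

    # bq load sketch; every name below is an assumption
    bq load \
      --source_format=CSV \
      --skip_leading_rows=1 \
      --field_delimiter=',' \
      mydataset.mytable \
      gs://mybucket/data.csv \
      'col1:STRING,col2:STRING,col3:STRING'

If the rows really are comma-delimited, this should split them into columns; if they still land in one column, the file likely uses a different delimiter or non-standard line breaks.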

How to extract a partial column name and pass the value to a variable in SSIS

I am working on an Excel file, loading the data into a SQL table using SSIS in VS 2013.
How do I extract the column names into a temp table?
For example, there is a "2017 Marketing Sales - Actual" column in the first row. I want to extract the YYYY from the column name, pass that value to a variable, and update the table field with the YYYY info.
Can anyone help me with how to do this?
In your Excel Connection Manager, uncheck the "First row has column names" check box. This should allow you to access that first row. You'll need to set up a Data Flow Task using this Excel Connection Manager, followed by a Derived Column transformation to extract the left 4 characters from that first row (see the sketch below). Unless you somehow limit the rest of the Excel data source, you'll probably also get a lot of extraneous rows importing into your destination, so you might need some cleanup to get it down to just that year.
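A sketch of the Derived Column expressions, assuming the Excel source exposes the unnamed first column as F1 (the names here are assumptions):

    // SSIS Derived Column expressions; F1 is the assumed Excel column name
    // New string column holding the year text:
    LEFT([F1], 4)
    // Or cast to an integer, e.g. to feed an int variable later:
    (DT_I4)LEFT([F1], 4)

SUBSTRING([F1], 1, 4) is an equivalent if your SSIS version predates LEFT. A downstream Script Component or an Execute SQL Task can then move the extracted value into a package variable.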

Use google-refine on a CSV without headers and with a varying number of columns per record

I'm attempting to import into OpenRefine a CSV extracted from a NoSQL database (Cassandra), without headers and with a different number of columns per record.
For instance, fields are comma separated and could look like below:
1 - userid:100456, type:specific, status:read, feedback:valid
2 - userid:100456, status:notread, message:"some random stuff here but with quotation marks", language:french
There's a maximum number of columns, and no cleansing is required on the field names.
How do I build a big Excel file I could mine using a pivot table?
If you can get JSON instead, Refine will ingest it directly.
If that's not a possibility, I'd probably do something along the lines of:
import as lines of text
split into two columns containing the row ID and the fields
split multi-valued cells in the fields column, using the comma as a separator
split the fields column into two columns, using the colon as a separator (see the GREL sketch below)
use the key/value columnize operation on these two columns to unfold them into columns
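For step 4, GREL sketches you could use via "Add column based on this column" on the fields column; these split only on the first colon, so colons inside quoted values survive:

    key column:   value.split(":")[0]
    value column: value.substring(value.indexOf(":") + 1)

After the key/value unfold, export to Excel and pivot from there.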