SSIS - Using a flat file as a Parameter/Variable

I would like to know how to use a flat file (with only one value, say a datetime) as a Parameter/Variable. Instead of feeding a value from an Execute SQL Task into a variable, I want to save it to a flat file and then load it again as a Parameter/Variable.

This can be done using a Script Task:
1 Add the variable holding the file name to ReadOnlyVariables.
2 Add the variable you need to populate to ReadWriteVariables.
3 In the script, read the file, extract the value, and assign it:
string streamText = System.IO.File.ReadAllText(fileName);
Dts.Variables["sFileContent"].Value = streamText;
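Outside SSIS, the read-file-then-assign logic amounts to the following sketch (Python for illustration; the file path, file contents, and date format are assumptions, not part of the question):

```python
from datetime import datetime

def read_variable_from_flat_file(path):
    """Read the single value the flat file holds and parse it as a datetime."""
    with open(path) as f:
        text = f.read().strip()
    # Date format is an assumption for this sketch
    return datetime.strptime(text, "%d.%m.%Y %H:%M:%S")
```

The Script Task does the same thing with System.IO.File.ReadAllText and an assignment into Dts.Variables.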

Transforming JSON data to relational data

I want to display data from SQL Server where the data is stored in JSON format, but when I run the select, no data appears:

id | item_pieces_list
0  | [{"id":2,"satuan":"BOX","isi":1,"aktif":true},{"id":4,"satuan":"BOX10","isi":1,"aktif":true}]
1  | [{"id":0,"satuan":"AMPUL","isi":1,"aktif":"true"},{"id":4,"satuan":"BOX10","isi":5,"aktif":true}]

I've written a query like this, but nothing appears. Can anyone help?
Query:
SELECT id, JSON_Value(item_pieces_list, '$.satuan') AS Name
FROM [cisea.bamedika.co.id-hisys].dbo.medicine_alkes AS medicalkes
Your path is wrong. Your JSON is an array, but you are trying to read it as a flat object.
SELECT id, JSON_VALUE(item_pieces_list, '$[0].satuan') AS Name
FROM [cisea.bamedika.co.id-hisys].dbo.medicine_alkes
Only if the data had no [] (array brackets) could you use your original path '$.satuan'; since the column holds an array, I changed the path to retrieve just the first element: '$[0].satuan'.
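The same distinction can be seen outside SQL Server; a quick Python sketch using one of the sample rows:

```python
import json

# One of the sample rows: the column value is a JSON *array*, so there is no
# top-level "satuan" key for a '$.satuan' path to find.
row = '[{"id":2,"satuan":"BOX","isi":1,"aktif":true},{"id":4,"satuan":"BOX10","isi":1,"aktif":true}]'
data = json.loads(row)

first_satuan = data[0]["satuan"]                 # analogous to '$[0].satuan'
all_satuan = [item["satuan"] for item in data]   # every element, not just the first
```

If you need one output row per array element rather than just the first, OPENJSON (rather than JSON_VALUE) is the usual route on the SQL Server side.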

Azure Data Factory - How to find the total count of objects with more than 1 file sharing the same "prefix" in an ADF expression?

Let's say I have a bunch of random sample files in a Blob container that I want to copy into a data lake as .parquet files using an ADF Copy activity.
abc.1.txt,
abc.2.txt,
abc.3.txt,
def.1.txt,
ghi.1.txt,
xyz.1.txt,
xyz.2.txt
All abc & xyz files should be merged/appended into their respective single .parquet files, and the remaining def and ghi files copied as individual .parquet files in the data lake.
I need output something like:
[
  { "name": "abc", "count": 3 },
  { "name": "def", "count": 1 },
  { "name": "ghi", "count": 1 },
  { "name": "xyz", "count": 2 }
]
The pipeline flow would look something like:
GetMetadata -->Filter if only 1 file -->Run ForEach file -->Copy activity(without merge)
GetMetadata -->SetVariable-->Filter if >1 file -->Run ForEach file -->Copy (with merge)
However, how do I get the count of files sharing the same prefix in the Filter activity?
A quick thought:
You could get the file details in the Get Metadata activity, push them to a SQL table, and do a GROUP BY there to return the count.
Then loop over the result set in a ForEach and use an If Condition to check the count.
Here is what I'm doing:
1 Get the childItems in the Get Metadata activity.
2 Write them to a string variable (optional).
3 Write the string variable to a text file.
4 Load the text file into an Azure SQL table.
5 Run the query below in a Script activity.
select fileprefix, count(1) as filecount
from (
    select substring(name, 1, charindex('.', name) - 1) as fileprefix, type
    from [dbo].[temptest]
    cross apply openjson(json) with (
        name varchar(200),
        type varchar(60)
    ) as my_json_array
) a
group by fileprefix
Thanks.
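For reference, the grouping the SQL does can be sketched locally; a minimal Python equivalent over the sample file names (the variable names here are illustrative):

```python
from collections import Counter

files = ["abc.1.txt", "abc.2.txt", "abc.3.txt",
         "def.1.txt", "ghi.1.txt", "xyz.1.txt", "xyz.2.txt"]

# Prefix = everything before the first dot, mirroring the SUBSTRING/CHARINDEX logic
counts = Counter(name.split(".", 1)[0] for name in files)
result = [{"name": k, "count": v} for k, v in sorted(counts.items())]
```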

Import a txt file with 2 columns into different columns in SQL Server Management Studio

I have a txt file containing numerous items in the following format
DBSERVER: HKSER
DBREPLICAID: 51376694590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 09.03.2015 09:44:21 AM
READS: 1
Adds: 0
Updates: 0
Deletes: 0
DBSERVER: HKSER
DBREPLICAID: 21425584590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 08.03.2015 09:50:20 PM
READS: 2
Adds: 0
Updates: 0
Deletes: 0
...
I would like to import the txt file into SQL in the following format:

DBSERVER  DBREPLICAID  DBPATH        DBTITLE      DATETIME                ...
HKSER     51376694590  redirect.nsf  Redirect AP  09.03.2015 09:44:21 AM
HKSER     21425584590  redirect.nsf  Redirect AP  08.03.2015 01:08:07 AM
Thanks a lot!
You can dump that file into a temporary table with just a single text column. Once imported, loop through that table using a cursor, storing the content into variables, and each time a full record block (DBSERVER through Deletes) has been read, insert a new row into the real target table.
Not the most elegant solution, but it's simple and it will do the job.
Using BULK INSERT you can load the headers and data into two separate columns, and then with a dynamic SQL query you can create a table and insert the data as required.
For something like this I'd probably use SSIS.
The idea is to create a Script Component (as a Transformation):
1 Manually define your output columns (e.g. DBSERVER, String(100)).
2 The source is your file, read normally.
3 Build each row line by line, then add the full row to the output buffer, e.g.
Output0Buffer.AddRow();
4 Write the rows to your destination.
If all files have a common format, you can wrap the whole thing in a Foreach Loop.
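The line-by-line row building the Script Component would do can be sketched in plain Python (illustrative only, not the SSIS API; a new record is assumed to start at each DBSERVER line):

```python
def parse_records(lines):
    """Group 'KEY: value' lines into one dict per record, starting a new
    record whenever a DBSERVER line is seen."""
    records, current = [], {}
    for line in lines:
        if not line.strip():
            continue
        # Split on the first colon only, so datetime values keep their colons
        key, _, value = line.partition(":")
        key = key.strip().upper()
        if key == "DBSERVER" and current:
            records.append(current)
            current = {}
        current[key] = value.strip()
    if current:
        records.append(current)
    return records
```

Each resulting dict maps column names (DBSERVER, DBREPLICAID, ...) to values, i.e. one output row per record block.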

Execute SQL Task in SSIS string parameter

I created two string variables (tot and tot_value) and assigned a value (tot = MyVal) for testing. Then I created an Execute SQL Task that takes tot as a parameter and saves the returned value in tot_value.
In the General tab, I set:
ResultSet to Single row; SQLSourceType to Direct input; the query is listed below.
In Parameter Mapping I selected my tot variable, with Direction Input, Data Type SQL_VARCHAR, Parameter Name 1 (since I am using ODBC), and Size left at the default -1.
In Result Set, I set Result Name to 1 and Variable Name to tot_value.
If I hard-code 'MyVal' in the query I get the correct result; however, when I use ? to pass my variable as a parameter, I always get 0 back.
Note that my tot variable is set to MyVal.
Any clue what I might be missing? Thanks in advance.
select TOP 1 CAST('' + ISNULL((SELECT distinct type_of_transfer_code
FROM SYSTEM.history_program_transfer
WHERE type_of_transfer_value = ?),'') AS VARCHAR(100)) as type_of_transfer_code
FROM SYSTEM.table_facility_defaults
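Outside SSIS, the same ODBC-style ? binding can be exercised with Python's standard-library sqlite3 driver; a self-contained sketch (the table and values here are made up, not the asker's schema):

```python
import sqlite3

# In-memory database with a stand-in for the history table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history_program_transfer "
             "(type_of_transfer_value TEXT, type_of_transfer_code TEXT)")
conn.execute("INSERT INTO history_program_transfer VALUES ('MyVal', 'CODE42')")

tot = "MyVal"
row = conn.execute(
    "SELECT COALESCE((SELECT DISTINCT type_of_transfer_code "
    "FROM history_program_transfer "
    "WHERE type_of_transfer_value = ?), '')",
    (tot,),   # the value is bound as a parameter, never embedded in the SQL string
).fetchone()
```

If the bound value does not match any row, the COALESCE (like the ISNULL in the question) returns the empty string, which is one way a "wrong" result can look like a silent 0/empty rather than an error.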

AS400 SQL Dynamic Delete issue

Some background...
I have 20+ files.
I read the file names from a prebuilt table, building a subfile screen.
I select one file, then build another screen with the contents of the selected file.
I then select the record I want to delete. So far, so good...
eval MySQL = stat3 + %trimr(scrwcrd) + STAT3B
my SQL statement, which in debug reads:
MySQL = DELETE FROM FILESEL WHERE K00001 = ? with NC
PREPARE STAT3 from :MYSQL
EXECUTE STAT3 using :PROD
where :prod is the variable supplied from Screen selection
After the EXECUTE, my SQLCOD ends up at 100 with SQLSTT = '02000', indicating row not found for the DELETE.
I know for a fact this is not the case: I can see the record in the selected file, and I can see the value of PROD in debug. Any ideas?
What data types and lengths are the K00001 field and the :PROD host variable?
Equality could be the issue. If they are character fields, you may need to TRIM/%TRIM the values in order to match.
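The trimming point can be illustrated with a plain string comparison (hypothetical values; the exact padding behavior depends on whether K00001 is CHAR or VARCHAR on the DB2 side):

```python
# A screen input field is typically blank-padded to its full length, so the
# host value may not equal the stored key exactly.
stored_key = "PROD01"
from_screen = "PROD01    "   # padded by the display file (illustrative)

exact_match = (from_screen == stored_key)
trimmed_match = (from_screen.rstrip() == stored_key)   # what %trimr achieves
```

This is why the eval in the question applies %trimr(scrwcrd) when building the statement; applying the same trim to the value bound to ? is the analogous fix.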