Transferring several similarly named tables in SSIS - sql

I want to create an interface between 2 databases on SQL Server 2008+ to copy several similarly named tables into one.
I have n tables that all have the same naming convention, for example:
SalesInvoicePlanning2014ver1
SalesInvoicePlanning2015ver1
SalesInvoicePlanning2015ver2
etc.
The numbers can vary and have no fixed start (or end), but they are always of the int datatype.
I also have a table "tabledir" that contains all the table names as a list (one field). There are 30-40 entries in that list in total, including entries that are (for me) undesired. In the above example I would need 3 of the 30 tables.
The plan is to use a loop container to
select Top 1([name]) from [tabledir] where name like 'SalesinvoicePlanning%'
and then use the result as variable in the following SSIS Data transfer task:
Select * from [variable]
However, I'm stuck on the SQL statement that should give me the desired table name on each iteration.
Performance is not really an issue. Any advice? Am I wrong trying to use a loop-container?

You can follow the steps below:
Step 1 - First create an Execute SQL Task to read all the table names into one variable, let's say TableNames, of type Object (recordset), using your query.
e.g. select ([name]) as TableName from [tabledir] where name like 'SalesinvoicePlanning%'
Step 2 - Add a Foreach Loop Container that iterates over the TableNames variable, mapping each row to a new variable current_table, and add a Data Flow Task inside the container to import the data into the destination table. Your source query will be an expression like:
Select column_names from current_table
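A minimal sketch of that expression, assuming an OLE DB Source with data access mode "SQL command from variable" (or a property expression on a string variable holding the command); col1 and col2 are placeholder column names:
"SELECT col1, col2 FROM [" + @[User::current_table] + "]"
The brackets guard against table names that would otherwise need quoting.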

Related

Query a database based on result of query from another database

I am using SSIS in VS 2013.
I need to get a list of IDs from 1 database, and with that list of IDs, I want to query another database, ie SELECT ... from MySecondDB WHERE ID IN ({list of IDs from MyFirstDB}).
There are 3 methods to achieve this:
1st method - Using Lookup Transformation
First you have to add a Lookup Transformation as @TheEsisia answered, but there are more requirements:
In the Lookup you have to write the query that contains the ID list (ex: SELECT ID From MyFirstDB WHERE ...)
You have to select at least one column from the lookup table
This will not filter rows, but it will add values from the second table
To filter rows like WHERE ID IN ({list of IDs from MyFirstDB}), you have to do some work on the Lookup's error output handling. There are 2 ways:
Set the error handling to Ignore failure, so the columns added from the lookup will be NULL for unmatched rows; then add a Conditional Split that filters out the rows with NULL values.
Assuming you have chosen col1 as the lookup column, you would use an expression similar to:
ISNULL([col1]) == False
Or you can set the error handling to Redirect rows to error output, so all unmatched rows are sent to the error output; simply leave that output unused and the data is filtered.
The disadvantage of this method is that all data is loaded and filtered during execution.
Also, when working over a network, the filtering is done on the local machine after all data has been loaded into memory (with the 2nd method it happens on the server).
2nd method - Using Script Task
To avoid loading all the data, you can use a workaround based on a Script Task (code written in VB.NET):
Assume the connection manager is named TestAdo, "Select [ID] FROM dbo.MyTable" is the query that returns the list of IDs, and User::MyVariableList is the variable in which you want to store the resulting SQL command.
Note: This code will read the connection from the connection manager
Public Sub Main()
    Dim lst As New Collections.Generic.List(Of String)

    ' Acquire the ADO.NET connection from the TestAdo connection manager
    Dim myADONETConnection As SqlClient.SqlConnection
    myADONETConnection = _
        DirectCast(Dts.Connections("TestAdo").AcquireConnection(Dts.Transaction), _
        SqlClient.SqlConnection)
    If myADONETConnection.State = ConnectionState.Closed Then
        myADONETConnection.Open()
    End If

    ' Read all IDs into the list
    Dim myADONETCommand As New SqlClient.SqlCommand("Select [ID] FROM dbo.MyTable", myADONETConnection)
    Dim dr As SqlClient.SqlDataReader
    dr = myADONETCommand.ExecuteReader
    While dr.Read
        lst.Add(dr(0).ToString)
    End While
    dr.Close()

    ' Build the final SQL command and store it in the package variable
    Dts.Variables.Item("User::MyVariableList").Value = _
        "SELECT ... FROM ... WHERE ID IN (" & String.Join(",", lst) & ")"
    Dts.TaskResult = ScriptResults.Success
End Sub
User::MyVariableList should then be used as the source (SQL command from variable).
3rd method - Using Execute Sql Task
Similar to the second method, but this builds the IN clause using an Execute SQL Task and then uses the whole query as the OLE DB Source.
Just add an Execute SQL Task before the DataFlow Task
Set the ResultSet property to Single row
Assign the result set to User::MyVariableList
Use the following SQL command
DECLARE @str AS VARCHAR(4000)
SET @str = ''
SELECT @str = @str + CAST([ID] AS VARCHAR(255)) + ','
FROM dbo.MyTable
SET @str = 'SELECT * FROM MySecondDB WHERE ID IN (' + SUBSTRING(@str,1,LEN(@str) - 1) + ')'
SELECT @str
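Assuming dbo.MyTable contains the IDs 1, 2 and 3, the final SELECT returns a single row holding the command:
SELECT * FROM MySecondDB WHERE ID IN (1,2,3)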
If the column has a string data type, you should add quotation marks before and after the values, as below:
SELECT @str = @str + '''' + CAST([ID] AS VARCHAR(255)) + ''','
FROM dbo.MyTable
Make sure you have set the Data Flow Task's DelayValidation property to True.
This is a classic case for using a Lookup Transformation. First, use an OLE DB Source to get data from the first database. Then, use a Lookup Transformation to filter this data set based on the ID values from the second data set. Here are the steps for using a Lookup Transformation:
In the General tab, select Full cache, OLE DB connection manager and Redirect rows to no match output, as shown in the following picture. Notice that using Full cache provides great performance for your package.
General Setting
In the Connection tab, use an OLE DB connection manager to connect to your second server. Then, you can either directly select the data set with the ID values or (as shown in the picture below) use SQL code to select the IDs from the filtering data set.
Connection:
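For example, the filtering query can be as simple as the following (MyFirstTable is a hypothetical name):
SELECT DISTINCT ID FROM dbo.MyFirstTable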
Go to the Columns tab and match the ID columns from both data sets. For each record from your first data set, the Lookup checks whether its ID is among the Available Lookup Columns; if it is, the row goes to the match output, otherwise to the no-match output.
Match ID columns:
Click OK to close the Lookup. Then you need to select the Lookup Match Output.
Match Output:
The "best" answer depends on data volumes and source systems involved.
Many of the other answers propose building out a list of values based on clever concatenation within SQL Server. That doesn't work so well if the referenced system is Oracle, MySQL, DB2, Informix, Postgres, etc. There may be an equivalent concept, but there might not be.
For best performance, you need to filter against the second db before any of those rows ever hit the data flow. That means adding a filtering condition, as the others have suggested, to your source query. The challenge with this approach is that your query is going to be limited by some practical bounds that I don't remember. Ten, one hundred, a thousand values in your where clause is probably fine. A lakh, a million - probably not so much.
In the cases where you have large volumes of values to filter against the source table, it can make sense to create a table on that server and truncate and reload that table (execute sql task + data flow). This allows you to have all of the data local and then you can index the filter table and let the database engine do what it's really good at.
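A rough sketch of that pattern, with hypothetical names (FilterIDs is the scratch table created on the source server):
-- one-time setup: scratch table on the source server, keyed on the filter column
CREATE TABLE dbo.FilterIDs (ID int NOT NULL PRIMARY KEY)
-- each run: an Execute SQL Task empties it, then a data flow reloads it
TRUNCATE TABLE dbo.FilterIDs
-- source query for the main data flow: the database engine does the filtering
SELECT s.*
FROM dbo.SourceTable AS s
INNER JOIN dbo.FilterIDs AS f ON f.ID = s.ID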
But, you say the source database is some custom solution that you can't make tables in. You can look at the above approach with temporary tables; within SSIS you just need to mark the connection as singleton/persisted (set the connection manager's RetainSameConnection property to True). I don't much care for temporary tables with SSIS, as debugging them is a nightmare I'd not wish upon my mortal enemy.
If you're still reading, we've identified why filtering in the source system might not be "doable", even if it will provide the best performance.
Now we're stuck with purely SSIS solutions. To get the best performance, do not select the table name in the drop down - unless you absolutely need every column. Also, pay attention to your data types. Pulling LOB types (XML, text, image, (n)varchar(max), varbinary(max)) into the data flow is a recipe for bad performance.
The default suggestion is to use a Lookup Component to filter the data within the data flow; as long as your source system supports an OLE DB provider (or you can coerce the data into a Cache Connection Manager), that is usually the simplest route.
If you can't use a Lookup component for some reason, then you can explicitly sort your data in your source systems, mark your source components as such, and then use a Merge Join of type Inner Join in the data flow to only bring in matched data.
However, be aware that sorts in source systems are going to be sorted according to native rules. I ran into a situation where SQL Server was sorting based on the default ASCII sort and my DB2 instance, running on zOS, provided an EBCDIC sort. Which was great when my domain was only integers but went to hell in a handbasket when the keys became alphanumeric (AAA, A2B, and AZZ will sort differently based on this).
Finally, excluding the final paragraph, the above assumes you have integers. If you're performing string matching, you get an extra level of ugliness because different components may or may not perform a case sensitive match (sorting with case sensitive systems can also be a factor).
I would first create a String variable, e.g. SQL_Select, scoped at the Package level. Then I would assign it a value using an Execute SQL Task against the 1st database. The ResultSet property on the General page should be set to Single row. Add an entry on the Result Set tab to assign the result to your variable.
The SQL Statement used needs to be designed to return the required SELECT statement for your 2nd database, in a single row of text. An example is shown below:
SELECT
    'SELECT * from MySecondDB WHERE ID IN ( '
    + STUFF(
        (
            SELECT TOP 5 ' , ''' + [name] + ''''
            FROM dbo.spt_values
            FOR XML PATH(''), TYPE
        ).value('(./text())[1]', 'VARCHAR(4000)')
        , 1, 3, '')
    + ' ) '
    AS SQL_Select
Remove the TOP 5 and replace [name] and dbo.spt_values with your column and table names.
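With three hypothetical values A, B and C in the [name] column, the returned row would read:
SELECT * from MySecondDB WHERE ID IN ( 'A' , 'B' , 'C' )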
Then you can use the variable SQL_Select in a downstream task e.g. an OLE DB Source against database 2. OLE DB Sources and OLE DB Command Tasks both let you specify a Variable as the SQL Statement source.
You could add a linked server between the two servers. The SQL commands would be something like this:
EXEC sp_addlinkedserver @server='SRV' --or any name you want
EXEC sp_addlinkedsrvlogin 'SRV', 'false', null, 'username', 'password'

SELECT * FROM SRV.CatalogNameInSecondDB.dbo.SecondDBTableName s
INNER JOIN FirstDBTableName f ON s.ID = f.ID
WHERE f.ID IN (list of values)

EXEC sp_dropserver 'SRV', 'droplogins'

Creating a calculated field table based on data in separate tables

It is straightforward to create a calculated field in a table that uses data IN the table, because the expression builder is easy to use. However, it appears to me that the expression builder for a calculated field only works with data IN the table;
i.e: expression builder in table MYTABLE works with fields FIELD1.MYTABLE, FIELD2.MYTABLE etc.
Inventory Problem
My problem is that I have two 'count' fields that result from my queries that apply to INPUTQUERY and OUTPUTQUERY (gives me a count of all input data added and a count of all output data added) and now I want to subtract the two to get a stock.
I can't link the table that was created from my query because it won't continually update the relationship itself, and thus I'm stuck using either the expression builder or SQL.
First question:
Is it possible to have the expression builder reference data from other tables?
i.e expressionbuilder for:
MAINTABLE CALCULATEDFIELD.MAINTABLE = INPUTSUM.INPUTTABLE - OUTPUTSUM.OUTPUTTABLE
(which gives a difference of the two)?
Second question:
If the above isn't possible, can I do this through SQL code?
i.e.
SELECT(data from INPUTSUM)
FROM(INPUTTABLE)
-
SELECT(data from OUTPUTSUM)
FROM(OUTPUTTABLE)
Try this:
SELECT SUM(T.INPUTSUM) - SUM(T.OUTPUTSUM) AS RESULTSUM
FROM
(
    SELECT INPUTSUM, 0 AS OUTPUTSUM
    FROM INPUTTABLE
    UNION ALL
    SELECT 0 AS INPUTSUM, OUTPUTSUM
    FROM OUTPUTTABLE
) AS T
Note the UNION ALL rather than UNION: a plain UNION would collapse duplicate rows before summing and give a wrong result.
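An equivalent formulation, assuming your database supports derived tables in the FROM clause (Access 2003+ generally does), computes the two sums separately and subtracts them, which sidesteps the UNION/UNION ALL question entirely:
SELECT i.TOTALIN - o.TOTALOUT AS RESULTSUM
FROM (SELECT SUM(INPUTSUM) AS TOTALIN FROM INPUTTABLE) AS i,
     (SELECT SUM(OUTPUTSUM) AS TOTALOUT FROM OUTPUTTABLE) AS o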

Find out all useful columns in a table in sql server

I have a table which has 50+ columns, but only a few of them are actually used: when any stored procedure uses that table, it only references 4-5 columns in its SELECT/WHERE statements; the rest of the columns are unused. I just want to list the columns that are actually used. One way is to find the dependencies of the table and then go through every SP to see which columns it uses, but I have around 30+ SPs. Is there a more efficient way to do it?
To select a configurable set of columns in a procedure, you can use code like the one below:
create procedure sp_sample
    @column_names varchar(200)
as
    if @column_names = '' or @column_names is null
        set @column_names = '*'
    exec ('select ' + @column_names + ' from table_name')
Here are some examples:
exec sp_sample @column_names='id,name'
or
exec sp_sample @column_names='id,name,telephone'
Try this:
select o.name from syscomments c
join sysobjects o on c.id = o.id
where c.text like '%table_name%' and c.text like '%column_name%'
In table_name put your table name, and in column_name the column for which you want to check the procedure dependencies. You will get the stored procedure names as output.
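Note that syscomments is deprecated and stores object definitions in 4000-character chunks, so a LIKE match can be missed when the pattern straddles a chunk boundary. On SQL Server 2005+ the catalog view sys.sql_modules avoids this; an equivalent query would be:
SELECT OBJECT_NAME(m.object_id) AS procedure_name
FROM sys.sql_modules m
WHERE m.definition LIKE '%table_name%'
  AND m.definition LIKE '%column_name%'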
If you import your database as a database project using the SQL Server Data Tools, you will be able to find all references to a table or column using the "Find All References" context command. What makes this particularly useful is the accuracy: it will even find instances of SELECT * that don't mention the column explicitly, but implicitly refer to it anyway. It will also not be confused by tables or columns with similar names (finding particular instances of ID is otherwise rather problematic).
If all you want to know is whether a column is referenced at all, you can simply delete it and see if any "unresolved reference" errors appear in the error list; if yes, then the column is used somewhere.

SqlQuery and SqlFieldsQuery

It looks like SqlQuery only supports SQL that starts with select *? Doesn't it support SQL that selects only some columns, like
select id, name from person, and map the columns to the corresponding POJO?
If I use SqlFieldsQuery to run SQL, the result is a QueryCursor of Lists (each List contains one record of the result). But if the SQL starts with select *, the List's contents differ from those of a field query like:
select id,name,age from person
For the select *, each List is constructed of 3 parts:
the first element is the key of the cache
the second element is the POJO object that contains the data
the trailing elements are the values for each column
Why was it designed this way? If I don't know what SQL the SqlFieldsQuery runs, I need additional effort to figure out what the List contains.
SqlQuery returns key and value objects, while SqlFieldsQuery allows to select specific fields. Which one to use depends on your use case.
Currently select * indeed includes the predefined _key and _val fields, and this will be improved in the future. However, it's generally good practice to list the fields you want to fetch when running SQL queries (this is true for any SQL database, not only Ignite). This way your code is protected from unexpected behavior if, for example, the schema changes.
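As a minimal illustration (person being the table from the question):
-- explicit field list: each result List holds exactly these three values
select id, name, age from person
-- select *: in Ignite this currently also includes the predefined _key and _val columns
select * from person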

Jet engine (Access) : Passing a list of values to a stored procedure

I am currently writing a VBA-based Excel add-in that's heavily based on a Jet database backend (I use the Office 2003 suite -- the problem would be the same with a more recent version of Office anyway).
During the initialization of my app, I create stored procedures that are defined in a text file. Those procedures are called by my app when needed.
Let me take a simple example to describe my issue: suppose that my app allows end-users to select the identifiers of orders for which they'd like details. Here's the table definition:
Table tblOrders: OrderID LONG, OrderDate DATE, (other fields)
The end-user may select one or more OrderIDs, displayed in a form - s/he just has to tick the checkbox of the relevant OrderIDs for which s/he'd like details (OrderDate, etc).
Because I don't know in advance how many OrderIDs s/he will select, I could dynamically create the SQL query in the VBA code by chaining OR conditions in the WHERE clause based on the choices made on the form:
SELECT * FROM tblOrders WHERE OrderID = 1 OR OrderID = 2 OR OrderID = 3
or, much simpler, by using the IN keyword:
SELECT * FROM tblOrders WHERE OrderID IN (1,2,3)
Now if I turn this simple query into a stored procedure so that I can dynamically pass the list of OrderIDs to be displayed, how should I do it? I already tried things like:
CREATE PROCEDURE spTest (@OrderList varchar) AS
SELECT * FROM tblOrders WHERE OrderID IN (@OrderList)
But this does not work (I was expecting that), because @OrderList is interpreted as a string (e.g. "1,2,3") and not as a list of long values. (I adapted from code found here: Passing a list/array to SQL Server stored procedure)
I'd like to avoid dealing with this issue via pure VBA code (i.e. dynamically assigning list of values to a query that is hardcoded in my application) as much as possible. I'd understand if ever this is not possible.
Any clue?
You can create the query-statement string dynamically. In SQL Server you can have a function whose return value is a TABLE, and invoke that function inline as if it were a table. Or in JET you could also create a kludge -- a temporary table (or persistent table that serves the function of a temporary table) that contains the values in your in-list, one per row, and join on that table. The query would thus be a two-step process: 1) populate temp table with INLIST values, then 2) execute the query joining on the temp table.
MYTEMPTABLE
  id - autoincrementing
  QueryID - some value to identify the current query, perhaps a GUID
  myvalue - one of the values in your in-list (string)
select * from foo
inner join MYTEMPTABLE on foo.column = MYTEMPTABLE.myvalue
  and MYTEMPTABLE.QueryID = ?
[cannot recall if JET allows ANDs in INNER JOIN as SQL Server does --
if not, adjust syntax accordingly]
instead of
select * from foo where foo.column IN (... )
In this way you could have the same table handle multiple queries concurrently, because each query would have a unique identifier. You could delete the in-list rows after you're finished with them:
DELETE FROM MYTEMPTABLE where QueryID = ?
P.S. There would be several ways of handling data type issues for the join. You could cast the string value in MYTEMPTABLE as required, or you could have multiple columns in MYTEMPTABLE of varying datatypes, inserting into and joining on the correct column:
MYTEMPTABLE
id
queryid
mytextvalue
myintvalue
mymoneyvalue
etc