Describe is not working in IBM DB2 - sql

I am running a query in IBM DB2 as:
DESCRIBE TABLE Schema.Table
But I am getting this error:
DESCRIBE TABLE Schema.Table Error 42601:Token Table was not valid. Valid tokens: :. SQLCODE=-104
I have searched a lot but can't find the reason, and as I am very new to IBM DB2 I can't figure it out. Is it a permission-related issue?
I don't have command prompt access.

You appear to be using DB2 on IBM i (formerly known as AS/400), where catalog views are in the QSYS2 schema.
Recent versions also provide equivalents such as SYSIBM.SQLCOLUMNS and INFORMATION_SCHEMA.COLUMNS.
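As a minimal sketch of the catalog approach on DB2 for i (MYSCHEMA and MYTABLE are placeholders, and column names can vary slightly by release):
SELECT COLUMN_NAME, DATA_TYPE, LENGTH, NUMERIC_SCALE, IS_NULLABLE
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'MYSCHEMA'
  AND TABLE_NAME = 'MYTABLE'
ORDER BY ORDINAL_POSITION;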

If you are simply trying to get catalog information for a table or view, the system catalog will work just fine, as noted in another answer by mustaccio. But if you want to embed a DESCRIBE TABLE in your RPG or COBOL program, that will work as well. One reason you might want to do this is if you have a dynamic number of columns, or you don't know the table name at compile time. You can use an SQL descriptor built by describing a table or cursor to receive the output of a FETCH statement in your program. You will need an SQL descriptor or an SQLDA to receive the description of the table. It would look something like this:
// Host variable holding the table name at run time
dcl-s tableName Varchar(128);
// Allocate a local SQL descriptor with room for 20 items
exec sql allocate sql descriptor 'D1' with max 20;
tableName = 'MYTABLE';
// Describe the table into the descriptor
exec sql
  describe table :tableName
  using sql descriptor 'D1';
This will retrieve information about the table into the specified descriptor, in this case D1. The descriptor name can be a host variable. This example allocates a local descriptor for 20 items; if your table has more than 20 columns, you can request a larger descriptor in the ALLOCATE DESCRIPTOR statement. If you will be spreading the SQL that uses a given descriptor across multiple modules, you will need a global descriptor, which you get by replacing 'D1' with GLOBAL 'D1'. You can also use an SQLDA, but I find those more difficult to work with.
To get information out of the descriptor you would use GET DESCRIPTOR. It would be beyond the scope of this site to go into all the details of what you can get out of the descriptor, but as an example you could get the column name of the first column of MYTABLE like this:
dcl-s columnName Varchar(128) Inz('');
// Read the NAME item for column 1 out of the descriptor
exec sql
  get sql descriptor 'D1'
  value 1 :columnName = name;
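Similarly, a small sketch (using the same 'D1' descriptor and the same statement form as above) of how you might read how many columns were described; COUNT is a header item of the descriptor, so no VALUE clause is needed:
dcl-s columnCount Int(10);
exec sql
  get sql descriptor 'D1'
  :columnCount = count;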
Don't forget to deallocate the descriptor when you are through with it.
exec sql deallocate sql descriptor 'D1';
You can find more information on DESCRIBE TABLE here: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/db2/rbafzdescrtb.htm. The Knowledge Center also has information about ALLOCATE DESCRIPTOR, DEALLOCATE DESCRIPTOR, and GET DESCRIPTOR.

Related

SQL Transformation in Informatica for Google Bigquery

I have a SQL script with multiple DROP and CREATE DDL statements (CREATE TABLE AS SELECT *), and I want to run them in one go. I am quite new to Informatica PowerCenter; can someone describe the process of using a SQL transformation for BigQuery in Informatica?
Sample query:
drop table if exists sellout.account_table;
CREATE TABLE sellout.account_table
AS
SELECT * FROM
sellout.account_src
WHERE
UPPER(account_name) IN ('RANDOM');
Similar to the above, I have around 24 SQL statements in the script.
I want to run them at once and later make them part of an Informatica job.
If the "PowerExchange Google BigQuery" server and client are installed and after executing the infasetup.bat(sh) validateandregisterallfeatures, the mappings would be opened/exported successfully.
Here are some FAQs that might be handy for you:
Q: Why are the output fields in the SQL Transformation not seen?
A: The stored procedure selected in the SQL Transformation must have output parameters declared. Otherwise it will have no output fields other than the default Return Code column.
Q: A set of columns is displayed as a result when running the stored procedure; however, you still do not see the same columns as output in the SQL Transformation. Why?
A: The columns seen in the output might not be defined/declared as output parameters in the stored procedure. The procedure might have a 'SELECT * FROM ...'-style statement, which retrieves the data when the procedure is run from the database UI, and a similar result can be seen when the procedure is run programmatically.
However, to call the same procedure from the SQL Transformation, explicitly declared output parameters must be present, because the transformation imports the procedure's metadata when it is selected. Unless you declare the output parameters explicitly in the procedure, they cannot be seen as output in the transformation (see the sketch after these FAQs).
Q: Is it necessary to have input/output parameters in the stored procedure to call it from the SQL Transformation?
A: Yes, the stored procedure needs input/output parameters if it does not have default ones. These parameters appear as input/output fields in the SQL transformation; without them, the mapping becomes invalid.
Q: I have a SELECT statement in the procedure; can the SQL transformation push this to the next transformation?
A: Appropriate output parameters are required for this to work.
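To illustrate the output-parameter point, here is a minimal, hypothetical sketch of a BigQuery stored procedure with an explicitly declared OUT parameter. The procedure name and the row-count output are made up; the table names come from the sample query above:
-- Hypothetical procedure; rows_loaded is an explicitly declared OUT parameter
CREATE OR REPLACE PROCEDURE sellout.load_account_table(OUT rows_loaded INT64)
BEGIN
  CREATE OR REPLACE TABLE sellout.account_table AS
  SELECT * FROM sellout.account_src
  WHERE UPPER(account_name) IN ('RANDOM');
  SET rows_loaded = (SELECT COUNT(*) FROM sellout.account_table);
END;
Because rows_loaded is declared as an OUT parameter, it can appear as an output field when the procedure is imported into the SQL transformation.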

Dynamic parameter value in OLEDB Source

I asked a question earlier today, and this is another question from me.
Well, there is my Execute SQL Task, which assigns a result (a single value of type int) to a parameter. After this I have a DFT, inside which there is an OLEDB Source. I need to execute a stored procedure in the OLEDB source that should get the parameter value from the earlier Execute SQL Task's result-set variable. This will give me a result set, and I need to load it into another table.
My question is, I am not able to view the column list because of the dynamic SQL, and hence I am unable to map the destination columns. How best should I proceed in this case? Is this a good approach?
Since the stored procedure was using a dynamic parameter, I used the "WITH RESULT SETS" option in the OLEDB Source, and it seems to work fine.
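For reference, a minimal sketch of what that OLE DB Source command text might look like; the procedure name and column list are placeholders, and ? is the OLE DB parameter marker mapped to the variable set by the Execute SQL Task:
EXEC dbo.usp_GetOrders ?
WITH RESULT SETS
((
    OrderID   INT,
    OrderDate DATETIME,
    Amount    DECIMAL(18, 2)
));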

Specify name of table types in SQL Server [duplicate]

This question already has an answer here:
SQL Server create User-Defined Table Types with schema not working properly
(1 answer)
Closed 9 years ago.
Create a new database in MS SQL Server 2008 R2 and then create a new table type with the following command
CREATE TYPE typeMyType AS TABLE (ID INT)
Now execute the following query
SELECT OBJECT_NAME (object_id) AS ObjectName, *
FROM sys.indexes
WHERE index_id <= 1
ORDER BY ObjectName
This will show you that an object of type HEAP was created, which is fine, as the data for typeMyType has to be stored somewhere.
But the object is called TT_typeMyType_01142BA1 in my case.
Question:
Why isn't it called typeMyType, and how can I override this obviously server-generated name?
The table type is listed in several places that can be looked at via the following system tables (sys.sysobjects, sys.indexes, sys.table_types and sys.types).
I can understand that table_types is a subset of types. Therefore, those two places are where to look for my table type AssociativeArray in my AdventureWorks2012 database.
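As a sketch (assuming SQL Server 2008 or later; typeMyType is the type from the question), you can tie the user-defined table type to its internally named TT_... object like this:
SELECT tt.name AS type_name,
       o.name  AS internal_object_name,
       o.type_desc
FROM sys.table_types AS tt
JOIN sys.objects AS o
  ON o.object_id = tt.type_table_object_id
WHERE tt.name = 'typeMyType';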
The question I have is: why does it look like a table in sysobjects and sysindexes?
We have not defined any use for the variable yet. However, it looks like a table. It must be how the engine defines metadata for future use.
One takeaway question: does the information in sysobjects and sysindexes get updated at run-time when we declare a variable of type AssociativeArray?
Also, what happens when two SPIDS create the same variable at the same time with different data being inserted?
That is an in-depth engine question that someone from the Microsoft CAT team might know off the top of their head.
I guess I could do some research to find out.
Enclosed is a link stating that table variables use tempdb; that much I know is a fact. Again, I wonder whether sys.objects and sys.indexes get updated or are just placeholders.
http://databases.aspfaq.com/database/should-i-use-a-temp-table-or-a-table-variable.html

Using dynamic SQL in an OLE DB source in SSIS 2012

I have a stored proc as the SQL command text, which is getting passed a parameter that contains a table name. The proc then returns data from that table. I cannot call the table directly as the OLE DB source because some business logic needs to happen to the result set in the proc. In SQL 2008 this worked fine. In an upgraded 2012 package I get "The metadata could not be determined because ... contains dynamic SQL. Consider using the WITH RESULT SETS clause to explicitly describe the result set."
The problem is I cannot define the field names in the proc, because the table name that gets passed as a parameter can be a different value and the resulting fields can be different every time. Has anybody encountered this problem or have any ideas? I've tried all sorts of things with dynamic SQL using "dm_exec_describe_first_result_set", temp tables, and CTEs that contain WITH RESULT SETS, but it doesn't work in SSIS 2012, same error. Context is a problem with a lot of the dynamic SQL approaches.
This is the latest thing I tried, with no luck:
DECLARE @sql VARCHAR(MAX)
SET @sql = 'SELECT * FROM ' + @dataTableName
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr + ',', '') + [name] + ' ' + system_type_name
FROM sys.dm_exec_describe_first_result_set(@sql, NULL, 1)
EXEC('EXEC(''SELECT * FROM myDataTable'') WITH RESULT SETS ((' + @listStr + '))')
So I ask out of kindness, but why on God's green earth are you using an SSIS Data Flow Task to handle dynamic source data like this?
The reason you're running into trouble is because you're perverting every purpose of an SSIS Data flow task:
to extract a known source with known metadata that can be statically typed and cached in design-time
to run through a known process with straightforward (and ideally asynchronous) transformations
to take that transformed data and load it into a known destination also with known metadata
It's fine to have parameterized data sources that bring back different data. But to have them bring back entirely different metadata each time with no congruity between the different sets is, frankly, ridiculous, and I'm not entirely sure I want to know how you handled all your column metadata in the working 2008 package.
This is why it wants you to add a WITH RESULT SETS clause to the SSIS query - so it can generate some metadata. It doesn't do this at runtime - it can't! It has to have a known set of columns (because it aliases them all into compiled variables anyway) to work with. It expects the same columns every time it runs that Data Flow Task - the exact same columns, down to the names, the types, and the constraints.
Which leads to one (terrible, terrible) solution - just stick all the data into a temporary table with Column1, Column2 ... ColumnN and then use the same variable you're using as the table name parameter to conditionally branch your code and do whatever you want with the columns.
Another more sane solution would be to create a data flow task for each of your source tables, and use your parameter in a precedence constraint to just pick which data flow task should run.
For a solution this poorly tailored for an out-of-the-box ETL, you should also highly consider just rolling your own in C# or a script task instead of the Data Flow Task provided by SSIS.
In short, please don't do this. Think of the children (packages)!
I've used CozyRoc Dynamic DataFlow Plus to achieve this.
Using configuration tables to build the SQL Select statements, I have a single SSIS package that loads data from Oracle and Sybase (or any OLEDB source) to MS SQL. Some of the result sets are in the millions of rows and performance is excellent.
Instead of writing a new package every time a new table is needed, this can be configured in minutes and run on the pre-tested and robust existing package.
Without it I would have been up for writing hundreds of packages.

Display DataType and Size of Column from SQL Server Query Results at Runtime

Is there a way to run a query and then have SQL Server Management Studio or sqlcmd or something simply display the datatype and size of each column as it was received?
Seems like this information must be present for the transmission of the data to occur between the server and the client. It would be very helpful to me if it could be displayed.
A little background:
The reason I ask is because I must interface with countless legacy stored procedures with anywhere from 50 to 5000+ lines of code each. I do not want to have to try to follow the cryptic logic flow in and out of temp tables, into other procedures, into string-concatenated eval statements, and so on. I wish to maintain no knowledge of the implementation, simply what to expect when they work. Unfortunately, following the logic flow seems to be the only way to figure out exactly what is being returned, short of trying to infer the actual types from the data's string representations in Management Studio, or from the native types in .NET, for example.
To clarify: I am not asking how to tell the types of a table or something static like that. I'm pretty sure something like sp_help will not help me. I am asking how to tell what the SQL Server types (i.e. varchar(25), int, ...) are of what I have been given. Additionally, changing the implementation of the sprocs is not possible, so please consider that in your solutions. I am really hoping there is a command I have missed somewhere. Much appreciation to all.
Update
I guess what I am really asking is how to get the schema of the result set when the result set originates from a query using a temp table. I understand this to be impossible, but that conclusion doesn't make much sense to me, because the data is being transmitted after all. Here is an example of a stored procedure that would cause a problem.
CREATE PROCEDURE [dbo].[IReturnATempTable]
AS
CREATE TABLE #TempTable
(
    MyMysteryColumn char(50)
)

INSERT #TempTable (MyMysteryColumn)
VALUES ('Do you know me?')

SELECT TOP 50 * FROM #TempTable
What will you do about stored procedures which return different result sets based on their parameters?
In any case, you can configure a SqlDataAdapter.SelectCommand, along with the necessary parameters, then call the FillSchema method. Assuming that the schema can be determined, you'll get a DataTable configured with correct column names and types, and some constraints.
A bit of a long shot, try messing around with SET FMTONLY ON (or off). According to BOL, this "Returns only metadata to the client. Can be used to test the format of the response without actually running the query." I suspect that this will inlcude what you're looking for, as BCP uses this. (I stumbled across this setting when debugging some very oddball BCP problems.)
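A minimal sketch of that suggestion, using the procedure from the question (FMTONLY is deprecated in later versions, and how well it copes with procs that build temp tables varies):
SET FMTONLY ON;
EXEC dbo.IReturnATempTable;
SET FMTONLY OFF;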
Could you append another select to your procedure?
If so you might be able to do it by using the sql_variant_property function.
Declare @Param Int
Set @Param = 30
Select sql_variant_property(@Param, 'BaseType')
Select sql_variant_property(@Param, 'Precision')
Select sql_variant_property(@Param, 'Scale')
I posted that on this question.
"I am asking how to tell what the sql server types (ie varchar(25), int...) are of what I have been given"
You could then print out the type, precision (i.e. 25 if it's VarChar(25)), and the scale of the parameter.
Hope that helps... :)
If you are not limited to T-SQL, and obviously you don't mind running the SPs (because SET FMTONLY ON isn't fully reliable), you definitely CAN call the SPs from, say, C#, using a SqlDataReader. Then inspect the SqlDataReader to get the columns and the data types. You might also have multiple result sets, and you can go to the next result set easily from this environment.
This code should fix you up. It returns a schema-only DataSet with no records. You can use this DataSet to query the columns' DataType and any other metadata. Later, if you wish, you can populate the DataSet with records by creating a SqlDataAdapter and calling its Fill method (IDataAdapter.Fill).
private static DataSet FillSchema(SqlConnection conn)
{
    DataSet ds = new DataSet();
    using (SqlCommand formatCommand = new SqlCommand("SET FMTONLY ON;", conn))
    {
        // Put the connection into metadata-only mode.
        formatCommand.ExecuteNonQuery();

        // Point the command at the query (or proc) whose schema you want,
        // e.g. the procedure from the question.
        formatCommand.CommandText = "EXEC dbo.IReturnATempTable;";

        SqlDataAdapter formatAdapter = new SqlDataAdapter(formatCommand);
        formatAdapter.FillSchema(ds, SchemaType.Source);

        // Turn metadata-only mode back off.
        formatCommand.CommandText = "SET FMTONLY OFF;";
        formatCommand.ExecuteNonQuery();
        formatAdapter.Dispose();
    }
    return ds;
}
I know this is an old question; I found it through a link from "SqlDataAdapter.FillSchema with stored procedure that has temporary table". Unfortunately, neither question had an accepted answer, and none of the proposed answers were able to resolve my issue.
For the sake of brevity: if you are using SQL Server 2012 or later, the following built-in functions will work in most situations (a short example follows the list):
sys.dm_exec_describe_first_result_set
sys.dm_exec_describe_first_result_set_for_object
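A sketch of the second function using the procedure from the question above (the second argument turns off browse-mode information); whether it returns anything useful for procs that build temp tables depends on the case, as noted next:
SELECT name, system_type_name, max_length, is_nullable
FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('dbo.IReturnATempTable'), 0);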
However, there are some cases in which these functions will not provide any useful output. In my case, the problem was more similar to the question linked above and therefore, I believe the solution is more appropriately answered under that question. My answer can be found here.