I have the following code:
SELECT S~CLUSTD AS ZZCLUSTD
INTO CORRESPONDING FIELDS OF TABLE @lt_viqmel_iflos
FROM viqmel AS v
LEFT OUTER JOIN stxl AS S
ON s~tdobject = @lv_qmel
AND s~tdname = v~qmnum
The SELECT statement generates the following short dump:
Only the prefixed length field can be used to read from the LRAW field or
LCHR field S~CLUSTD.
Internal table lt_viqmel_iflos is of type viqmel_iflos (a DB view which contains DB table QMEL), to which I appended ZZCLUSTD of type char200.
The problem is that I cannot make ZZCLUSTD of type LRAW in QMEL because I get the following activation error:
Field ZZCLUSTD does not have a preceding length field of type INT4
So the only option I am aware of is to select the first 200 characters of the LRAW field into a char200 field.
Is this even possible?
Or is there another way to select LRAW data?
I found some information about the topic, but unfortunately I can't adapt it to my scenario: read LRAW data
In fact, there are two questions here.
The first one is the activation error of table QMEL:
Field ZZCLUSTD does not have a preceding length field of type INT4
A DDIC table containing a column of type LCHR or LRAW requires that this column always be immediately preceded by a column of type INT2 or INT4 (although the message mentions only INT4).
The second question is about how to read such a field. Both columns must always be read at the same time, and the INT2/INT4 column must be "read before" the LCHR/LRAW column. The only reference I could find to explain this restriction is in the note 302788 - LCHR/LRAW fields in logical cluster tables.
Since the INT2 column of table STXL is named CLUSTR, the following code works:
TYPES: BEGIN OF ty_viqmel_iflos,
clustr TYPE stxl-clustr, "INT2
zzclustd TYPE stxl-clustd, "LCHR
END OF ty_viqmel_iflos.
DATA lt_viqmel_iflos TYPE TABLE OF ty_viqmel_iflos.
SELECT S~CLUSTR, S~CLUSTD AS ZZCLUSTD
INTO CORRESPONDING FIELDS OF TABLE @lt_viqmel_iflos
FROM viqmel AS v
INNER JOIN stxl AS S
ON s~tdname = v~qmnum
UP TO 100 ROWS.
NB: there is some confusion in your question, where you refer both to CLUSTD from STXL and to ZZCLUSTD from QMEL. I don't understand exactly what you are trying to achieve.
NB: if you want to read the texts from the table STXL, there's another solution by calling the function module READ_TEXT_TABLE, or READ_MULTIPLE_TEXTS if you prefer. They were made available by the note 2261311. In case you don't have or can't install these function modules, you may try this gist which does the same thing. It also contains a reference to another discussion.
NB: for information, and to be more precise, LRAW contains bytes, not characters, and for data clusters (the case of STXL) these bytes correspond to values (characters in the case of STXL) zipped with the statement EXPORT, which are to be unzipped with IMPORT.
Related
I have a table consisting of 3 columns: system, module and block. The table is filled in a procedure which accepts system, module and block and then checks whether the trio is already in the table:
select count(*) into any_rows_found from logs_table llt
where system=llt.system and module=llt.module and block=llt.block;
If the table already has a row containing those three values, then they should not be written into the table; if it doesn't have them, they should be written in. The problem is that if the table has the values 'system=a module=b block=c' and I query 'does the table have system=a module=d block=e', it returns yes, or, to be precise, any_rows_found=1. The value 1 is only absent when I send a trio in which one of the values does not appear in the table at all, for example 'system=g module=h block=i'. What is the problem with my query?
Problem is in this:
where system = llt.system
Both system identifiers resolve to the same thing, so it is as if you had put where 1 = 1; Oracle is kind of confused (thanks to you).
What to do? Rename the procedure's parameters to e.g. par_system so that the query becomes
where llt.system = par_system
Another option (worse, in my opinion) is to precede the parameter's name with the procedure name. If the procedure's name were e.g. p_test, then you'd have
where llt.system = p_test.system
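A minimal sketch of the first (renamed-parameter) approach; the procedure name p_test, the %TYPE declarations and the not-exists logic are assumptions based on the question, not the asker's actual code:
CREATE OR REPLACE PROCEDURE p_test (
    par_system IN logs_table.system%TYPE,
    par_module IN logs_table.module%TYPE,
    par_block  IN logs_table.block%TYPE
) AS
    any_rows_found NUMBER;
BEGIN
    -- The unqualified names now refer unambiguously to the columns,
    -- and the par_ names to the parameters.
    SELECT COUNT(*)
      INTO any_rows_found
      FROM logs_table llt
     WHERE llt.system = par_system
       AND llt.module = par_module
       AND llt.block  = par_block;

    IF any_rows_found = 0 THEN
        INSERT INTO logs_table (system, module, block)
        VALUES (par_system, par_module, par_block);
    END IF;
END p_test;
/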
From the documentation:
If a SQL statement references a name that belongs to both a column and either a local variable or formal parameter, then the column name takes precedence.
So when you do
where system=llt.system
that is interpreted as
where llt.system=llt.system
which is always true (unless it's null). It is common to prefix parameters and local variables (e.g. with p_ or l_) to avoid confusion.
So as @Littlefoot said, either change the procedure definition to make the parameter names different from the column names, or qualify the parameter names with the procedure name - which some people prefer, but I find it more cumbersome, and it's easier to forget and accidentally use the wrong reference.
The root cause is the alias used for the table name.
where system=llt.system and module=llt.module and block=llt.block;
The table name alias in the select query and the input to the procedure have the same name (i.e. llt). You should consider renaming one of them.
I have been trying to set up a test database but I keep running into issues with pulling in the data from its normalized form.
Below is the latest version of the SQL query I've been working on.
INSERT INTO TestData.dbo.Info (Name,Did)
SELECT DISTINCT a.Name, b.Did
FROM StageDB.dbo.MockData a INNER JOIN Testdata.dbo.Dinfo b
ON a.Name = CAST(b.Did as varchar(10))
The output I get is the following:
(0 row(s) affected)
I've been trying to monkey around with it on my own but can't seem to make it work the way I want to.
My objective here is to pull data (the primary key from a table already populated in my database, Did from TestData.dbo.Dinfo, which is of int type) and merge it with data from my staging table (a particular column from the table in the staging database, StageDB.dbo.MockData, Name of type varchar(10)), then insert the result into a new table in my main database. The table I'm trying to put these things into is already set up with the correct fields and types (a primary key column that is auto-generated as rows are added, a Name column that is varchar(10), and a Did column that is int).
EDIT: Table Definitions, Sample Data, Desired Result
Destination Table:
TestData.dbo.Info
Columns: Iid (int, primary key of table set to auto increment as new records are added), Name (varchar(10)), Did (int, foreign key from TestData.dbo.Dinfo).
StageDB.dbo.MockData
Columns: Many columns exist in this table that are not relevant to what I am trying to pull off. The only one I am interested in is the column containing names that I want to tie together with information from the Dinfo table. Name (nvarchar(255),null).
TestData.dbo.Dinfo
Columns: Did (int, primary key), Donor (varchar(20)).
Sample of Data
From Dinfo:
Did Donor
01 Howard L
From MockData:
Name
Smith J
Desired Results
Iid Name Did
01 Smith J 01
Any help or advice would be much appreciated. I would really like it if someone can show me the correct SQL syntax for this as I think it may just be a matter of writing it correctly. Additionally, any tips or websites that can help me learn more SQL would be appreciated.
Thank you!
Change this:
ON a.Name = b.Did
To this:
ON a.Name = CAST(b.Did as varchar(10))
I suspect there's a lot more wrong with your query in terms of getting the results you want, but this should fix your error.
You need to figure out where the error is occurring. There are three possibilities:
mockdata.name is a string and NInfo.data is an integer
dinfo.did is a string and NInfo.did is an integer
mockdata.name is a string and dinfo.did is an integer (or vice versa)
Based on the naming conventions, the third is the most likely. When a number is compared to a string, the string is converted to a number. However, you need to be careful whenever you use implicit type conversions.
If it is the third option, then you can convert the integer to a string (as other answers propose). However, I would ask why you are doing such a comparison in the first place.
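As a hypothetical illustration of that caution (the literal values are invented, not the asker's data): when an int column meets a varchar column, the varchar side is implicitly converted to int, and any non-numeric string makes the query fail; casting the int side keeps the comparison in varchar.
-- Implicit conversion: int outranks varchar, so 'Smith J' raises a conversion error.
SELECT *
FROM (VALUES ('Smith J'), ('42')) AS a(Name)
INNER JOIN (VALUES (42)) AS b(Did)
    ON a.Name = b.Did;

-- Explicit cast of the int side: the comparison stays varchar and only '42' matches.
SELECT *
FROM (VALUES ('Smith J'), ('42')) AS a(Name)
INNER JOIN (VALUES (42)) AS b(Did)
    ON a.Name = CAST(b.Did AS varchar(10));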
The error is in ON a.Name = b.Did. The Name column may contain letters, numbers, or special characters, whereas the Did column contains only integers.
When performing a select query against a database, the returned result will have columns of a certain type.
If you perform a simple query like
select name as FirstName
from database
then the type of the resulting FirstName column will be that of database.name.
If you perform a query like
select age*income
from database
then the resulting data type will be that of the return value from the age*income expression.
What happens if you use something like
select try_convert(float, mycolumn)
from database
where database.mycolumn has type nvarchar? I assume that the resulting column has type float, which is decided by the return type of the call to try_convert.
But consider this example
select coalesce(try_convert(float, mycolumn), mycolumn)
from database
which should give a column with the values of mycolumn unchanged if try_convert fails, but mycolumn as a float when/if that is possible.
Is this determination made as the first row is handled?
Or will the type always be determined by the function called independently of the data in the rows?
Is it possible to conditionally perform a conversion?
I would like to convert to float in the case where this is possible for all rows and leave unchanged in case it fails for any row.
Update 1
It seems that the answer to the first part of the question is that the column type is determined by the expression at compile time, which means that you cannot have a dynamic column type that depends on the data.
I see two workarounds for this:
Option 1
For each column, count the number of non-null results of try_convert(float, mycolumn); if this number is 0, do not perform the conversion. This will of course read the rows multiple times and might be inefficient.
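A sketch of Option 1 (the table name pivoted and the column mycolumn are placeholders, not from the question):
-- Count how many rows TRY_CONVERT can turn into float; COUNT(expr) ignores NULLs.
DECLARE @converted int;

SELECT @converted = COUNT(TRY_CONVERT(float, mycolumn))
FROM pivoted;

IF @converted = 0
    SELECT mycolumn FROM pivoted;                                 -- nothing converts: leave the text as is
ELSE
    SELECT TRY_CONVERT(float, mycolumn) AS mycolumn FROM pivoted; -- convert the whole column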
Option 2
Simply repeat all columns, once without conversion and once with conversion, and then use whichever one is of interest.
One could also perform another select statement where only columns with non-null values are included.
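A sketch of Option 2 with the same placeholder names:
-- Return both versions of the column and let the consumer pick the interesting one.
SELECT mycolumn                     AS mycolumn_raw,
       TRY_CONVERT(float, mycolumn) AS mycolumn_float
FROM pivoted;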
Background
I have a dynamically generated pivot table with many columns (~200), some of which have string values while others have numbers.
I would like to cast all columns as float where this is possible and leave the other columns unchanged (or cast as nvarchar).
The data
The data is mostly NULL values with some columns having text string and other columns having numbers. There are no columns with "mixed" content.
The types are determined at compile time, not at execution. try_convert(float, ...) knows the type exactly at parse/compile time, because float here is a keyword, not a value. For expressions like COALESCE(foo, bar) the type is similarly determined at compile time, following the rules of data type precedence that lad already linked.
When you build your dynamic pivot you'll have to know the result type, using the same inference rules the SQL parser/compiler uses. I understand some rules are counter-intuitive; when in doubt, test it out.
For the detail-oriented: some expression types can be determined at parse time, e.g. N'foo'. But most have to be resolved at compile time, when the names of tables and columns are bound to actual objects in the database, because only then is the type discovered.
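As a hypothetical illustration of those precedence rules (made-up values, not from the answer above): the COALESCE below is typed float at compile time because float outranks nvarchar, so the "unchanged" branch is itself converted to float and a non-numeric value makes the statement fail at run time.
-- The expression's type is float for every row, regardless of the data,
-- so the second row fails with an nvarchar-to-float conversion error.
SELECT COALESCE(TRY_CONVERT(float, v), v) AS maybe_float
FROM (VALUES (N'1.5'), (N'abc')) AS t(v);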
Is there any term like 'DOT(.) notation' used in SQL joins?
If it is used in practice, please explain how to use it.
Thanks in advance.
Yes, here is how you do it.
When you do your SELECT
SELECT firstname, lastname FROM dbo.names n -- the n becomes an alias
JOIN address a                              -- another alias
ON a.userid = n.userid
Collated from multiple sources of official documentation.
Dot notation (sometimes called the membership operator) allows you to qualify an SQL identifier with another SQL identifier of which it is a component. You separate the identifiers with the period ( . ) symbol. For example, you can qualify a column name with any of the following SQL identifiers:
Table name: table_name.column_name
View name: view_name.column_name
Synonym name: syn_name.column_name
These forms of dot notation are called column projections.
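For instance (with hypothetical table and column names), the table-name form looks like this:
-- Qualify the column with the name of its table.
SELECT orders.order_num
FROM orders
WHERE orders.order_num > 1000;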
You can also use dot notation to directly access the fields of a named or unnamed ROW column, as in the following example:
row-column name.field name
This use of dot notation is called a field projection. For example, suppose you have a column called rect with the following definition:
CREATE TABLE rectangles
(
area float,
rect ROW(x int, y int, length float, width float)
)
The following SELECT statement uses dot notation to access field length of the rect column:
SELECT rect.length FROM rectangles WHERE area = 64
Selecting Nested Fields
When the ROW type that defines a column itself contains other ROW types, the column contains nested fields. Use dot notation to access these nested fields within a column.
For example, assume that the address column of the employee table contains the fields: street, city, state, and zip. In addition, the zip field contains the nested fields: z_code and z_suffix. A query on the zip field returns values for the z_code and z_suffix fields. You can specify, however, that a query returns only specific nested fields. The following example shows how to use dot notation to construct a SELECT statement that returns rows for the z_code field of the address column only:
SELECT address.zip.z_code
FROM employee
Rules of Precedence
The database server uses the following precedence rules to interpret dot notation:
schema name_a . table name_b . column name_c . field name_d
table name_a . column name_b . field name_c . field name_d
column name_a . field name_b . field name_c . field name_d
When the meaning of an identifier is ambiguous, the database server uses precedence rules to determine which database object the identifier specifies. Consider the following two tables:
CREATE TABLE b (c ROW(d INTEGER, e CHAR(2)));
CREATE TABLE c (d INTEGER);
In the following SELECT statement, the expression c.d references column d of table c (rather than field d of column c in table b) because a table identifier has a higher precedence than a column identifier:
SELECT *
FROM b,c
WHERE c.d = 10
For more information about precedence rules and how to use dot notation with ROW columns, see the IBM Informix: Guide to SQL Tutorial.
Using Dot Notation with Row-Type Expressions
Besides specifying a column of a ROW data type, you can also use dot notation with any expression that evaluates to a ROW type. In an INSERT statement, for example, you can use dot notation in a subquery that returns a single row of values. Assume that you created a ROW type named row_t:
CREATE ROW TYPE row_t (part_id INT, amt INT)
Also assume that you created a typed table named tab1 that is based on the row_t ROW type:
CREATE TABLE tab1 OF TYPE row_t
Assume also that you inserted the following values into table tab1:
INSERT INTO tab1 VALUES (ROW(1,7));
INSERT INTO tab1 VALUES (ROW(2,10));
Finally, assume that you created another table named tab2:
CREATE TABLE tab2 (colx INT)
Now you can use dot notation to insert the value from only the part_id column of table tab1 into the tab2 table:
INSERT INTO tab2
VALUES ((SELECT t FROM tab1 t
WHERE part_id = 1).part_id)
The asterisk form of dot notation is not necessary when you want to select all fields of a ROW-type column because you can specify the column name alone to select all of its fields. The asterisk form of dot notation can be quite helpful, however, when you use a subquery, as in the preceding example, or when you call a user-defined function to return ROW-type values.
Suppose that a user-defined function named new_row returns ROW-type values, and you want to call this function to insert the ROW-type values into a table. Asterisk notation makes it easy to specify that all the ROW-type values that the new_row( ) function returns are to be inserted into the table:
INSERT INTO mytab2 SELECT new_row (mycol).* FROM mytab1
References to the fields of a ROW-type column or a ROW-type expression are not allowed in fragment expressions. A fragment expression is an expression that defines a table fragment or an index fragment in SQL statements like CREATE TABLE, CREATE INDEX, and ALTER FRAGMENT.
Additional Examples of How to Specify Names With the Dot Notation
Dot notation is used for identifying record fields, object attributes, and items inside packages or other schemas. When you combine these items, you might need to use expressions with multiple levels of dots, where it is not always clear what each dot refers to. Here are some of the combinations:
Field or Attribute of a Function Return Value
func_name().field_name
func_name().attribute_name
Schema Object Owned by Another Schema
schema_name.table_name
schema_name.procedure_name()
schema_name.type_name.member_name()
Packaged Object Owned by Another User
schema_name.package_name.procedure_name()
schema_name.package_name.record_name.field_name
Record Containing an Object Type
record_name.field_name.attribute_name
record_name.field_name.member_name()
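A brief hypothetical illustration of two of these combinations (all names are invented for the example):
-- schema_name.table_name, plus schema_name.package_name.function_name()
SELECT hr_app.pay_pkg.net_salary(e.employee_id) AS net_salary
FROM hr_app.employees e
WHERE e.department_id = 10;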
Differences in Name Resolution Between PL/SQL and SQL
The name resolution rules for PL/SQL and SQL are similar. You can avoid the few differences if you follow the capture avoidance rules. For compatibility, the SQL rules are more permissive than the PL/SQL rules. SQL rules, which are mostly context sensitive, recognize as legal more situations and DML statements than the PL/SQL rules.
PL/SQL uses the same name-resolution rules as SQL when the PL/SQL compiler processes a SQL statement, such as a DML statement. For example, for a name such as HR.JOBS, SQL matches objects in the HR schema first, then packages, types, tables, and views in the current schema.
PL/SQL uses a different order to resolve names in PL/SQL statements such as assignments and procedure calls. In the case of a name HR.JOBS, PL/SQL searches first for packages, types, tables, and views named HR in the current schema, then for objects in the HR schema.
I have a program where the user will have the option to map a variety of data source with an unpredictable column schema. So they might have a SQL Server database with 10 fields or an excel file with 20 - the names can be anything and data types of the fields can be a mixture of text and numeric, dates, etc.
The user then has to provide a mapping of what each field means. So column 4 is a "LocName", column 2 is a "LocX", column 1 is a "LocDate", etc. The names and data types that the user is presented with as options to map to are well defined by a DataSet DataTable (XSD schema file).
For example, if the source contains data formatted like this:
User Column 1: "LocationDate" of type string
User Column 2: "XCoord" of type string
User Column 3: "YCoord" of type string
User Column 4: "LocationName" of type int
and the user provides a mapping that translates to this for the application-required DataTable:
Application Column "LocName" of type string = Column 4 of user table
Application Column "LocX" of type double = Column 2 of user table
Application Column "LocY" of type double = Column 3 of user table
Application Column "LocDate" of type datetime = Column 1 of user table
I have routines that connect to the source and pull out the data for a user query in "raw" format as a DataTable - so it takes the schema of the source.
My question is, what is the best way to then "transform" the data from the raw DataTable to the required application DataTable, bearing in mind that this projection has to account for type conversions?
A foreach would obviously work, but that seems like brute force since it would have to account for the data types on every loop over each row. Is there a "slick" way to do it with LINQ or ADO.NET?
I would normally do this with a select that "looks like" the destination table, but with data from the source table. You would also apply the data conversions as required.
Select
Cast(LocationName As varChar(...)) As LocName
, XCoord As LocX
, ...
From SourceTable
Hard to describe in a simple answer. What I've done in the past is issue an "empty" query like "Select * From sourcetable Where 1=0", which returns no rows but makes all the columns and their types available in the result set. You can cycle through the ADO column objects to get the type of each. You can then use that info to dynamically build a real SQL statement with the conversions.
You still need a lot of logic to decide on the conversions, but it all happens while you're building the statement, not while the table is being read. You still have to say in code "if the source column is Integer and the destination column is character, then generate ', Cast(<column name> as Varchar)' into the select".
When you finish building the text of the select, you have a select you can run in ADO to get the rows, and the actual move becomes as simple as read/write, with the fields coming in just as you want them. You can also use that select in an "Insert Into ... Select ...".
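A sketch of the two statements involved, using the question's column names (the varchar length and the exact casts are assumptions):
-- Probe query: returns no rows but exposes every source column and its type.
Select * From SourceTable Where 1 = 0

-- The kind of statement the code would then generate from that metadata:
Select
      Cast(LocationName As varchar(50)) As LocName
    , Cast(XCoord As float)             As LocX
    , Cast(YCoord As float)             As LocY
    , Cast(LocationDate As datetime)    As LocDate
From SourceTable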
Hope this makes sense. The concept is easier to do than to describe.