How do I work with table variables with multiple rows in SAP HANA DB?

In SQL it is sometimes easier and faster to use table variables.
I know I can't use INSERT on a table variable in HANA DB, but what would be the best practice to do something similar?
I tried using SELECT to populate the variable, but I can't insert multiple rows.
Do I have to use a temporary table instead?
I would like to have a table with some values I create, like the example below that I use in SQL Server, so that I can use it later in the query:
DECLARE @temp TABLE ([Group] INT, [Desc] NVARCHAR(100))
INSERT INTO @temp ([Group], [Desc])
VALUES (1, 'Desc 1'), (2, 'Desc 2'), (3, 'Desc 3'), (4, 'Desc 4'), (5, 'Desc 5')
In HANA, I am able to create the variable, but I am not able to populate it with multiple rows :(
Is there a good way to do this?
Thank you so much.

The "UNION"-approach is one option to add records to the data that a table variable is pointing to.
Much better than this is to either use arrays to add, remove, and modify data and finally turn the arrays into table variables via the UNNEST function. This is an option that has been available for many years, even with HANA 1.
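A minimal sketch of the array approach (the anonymous block and the column names "Group"/"Desc" are assumptions to mirror the question; the same pattern works inside a procedure):
DO BEGIN
    DECLARE groups INTEGER ARRAY = ARRAY(1, 2, 3, 4, 5);
    DECLARE descs NVARCHAR(100) ARRAY = ARRAY('Desc 1', 'Desc 2', 'Desc 3', 'Desc 4', 'Desc 5');
    -- turn the two parallel arrays into a table variable
    temp = UNNEST(:groups, :descs) AS ("Group", "Desc");
    SELECT * FROM :temp;
END;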
Alternatively, SAP HANA 2 (starting with SPS 03, I believe) offers additional SQLScript commands to directly INSERT, UPDATE, and DELETE on table variables. The documentation covers this in "Modifying the Content of Table Variables".
Note that this feature comes with a slightly different syntax for the DML commands.
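Roughly, it looks like the sketch below (the exact signatures should be checked in "Modifying the Content of Table Variables" for your revision; the values are sample data):
DO BEGIN
    temp = SELECT 1 AS "Group", 'Desc 1' AS "Desc" FROM DUMMY;
    -- append records to the table variable (an optional position index can also be passed)
    :temp.INSERT((2, 'Desc 2'));
    :temp.INSERT((3, 'Desc 3'));
    SELECT * FROM :temp;
END;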
As of SAP HANA 2 SPS 04, there is yet another syntax option provided for this:
"SQL DML Statements on Table Variables".
This one, finally, looks like "normal" SQL DML against table variables.
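With the SPS 04 variant, the same idea reads like ordinary DML (again only a sketch, with names taken from the question):
DO BEGIN
    temp = SELECT 1 AS "Group", 'Desc 1' AS "Desc" FROM DUMMY;
    INSERT INTO :temp VALUES (2, 'Desc 2');
    INSERT INTO :temp VALUES (3, 'Desc 3');
    SELECT * FROM :temp;
END;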
Given these options, the "UNION" approach should be the last option you reach for in your coding.

For anyone else trying to find out about this, I found a workaround, which is to use UNION ALL.
I add the first row, then I UNION ALL the table variable with the second row, like below:
tempTable = SELECT 1 AS "Group", 'Desc' AS "Desc" FROM DUMMY;
tempTable = SELECT "Group", "Desc" FROM :tempTable
            UNION ALL
            SELECT 2 AS "Group", 'Desc' AS "Desc" FROM DUMMY;
SELECT * FROM :tempTable;
In this case, I will have the result:
Group Desc
1 Desc
2 Desc
I don't know if this is the best way to do it, though.

Related

Is there any SQL query character limit when executing it using the JDBC driver? [duplicate]

I'm using the following code:
SELECT * FROM table
WHERE Col IN (123,123,222,....)
However, if I put more than ~3000 numbers in the IN clause, SQL throws an error.
Does anyone know if there's a size limit or anything similar?!!
Depending on the database engine you are using, there can be limits on the length of a statement.
SQL Server has a very large limit:
http://msdn.microsoft.com/en-us/library/ms143432.aspx
Oracle, on the other hand, has a limit that is very easy to reach (at most 1000 expressions in an IN list).
So, for large IN clauses, it's better to create a temp table, insert the values, and do a JOIN. It is usually faster as well.
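For example, a minimal sketch in SQL Server syntax (the table and column names are placeholders):
CREATE TABLE #ids (Id INT PRIMARY KEY);
INSERT INTO #ids (Id) VALUES (123), (222), (456);  -- in practice, bulk-load the values
SELECT t.*
FROM YourTable t
JOIN #ids i ON i.Id = t.Col;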
There is a limit, but you can split your values into separate blocks of in()
Select *
From table
Where Col IN (123,123,222,....)
or Col IN (456,878,888,....)
Parameterize the query and pass the ids in using a Table Valued Parameter.
For example, define the following type:
CREATE TYPE IdTable AS TABLE (Id INT NOT NULL PRIMARY KEY)
Along with the following stored procedure:
CREATE PROCEDURE sp__Procedure_Name
    @OrderIDs IdTable READONLY
AS
SELECT *
FROM table
WHERE Col IN (SELECT Id FROM @OrderIDs)
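Calling it then looks roughly like this (the id values are sample data):
DECLARE @ids IdTable;
INSERT INTO @ids (Id) VALUES (123), (222), (456);
EXEC sp__Procedure_Name @OrderIDs = @ids;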
Why not do a WHERE IN on a sub-select...
Pre-query into a temp table or something...
CREATE TABLE SomeTempTable AS
SELECT YourColumn
FROM SomeTable
WHERE UserPickedMultipleRecordsFromSomeListOrSomething
then...
SELECT * FROM OtherTable
WHERE YourColumn IN ( SELECT YourColumn FROM SomeTempTable )
Depending on your version, use a table valued parameter in 2008, or some approach described here:
Arrays and Lists in SQL Server 2005
For MS SQL 2016, passing ints into the IN clause, it looks like it can handle close to 38,000 values.
select * from user where userId in (1,2,3,etc)
I solved this by simply using ranges
WHERE Col >= 123 AND Col <= 10000
then removed the unwanted records in that range by looping in the application code. It worked well for me because I was looping over the records anyway, and ignoring a couple of thousand records didn't make any difference.
Of course, this is not a universal solution, but it could work for situations where most of the values between the min and max are required.
You did not specify the database engine in question; in Oracle, an option is to use tuples like this:
SELECT * FROM table
WHERE (Col, 1) IN ((123,1),(123,1),(222,1),....)
This ugly hack only works in Oracle SQL, see https://asktom.oracle.com/pls/asktom/asktom.search?tag=limit-and-conversion-very-long-in-list-where-x-in#9538075800346844400
However, a much better option is to use stored procedures and pass the values as an array.
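A minimal sketch of the collection approach in Oracle (the type name, table name, and bind variable are made up for illustration; the collection is bound from the application):
-- one-time setup: a collection type to bind the ids into
CREATE TYPE num_list AS TABLE OF NUMBER;
-- then bind :ids from the application as a num_list and query with it
SELECT *
FROM the_table
WHERE Col IN (SELECT COLUMN_VALUE FROM TABLE(:ids));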
You can use tuples like this:
SELECT * FROM table
WHERE (Col, 1) IN ((123,1),(123,1),(222,1),....)
There is no restriction on the number of these; it compares pairs.

how to convert the result of a select sql query into a new table in ms access

How do I convert the result of a select SQL query into a new table in MS Access?
You can use subqueries:
SELECT a,b,c INTO NewTable
FROM (SELECT a,b,c
FROM TheTable
WHERE a Is Null)
Like so:
SELECT *
INTO NewTable
FROM OldTable
First, create a table with the required keys, constraints, domain checking, references, etc. Then use an INSERT INTO..SELECT construct to populate it.
Do not be tempted by SELECT..INTO..FROM constructs. The resulting table will have no keys and therefore will not really be a table at all. It is better to start with a proper table and then add the data, e.g. it will be easier to trap bad data.
As an example of how things can go wrong with a SELECT..INTO clause: it can result in a column that includes NULL values, and while you can change the column to NOT NULL afterwards, the engine will not replace the NULLs, so you end up with a NOT NULL column containing NULLs!
Also consider creating a 'viewed' table, e.g. using CREATE VIEW SQL DDL, rather than a base table.
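A minimal sketch of the create-then-populate approach in Access SQL (the table and column names are just examples):
CREATE TABLE NewTable (
    ID LONG NOT NULL,
    Descr TEXT(100),
    CONSTRAINT pk_NewTable PRIMARY KEY (ID)
);
INSERT INTO NewTable (ID, Descr)
SELECT ID, Descr
FROM OldTable;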
If you want to do it through the user interface, you can also:
A) Create and test the select query. Save it.
B) Create a make table query. When asked what tables to show, select the query tab and your saved query.
C) Tell it the name of the table you want to create.
D) Go make coffee (depending on taste and size of table)
Select *
Into newtable
From somequery

Copy Query Result to another mysql table

I am trying to import a large CSV file into a MySQL database. I have loaded the entire file into one flat table. I can select the data that needs to go into separate tables using SELECT statements; my question is how do I copy the results of those SELECT queries to different tables. I would prefer to do it completely in SQL and not have to worry about using a scripting language.
INSERT
INTO new_table_1
SELECT *
FROM existing_table
WHERE condition_for_table_1;
INSERT
INTO new_table_2
SELECT *
FROM existing_table
WHERE condition_for_table_2;
INSERT INTO anothertable (list, of , column, names, to, give, values, for)
SELECT list, of, column, names, of, compatible, column, types
FROM bigimportedtable
WHERE possibly you want a predicate or maybe not;
The answer from Quassnoi was the one I was looking for. Please note that if new_table_1 doesn't exist yet, the INSERT INTO statement has to be replaced with a CREATE TABLE ... AS SELECT statement (or the table has to be created first).
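For example, a minimal sketch in MySQL, reusing the hypothetical names from the answer above:
-- creates new_table_1 on the fly from the query result
CREATE TABLE new_table_1 AS
SELECT *
FROM existing_table
WHERE condition_for_table_1;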