Teradata create volatile doesn't work on JDBC - sql

When running the following Python code and SQL query on a Teradata Vantage Express server:
#!/usr/bin/env python3
import teradatasql
query = """CREATE VOLATILE TABLE target_table AS (
select * FROM MY_DB.MY_TABLE
)
WITH DATA ON COMMIT PRESERVE ROWS;
SELECT * FROM target_table;"""
con = teradatasql.connect(host="localhost", user="dbc", password="dbc")
cur = con.cursor()
cur.execute(query)
I get the following error:
teradatasql.OperationalError: [Version 17.20.0.7] [Session 2988] [Teradata Database] [Error 3932] Only an ET or null statement is legal after a DDL Statement.
However, when running the same query with bteq (Teradata's CLIv2-based client), it works like a charm and doesn't throw any error:
BTEQ -- Enter your SQL request or BTEQ command:
CREATE VOLATILE TABLE target_table AS (
select * FROM MY_DB.MY_TABLE
)
WITH DATA ON COMMIT PRESERVE ROWS;
CREATE VOLATILE TABLE target_table AS (
select * FROM MY_DB.MY_TABLE
)
WITH DATA ON COMMIT PRESERVE ROWS;
*** Table has been created.
*** Total elapsed time was 1 second.
BTEQ -- Enter your SQL request or BTEQ command:
SELECT TOP 1 * FROM target_table;
SELECT TOP 1 * FROM target_table;
*** Query completed. One row found. 9 columns returned.
*** Total elapsed time was 1 second.
customer_id customer_token customer_branch customer_num
-------------- ------------------------------ --------------- ------------
8585 452004 83 808038
BTEQ -- Enter your SQL request or BTEQ command:
Any idea?
Note that no useful Google entries were found for either the Python-based drivers (e.g. teradatasql) or the Node.js-based drivers.

In the bteq example you've given, individual queries are executed one at a time; each query is terminated by a ";". However, in the Python code you have combined two queries into a single string and are trying to execute that string as one request, which won't work.
You need to write the Python code to run each query separately, in the same way the bteq session does. For example:
query = """CREATE VOLATILE TABLE target_table AS (
select * FROM MY_DB.MY_TABLE
)
WITH DATA ON COMMIT PRESERVE ROWS;”””
con = teradatasql.connect(host="localhost", user="dbc", password="dbc")
cur = con.cursor()
cur.execute(query)
query = “””SELECT * FROM target_table;"""
cur.execute(query)

Related

How do we insert data into a table?

I'm attempting to insert data into a table:
@one_files =
EXTRACT //all columns
FROM "/1_Main{suffixOne}.csv"
USING Extractors.Text(delimiter : '|');
CREATE TABLE A1_Main (//all cols);
INSERT INTO A1_Main SELECT * FROM @one_files;
Within the same script I'm attempting to SELECT data:
@finalData =
SELECT //mycols
FROM A1_Main AS one;
OUTPUT @finalData
TO "/output/output.csv"
USING Outputters.Csv();
Here's the exception I get:
What am I doing wrong? How do I select from my table? Can we not insert and query in the same script?
Some statements have restrictions on how they can be combined inside a script. For example, you cannot create a table and read from the same table in the same script, since the compiler requires that any input already physically exists at compile time of the query.
Check this:
https://learn.microsoft.com/en-us/u-sql/concepts/scripts
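One way around it here is to split the work into two scripts, so that A1_Main already physically exists when the second script is compiled (a sketch reusing the fragments from the question):
// Script 1 - extract the file, create the table, and load it
@one_files =
EXTRACT //all columns
FROM "/1_Main{suffixOne}.csv"
USING Extractors.Text(delimiter : '|');
CREATE TABLE A1_Main (//all cols);
INSERT INTO A1_Main SELECT * FROM @one_files;
// Script 2 - run only after Script 1 has completed
@finalData =
SELECT //mycols
FROM A1_Main AS one;
OUTPUT @finalData
TO "/output/output.csv"
USING Outputters.Csv();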

db2 "NOT LOGGED INITIALLY" not working

My DB2 version is LUW v11.1.
I am running SELECT queries on big tables and inserting the results into new tables, so I try to use "NOT LOGGED INITIALLY" when creating the new tables to avoid generating a large amount of log. But it seems that the NLI option is not working.
The following is my sql code:
create table diabetes_v3_2.comm_outpatient_prescription_drugs_t2dm
as (select * from commercial.outpatient_prescription_drugs)
with no data
not logged initially;
insert into diabetes_v3_2.comm_outpatient_prescription_drugs_t2dm
select * from commercial.outpatient_prescription_drugs
where enrolid in (
select enrolid from diabetes_v3_2.t2dm_cohort_filter_age_enrollment
);
create table diabetes_v3_2.comm_outpatient_services_t2dm
as (select * from commercial.outpatient_services)
with no data
not logged initially;
insert into diabetes_v3_2.comm_outpatient_services_t2dm
select * from commercial.outpatient_services
where enrolid in (
select enrolid from diabetes_v3_2.t2dm_cohort_filter_age_enrollment
);
I run the script as db2 -tvf script.sql, but I still get the "SQL0964C The transaction log for the database is full" error:
/* Generate cohort data for the cohort after * filtering according to age and continuous * enrollment criteria. */ /* facility header */ /* inpatient admissions */ /* inpatient services */ /* outpatient prescription drugs */ create table diabetes_v3_2.comm_outpatient_prescription_drugs_t2dm as (select * from commercial.outpatient_prescription_drugs) with no data not logged initially
DB20000I The SQL command completed successfully.
insert into diabetes_v3_2.comm_outpatient_prescription_drugs_t2dm select * from commercial.outpatient_prescription_drugs where enrolid in ( select enrolid from diabetes_v3_2.t2dm_cohort_filter_age_enrollment )
Number of rows affected : 275423901
DB20000I The SQL command completed successfully.
/* outpatient services */ create table diabetes_v3_2.comm_outpatient_services_t2dm as (select * from commercial.outpatient_services) with no data not logged initially
DB20000I The SQL command completed successfully.
insert into diabetes_v3_2.comm_outpatient_services_t2dm select * from commercial.outpatient_services where enrolid in ( select enrolid from diabetes_v3_2.t2dm_cohort_filter_age_enrollment )
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0964C The transaction log for the database is full. SQLSTATE=57011
Why is this?
To use NOT LOGGED INITIALLY properly, the application doing the changes should NOT have AUTOCOMMIT enabled: the NOT LOGGED INITIALLY attribute only applies within the unit of work in which the table is created (or activated), so with autocommit on, the CREATE TABLE is committed immediately and the following INSERT runs in a new, fully logged transaction. Having AUTOCOMMIT OFF also helps define the scope of the transactions:
For CLP, you can turn AUTOCOMMIT OFF using the environment variable DB2OPTIONS:
export DB2OPTIONS=+c
If you execute a script containing update SQL statements, such as inserts, and DB2OPTIONS is not set, you can execute the script using:
db2 +c -tvf input_script.sql -z output_script.out
Make sure you add explicit COMMIT statements in your scripts to ensure that they occur at reasonable points.
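For example, with autocommit off, the script could commit once per table, so that each CREATE stays in the same unit of work as its unlogged INSERT (a sketch based on the statements from the question; choose commit points that suit you):
create table diabetes_v3_2.comm_outpatient_prescription_drugs_t2dm
as (select * from commercial.outpatient_prescription_drugs)
with no data
not logged initially;
insert into diabetes_v3_2.comm_outpatient_prescription_drugs_t2dm
select * from commercial.outpatient_prescription_drugs
where enrolid in (
select enrolid from diabetes_v3_2.t2dm_cohort_filter_age_enrollment
);
-- commit only after the unlogged load has finished; this also deactivates NLI for the table
commit;
-- repeat the same create / insert / commit pattern for comm_outpatient_services_t2dm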

PostgreSQL combine multiple EXECUTE statements

I have a PREPARE statement which is being called multiple times using EXECUTE.
To save database connection cost, we make a big query like:
PREPARE updreturn as update myTable set col1 = 1 where col2= $1 returning col3;
EXECUTE updreturn(1);
EXECUTE updreturn(2);
....
EXECUTE updreturn(10);
and send to the database.
However, I get the result for only the last EXECUTE statement.
Is there a way I could store these results in a temporary table and get all the results?
You can use a transaction and a temporary table, and execute three queries:
Query 1: Start a Transaction (I don't know what you are using to connect to the database).
Query 2:
-- Create a Temporary Table to store the returned values
CREATE TEMPORARY TABLE temp_return (
col3 text
) ON COMMIT DROP;
-- Prepare the Statement
PREPARE updreturn AS
WITH u AS (
UPDATE myTable SET col1 = 1 WHERE col2= $1 RETURNING col3
)
INSERT INTO temp_return (col3) SELECT col3 FROM u;
EXECUTE updreturn(1);
EXECUTE updreturn(2);
.....
EXECUTE updreturn(10);
-- Deallocate the Statement
DEALLOCATE updreturn;
-- Actually return the results
SELECT * FROM temp_return;
Query 3: Commit the Transaction (see note at Query 1)
Without any other details about your complete scenario I can't tell you more, but you should get the idea.
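Put together and sent from a single session, it would look roughly like this (a sketch; BEGIN and COMMIT stand in for Query 1 and Query 3, and col3 is assumed to be text):
BEGIN;
CREATE TEMPORARY TABLE temp_return (col3 text) ON COMMIT DROP;
PREPARE updreturn AS
WITH u AS (
UPDATE myTable SET col1 = 1 WHERE col2 = $1 RETURNING col3
)
INSERT INTO temp_return (col3) SELECT col3 FROM u;
EXECUTE updreturn(1);
EXECUTE updreturn(2);
EXECUTE updreturn(10);
DEALLOCATE updreturn;
-- read the collected results before COMMIT, because ON COMMIT DROP removes the table
SELECT * FROM temp_return;
COMMIT;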
I think you need a hack for that.
Create a result table to store your results
Create a trigger before update on myTable
Inside that trigger add INSERT INTO result VALUES(col3)
So every time a row of myTable is updated, a value is also inserted into result, as sketched below.
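A rough sketch of that idea (the names result and log_col3 and the text type for col3 are assumptions to adapt):
-- table that collects the returned values
CREATE TABLE result (col3 text);
-- trigger function that copies col3 of every updated row into result
CREATE FUNCTION log_col3() RETURNS trigger AS $$
BEGIN
    INSERT INTO result (col3) VALUES (NEW.col3);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER mytable_log_col3
BEFORE UPDATE ON myTable
FOR EACH ROW
EXECUTE PROCEDURE log_col3();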

Batch / Bulk insert in R

I am trying to do a batch insert in R using RJDBC. It seems like it inserts one row at a time, which takes a lot of time.
I was wondering if anyone knows of a way in R to bulk insert data from R into SQL. I know RODBC can do a parameterized insert, which is fast, but not as fast as a bulk insert.
I don't know about your "R" language, but there is a BULK sql statement available in sqlExe.
sqlExe is a utility that connects to SQL databases via ODBC and will execute any valid SQL, plus it has some additional features ( http://sourceforge.net/projects/sqlexe/ )
For example, assuming the target table is:
table: [mydata]
-------------------
row_id char(1)
row_idx integer
row_desc char(32)
To do your insert task with sqlExe you would prepare a file with your input:
input.dat
a,1,this is row 1
b,2,this is row 2
c,3,this is row 3
d,4,this is row 4
The command line to import:
sql --dsn MYDB -e "BULK INSERT input.dat, INSERT INTO mydata(row_id,row_idx,row_desc) VALUES(?,?,?)"

How to check for the existence of a temporary table in SQL Server 2008

I wrote this query:
SELECT * INTO #nima FROM Region r
Every time I execute these queries:
SELECT OBJECT_NAME(OBJECT_ID('tempdb..#nima'))
--or
SELECT OBJECT_NAME(OBJECT_ID('#nima'))
I get NULL, but when I execute the SELECT INTO above again, I get an error that #nima already exists.
Try just using the OBJECT_ID function to determine if the temp table exists:
SELECT object_id('tempdb..#nima')
Or if you wish to retrieve the object name, you will need to specify the database id using the DB_ID function for the temp database:
SELECT OBJECT_NAME(OBJECT_ID('tempdb..#nima'), DB_ID('tempdb'))
This gives the internal id of #nima as expected in tempdb
SELECT OBJECT_ID('tempdb..#nima')
OBJECT_NAME takes a local database ID. There will be no object (except by rare chance) with that ID locally because the ID comes from tempdb
Demo (untested!)
USE tempdb
SELECT OBJECT_NAME(OBJECT_ID('tempdb..#nima')) --#nima + system generated stuff
USE MyDB
SELECT OBJECT_NAME(OBJECT_ID('tempdb..#nima')) --null
-- Now we add DBID for tempdb
SELECT OBJECT_NAME(OBJECT_ID('tempdb..#nima'), 2) -- #nima + system generated stuff
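So if the goal is simply to test whether the temp table exists before re-running the SELECT INTO, the usual pattern is (a sketch):
IF OBJECT_ID('tempdb..#nima') IS NOT NULL
    DROP TABLE #nima;
SELECT * INTO #nima FROM Region r;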