Teradatasql python module not throwing duplicate error - sql

I recently installed the teradatasql Python module. When I do a batch insert into a table, the script does not throw a duplicate-row error; it silently skips the offending insert instead. The first column of the Teradata table is defined as UNIQUE, and I want the violation to raise an error in my code.
import teradatasql

with teradatasql.connect ('{"host":"whomooz","user":"guest","password":"please"}') as con:
    with con.cursor () as cur:
        cur.fast_executemany = True
        cur.execute ("insert into voltab (?, ?)", [
            [1, "abc"],
            [2, "def"],
            [3, "ghi"]])

You need to use the escape functions teradata_nativesql and teradata_get_errors to obtain errors for unique column violations.
This example Python script:
import teradatasql

with teradatasql.connect (host="whomooz", user="guest", password="please") as con:
    with con.cursor () as cur:
        cur.execute ("create volatile table voltab (c1 integer unique not null, c2 varchar(10)) on commit preserve rows")
        sInsert = "insert into voltab (?, ?)"
        cur.execute (sInsert, [[123, "abc"], [123, "def"]])
        cur.execute ("{fn teradata_nativesql}{fn teradata_get_errors}" + sInsert)
        print (cur.fetchone () [0])
Prints the following output:
[Version 17.20.0.14] [Session 1080] [Teradata Database] [Error 2801] Batch request failed to execute 1 of 2 batched statements. Batched statement 2 failed to execute because of Teradata Database error 2801, Duplicate unique prime key error in guest.voltab.
at gosqldriver/teradatasql.formatError ErrorUtil.go:89
at gosqldriver/teradatasql.(*teradataConnection).formatDatabaseError ErrorUtil.go:217
...
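If you want the script itself to raise an exception when the batch insert hits a duplicate, a minimal sketch (same table and insert as above; raising RuntimeError is just an assumption, any exception type works) is to check whether teradata_get_errors returned a non-empty string:

import teradatasql

with teradatasql.connect (host="whomooz", user="guest", password="please") as con:
    with con.cursor () as cur:
        cur.execute ("create volatile table voltab (c1 integer unique not null, c2 varchar(10)) on commit preserve rows")
        sInsert = "insert into voltab (?, ?)"
        cur.execute (sInsert, [[123, "abc"], [123, "def"]])
        # ask the driver for errors deferred from the preceding batch insert
        cur.execute ("{fn teradata_nativesql}{fn teradata_get_errors}" + sInsert)
        errors = cur.fetchone () [0]
        if errors:
            # surface the deferred duplicate-key failure as a normal Python exception
            raise RuntimeError (errors)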

Related

PyODBC syntax error with attempted CSV file insertion

for fileName in fileNames:
    with open(fileName, mode="rt", encoding="utf-8", newline="") as csvfile:
        csvFile = csv.reader(csvfile, delimiter=',')
        header = next(csvFile)
        headers = map((lambda x: x.strip()), header)
        insert = 'INSERT INTO TEST ('.format(tableChoice) + ', '.join(headers) + ') VALUES '
        for row, record in enumerate(csvFile, start=1):
            values = map((lambda x: "'" + x.strip() + "'"), record)
            myCursor.execute(insert + '(' + ', '.join(values) + ');')
            cnxn.commit()
I get the below error when I reach the execute line in the script. I just need the data extracted from the csv to be inserted into the database, row by row. Anyone know what's causing the error?
ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near '-'. (102) (SQLExecDirectW)")
Edit:
The SQL query string is as follows:
INSERT INTO TEST (this, that, those) VALUES ('1', '11', '111');
INSERT INTO TEST (this, that, those) VALUES ('2', '22', '222');
INSERT INTO TEST (this, that, those) VALUES ('3', '33', '333');
The likely issue is special characters such as - in your column names, which must be wrapped in square brackets to be escaped in SQL Server. Additionally, consider using consistent Python string formatting and csv.DictReader to build a parameterized query, followed by executemany for the insertion:
import csv

for fileName in fileNames:
    with open(fileName, mode="rt", encoding="utf-8", newline="") as csvfile:
        reader = csv.DictReader(csvfile)
        data = [row for row in reader]

        # BUILD SQL WITH [...] ESCAPED COLUMNS AND ? PARAM PLACEHOLDERS
        sql = "INSERT INTO [Test] ([{cols}]) VALUES ({prms})"
        sql = sql.format(cols="], [".join(map(lambda x: x.strip(), data[0].keys())),
                         prms=", ".join(['?'] * len(data[0])))

        # APPEND ALL ROWS AND BIND PARAMS
        myCursor.executemany(sql, [list(d.values()) for d in data])
        cnxn.commit()
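With headers such as col-1 and col-2, for example, this builds a statement along the lines of INSERT INTO [Test] ([col-1], [col-2]) VALUES (?, ?), so the hyphenated names no longer break the parser and the row values are bound as parameters instead of being concatenated into the SQL string.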

Unable to Insert Dataframe into Database Table

I am trying to insert my dataframe into a newly created table in Teradata. My connection works and creating the table using SQLAlchemy works, but I am unable to insert the data. I keep getting the same error that the schema columns do not exist.
Here is my code:
import numpy as np
import pandas as pd
import sqlalchemy

username = '..'
password = '..'
server = '...'
database = '..'
driver = 'Aster ODBC Driver'
engine_stmt = ("mssql+pyodbc://%s:%s@%s/%s?driver=%s" % (username, password, server, database, driver))
engine = sqlalchemy.create_engine(engine_stmt)
conn = engine.raw_connection()

# create table function
def create_sql_tbl_schema(conn):
    #tbl_cols_sql = gen_tbl_cols_sql(df)
    sql = "CREATE TABLE so_sandbox.mn_testCreation3 (A INTEGER NULL,B INTEGER NULL,C INTEGER NULL,D INTEGER NULL) DISTRIBUTE BY HASH (A) STORAGE ROW COMPRESS LOW;"
    cur = conn.cursor()
    cur.execute('rollback')
    cur.execute(sql)
    cur.close()
    conn.commit()

create_sql_tbl_schema(conn)  # this works and the table is created

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('abcd'))
df.to_sql('mn_testCreation3', con=engine,
          schema='so_sandbox', index=False, if_exists='append')  # this is giving me problems
Error message returned is:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [AsterData][nCluster] (34) ERROR: relation "INFORMATION_SCHEMA"."COLUMNS" does not exist. (34) (SQLPrepare)') [SQL: 'SELECT [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] \nFROM [INFORMATION_SCHEMA].[COLUMNS] \nWHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = ? AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = ?'] [parameters: ('mn_testCreation3', 'so_sandbox')] (Background on this error at: http://sqlalche.me/e/f405)

Using insert into with R

I am trying to use the INSERT INTO SQL syntax in R to insert a row into a data frame, but it is showing the following error:
(( error in syntax ))
Below is sample of my code:
Vector <- c("alex" ,"IT")
Tst <- data.frame(name = character(), major = character())
sqldf( c(" insert into Tst values (" , Vector[1] , "," ,Vector[2] , ")" , "select * from main.Tst "))
I hope my question is clear
A few edits to help address the syntax error:
use a lower case s in the function name (sqldf() instead of Sqldf())
add a comma between "," and Vector[2]
add quotes around select * from main.Tst
Also, to note:
a 1-d data structure holding heterogeneous content types, such as your Vector <- c("alex", 32), should be a list (rather than an atomic vector, where all contents are coerced to the same type).
depending on what database driver you're using, sqldf() may return an error if you try to insert values into an empty R data frame as you have in your code. Creating the empty data frame within the sqldf() call is one approach to avoid this (used below in absence of knowing your database info).
For example, you could use the following to resolve the error message you're getting:
library(sqldf)
new <- list(name = 'alex', age = as.integer(32))
Tst <- sqldf(c("create table T1 (name char, age int)",
               paste0("insert into T1 (name, age) values ('", new$name[1], "',", new$age[1], ")"),
               "select * from T1"))
Tst
# > Tst
# name age
# 1 alex 32

SqlSave Error: Unable to append to table

Code:
sqlSave(SQL,data.frame(df),tablename='Data',append = TRUE,rownames = FALSE)
The table into which I am trying to insert the data has an auto-increment primary key. My table has a total of 5 columns including the primary key. In my data frame, I have 4 columns because I don't want to insert the PK myself. However, when I run the command, I get the following error:
Error in `colnames<-`(`*tmp*`, value = c("BId", "name", "Set", :
  length of 'dimnames' [2] not equal to array extent
Also, when I include the primary key in the data frame myself, it still doesn't work.
Error in sqlSave(SQL, data.frame(df), tablename = "Data", :
unable to append to table ‘Data’
Try passing safer = FALSE.
From the definition of sqlSave:
if (!append) {
    if (safer)
        stop("table ", sQuote(tablename), " already exists")
    ......
}
......
if (safer)
    stop("unable to append to table ", sQuote(tablename))
You can use the verbose argument to get the actual database error:
sqlSave(con, df, verbose = TRUE)

Python 3.4.1 INSERT INTO SQL Azure (pyodbc)

I am trying to INSERT some data into a table that has been created in SQL Azure.
SQL Structure
Field 1 DATE
Field 2 INT
Field 3 INT
Python code used:
import pyodbc

# I know I have connected to the correct database.
Connection = pyodbc.connect(conn.conn())
cursor = Connection.cursor()
SQLCommand = ("INSERT INTO table_name ([Field 1], [Field 2], [Field 3]) VALUES ('31-Dec-14', 1, 2);")
cursor.execute(SQLCommand)
Connection.commit()
I get the following error
pyodbc.ProgrammingError: ('42S22', "[42S22] [Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid column name '31-DEC-2014'. (207)
If I replace it with
SQLCommand = ('INSERT INTO table_name ([Field 1], [Field 2], [Field 3]) VALUES (?, ?, ?);', ('31-DEC-2014',1,2))
cursor.execute(SQLCommand)
Connection.commit()
I get the following error
TypeError: The first argument to execute must be a string or unicode query.
How should I insert dates and integers into a SQL Azure table via Python?
Thanks
Thanks for the question.
I highly recommend you use pymssql if you are trying to connect to Azure SQL DB using Python.
Coming to your question, it depends on what datetime format was used when you created your SQL table.
Here is how you would insert dates and integers using pymssql against the AdventureWorks schema (AdventureWorks is a pre-loaded sample schema that you can create your database with for testing).
import pymssql

conn = pymssql.connect(server='yourserver.database.windows.net', user='yourusername@yourserver', password='yourpassword', database='AdventureWorks')
cursor = conn.cursor()
cursor.execute("INSERT SalesLT.Product (Name, ProductNumber, StandardCost, ListPrice, SellStartDate) OUTPUT INSERTED.ProductID VALUES ('SQL Server Express', 'SQLEXPRESS', 0, 0, CURRENT_TIMESTAMP)")
row = cursor.fetchone()
while row:
    print("Inserted Product ID : " + str(row[0]))
    row = cursor.fetchone()
# commit the insert (pymssql does not autocommit by default)
conn.commit()
If you have questions about how to install pymssql on your machine, here is some reference documentation that will help you :)
- Windows
- Mac
- Linux
If you have any issues with using pymssql with Azure SQL DB do let me know as I would love to help.
Best,
Meet Bhagdev
Program Manager, Microsoft
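As a follow-up, if you want to bind a Python date and integers as parameters rather than inlining them in the SQL string, a sketch with pymssql (which uses %s placeholders; table_name and the bracketed field names are just the hypothetical ones from the question) could look like this:

import datetime
import pymssql

conn = pymssql.connect(server='yourserver.database.windows.net', user='yourusername@yourserver', password='yourpassword', database='yourdatabase')
cursor = conn.cursor()
# bind the date and the two integers as parameters instead of concatenating them
cursor.execute("INSERT INTO table_name ([Field 1], [Field 2], [Field 3]) VALUES (%s, %s, %s)",
               (datetime.date(2014, 12, 31), 1, 2))
conn.commit()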
The date parser does not like your format. See the Microsoft documentation for a list of valid formats.
The following syntax should work:
SQLCommand = ("INSERT INTO table_name ([Field 1], [Field 2], [Field 3]) VALUES ('2014-12-31', 1, 2);")
cursor.execute(SQLCommand)
Connection.commit()
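A side note on the parameterized attempt that raised the TypeError: pyodbc expects the SQL string and the parameter tuple as two separate arguments to execute, not packed into a single tuple. A minimal sketch of that form (reusing the hypothetical table_name and field names from the question):

SQLCommand = "INSERT INTO table_name ([Field 1], [Field 2], [Field 3]) VALUES (?, ?, ?);"
cursor.execute(SQLCommand, ('2014-12-31', 1, 2))
Connection.commit()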