Python sqlite3 error: the database is encrypted

I'm trying to query a database in the form of a .sql file I have downloaded to my computer, so I can use the data in a machine learning project. I've looked at the database source code, and there is no password-setting statement, so I'm very confused by the error I keep getting: 'DatabaseError: file is encrypted or is not a database.'
import sqlite3 as lite

con = None
con = lite.connect('haiku1aip1.sql')
cur = con.cursor()
cur.execute('SELECT * FROM haiku1aip1')
rows = cur.fetchall()

poems = []
for row in rows:
    poems.append(row)
print(poems)

con = lite.connect('haiku1aip1.sql')
This line is trying to connect to a database named "haiku1aip1.sql", but .sql is not the correct file extension for a SQLite database file; a database file would typically end in .db.
.sql files contain SQL statements - CREATE TABLE, INSERT, and queries (similar to your "SELECT * FROM haiku1aip1") - not the database itself.
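If the .sql file is a plain-text dump of CREATE/INSERT statements, one way forward is to execute the dump into a real database file first and then query that. A minimal sketch, assuming the dump holds complete semicolon-terminated statements:

import sqlite3 as lite

# Assumption: haiku1aip1.sql is a plain-text SQL dump, not a binary database file.
with open('haiku1aip1.sql') as f:
    script = f.read()

con = lite.connect('haiku1aip1.db')  # creates/opens a real SQLite database file
con.executescript(script)            # runs every statement in the dump
con.commit()

cur = con.cursor()
cur.execute('SELECT * FROM haiku1aip1')
poems = list(cur.fetchall())
print(poems)
con.close()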

Related

Save the output as a CSV file from VS Code programmatically

My requirement: save the SQL query output (the received data) to a local drive in CSV file format.
My OS: Windows 10 64 bit
VS Code 1.67.1:
I have installed the following extensions to connect with snowflake data warehouse:
SQL Tools
Snowflake driver for SQL Tools
I have successfully connected to my Snowflake (cloud) data warehouse and received the data in VS Code.
What I want is to save (export) that output to a local file (for example to D:\result\result.csv).
How can I achieve that?
image attached for your reference.
thank you all.
pmk
If you can run a bit of Python, this does pretty much what you need:
import snowflake.connector
import csv

conn = snowflake.connector.connect(
    user='',
    password='',
    account='',
    warehouse='',
    database='',
    schema='',
    role=''
)

results = conn.cursor().execute("""MY QUERY""").fetchall()

# Write the rows out; the with-block closes the file when done
with open(r"PATH TO FILE", 'a', newline='') as csvfile:
    output = csv.writer(csvfile)
    output.writerows(results)
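If you also want a header row in the CSV, the cursor's standard DBAPI description attribute exposes the column names; a small sketch of that variant, assuming the same connection as above:

cur = conn.cursor()
cur.execute("""MY QUERY""")
with open(r"PATH TO FILE", 'w', newline='') as csvfile:
    output = csv.writer(csvfile)
    output.writerow([col[0] for col in cur.description])  # column names as header
    output.writerows(cur.fetchall())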

How can I resolve an Arithmetic Overflow Error when inserting data from a CSV into a SQL Server table?

I am trying to read in a .CSV file of test results into my SQL Server Express database, and I am able to get two of the 25 rows into my database, but then it errors out with the following error:
DataError: ('22003', u'[22003] [Microsoft][ODBC SQL Server Driver][SQL
Server]Arithmetic overflow error converting varchar to data type
numeric. (8115) (SQLExecDirectW); [22003] [Microsoft][ODBC SQL Server
Driver][SQL Server]The statement has been terminated. (3621)')
It doesn't specify which field or column is causing the issue, and I have searched Google and Stack Exchange but haven't found anything that resolves it. Any help would be greatly appreciated.
I am using Python 2.7, SQL Server Express, and pyodbc for the connection between the two.
Here is the code that I am trying to run:
import pyodbc
import csv

print
print("Please wait while I connect to database...")
print

# Connecting to the database
mydb = pyodbc.connect('Driver={SQL Server};'
                      'Server=DESKTOP-5I015MM\SQLEXPRESS;'
                      'Trusted_Connection=yes;')

# Turning on autocommit
mydb.autocommit = True

# Creating a cursor object
mycursor = mydb.cursor()
mycursor.execute("USE mydatabase")

print
print("Connection to the database was successful!")

# Adding Test Results to database - Look into this...
strSQL = '''INSERT INTO TestResults(datasource, modelNumber, testSequence, reportingCondition, testDate, isc, voc, imp, vmp, ff, pmp, noct)
VALUES ('SolarLabs',?,?,?,?,?,?,?,?,?,?,?)'''

# Open the CSV file, skip the header row, and insert each row
with open('test_results_NOCT.csv', 'rb') as f:
    csvfile = csv.reader(f)
    next(csvfile)
    for row in csvfile:
        mycursor.execute(strSQL, row)

print
print("Test Results added successfully")
print
And here is the CSV code that I am trying to read into SQL:
Model,Test Sequence,Condition,Date,Isc,Voc,Imp,Vmp,FF,Pmp,NOCT
KUT0012,Baseline,STC,3/11/2008,5.2,44.7,4.88,35.7,75,174.3,51.9
KUT0003,Baseline,STC,3/11/2008,5.34,44.7,5.03,35.7,75.2,179.7,52.1
KUT0003,TC200,STC,5/7/2008,5.2,45.1,4.83,36.4,75.2,176.2,-
KUT0004,Baseline,STC,3/11/2008,5.21,44.8,4.91,36.1,76,177.2,51.8
KUT0004,TC200,STC,5/7/2008,5.17,45.1,4.81,36.5,75.3,175.6,-
KUT0004,Hotspot,STC,6/25/2008,5.09,45.6,4.7,37,74.9,173.7,-
KUT0001,Baseline,STC,3/11/2008,5.32,44.6,4.95,35.4,73.8,175.2,51.8
KUT0001,TC200,STC,5/7/2008,5.2,45,4.77,36.8,75.1,175.6,-
KUT0006,Baseline,STC,3/11/2008,5.35,44.4,4.95,35.8,74.5,177.2,52
KUT0006,UV,STC,6/5/2008,5.28,44.6,4.84,35.8,73.7,173.7,-
KUT0006,TC50,STC,7/4/2008,5.22,45,4.72,36.9,74.1,173.9,-
KUT0006,HF10,STC,8/1/2008,5.21,45.1,4.69,37,73.9,173.4,-
KUT0006,Termination,STC,8/19/2008,5.23,45,4.62,37.3,73.2,172.5,-
KUT0007,Baseline,STC,3/11/2008,5.25,44.4,4.87,35.8,74.6,174.2,52.1
KUT0007,UV,STC,6/5/2008,5.39,43.9,4.84,35.5,72.5,171.7,-
KUT0007,TC50,STC,7/4/2008,5.56,44.7,4.87,36.8,72.2,179.3,-
KUT0007,HF10,STC,8/1/2008,5.5,44.6,4.85,36.4,72.2,176.8,-
KUT0005,Baseline,STC,3/11/2008,5.13,44.3,4.84,35.6,75.7,172.3,51.7
KUT0005,Damp Heat,STC,5/8/2008,5.11,45.5,4.7,37.4,75.4,175.6,-
KUT0005,Static Load,STC,5/29/2008,4.95,45.6,4.67,37.6,77.6,175.5,-
KUT0008,Baseline,STC,3/11/2008,5.13,44.6,4.84,36,76.2,174.4,52.2
KUT0008,Damp Heat,STC,5/8/2008,5.17,44.9,4.78,36.2,74.4,172.7,-
KUT0008,Hail,STC,5/21/2008,5.14,44.5,4.73,35.8,74,169.4,-
KUT0011,Baseline,STC,3/11/2008,5.24,44.7,4.99,35.8,76.1,178.4,51.9
KUT0011,Outdoor Exposure,STC,4/17/2008,5.05,44.3,4.79,35.6,76.4,170.7,-
And here is the table schema (attached as an image).
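One likely culprit, judging from the data rather than anything confirmed in the question: the NOCT column contains "-" for most rows, and SQL Server raises exactly this 8115 arithmetic overflow error when it tries to convert a bare "-" (or an empty string) to a numeric type - which also matches two rows succeeding before the failure. A minimal sketch of a fix, mapping "-" to None so pyodbc inserts NULL (reusing mycursor and strSQL from the code above):

with open('test_results_NOCT.csv', 'rb') as f:
    csvfile = csv.reader(f)
    next(csvfile)
    for row in csvfile:
        # Replace placeholder "-" values with None so they are inserted as NULL
        cleaned = [None if value.strip() == '-' else value for value in row]
        mycursor.execute(strSQL, cleaned)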

R read a local .mdf file

I have a database file (a .mdf file from a Microsoft SQL Server) that I copied onto my disk. It comes from a device, and I would like to read the data, preferably in R. I am totally new to SQL, and I don't understand how to deal with a local file.
I tried
library(RMySQL)
con <- dbConnect(RMySQL::MySQL(), dbname = "MGCDBase")
which gave me
Error in .local(drv, ...) :
Failed to connect to database: Error: Can't connect to MySQL server on 'localhost' (0)
I don't know whether it is because there is a password, whether I am doing it wrong, or whether I should use something other than R or RMySQL.
Any help or advice would be welcome.

How to insert UTF-8 characters into an Oracle database using the Robot Framework database library

I have a Robot Framework script which inserts some SQL statements from a SQL file; some of these statements contain UTF-8 characters. If I run this file manually against the database using the Navicat tool, everything's fine. But when I try to execute this file using the database library of Robot Framework, the UTF-8 characters get garbled!
This is my utf8 included sql statement:
INSERT INTO "MY_TABLE" VALUES (2, 'تست1');
This is how I use database library:
Connect To Database Using Custom Params    cx_Oracle    ${dbConnection}
Execute Sql Script    ${sqlFile}
Disconnect From Database
This is what I get in the database:
������������ 1
I have tried to execute the SQL file using cx_Oracle directly and it's still failing! It seems the problem is in the underlying library. This is what I've used for importing the SQL file:
import cx_Oracle

if __name__ == "__main__":
    dsn_tns = cx_Oracle.makedsn(ip, port, sid)
    db = cx_Oracle.connect(username, password, dsn_tns)
    sql_commands = open(sql_file_addr, 'r').read().split(";")
    cr = db.cursor()
    for command in sql_commands:
        if not command in ["", "\t", "\n", "\r", "\n\r", "\r\n", None]:
            print "Executing SQL command:", command
            cr.execute(command)
    db.commit()
I have found that I can define the character set in the connection string. I've done it for a MySQL database, and the framework successfully inserted UTF-8 characters; this is my connection string for MySQL:
database='db_name', user='db_username', password='db_password', host='db_ip', port=3306, charset='utf8'
But I don't know how to define character-set for Oracle connection string. I have tried this:
'db_username','db_password','db_ip:1521/db_sid','utf8'
And I've got this error:
TypeError: an integer is required
As @Yu Zhang suggested, I read the discussion in this link, and I found out that I should set the environment variable NLS_LANG in order to have a UTF-8 connection to the database. So I've added the line below to my test setup:
os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"
Would any of the links below help?
http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch6unicode.htm#i1006779
http://www.theserverside.com/news/thread.tss?thread_id=39575
https://community.oracle.com/thread/502949
There can be several problems here.
The first problem might be that you don't save the test files using UTF-8 encoding.
Robot Framework expects plain-text test files to be saved using UTF-8 encoding, yet most text editors will not save in UTF-8 by default.
Verify that your editor saves that way - for example, by opening the file in Notepad++ and choosing Encoding -> UTF-8.
Another problem might be the connection to the Oracle database. It doesn't seem like you can configure the connection's custom properties to explicitly state UTF-8.
This means you probably need to state that the database schema itself is UTF-8.
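For what it's worth, newer cx_Oracle releases also let you state the client encoding directly on connect(), which can stand in for NLS_LANG; a minimal sketch, assuming the same connection details as in the question:

import cx_Oracle

dsn_tns = cx_Oracle.makedsn(ip, port, sid)
# encoding/nencoding set the client-side character sets for CHAR/NCHAR data
db = cx_Oracle.connect(username, password, dsn_tns,
                       encoding="UTF-8", nencoding="UTF-8")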

Execute SQL from file in SQLAlchemy

How can I execute a whole SQL file against a database using SQLAlchemy? There can be many different SQL statements in the file, including BEGIN and COMMIT/ROLLBACK.
sqlalchemy.text or sqlalchemy.sql.text
The text construct provides a straightforward method to directly execute .sql files.
from sqlalchemy import create_engine
from sqlalchemy import text
# or from sqlalchemy.sql import text

engine = create_engine('mysql://{USR}:{PWD}@localhost:3306/db', echo=True)

with engine.connect() as con:
    with open("src/models/query.sql") as file:
        query = text(file.read())
        con.execute(query)
SQLAlchemy: Using Textual SQL
text()
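One note on this approach: with SQLAlchemy's 2.0-style API, connect() no longer autocommits, so if the file contains INSERT/UPDATE statements you would wrap the execution in engine.begin() instead; a small sketch of that variant:

with engine.begin() as con:  # commits automatically when the block exits cleanly
    with open("src/models/query.sql") as file:
        con.execute(text(file.read()))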
I was able to run .sql schema files using pure SQLAlchemy and some string manipulations. It surely isn't an elegant approach, but it works.
from sqlalchemy import text

# Open the .sql file
sql_file = open('file.sql', 'r')

# Create an empty command string
sql_command = ''

# Iterate over all lines in the sql file
for line in sql_file:
    # Ignore commented lines
    if not line.startswith('--') and line.strip('\n'):
        # Append line to the command string
        sql_command += line.strip('\n')

        # If the command string ends with ';', it is a full statement
        if sql_command.endswith(';'):
            # Try to execute statement and commit it
            try:
                session.execute(text(sql_command))
                session.commit()
            # Report in case of error
            except:
                print('Ops')
            # Finally, clear command string
            finally:
                sql_command = ''
It iterates over all lines in a .sql file ignoring commented lines.
Then it concatenates lines that form a full statement and tries to execute the statement. You just need a file handler and a session object.
You can do it with SQLAlchemy and psycopg2.

import sqlalchemy

file = open(path)
engine = sqlalchemy.create_engine(db_url)
escaped_sql = sqlalchemy.text(file.read())
engine.execute(escaped_sql)
Unfortunately I'm not aware of a good general answer for this. Some DBAPIs (psycopg2, for instance) support executing many statements at a time. If the files aren't huge, you can just load them into a string and execute them on a connection. For others, I would try to use a command-line client for that database and pipe the data into it using the subprocess module.
If those approaches aren't acceptable, then you'll have to go ahead and implement a small SQL parser that can split the file apart into separate statements. This is really tricky to get 100% correct, as you'll have to factor in database-dialect-specific literal escaping rules, the charset used, and any database configuration options that affect literal parsing (e.g. PostgreSQL's standard_conforming_strings).
If you only need to get this 99.9% correct, then some regexp magic should get you most of the way there.
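As a sketch of the command-line route for PostgreSQL (assuming psql is on the PATH and db_url is a valid PostgreSQL connection URL; swap in the appropriate client for other databases):

import subprocess

# Pipe the file into the database's own CLI client, which handles
# statement splitting, comments, and dialect quirks natively.
with open("script.sql") as f:
    subprocess.run(["psql", db_url], stdin=f, check=True)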
If you are using sqlite3, it has a useful extension to the DBAPI called conn.executescript(str). I've hooked this up via something like the following and it seemed to work (not all context is shown, but it should be enough to get the drift):
def init_from_script(script):
    Base.metadata.drop_all(db_engine)
    Base.metadata.create_all(db_engine)

    # HACK ALERT: we can do this using the sqlite3 low-level api, then reopen the session.
    f = open(script)
    script_str = f.read().strip()

    global db_session
    db_session.close()

    import sqlite3
    conn = sqlite3.connect(db_file_name)
    conn.executescript(script_str)
    conn.commit()

    db_session = Session()
Is this pure evil, I wonder? I looked in vain for a 'pure' SQLAlchemy equivalent; perhaps that could be added to the library, something like db_session.execute_script(file_name)? I'm hoping db_session will work just fine after all that (i.e. no need to restart the engine), but I'm not sure yet... further research needed (i.e. do we need a new engine or just a new session after going behind SQLAlchemy's back?).
FYI, sqlite3 includes a related routine, sqlite3.complete_statement(sql), if you roll your own parser...
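To illustrate that routine, a small sketch of a line-accumulating parser built on it (complete_statement only checks that the buffer ends in a complete semicolon-terminated statement; it does not validate syntax):

import sqlite3

def iter_statements(path):
    # Yield one complete statement at a time from a .sql file.
    buffer = ''
    with open(path) as f:
        for line in f:
            buffer += line
            if sqlite3.complete_statement(buffer):
                yield buffer.strip()
                buffer = ''

for stmt in iter_statements('script.sql'):
    print(stmt)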
You can access the raw DBAPI connection through this:

raw_connection = mySqlAlchemyEngine.raw_connection()
raw_cursor = raw_connection.cursor()  # get a cursor from the proxied DBAPI connection
but then it will depend on which dialect/driver you are using which can be referred to through this list.
For psycopg2, you can just do:
raw_cursor.execute(open("my_script.sql").read())
but for pysqlite you would need to do:

raw_cursor.executescript(open("my_script.sql").read())

In line with that, you would need to check the documentation of whichever DBAPI driver you are using to see whether multiple statements are allowed in one execute, or whether you need a helper like executescript, which is unique to pysqlite.
Here's how to run the script splitting the statements, and running each statement directly with a "connectionless" execution with the SQLAlchemy Engine. This assumes that each statement ends with a ; and that there's no more than one statement per line.
import re
from sqlalchemy import create_engine, text

engine = create_engine(url)

with open('script.sql') as file:
    statements = re.split(r';\s*$', file.read(), flags=re.MULTILINE)

for statement in statements:
    if statement:
        engine.execute(text(statement))
In the current answers, I did not find a solution which works when a combination of these features is present in the .sql file:
Comments with "--"
Multi-line statements with additional comments after "--"
Function definitions which contain multiple SQL queries ending with ";" but must be executed as a whole statement
I found a rather simple solution:
# check for /* */
with open(file, 'r') as f:
    assert '/*' not in f.read(), 'comments with /* */ not supported in SQL file python interface'

# read the SQL file line-by-line into a list of strings (without \n, ...)
with open(file, 'r') as f:
    queries = [line.strip() for line in f.readlines()]

# from each line, remove all text behind a '--'
def cut_comment(query: str) -> str:
    idx = query.find('--')
    if idx >= 0:
        query = query[:idx]
    return query

# join everything into a single line of code with blank spaces
queries = [cut_comment(q) for q in queries]
sql_command = ' '.join(queries)

# execute in connection (e.g. sqlalchemy)
conn.execute(sql_command)
The code below works for me in Alembic migrations:

from alembic import op
import sqlalchemy as sa
from ekrec.common import get_project_root

def upgrade():
    path = f'{get_project_root()}/migrations/versions/fdb8492f75b2_.sql'
    op.execute(open(path).read())
I had success with David's answer above, with two slight modifications:
Use get_bind(), as I was working with a Session rather than an Engine
Call cursor() on the raw connection
raw_connection = myDbSession.get_bind().raw_connection()
raw_cursor = raw_connection.cursor()
raw_cursor.execute(open("my_script.sql").read())