Python error when executing a SQL query (not all arguments converted during string formatting) - sql

I have the below Python code that tries to pull some data with a SQL query. However, I am getting this error:
TypeError: not all arguments converted during string formatting
Given below is the code I am using:
import pandas as pd
import psycopg2
from psycopg2 import sql
import xlsxwriter

def func(input):
    db_details = conn.cursor()  # set DB cursor
    db_details.execute(sql.SQL("""select name from store where name = (%s)"""), (input))
    names = db_details.fetchall()
    df = pd.DataFrame(names, columns=[desc[0] for desc in db_details.description])
Could anyone guide me on where I am going wrong? Thanks.

If I recall correctly, you need to pass the SQL query a name enclosed in single quotes, so your query needs to be ...where name = '{}' """.format(variablename)
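That said, psycopg2's own parameter passing avoids quoting issues entirely, and this particular TypeError usually comes from the parameters argument itself: (input) is just a parenthesized value, not a tuple, so psycopg2 iterates over the string character by character. A minimal sketch of the parameterized version, assuming a psycopg2 connection named conn as in the question (the parameter is renamed store_name here to avoid shadowing the input builtin):

import pandas as pd
import psycopg2
from psycopg2 import sql

def func(conn, store_name):
    db_details = conn.cursor()  # set DB cursor
    # The trailing comma makes (store_name,) a one-element tuple,
    # which is what execute() expects as its parameters argument.
    db_details.execute(sql.SQL("select name from store where name = %s"), (store_name,))
    names = db_details.fetchall()
    return pd.DataFrame(names, columns=[desc[0] for desc in db_details.description])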

Related

How to filter Socrata API dataset by multiple values for a single field?

I am attempting to create a CSV file using Python by reading from this specific API:
https://dev.socrata.com/foundry/data.cdc.gov/5jp2-pgaw
Where I'm running into trouble is that I would like to specify multiple values of "loc_admin_zip" to search for at once: for example, returning a CSV file where the zip is either "10001" or "10002". However, I can't figure out how to do this; I can only get it to work if "loc_admin_zip" is set to a single value. Any help would be appreciated. My code so far:
import pandas as pd
from sodapy import Socrata

client = Socrata("data.cdc.gov", None)
results = client.get("5jp2-pgaw", loc_admin_zip=10002)
results_df = pd.DataFrame.from_records(results)
results_df.to_csv('test.csv')
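One way to do this, sketched under the assumption that the dataset stores loc_admin_zip as text: sodapy forwards a where keyword to the API as a SoQL $where clause, and SoQL's in(...) operator matches a field against several values at once.

import pandas as pd
from sodapy import Socrata

client = Socrata("data.cdc.gov", None)
# The $where clause with in(...) returns rows matching either zip code.
results = client.get("5jp2-pgaw", where="loc_admin_zip in('10001', '10002')")
results_df = pd.DataFrame.from_records(results)
results_df.to_csv('test.csv')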

Error: it says legacy_id is an invalid identifier

import streamlit
import pandas as pd
import snowflake.connector

streamlit.title('Citibike station')
my_cnx = snowflake.connector.connect(**streamlit.secrets["snowflake"])
my_cur = my_cnx.cursor()
my_cur.execute("select legacy_id from citibike_status")  # <-- error
my_catalog = my_cur.fetchall()
df = pd.DataFrame(my_catalog)
streamlit.write(df)
If I try *, it fetches all the data, but when I mention any of the column names, it says it's invalid.
Most likely it was quoted during table creation and should be accessed as such:
select "legacy_id" from citibike_status
In Python:
my_cur.execute("""select "legacy_id" from citibike_status""")
Double-quoted Identifiers
If an object is created using a double-quoted identifier, when referenced in a query or any other SQL statement, the identifier must be specified exactly as created, including the double quotes. Failure to include the quotes might result in an Object does not exist error (or similar type of error).
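If in doubt about how the identifier was stored, DESCRIBE TABLE lists the exact, case-sensitive column names (a short sketch reusing the cursor from the question):

# List the exact (case-sensitive) column names Snowflake stored.
my_cur.execute("describe table citibike_status")
for row in my_cur.fetchall():
    print(row[0])  # the first field of each row is the column name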

pandas to_sql with Exasol

When I use to_sql to upload a dataframe to Exasol and specify if_exists='replace', the default string data type is TEXT, which is not supported by Exasol. I think VARCHAR is the right type. How can I make to_sql create the table with VARCHAR rather than TEXT?
I know it's not 100% what you are asking for, but I would suggest using the pyexasol package for communication between pandas and Exasol. Deleting and then re-uploading works like this:
import pyexasol
import _config as config

# Connect with compression enabled
C = pyexasol.connect(dsn=config.dsn, user=config.user,
                     password=config.password, schema=config.schema,
                     compression=True)

C.execute('TRUNCATE TABLE users_copy')

# Import from pandas DataFrame into Exasol table
C.import_from_pandas(df, 'users_copy')  # df is the DataFrame you want to upload

stmt = C.last_statement()
print(f'IMPORTED {stmt.rowcount()} rows in {stmt.execution_time}s')

C.close()
Problems with VARCHAR do not appear.
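Staying with to_sql itself, pandas also accepts a dtype mapping that overrides the default column types. A minimal sketch, assuming an SQLAlchemy engine for Exasol (for example via the sqlalchemy-exasol dialect; the DSN, table, and column names below are placeholders):

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import VARCHAR

df = pd.DataFrame({'name': ['alice', 'bob']})  # example DataFrame

# Placeholder connection string for the sqlalchemy-exasol dialect.
engine = create_engine('exa+pyodbc://user:password@exasol-dsn/schema')

# dtype forces specific SQL types per column instead of the default TEXT.
df.to_sql('users_copy', engine, if_exists='replace', index=False,
          dtype={'name': VARCHAR(200)})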

UUID to NUUID in Python

In my application, I get some values from MSSQL using PyMSSQL. Python interprets one of these values as a UUID. I assigned this value to a variable called id. When I do
print (type(id),id)
I get
<class 'uuid.UUID'> cc26ce03-b6cb-4a90-9d0b-395c313fc968
Everything is as expected so far. Now, I need to make a query in MongoDB using this id. But the type of my field in MongoDB is ".NET UUID(Legacy)", which is NUUID, and I don't get any result when I query with
client.db.collectionname.find_one({"_id" : id})
This is because I need to convert UUID to NUUID.
Note: I also tried
client.db.collectionname.find_one({"_id" : NUUID("cc26ce03-b6cb-4a90-9d0b-395c313fc968")})
But it didn't work. Any ideas?
Assuming you are using PyMongo 3.x:
from bson.binary import CSHARP_LEGACY
from bson.codec_options import CodecOptions
options = CodecOptions(uuid_representation=CSHARP_LEGACY)
coll = client.db.get_collection('collectionname', options)
coll.find_one({"_id": id})
If instead you are using PyMongo 2.x:
from bson.binary import CSHARP_LEGACY
coll = client.db.collectionname
coll.uuid_subtype = CSHARP_LEGACY
coll.find_one({"_id": id})
You have to tell PyMongo what format the UUID was originally stored in. There is also a JAVA_LEGACY representation.
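On PyMongo 4.x the module-level constant is gone; the equivalent, if I remember the newer API correctly, lives on the UuidRepresentation enum:

from bson.binary import UuidRepresentation
from bson.codec_options import CodecOptions

options = CodecOptions(uuid_representation=UuidRepresentation.CSHARP_LEGACY)
coll = client.db.get_collection('collectionname', codec_options=options)
coll.find_one({"_id": id})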

Adding Excel Spreadsheet to SQL Database

How can I import an Excel file into my SQL database? I have two options: MSSQL or MySQL.
Thank you
In Python it would be something like:
import MySQLdb, xlrd

def xl_to_mysql():
    book = xlrd.open_workbook('yourdata.xls')
    to_db = []
    for sheet in book.sheets():
        for rowx in xrange(sheet.nrows):
            to_db.append(tuple(sheet.cell(rowx, colx).value
                               for colx in xrange(sheet.ncols)))
    conn = MySQLdb.connect(host="yourhost", user="username",
                           passwd="yourpassword", db="yourdb")
    curs = conn.cursor()
    # however many placeholders `%s` you need in the query below
    curs.executemany("INSERT INTO yourtable VALUES (%s,%s,%s);", to_db)
    conn.commit()
    curs.close()
    conn.close()

if __name__ == '__main__':
    xl_to_mysql()
You could export the Excel file as a CSV and then use mysqlimport: http://dev.mysql.com/doc/refman/5.0/en/mysqlimport.html
You can import the file as you would any other file.
If the question is about data from Excel, then in SQL Server I would have linked Excel as a linked server, see here or here, or used OPENROWSET. There are other options, like exporting/importing as XML, etc.
All options are pretty well covered on the internet. What is the concrete context and/or problem?
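A more current Python sketch than the snippets above (file, table, and connection details are placeholders, not from the original answers): pandas can read the spreadsheet and hand it to either database through SQLAlchemy.

import pandas as pd
from sqlalchemy import create_engine

# Read the spreadsheet into a DataFrame (needs openpyxl for .xlsx files).
df = pd.read_excel('yourdata.xlsx')

# Placeholder MySQL URL; for MSSQL use an mssql+pyodbc:// URL instead.
engine = create_engine('mysql+pymysql://username:yourpassword@yourhost/yourdb')

# Create or replace the table and insert every row from the DataFrame.
df.to_sql('yourtable', engine, if_exists='replace', index=False)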