Pandas + pyodbc = Warnings - pandas only supports SQLAlchemy or sqlite3 [duplicate]
I am trying to make sense of the following error that I started getting when I set up my Python code to run on a VM server, which has Python 3.9.5 installed instead of the 3.8.5 on my desktop. Not sure that matters, but it could be part of the reason.
The error
C:\ProgramData\Miniconda3\lib\site-packages\pandas\io\sql.py:758: UserWarning: pandas only support SQLAlchemy connectable(engine/connection) or
database string URI or sqlite3 DBAPI2 connection
other DBAPI2 objects are not tested, please consider using SQLAlchemy
warnings.warn(
This is within a fairly simple .py file that imports pyodbc and sqlalchemy, fwiw. A fairly generic/simple version of the SQL calls that yields the warning is:
myserver_string = "xxxxxxxxx,nnnn"
db_string = "xxxxxx"
cnxn = "Driver={ODBC Driver 17 for SQL Server};Server=tcp:"+myserver_string+";Database="+db_string +";TrustServerCertificate=no;Connection Timeout=600;Authentication=ActiveDirectoryIntegrated;"
def readAnyTable(tablename, date):
    conn = pyodbc.connect(cnxn)
    query_result = pd.read_sql_query(
        '''
        SELECT *
        FROM [{0}].[dbo].[{1}]
        WHERE Asof >= '{2}'
        '''.format(db_string, tablename, date), conn)
    conn.close()
    return query_result
All the examples I have seen using pyodbc in python look fairly similar. Is pyodbc becoming deprecated? Is there a better way to achieve similar results without warning?
Is pyodbc becoming deprecated?
No. For at least the last couple of years pandas' documentation has clearly stated that it wants either a SQLAlchemy Connectable (i.e., an Engine or Connection object) or a SQLite DBAPI connection. (The switch-over to SQLAlchemy was almost universal, but they continued supporting SQLite connections for backwards compatibility.) People have been passing other DBAPI connections (like pyodbc Connection objects) for read operations and pandas hasn't complained … until now.
Is there a better way to achieve similar results without warning?
Yes. You can take your existing ODBC connection string and use it to create a SQLAlchemy Engine object as described in the SQLAlchemy 1.4 documentation:
from sqlalchemy.engine import URL
connection_string = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
from sqlalchemy import create_engine
engine = create_engine(connection_url)
Then pass engine to the pandas methods you need to use.
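Applied to the question's readAnyTable function, a minimal sketch might look like the following (the ODBC connection-string placeholders are the question's; the renamed read_any_table helper and the bound :asof parameter are assumptions about how the date filter should be passed, not the asker's exact code):

```python
import pandas as pd
from sqlalchemy import create_engine, text
from sqlalchemy.engine import URL

# ODBC connection string as in the question (placeholders kept as-is)
connection_string = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:xxxxxxxxx,nnnn;Database=xxxxxx;"
    "TrustServerCertificate=no;Connection Timeout=600;"
    "Authentication=ActiveDirectoryIntegrated;"
)
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
engine = create_engine(connection_url)

def read_any_table(tablename, date):
    # Bind the date as a parameter instead of formatting it into the SQL text;
    # the table name still has to be interpolated, so validate it yourself.
    query = text(f"SELECT * FROM [dbo].[{tablename}] WHERE Asof >= :asof")
    return pd.read_sql_query(query, engine, params={"asof": date})
```

The engine manages its own connections, so there is no need to open and close one per call as the original function did.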
It works for me.
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import pyodbc
import sqlalchemy as sa
import urllib
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
server = 'IP ADDRESS or Server Name'
database = 'AdventureWorks2014'
username = 'xxx'
password = 'xxx'
params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                 "SERVER="+server+";"
                                 "DATABASE="+database+";"
                                 "UID="+username+";"
                                 "PWD="+password+";")
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
qry = "SELECT t.[group] as [Region], t.name as [Territory], C.[AccountNumber] "
qry = qry + "FROM [Sales].[Customer] C INNER JOIN [Sales].SalesTerritory t on t.TerritoryID = C.TerritoryID "
qry = qry + "where StoreID is not null and PersonID is not null"
with engine.connect() as con:
    rs = con.execute(qry)
    for row in rs:
        print(row)
You can use the SQL Server name or the IP address, but this requires a basic DNS listing. Most corporate servers should already have one. You can verify the server name or IP address with the nslookup command in the command prompt, followed by the server name or IP address.
I'm using SQL 2017 on Ubuntu server running on VMWare. I'm connecting with IP Address here as part of a wider "running MSSQL on Ubuntu" project.
If you are connecting with your Windows credentials, you can replace the params with the trusted_connection parameter.
params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                 "SERVER="+server+";"
                                 "DATABASE="+database+";"
                                 "trusted_connection=yes")
Since it's a warning, I suppressed the message using the warnings Python library. Hope this helps.
import warnings
with warnings.catch_warnings(record=True):
    warnings.simplefilter("always")
    # your code goes here
My company doesn't use SQLAlchemy, preferring to use Postgres connections based on psycopg2 and incorporating other features. If you can run your script directly from a command line, then turning warnings off will solve the problem: start it with python3 -W ignore
With SQLAlchemy 1.4.36, the correct way to import and build the engine is:
import pandas as pd
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
#...
conn_str = set_db_info() # see above
conn_url = URL.create("mssql+pyodbc", query={"odbc_connect": conn_str})
engine = create_engine(conn_url)
df = pd.read_sql(SQL, engine)
df.head()
Related
Noob at SQL scripting. Please help
I am trying to access my table in a SQL database, but I am getting an unusual error. Can someone please help me? I am very new at this.

import sqlite3
import pandas as pd

com = sqlite3.connect('Reporting.db')

Note: the pandas dataframe is already defined above; that's why I am not including it here.

df.to_sql('tblReporting', com, index=False, if_exists='replace')
print('tblReporting loaded \n')

%load_ext sql
%sql sqlite:///Reporting.db

%%sql
SELECT * FROM tblReporting

This is the error I am getting:

SELECT *
     ^
SyntaxError: invalid syntax

Note #2: I am using Anaconda Navigator for writing scripts.
Solved it!! Here is my syntax:

import sqlite3
import pandas as pd

com = sqlite3.connect('Reporting.db')
df.to_sql('tblReporting', com, index=False, if_exists='replace')
print('tblReporting loaded \n')

org_query = '''SELECT * FROM tblReporting'''
df = pd.read_sql_query(org_query, com)
df.head()

Note: adding ''' before and after my org_query helped me resolve this.
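A self-contained version of that round trip, for anyone who wants to try it without the original Reporting.db (a sketch: the dataframe contents are made up, and an in-memory database stands in for the file so it runs anywhere):

```python
import sqlite3
import pandas as pd

# Made-up dataframe standing in for the df defined earlier in the question.
df = pd.DataFrame({"Asof": ["2021-01-01", "2021-01-02"], "Amount": [10, 20]})

com = sqlite3.connect(":memory:")  # in-memory database instead of Reporting.db
df.to_sql("tblReporting", com, index=False, if_exists="replace")

org_query = "SELECT * FROM tblReporting"
result = pd.read_sql_query(org_query, com)
print(result.head())
com.close()
```

Note that no warning is raised here: a sqlite3 DBAPI2 connection is one of the connection types pandas explicitly supports.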
correct syntax for SQL calls in python with pyodbc
I need help writing the SQL calls in my Python scripts properly. What my script currently looks like is:

import pyodbc
cnx = pyodbc.connect(**connection details**)
c = cnx.cursor()
results = c.execute("select * .....")
final_result = results.fetchall()

The issue is that sometimes the result value is a NoneType object and sometimes I get return values. I want to know if the way below is the correct way to call, because it seems to work every time for all the queries:

import pyodbc
cnx = pyodbc.connect(**connection details**)
c = cnx.cursor()
c.execute("select * .....")
final_result = c.fetchall()
Your second approach is correct and is what many DB API modules require. However, pyodbc extends the DB API specification so that Cursor.execute() returns the Cursor object itself, so you can simply do final_result = c.execute("select * .....").fetchall()
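The chained pattern can be demonstrated with sqlite3, whose Cursor.execute() also returns the cursor (a runnable stand-in; with pyodbc the call looks identical, only the connect line differs):

```python
import sqlite3

cnx = sqlite3.connect(":memory:")
c = cnx.cursor()
c.execute("CREATE TABLE t (x INTEGER)")
c.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# execute() returns the cursor itself, so fetchall() can be chained directly.
final_result = c.execute("SELECT x FROM t ORDER BY x").fetchall()
print(final_result)  # [(1,), (2,), (3,)]
cnx.close()
```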
"Data source name too long" error with mssql+pyodbc in SQLAlchemy
I am trying to upload a dataframe to a database on Azure SQL Server using SQLAlchemy and pyodbc. I have established a connection, but when uploading I get an error that says:

(pyodbc.Error) ('IM010', '[IM010] [Microsoft][ODBC Driver Manager] Data source name too long (0) (SQLDriverConnect)')

I'm not sure where this error is coming from, since I've used SQLAlchemy before without a problem. I've attached my code below; can anybody help me diagnose the problem?

username = 'bcadmin'
password = 'N@ncyR2D2'
endpoint = 'bio-powerbi-bigdata.database.windows.net'
engine = sqlalchemy.create_engine(f'mssql+pyodbc://{username}:{password}@{endpoint}')
df.to_sql("result_management_report", engine, if_exists='append', index=False)

I know of other ETL methods like Data Factory and SSMS, but I'd prefer to use pandas for the ETL process. Please help me with this error.
Three issues here:

If a username or password might contain an @ character then it needs to be escaped in the connection URL.
For the mssql+pyodbc dialect, the database name must be included in the URL in order for SQLAlchemy to recognize a "hostname" connection (as opposed to a "DSN" connection).
Also for mssql+pyodbc hostname connections, the ODBC driver name must be supplied using the driver attribute.

The easiest way to build a proper connection URL is to use the URL.create() method:

from sqlalchemy import create_engine
from sqlalchemy.engine import URL

my_uid = "bcadmin"
my_pwd = "N@ncyR2D2"
my_host = "bio-powerbi-bigdata.database.windows.net"
my_db = "master"
my_odbc_driver = "ODBC Driver 17 for SQL Server"

connection_url = URL.create(
    "mssql+pyodbc",
    username=my_uid,
    password=my_pwd,
    host=my_host,
    database=my_db,  # required; not an empty string
    query={"driver": my_odbc_driver},
)
print(connection_url)
"""console output:
mssql+pyodbc://bcadmin:N%40ncyR2D2@bio-powerbi-bigdata.database.windows.net/master?driver=ODBC+Driver+17+for+SQL+Server
"""

engine = create_engine(connection_url, fast_executemany=True)
Unable to upload Images to sql server via pyodbc
I'm trying to upload an image to SQL Server in a Linux (Raspbian) environment using Python. So far I was able to connect to SQL Server, and I also created a table; I'm using pyodbc.

#! /usr/bin/env python
import pyodbc
from PIL import Image

dsn = 'nicedcn'
user = myid
password = mypass
database = myDB

con_string = 'DSN=%s;UID=%s;PWD=%s;DATABASE=%s;' % (dsn, user, password, database)
cnxn = pyodbc.connect(con_string)
cursor = cnxn.cursor()

string = "CREATE TABLE Database2([image name] varchar(20), [image] image)"
cursor.execute(string)
cnxn.commit()

This part compiled without any error. That means I have successfully created a table, right? Or is there any issue?

I tried to upload the image this way:

image12 = Image.open('new1.jpg')
cursor.execute("insert into Database1([image name], [image]) values (?,?)", 'new1', image12)
cnxn.commit()

I get the error on this part:

pyodbc.ProgrammingError: ('Invalid Parameter type. param-index=1 param-type=instance', 'HY105')

Please tell me another way, or the proper way, to upload an image via pyodbc to a database.
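The 'HY105' error comes from passing a PIL Image object as a parameter: DB-API drivers expect raw bytes for binary columns, not arbitrary Python objects. A hedged sketch of the fix (shown with an in-memory sqlite3 database so it runs anywhere; with pyodbc the pattern is the same, binding the bytes, or pyodbc.Binary(data), as the parameter):

```python
import sqlite3

# Read the image file as raw bytes instead of opening it with PIL.
# (A few fake JPEG-ish bytes here stand in for open('new1.jpg', 'rb').read().)
data = b"\xff\xd8\xff\xe0fake-jpeg-bytes"

cnxn = sqlite3.connect(":memory:")
cursor = cnxn.cursor()
cursor.execute('CREATE TABLE images ("image name" TEXT, image BLOB)')

# Bind the bytes as a parameter; never a PIL Image object.
cursor.execute('INSERT INTO images ("image name", image) VALUES (?, ?)', ("new1", data))
cnxn.commit()

stored = cursor.execute("SELECT image FROM images").fetchone()[0]
print(stored == data)  # True
```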
How to write a Python script that uses the OpenERP ORM to directly upload to Postgres Database
I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.

Can someone provide me more details on the following:

1) What sys.path's do I need to set?
2) What modules do I need to import before importing the "account" module? Currently when I import the "account" module I get the following error: AssertionError: The report "report.custom" already exists!
3) What is the proper way to get my database cursor? In the code below I am simply calling psycopg2 directly to get a cursor.

If this approach cannot work, can anyone suggest an alternative approach other than writing XML files to load the data from the OpenERP application itself. This process needs to run outside of the standard OpenERP application.

PSEUDO CODE:

import sys

# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")

# import OpenERP
import openerp

# import the account addon module that contains the tables to be populated
import account

# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"

# get a db connection
conn = psycopg2.connect(conn_string2)

# conn.cursor() will return a cursor object
cursor = conn.cursor()

# and finally use the ORM to insert data into table.
If you want to do it via web services, have a look at the OpenERP XML-RPC web services. Example code to work with the OpenERP web services:

import xmlrpclib

username = 'admin'  # the user
pwd = 'admin'       # the password of the user
dbname = 'test'     # the database

# OpenERP Common login Service proxy object
# (replace localhost with the address of the server)
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)

# OpenERP Object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')

partner = {
    'name': 'Fabien Pinckaers',
    'lang': 'fr_FR',
}

# calling the remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)

More simply, you can also use the OpenERP client lib. Example code with the client lib:

import openerplib

connection = openerplib.get_connection(hostname="localhost", database="test",
                                       login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]

You see, both ways are good, but when you use the client lib the code is shorter and easier to understand, while the xmlrpc proxy involves lower-level calls that you have to handle yourself.

Hope this will help you.
In my view, one should go for the XMLRPC or NETSVC services provided by OpenERP for such needs. You don't need to import the accounts module of OpenERP; there is a possibility that other modules have inherited the account.tax object and altered its behaviour to suit your business needs. If you feed data by calling those methods manually without using the OpenERP web service, it is possible you'll get undesired results, unexpected failures, or an inconsistent database state.
You can use Erppeek to browse data, but I'm not sure if you can really upload data to the DB; personally I use/prefer XMLRPC.
Why don't you use the XML-RPC call of OpenERP? It will not need to import account or openerp, and you can still have all the ORM functionality.
You can use a Python library to access the OpenERP server via its XML-RPC service. Please check https://github.com/OpenERP/openerp-client-lib It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just import psycopg2 and:

conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
# pass values as parameters rather than formatting them into the SQL string
cur.execute('select * from table where id = %s', (table_id,))
cur.execute('insert into table(column1, column2) values(%s, %s)', (value1, value2))
cur.close()
conn.close()
Why do you want to fix it like that? You should create a localization module and define the data in XML files. This is the standard way to solve such a problem in OpenERP. For which country do you want to insert sales taxes? Please explain more.
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    user = registry.get('res.users').browse(cr, userid, listids)
    print user