I'm trying to upload an image to SQL Server in a Linux (Raspbian) environment using Python. So far I was able to connect to SQL Server and I also created a table; I'm using pyodbc.
#!/usr/bin/env python
import pyodbc
from PIL import Image
dsn = 'nicedcn'
user = 'myid'
password = 'mypass'
database = 'myDB'
con_string = 'DSN=%s;UID=%s;PWD=%s;DATABASE=%s;' % (dsn, user, password, database)
cnxn = pyodbc.connect(con_string)
cursor = cnxn.cursor()
string = "CREATE TABLE Database2([image name] varchar(20), [image] image)"
cursor.execute(string)
cnxn.commit()
This part completed without any error. That means I have successfully created a table, right? Or is there any issue?
I then tried to upload an image this way:
image12 = Image.open('new1.jpg')
cursor.execute("insert into Database1([image name], [image]) values (?,?)",
               'new1', image12)
cnxn.commit()
I get the error on this part:
pyodbc.ProgrammingError: ('Invalid Parameter type. param-index=1 param-type=instance', 'HY105')
Please tell me another way, or the proper way, to upload an image to a database via pyodbc.
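The HY105 error happens because pyodbc does not know how to bind a PIL Image instance as a query parameter. As a minimal sketch of one common fix (table and file names taken from the question), you can pass the raw file bytes instead:

# read the image file as raw bytes; pyodbc can bind bytes to an image/varbinary column
with open('new1.jpg', 'rb') as f:
    img_data = f.read()
cursor.execute("insert into Database1([image name], [image]) values (?,?)",
               'new1', pyodbc.Binary(img_data))
cnxn.commit()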
I am trying to make sense of the following error that I started getting when I set up my Python code to run on a VM server, which has 3.9.5 installed instead of the 3.8.5 on my desktop. Not sure that matters, but it could be part of the reason.
The error
C:\ProgramData\Miniconda3\lib\site-packages\pandas\io\sql.py:758: UserWarning: pandas only support SQLAlchemy connectable(engine/connection) or
database string URI or sqlite3 DBAPI2 connection
other DBAPI2 objects are not tested, please consider using SQLAlchemy
warnings.warn(
This is within a fairly simple .py file that imports pyodbc & sqlalchemy, fwiw. A fairly generic/simple version of the SQL calls that yields the warning is:
myserver_string = "xxxxxxxxx,nnnn"
db_string = "xxxxxx"
cnxn = "Driver={ODBC Driver 17 for SQL Server};Server=tcp:"+myserver_string+";Database="+db_string +";TrustServerCertificate=no;Connection Timeout=600;Authentication=ActiveDirectoryIntegrated;"
def readAnyTable(tablename, date):
    conn = pyodbc.connect(cnxn)
    query_result = pd.read_sql_query(
        '''
        SELECT *
        FROM [{0}].[dbo].[{1}]
        where Asof >= '{2}'
        '''.format(db_string, tablename, date), conn)
    conn.close()
    return query_result
All the examples I have seen using pyodbc in Python look fairly similar. Is pyodbc becoming deprecated? Is there a better way to achieve similar results without the warning?
Is pyodbc becoming deprecated?
No. For at least the last couple of years pandas' documentation has clearly stated that it wants either a SQLAlchemy Connectable (i.e., an Engine or Connection object) or a SQLite DBAPI connection. (The switch-over to SQLAlchemy was almost universal, but they continued supporting SQLite connections for backwards compatibility.) People have been passing other DBAPI connections (like pyodbc Connection objects) for read operations and pandas hasn't complained … until now.
Is there a better way to achieve similar results without the warning?
Yes. You can take your existing ODBC connection string and use it to create a SQLAlchemy Engine object as described in the SQLAlchemy 1.4 documentation:
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

connection_string = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
engine = create_engine(connection_url)
Then pass engine to the pandas methods you need to use.
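For example (the table name here is a hypothetical placeholder):

import pandas as pd
df = pd.read_sql_query("SELECT * FROM dbo.some_table", engine)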
It works for me.
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import pyodbc
import sqlalchemy as sa
import urllib
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
server = 'IP ADDRESS or Server Name'
database = 'AdventureWorks2014'
username = 'xxx'
password = 'xxx'
params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                 "SERVER="+server+";"
                                 "DATABASE="+database+";"
                                 "UID="+username+";"
                                 "PWD="+password+";")
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))
qry = "SELECT t.[group] as [Region],t.name as [Territory],C.[AccountNumber]"
qry = qry + "FROM [Sales].[Customer] C INNER JOIN [Sales].SalesTerritory t on t.TerritoryID = c.TerritoryID "
qry = qry + "where StoreID is not null and PersonID is not null"
with engine.connect() as con:
rs = con.execute(qry)
for row in rs:
print (row)
You can use the SQL Server name or the IP address, but this requires a basic DNS listing. Most corporate servers should already have this listing. You can verify either one by running the nslookup command in the command prompt, followed by the server name or IP address.
I'm using SQL 2017 on Ubuntu server running on VMWare. I'm connecting with IP Address here as part of a wider "running MSSQL on Ubuntu" project.
If you are connecting with your Windows credentials, you can replace the params with the trusted_connection parameter.
params = urllib.parse.quote_plus("DRIVER={SQL Server};"
                                 "SERVER="+server+";"
                                 "DATABASE="+database+";"
                                 "trusted_connection=yes")
Since it's a warning, I suppressed the message using the warnings Python library. Hope this helps.
import warnings
with warnings.catch_warnings(record=True):
    warnings.simplefilter("always")
    # your code goes here
My company doesn't use SQLAlchemy, preferring to use Postgres connections based on psycopg2 and incorporating other features. If you can run your script directly from a command line, then turning warnings off will solve the problem: start it with python3 -W ignore
For SQLAlchemy 1.4.36, the correct way to do the imports and build the engine is:
import pandas as pd
from sqlalchemy import create_engine, event
from sqlalchemy.engine.url import URL
#...
conn_str = set_db_info() # see above
conn_url = URL.create("mssql+pyodbc", query={"odbc_connect": conn_str})
engine = create_engine(conn_url)
df = pd.read_sql(SQL, engine)
df.head()
I am trying to upload a dataframe to a database on Azure SQL Server using SQLAlchemy and pyodbc. I have established a connection, but when uploading I get an error that says
(pyodbc.Error) ('IM010', '[IM010] [Microsoft][ODBC Driver Manager] Data source name too long (0) (SQLDriverConnect)')
I'm not sure where this error is coming from, since I've used sqlalchemy before without a problem. I've attached my code below; can anybody help me diagnose the problem?
username = 'bcadmin'
password = 'N@ncyR2D2'
endpoint = 'bio-powerbi-bigdata.database.windows.net'
engine = sqlalchemy.create_engine(f'mssql+pyodbc://{username}:{password}@{endpoint}')
df.to_sql("result_management_report",engine,if_exists='append',index=False)
I know of other ETL methods like Data Factory and SSMS but I'd prefer to use pandas as the ETL process.
Please help me with this error.
Three issues here:
If a username or password might contain an @ character then it needs to be escaped in the connection URL.
For the mssql+pyodbc dialect, the database name must be included in the URL in order for SQLAlchemy to recognize a "hostname" connection (as opposed to a "DSN" connection).
Also for mssql+pyodbc hostname connections, the ODBC driver name must be supplied using the driver attribute.
The easiest way to build a proper connection URL is to use the URL.create() method:
from sqlalchemy import create_engine
from sqlalchemy.engine import URL
my_uid = "bcadmin"
my_pwd = "N#ncyR2D2"
my_host = "bio-powerbi-bigdata.database.windows.net"
my_db = "master"
my_odbc_driver = "ODBC Driver 17 for SQL Server"
connection_url = URL.create(
    "mssql+pyodbc",
    username=my_uid,
    password=my_pwd,
    host=my_host,
    database=my_db,  # required; not an empty string
    query={"driver": my_odbc_driver},
)
print(connection_url)
"""console output:
mssql+pyodbc://bcadmin:N%40ncyR2D2@bio-powerbi-bigdata.database.windows.net/master?driver=ODBC+Driver+17+for+SQL+Server
"""
engine = create_engine(connection_url, fast_executemany=True)
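With that engine, the to_sql call from the question should work as written:

df.to_sql("result_management_report", engine, if_exists='append', index=False)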
We are trying to copy a DataFrame to a specific Teradata database, and the script is not accepting the schema_name parameter. Copying to the default user database (the one used in the logon command) works, but when I try to override the default by specifying a database name in copy_to_sql, it fails.
from teradataml import *
from teradataml.dataframe.copy_to import copy_to_sql
create_context(host="ipaddrr", username='uname', password="pwd")
df = DataFrame.from_query("select top 10 * from dbc.tables;")
copy_to_sql(df=df, table_name='Tab', schema_name='DB_Name', if_exists='replace')
Error: TeradataMlException: [Teradata][teradataml](TDML_2007) Invalid value(s) 'DB_Name' passed to argument 'schema_name', should be: A valid database/schema name..
Do you have a database / user named DB_Name? If not, try creating the database first and then running your copy script:
CREATE DATABASE DB_NAME FROM <parent_DB> AS PERMANENT = 1000000000;
I don't think the utilities / packages will typically create a database for you on the fly, since it can be a more involved operation (locks, space allocation, etc.) than creating a table.
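If you prefer to keep it all in Python, below is a minimal sketch of the same idea. It assumes get_context() returns the underlying SQLAlchemy engine (as in the teradataml versions I know of; note engine.execute was removed in SQLAlchemy 2.x), that your user has CREATE DATABASE rights, and your_parent_db is a placeholder for the owning database:

from teradataml import get_context
# create the target database first (your_parent_db is a placeholder)
get_context().execute("CREATE DATABASE DB_Name FROM your_parent_db AS PERMANENT = 1000000000;")
# then the original copy should succeed
copy_to_sql(df=df, table_name='Tab', schema_name='DB_Name', if_exists='replace')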
I use Microsoft SQL Server Management Studio on Windows 10 to connect to the following database and this is what the login screen looks like:
Server Type: Database Engine
Server Name: sqlmiprod.b298745190e.database.windows.net
Authentication: SQL Server Authentication
Login: my_user_id
Password: my_password
This recent RStudio article offers an easy way to connect to SQL Servers from RStudio using the following:
con <- DBI::dbConnect(odbc::odbc(),
                      Driver = "[your driver's name]",
                      Server = "[your server's path]",
                      Database = "[your database's name]",
                      UID = rstudioapi::askForPassword("Database user"),
                      PWD = rstudioapi::askForPassword("Database password"),
                      Port = 1433)
I have two questions:
What should I use as "[your driver's name]"?
What should I use as "[your database's name]"?
The server path I'll use is sqlmiprod.b298745190e.database.windows.net (from above) and I'll leave the port at 1433. If that's wrong please let me know.
Driver
From @Zaynul's comment and my own experience, the driver field is a text string with the name of the ODBC driver. This answer contains more details on this.
You probably want something like:
Driver = 'ODBC Driver 17 for SQL Server' (from @Zaynul's comment)
Driver = 'ODBC Driver 11 for SQL Server' (from my own context)
Database
The default database you want to connect to. Roughly equivalent to starting an SQL script with
USE my_database
GO
If all your work will be within a single database then put its name here.
In some contexts you should be able to leave this blank, but you then have to use the in_schema command to add the database name every time you connect to a table.
If you are working across multiple databases, I recommend putting the name of one database in, and then using the in_schema command to specify the database at every point of connection.
Example using the in_schema command (more details):
df = tbl(con, from = in_schema('database.schema', 'table'))
Though I had not tried it at first, if you do not have a schema then
df = tbl(con, from = in_schema('database', 'table'))
should also work (I've since been using this hack without issue for a while).
I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.
Can someone provide me a more details on the following:
1) What sys.path entries do I need to set?
2) What modules do I need to import before importing the "account" module? Currently when I import the "account" module I get the following error:
AssertionError: The report "report.custom" already exists!
3) What is the proper way to get my database cursor? In the code below I am simply calling psycopg2 directly to get a cursor.
If this approach cannot work, can anyone suggest an alternative approach other than writing XML files to load the data from the OpenERP application itself? This process needs to run outside of the standard OpenERP application.
PSEUDO CODE:
import sys
# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")
# import OpenERP
import openerp
# import the account addon modules that contains the tables
# to be populated.
import account
# psycopg2 is needed below for the direct database connection
import psycopg2
# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"
# get a db connection
conn = psycopg2.connect(conn_string2)
# conn.cursor() will return a cursor object
cursor = conn.cursor()
# and finally use the ORM to insert data into table.
If you want to do it via a web service then have a look at the OpenERP XML-RPC web services.
Example code to work with the OpenERP web services:
import xmlrpclib
username = 'admin' #the user
pwd = 'admin' #the password of the user
dbname = 'test' #the database
# OpenERP common login service proxy object
# (replace localhost with the address of the server)
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)
# OpenERP Object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
partner = {
    'name': 'Fabien Pinckaers',
    'lang': 'fr_FR',
}
#calling remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)
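To verify the insert, you can read the record back through the same object service; a small sketch using the standard read method (the fields list is illustrative):

#read back the newly created partner record
partner_data = sock.execute(dbname, uid, pwd, 'res.partner', 'read', [partner_id], ['name', 'lang'])
print partner_data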
Alternatively, you can use the OpenERP client library.
Example code with the client library:
import openerplib
connection = openerplib.get_connection(hostname="localhost", database="test",
                                       login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]
Both ways are good, but with the client library the code is shorter and easier to understand, while the XML-RPC proxy exposes the lower-level calls that you handle yourself.
Hope this will help you.
In my view, you should go for the XML-RPC or NETSVC services provided by OpenERP for such needs.
You don't need to import the account module of OpenERP; it is quite possible that other modules have inherited the account.tax object and altered its behaviour to suit your business needs.
Consequently, if you feed the data in manually without using the OpenERP web service, it's possible you'll get undesired results, unexpected failures, or an inconsistent database state.
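For example, here is a minimal sketch of loading one tax record through the XML-RPC object service instead of touching the tables directly; the database name comes from the question, the credentials are placeholders, and the account.tax field names ('name', 'amount') are illustrative and may differ by OpenERP version:

import xmlrpclib
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login('test2', 'admin', 'admin')
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
#create the tax through the ORM so any inherited behaviour still runs
tax = {'name': 'Sales Tax 7%', 'amount': 0.07}
tax_id = sock.execute('test2', uid, 'admin', 'account.tax', 'create', tax)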
You can use ERPpeek to browse data, but I'm not sure you can really upload data to the DB with it; personally I use/prefer XML-RPC.
Why don't you use the XML-RPC call of OpenERP? It will not need to import account or openerp, and you can still have all the ORM functionality.
You can use a Python library to access the OpenERP server using the XML-RPC service.
Please check https://github.com/OpenERP/openerp-client-lib
It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just import psycopg2 and:
conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
# use parameterized queries rather than string interpolation
cur.execute('select * from table where id = %s', (table_id,))
cur.execute('insert into table(column1, column2) values(%s, %s)', (value1, value2))
conn.commit()  # commit so the insert is persisted
cur.close()
conn.close()
Why do you want to do it like that? You should create a localization module and define the data in XML files. This is the standard way to solve such a problem in OpenERP.
Which country do you want to insert sales taxes for? Please explain more.
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    user = registry.get('res.users').browse(cr, userid, listids)
    print user