I am planning to store the hashed value of a password in a SQL Server database when a user signs up, and when the same user logs in, compare the user-entered password with the stored hash.
I am using the following piece of code to generate the hashed value of the password, and I want to insert that value into a database column of datatype varbinary(1000).
I have used the following code snippets to insert into the database, and both options have failed.
insert into users.dbo.allusers values (123456789,
b'\xc8\xc2\x06\x9f\x8e\x96\xad\xb3\x14r\x97Rm"\'\xfdbt\x03\xc81F\xc59\xd03\xcfXs\x88\xff\x95bg\x7f\xd1\xf6\xfc\x98\xe5x~c\x9eb\x91\x89\x80{\x14i0\x99f&\xa5\\e?\xf2\xbd\x06\xf7\xd0',
'a#a.com',
'a',
'b'
)
insert into users.dbo.allusers values (123456789,
convert(varbinary(1000), b'\xc8\xc2\x06\x9f\x8e\x96\xad\xb3\x14r\x97Rm"\'\xfdbt\x03\xc81F\xc59\xd03\xcfXs\x88\xff\x95bg\x7f\xd1\xf6\xfc\x98\xe5x~c\x9eb\x91\x89\x80{\x14i0\x99f&\xa5\\e?\xf2\xbd\x06\xf7\xd0', 1),
'a#a.com',
'a',
'b'
)
The error I am getting is:
SQL Error [102] [S0001]: Incorrect syntax near '\xc8\xc2\x06\x9f\x8e\x96\xad\xb3\x14r\x97Rm"'.
I am using Cloud SQL (a GCP product) with SQL Server 2017 Standard and the DBeaver client to insert the data. Any help is really appreciated.
Based on the comments, I am editing my question. I also used Python to insert data into SQL Server with the following Flask code:
def generate_password(password_value):
    salt = os.urandom(32)
    key = hashlib.pbkdf2_hmac('sha256', password_value.encode('utf-8'), salt, 100000)
    # Store them as:
    storage = salt + key
    return storage

@app.route('/add_new_user', methods = ['POST'])
def add_new_user():
    data = request.get_json(silent=True, force=True)
    cpf = data.get('cpf')
    password = data.get('password')
    email = data.get('email')
    fname = data.get('fname')
    lname = data.get('lname')
    password = generate_password(password)
    mssqlhost = '127.0.0.1'
    mssqluser = 'sqlserver'
    mssqlpass = 'sqlserver'
    mssqldb = 'users'
    try:
        # - [x] Establish Connection to db
        mssqlconn = pymssql.connect(
            mssqlhost, mssqluser, mssqlpass, mssqldb)
        print("Connection Established to MS SQL server.")
        cursor = mssqlconn.cursor()
        stmt = "insert into users.dbo.allusers (cpf, password, email, fname, lname) values (%s,%s,%s,%s,%s)"
        data = f'({cpf}, {password}, {email}, {fname}, {lname})'
        print(data)
        cursor.execute(stmt)
        mssqlconn.commit()
        mssqlconn.close()
        return {"success":"true"}
    except Exception as e:
        print(e)
        return {"success":"false"}
I get a different error in the command prompt:
more placeholders in sql than params available
because the data already has quotes due to the hash value (printed data):
(123456789, b'6\x17DnOP\xbb\xd0\xdbL\xb6"}\xda6M\x1dX\t\xdd\x12\xec\x059\xbb\xe1/\x1c|\xea\x038\xfd\r\xd1\xcbt\xd6Pe\xcd<W\n\x9f\x89\xd7J\xc1\xbb\xe1\xd0\xd2n\xa7j}\xf7\xf5:\xba0\xab\xbe', a#a.com, a, b)
A binary literal in T-SQL looks like 0x0A23..., so the insert becomes:
insert into dbo.allusers(cpf, password, email, fname, lname)
values
(
123456789,
0xC8C2069F8E96. . .,
'a#a.com',
'a',
'b'
)
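On the Python side, the complementary fix is to let the driver bind the hash as a parameter instead of formatting it into the SQL string. Below is a minimal sketch assuming pymssql and the allusers table from the question; the verify_password helper and the example values are hypothetical additions that illustrate the login-time comparison mentioned at the top:

import hashlib
import hmac
import os

import pymssql

def generate_password(password_value):
    # 32-byte random salt + PBKDF2-HMAC-SHA256 key, stored together as one binary blob
    salt = os.urandom(32)
    key = hashlib.pbkdf2_hmac('sha256', password_value.encode('utf-8'), salt, 100000)
    return salt + key

def verify_password(storage, password_value):
    # Split the stored blob back into salt and key, re-derive, and compare in constant time
    salt, key = storage[:32], storage[32:]
    candidate = hashlib.pbkdf2_hmac('sha256', password_value.encode('utf-8'), salt, 100000)
    return hmac.compare_digest(candidate, key)

conn = pymssql.connect('127.0.0.1', 'sqlserver', 'sqlserver', 'users')
cursor = conn.cursor()

stmt = ("insert into users.dbo.allusers (cpf, password, email, fname, lname) "
        "values (%s, %s, %s, %s, %s)")

# Pass the values as the second argument to execute(); pymssql binds the bytes
# object to the varbinary(1000) column, so no manual quoting or hex formatting
# is needed.
cursor.execute(stmt, (123456789, generate_password('secret'), 'a#a.com', 'a', 'b'))
conn.commit()

# At login, read the stored blob back and compare against the entered password.
cursor.execute("select password from users.dbo.allusers where cpf = %s", (123456789,))
row = cursor.fetchone()
print(verify_password(row[0], 'secret'))  # True only for the correct password

conn.close()

If you still want a literal for an ad-hoc insert in DBeaver, '0x' + storage.hex() (Python 3) produces the 0xC8C2... form shown in the answer above.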
Related
I want to execute a command using pyodbc in my Django app. When I do simple update with one column it works great:
cursor.execute("UPDATE dbo.Table SET attr = 1 WHERE id = {}".format(id))
However, when I try to use a string as a column value it throws an error:
cursor.execute("UPDATE dbo.Table SET attr = 1, user = '{}' WHERE id = {}".format(id, str(request.user.username)))
Here's the error message:
('42S22', "[42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'Admin'. (207) (SQLExecDirectW)")
Surprisingly, this method works:
cursor.execute("UPDATE dbo.Table SET attr = 1, user = 'Admin' WHERE id = {}".format(id))
What seems to be the problem? Why is SQL mistaking the column value for a column name?
As mentioned in the other answer, you have your arguments backwards, but if you're going to use cursor.execute(), the far more important thing to do is use positional parameters (%s). This will pass the SQL and values separately to the database backend, and protect you from SQL injection:
from django.db import connection

cursor = connection.cursor()
cursor.execute("""
    UPDATE dbo.Table
    SET attr = 1,
        user = %s
    WHERE id = %s
""", [
    request.user.username,
    id,
])
You've got your format arguments backwards. You're passing id to user, and username to the id WHERE clause.
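If you are executing through a raw pyodbc connection rather than Django's connection wrapper, note that pyodbc's placeholder is ? instead of %s. Here is a minimal hedged sketch; the connection string and values are made up for illustration:

import pyodbc

# Hypothetical connection string; substitute your own server, database and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=mydb;UID=me;PWD=secret"
)
cursor = conn.cursor()

username_value = "Admin"   # e.g. request.user.username in the view
row_id = 42

# pyodbc binds the values separately from the SQL text, so the username is
# treated as data rather than spliced into the statement. [user] is bracketed
# defensively because USER is a reserved word in T-SQL.
cursor.execute(
    "UPDATE dbo.Table SET attr = 1, [user] = ? WHERE id = ?",
    (username_value, row_id),
)
conn.commit()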
I am trying to insert my dataframe into a newly created table in Teradata. My connection and creating the table using SQLAlchemy work, but I am unable to insert the data. I keep getting the same error, that the schema columns do not exist.
Here is my code:
username = '..'
password = '..'
server = '...'
database = '..'
driver = 'Aster ODBC Driver'
engine_stmt = ("mssql+pyodbc://%s:%s@%s/%s?driver=%s" % (username, password, server, database, driver))
engine = sqlalchemy.create_engine(engine_stmt)
conn = engine.raw_connection()

# create table function
def create_sql_tbl_schema(conn):
    #tbl_cols_sql = gen_tbl_cols_sql(df)
    sql = "CREATE TABLE so_sandbox.mn_testCreation3 (A INTEGER NULL,B INTEGER NULL,C INTEGER NULL,D INTEGER NULL) DISTRIBUTE BY HASH (A) STORAGE ROW COMPRESS LOW;"
    cur = conn2.cursor()
    cur.execute('rollback')
    cur.execute(sql)
    cur.close()
    conn.commit()

create_mysql_tbl_schema(conn)  # this works and the table is created

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('abcd'))
df.to_sql('mn_testCreation3', con=engine,
          schema='so_sandbox', index=False, if_exists='append')  # this is giving me problems
The error message returned is:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [AsterData][nCluster] (34) ERROR: relation "INFORMATION_SCHEMA"."COLUMNS" does not exist. (34) (SQLPrepare)') [SQL: 'SELECT [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] \nFROM [INFORMATION_SCHEMA].[COLUMNS] \nWHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = ? AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = ?'] [parameters: ('mn_testCreation3', 'so_sandbox')] (Background on this error at: http://sqlalche.me/e/f405)
I am connecting to a database in MATLAB and running a SQL query against it. The issue I have is that the type being returned is a cell array and not a table. The code is below; I've omitted the specific details of my database.
% Clear the MATLAB workspace
clear
clc
% Run SQL Script
% Create an ODBC database connection to a Microsoft(R) SQL Server(R)
% database with Windows(R) authentication. Specify a blank user name and
% password.
% Selecting the database with the default datasource as "SQLMiniProject"
datasource = 'my_project';
username = 'username';
password = 'password';
%Connecting to the database
conn = database(datasource, username,password);
% files for queries
test_script = 'sql_test_script.sql';
results= runsqlscript(conn,'sql_test_script.sql');
close(conn);
What I am getting back from the above code is ...
Data: {15×2 cell}
RowLimit: 0
SQLQuery: 'select FIRST_NAME AS 'FirstName', LAST_NAME AS 'LastName' from TABLE_1'
Message: []
Type: 'ODBCCursor Object'
Statement: [1×1 database.internal.ODBCStatementHandle]
The Data is being returned as a cell array rather than the table I would expect. Does anyone have any guidance on this?
Many thanks in advance!
You can specify the output type by calling setdbprefs and specifying either cell or table. In your case you need to call:
setdbprefs('DataReturnFormat', 'table');
I'm getting an error on a simple statement through PG:
require 'pg'
conn = PG.connect( dbname: 'myDB' )
#res = conn.exec_params( 'SELECT count(id) FROM users WHERE username = $1 AND status = "active"', ['johnny5'] )
The error:
/Users/rich/app.rb:14:in `exec_params': ERROR: column "active" does not exist (PG::UndefinedColumn)
LINE 1: ...unt(id) FROM users WHERE username = $1 AND status = "active"
^
"active" is a field value, not a column.
My question: I have fixed this by passing the value "active" as another placeholder. Are quoted values in the SQL not permitted? I assumed that quoted parts of the SQL would have been fine.
String literals in SQL use single quotes; double quotes are for identifiers (such as table and column names). So, when you write "active", the database complains that there is no such column.
The solution is to use a placeholder:
#res = conn.exec_params(
%q{SELECT count(id) FROM users WHERE username = $1 AND status = $2},
['johnny5', 'active']
)
or use single quotes inside the SQL:
#res = conn.exec_params(
%q{SELECT count(id) FROM users WHERE username = $1 AND status = 'active'},
['johnny5']
)
Switching from '...' to %q{...} for your SQL string literal makes the internal quoting problems a bit easier to deal with.
I'm trying to insert data into a pre-existing PostgreSQL table using RPostgreSQL and I can't figure out the syntax for SQL parameters (prepared statements).
E.g. suppose I want to do the following
insert into mytable (a,b,c) values ($1,$2,$3)
How do I specify the parameters? dbSendQuery doesn't seem to understand if you just put the parameters in the ....
I've found dbWriteTable can be used to dump an entire table, but won't let you specify the columns (so no good for defaults etc.). And anyway, I'll need to know this for other queries once I get the data in there (so I suppose this isn't really insert specific)!
Sure I'm just missing something obvious...
I was looking for the same thing, for the same reason: security.
Apparently the dplyr package has the capability you are interested in. It's barely documented, but it's there. Scroll down to "Postgresql" in this vignette: http://cran.r-project.org/web/packages/dplyr/vignettes/databases.html
To summarize, dplyr offers the functions sql() and escape(), which can be combined to produce a parameterized query. The SQL() function from the DBI package seems to work in exactly the same way.
> sql(paste0('SELECT * FROM blaah WHERE id = ', escape('random "\'stuff')))
<SQL> SELECT * FROM blaah WHERE id = 'random "''stuff'
It returns an object of classes "sql" and "character", so you can either pass it on to tbl() or possibly dbSendQuery() as well.
The escape() function correctly handles vectors as well, which I find most useful:
> sql(paste0('SELECT * FROM blaah WHERE id in ', escape(1:5)))
<SQL> SELECT * FROM blaah WHERE id in (1, 2, 3, 4, 5)
Same naturally works with variables as well:
> tmp <- c("asd", 2, date())
> sql(paste0('SELECT * FROM blaah WHERE id in ', escape(tmp)))
<SQL> SELECT * FROM blaah WHERE id in ('asd', '2', 'Tue Nov 18 15:19:08 2014')
I feel much safer now putting together queries.
As of the latest RPostgreSQL it should work:
db_connection <- dbConnect(dbDriver("PostgreSQL"), dbname = database_name,
                           host = "localhost", port = database_port, password = database_user_password,
                           user = database_user)
qry = "insert into mytable (a,b,c) values ($1,$2,$3)"
dbSendQuery(db_connection, qry, c(1, "some string", "some string with | ' "))
Here's a version using the DBI and RPostgres packages, and inserting multiple rows at once, since all these years later it's still very difficult to figure out from the documentation.
x <- data.frame(
  a = c(1:10),
  b = letters[1:10],
  c = letters[11:20]
)

# insert your own connection info
con <- DBI::dbConnect(
  RPostgres::Postgres(),
  dbname = '',
  host = '',
  port = 5432,
  user = '',
  password = ''
)

RPostgres::dbSendQuery(
  con,
  "INSERT INTO mytable (a,b,c) VALUES ($1,$2,$3);",
  list(
    x$a,
    x$b,
    x$c
  )
)
The help for dbBind() in the DBI package is the only place that explains how to format parameters:
The placeholder format is currently not specified by DBI; in the
future, a uniform placeholder syntax may be supported. Consult the
backend documentation for the supported formats.... Known examples are:
? (positional matching in order of appearance) in RMySQL and RSQLite
$1 (positional matching by index) in RPostgres and RSQLite
:name and $name (named matching) in RSQLite
? is also the placeholder for the R package RJDBC.