This should be stupidly simple to answer, but for the life of me I cannot find a definitive answer on this.
Can you use "?" in postgres, like you can in other database engines?
For example:
SELECT * FROM MyTable WHERE MyField = ?
I know I can use the $n syntax for this, for example from psql this works:
CREATE TABLE dummy (id SERIAL PRIMARY KEY, value INT);
PREPARE bar(int) AS INSERT INTO dummy (value) VALUES ($1);
EXECUTE bar(10);
SELECT * FROM DUMMY;
But if I try to prepare a statement using "?", e.g.
PREPARE bar(int) AS INSERT INTO dummy (value) VALUES (?);
I get:
ERROR: syntax error at or near ")"
LINE 1: PREPARE bar(int) AS INSERT INTO dummy (value) VALUES (?);
...and yet, in various places I read that "postgres supports the ? syntax".
What's going on here? Does postgres support using ? instead of $1, $2, etc.?
If so, how do you use it?
Specifically, this is making my life a pain porting a bunch of existing SQL Server queries to postgres, and if I can avoid having to rewrite all the where conditions and all of the sql statements, that would be very, very nice.
SQL-level PREPARE in PostgreSQL does not support the ? placeholder; it uses the $1 ... $n style.
Most client libraries support the standard placeholders used by that language in parameterized queries; e.g. PgJDBC uses ? placeholders.
If you're sending your queries via a client library like Npgsql, psqlODBC, PgJDBC, psycopg2, etc., then you should be able to use the usual placeholders for that language and client.
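For example, with psycopg2 the placeholder is %s for every column type, and the driver passes the value to the server separately from the SQL text. A minimal sketch (the connection details are made up):

import psycopg2

# Made-up connection string for illustration only.
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()
# %s is psycopg2's placeholder regardless of column type; the driver sends
# the value separately, so nothing needs quoting or escaping by hand.
cur.execute("SELECT * FROM dummy WHERE value = %s", (10,))
print(cur.fetchall())
cur.close()
conn.close()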
I have a task to check and prove whether there is a way to prevent SQL injection without access to the database first, so no parameterized statements.
This basically means:
Is there a way to parse an SQL statement as a string, using any kind of tool or framework, that would prove that SQL has been injected into it?
Are any techniques available?
At first I had an idea to check if SQL matches a certain pattern like this:
Let somewhere be any kind of string that the user can type in.
This is my SQL:
SELECT Id FROM somewhere
This statement has a pattern that looks like this:
SELECT SOME_VALUE FROM SOME_TABLE
Then let's say the user wrote someTable WHERE 1=1; into the somewhere variable (I know it's not the smartest of SQL injections).
But the point is now I have a statement that looks like this:
SELECT Id FROM someTable WHERE 1=1;
Which effectively gives us a statement that has a pattern like this:
SELECT SOME_VALUE FROM SOME_TABLE WHERE SOME_CONDITION
Which does not match the initial pattern:
SELECT SOME_VALUE FROM SOME_TABLE
Is that a correct way to check whether SQL has been injected? I haven't found any tools that actually use this technique, or any technique other than parameters (which require a connection to the database). Don't worry, I know that parameters are the way to go; this task is about having no connection to the database.
One idea would be to block certain keywords and symbols like 'WHERE', 'AND', 'OR', ';', and '='.
But this is just a naive approach, and for production you should use parameterized statements.
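For illustration only, here is a minimal sketch of that naive keyword/symbol blocklist in Python; it is trivially bypassed (comments, alternative syntax, encodings) and is no substitute for parameterized statements:

import re

# Naive blocklist: flag input containing suspicious keywords or symbols.
# Shown only to illustrate the idea above, not as real protection.
BLOCKED = re.compile(r"\bWHERE\b|\bAND\b|\bOR\b|;|=|--", re.IGNORECASE)

def looks_injected(user_input):
    return bool(BLOCKED.search(user_input))

print(looks_injected("someTable"))             # False
print(looks_injected("someTable WHERE 1=1;"))  # True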
I've researched around the forums about PostgreSQL injection in Go and found some useful information on SQL injection, like the posts below:
How to execute an IN lookup in SQL using Golang?
How can I prevent SQL injection attacks in Go while using "database/sql"?
but I still need some advice, because my Go code uses different patterns and use cases.
Some use cases/questions I need advice on are:
Looping a query to build a multi-row insert, like
INSERT INTO a (a1,a2,a3) VALUES (%d,%d,%s) using fmt.Sprintf. I know using Sprintf is bad, so is there any solution for this looped insert query?
Ex: INSERT INTO a (a1,a2,a3) VALUES (%d,%d,%s),(%d,%d,%s),(%d,%d,%s)
Is it safe to use fmt.Sprintf to generate the query if the param uses %d instead of %s?
Using a prepared statement and Query is safe, but what if I'm using the Select function (with $1, $2) or the NamedQuery function (with named struct parameters)?
Ex: Select * from a where text = $1 -> is using this $1 safe?
and
Ex: Select * from a where text = :text -> is this safe in NamedQuery?
Kindly need your advice guys. Thank you!
Firstly, you should usually prefer the database placeholders (?, $1, etc.).
Yes, it is safe to use fmt.Sprintf with integer parameters to build SQL, though it's worth avoiding if you can; but your third parameter is %s, so avoid that and use a placeholder instead (see the sketch after the rules below).
Yes, it is safe to use fmt.Sprintf with integer parameters, but %s or %v is far more risky and I'd avoid it; I can't think why you'd need it.
Use placeholders there, and then yes, it is safe.
General rules:
Use placeholders by default; it should be rare to need %d (as in your IN query, for example)
Parse params into types like integer before any validation or use
Avoid string concatenation if you can, and be particularly wary of string params
Always hard-code things like column and table names; never generate them from user input (e.g. ?sort=mystringcolname)
Always validate that the params you get are only those authorised for that user
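To make the first question concrete, here is the idea sketched in Python (this thread is about Go, but the same pattern carries straight over to database/sql): generate one group of $n placeholders per row and collect the values in a flat args list, so the driver does the quoting rather than fmt.Sprintf.

rows = [(1, 2, "x"), (3, 4, "y"), (5, 6, "z")]

groups = []
args = []
for i, (a1, a2, a3) in enumerate(rows):
    base = i * 3
    groups.append("(${}, ${}, ${})".format(base + 1, base + 2, base + 3))
    args.extend([a1, a2, a3])

query = "INSERT INTO a (a1, a2, a3) VALUES " + ", ".join(groups)
print(query)  # INSERT INTO a (a1, a2, a3) VALUES ($1, $2, $3), ($4, $5, $6), ($7, $8, $9)
print(args)   # [1, 2, 'x', 3, 4, 'y', 5, 6, 'z']
# In Go, the query and args would then go to db.Exec(query, args...).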
I'm attempting to write an extension for SQL Developer to better support Postgres. These are just XML configuration files with SQL snippets in them. To display the values for a postgres sequence, I need to run a simple query of the following form:
select * from schema.sequence
The trouble with this is that the Oracle SQL Developer environment provides the correct schema and node (sequence) name as bind variables. This would mean that I should format the query as:
select * from :SCHEMA.:NAME
The trouble is that bind variables are only valid in the SELECT list or the WHERE clause (as far as I'm aware), and using this form of the query returns a 'syntax error at or near "$1"' error.
Is there a way to return the values in the sequence object without directly selecting them from the sequence? Perhaps some obtuse joined statement from pg_catalog tables?
Try this:
select *
from information_schema.sequences
where sequence_name = :name
and sequence_schema = :schema;
It's not exactly the same thing as doing a select from the sequence, but the basic information is there.
I have to perform lexical analysis on an Oracle query and separate the query into various parts (based on the clauses) in Perl. For example, consider:
Select deleteddate,deletedby from temptable where id = 10;
I need to print
select : deleteddate , deletedby
from : temptable
where : id = 10
I used this code snippet:
use SQL::Statement;

my $parser = SQL::Parser->new();
$parser->{PrintError} = 1;
my $query = SQL::Statement->new("select deleteddate,deletedby from temptable where id = 10", $parser);
my @columns = $query->columns();
print $columns[0]->name();
Though this prints deleteddate, it fails when I give a subquery inside the select clause:
Select deleteddate,deletedby,(select 1+1 from dual) from temptable where id = 10;
Can you please point me in the correct direction.
Thanks.
It looks to be a limitation of that package; it seems to be a general purpose parser and not something that can understand advanced features like subqueries or Oracle-specific constructs like "from dual".
What are the constraints of your system? If Python is an option, this looks like a more fully featured library:
http://code.google.com/p/python-sqlparse/
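For instance, a minimal sketch with that package (import name sqlparse); note how the subquery stays grouped with the select list instead of breaking the parse, and the WHERE clause comes back as a single grouped token, which makes clause-by-clause splitting feasible:

import sqlparse

sql = ("select deleteddate, deletedby, (select 1+1 from dual) "
       "from temptable where id = 10")

# sqlparse.parse() returns one Statement object per statement in the input.
stmt = sqlparse.parse(sql)[0]

# Walk the top-level tokens; whitespace-only tokens are skipped for readability.
for token in stmt.tokens:
    text = str(token).strip()
    if text:
        print(token.ttype, repr(text))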
The other option would be to use an actual Oracle database, if that's available. You would:
use the DBI and DBD::Oracle modules to create a connection to Oracle and get a database handle,
create a statement handle by calling prepare() on the database handle using your query,
execute the query (there may be an option in Oracle to execute in "test only" or "parse only" mode),
examine the statement handle (such as the NAME_hash property) to get the column names.
Otherwise it seems the SQL::Statement module unfortunately just isn't up to the task...
I need to upload a lot of data to a MySQL db. For most models I use Django's ORM, but one of my models will have billions (!) of instances and I would like to optimize its insert operation.
I can't seem to find a way to make executemany() work, and after googling it seems there are almost no examples out there.
I'm looking for the correct sql syntax + correct command syntax + correct values data structure to support an executemany command for the following sql statement:
INSERT INTO `some_table` (`int_column1`, `float_column2`, `string_column3`, `datetime_column4`) VALUES (%d, %f, %s, %s)
Yes, I'm explicitly stating the id (int_column1) for efficiency.
A short code example would be great.
Here's a solution that actually uses executemany()!
Basically the idea in the example here will work.
But note that in Django, you need to use the %s placeholder rather than the question mark.
Also, you will want to manage your transactions. I'll not get into that here as there is plenty of documentation available.
from django.db import connection, transaction

cursor = connection.cursor()
query = '''INSERT INTO table_name
           (var1, var2, var3)
           VALUES (%s, %s, %s)'''

# build_query_list() represents some function that populates the list
# with multiple records as tuples of the form (value1, value2, value3).
query_list = build_query_list()

cursor.executemany(query, query_list)
transaction.commit()
Are you seriously suggesting loading billions of rows (sorry, instances) of data via some ORM data access layer? How long do you have?
bulk load if possible - http://dev.mysql.com/doc/refman/5.1/en/load-data.html
If you need to modify the data, bulk load with LOAD DATA into a temporary table as-is, then apply the modifications with an INSERT INTO ... SELECT. In my experience, this is by far the fastest way to get a lot of data into a table.
I'm not sure how to use the executemany() command, but you can use a single SQL INSERT statement to insert multiple records:
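For example, here is a sketch of the single-statement version of the same insert (column names taken from the question), still using %s placeholders so the driver handles the escaping:

from django.db import connection

rows = [(1, 2.5, 'a', '2013-01-01 00:00:00'),
        (2, 3.5, 'b', '2013-01-02 00:00:00')]

# One "(%s, %s, %s, %s)" group per row, plus a flat parameter list to match.
placeholders = ", ".join(["(%s, %s, %s, %s)"] * len(rows))
sql = ("INSERT INTO some_table "
       "(int_column1, float_column2, string_column3, datetime_column4) "
       "VALUES " + placeholders)
params = [value for row in rows for value in row]

cursor = connection.cursor()
cursor.execute(sql, params)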