Postgres could not determine data type of parameter $1 in Golang application - sql

I am creating an application in Golang that uses Postgres via the pq driver. I want to write a function that can select a user-determined field from my database, but I get an error:
pq: could not determine data type of parameter $1
Below is the code that generated this error:
var ifc interface{}
if err := conn.QueryRow("SELECT $1 FROM "+db+" WHERE uuid=$2 OR uri=$3 LIMIT 1", field, UUIDOrURI, UUIDOrURI).Scan(&ifc); err != nil {
    if err == sql.ErrNoRows {
        return http.StatusNotFound
    }
    log.Println(err)
    return http.StatusInternalServerError
}
Why can I not insert the field that I want to SELECT using $1? Is there another way to do this?

You cannot use placeholders for field names. You'll have to build the query directly, as in:
"SELECT \"" + field + "\" FROM "
To avoid SQL injection, make sure beforehand that the field is part of a list of allowed field names.
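For example, a minimal sketch of that pattern (the allowedFields map, the selectField helper, and the column names are illustrative, not from the original post; it assumes "database/sql" and "fmt" are imported):
// Allowlist of columns a caller is permitted to select.
var allowedFields = map[string]bool{
    "uuid": true,
    "uri":  true,
    "name": true,
}

func selectField(conn *sql.DB, db, field, uuidOrURI string) (interface{}, error) {
    if !allowedFields[field] {
        return nil, fmt.Errorf("field %q is not allowed", field)
    }
    // The identifier is interpolated only after it passes the allowlist check;
    // the values still go through placeholders. The table name (db) would need
    // the same kind of validation.
    query := `SELECT "` + field + `" FROM ` + db + ` WHERE uuid=$1 OR uri=$2 LIMIT 1`
    var value interface{}
    if err := conn.QueryRow(query, uuidOrURI, uuidOrURI).Scan(&value); err != nil {
        return nil, err
    }
    return value, nil
}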

IMHO an easier, though not safe, way to build SQL queries is fmt.Sprintf:
query := fmt.Sprintf("SELECT %s FROM %s WHERE uuid='%s'", field, db, UUIDOrURI)
if err := conn.QueryRow(query).Scan(&ifc); err != nil {
    // handle the error
}
You can even specify an argument index:
query := fmt.Sprintf("SELECT %[2]s FROM %[1]s", db, field)
To ease development, I recommend using a package for the PostgreSQL communication; I tried this one and it worked great.

Related

Extracting data from MariaDB using a raw Knex statement

Former Sequelize person here.
What is the best method to extract data from MariaDB using a raw Knex statement? Here are the two methods I have tried:
MariaDB:
SELECT JSON_OBJECT(
...........
) 'JSON_OBJECT'
JavaScript to interpret the results:
for (const tmp of result[0]) { console.log(JSON.parse(tmp.JSON_OBJECT)); }
and this MariaDB:
SELECT JSON_ARRAYAGG(
JSON_OBJECT(
..........
)
) 'JSON_ARRAYAGG'
JavaScript:
for (const tmp of JSON.parse(result[0][0]['JSON_ARRAYAGG'])) { console.log(tmp); }
Both of these methods work, but there may be a much cleaner way than using JSON.parse.
Suggestions?
Note: I certainly understand there is controversy around using raw - I have a number of very large SQL statements that were in a previous application, and I would rather (for now) just use them as-is rather than rewrite them in pure Knex.
EDIT: this may be "good enough", since it appears that knex.raw does not necessarily return objects.
node:
SELECT JSON_ARRAYAGG(
JSON_OBJECT(
..........
)
) 'JSON_ARRAYAGG'
.....................................
const result = await knex.raw( sqlStatement, sqlQuery);
return result[0][0].JSON_ARRAYAGG ;
browser:
(async () => { try { const result = await app.service('raw-service').find({query: sqlQuery}); console.log(JSON.parse(result)); } catch (e) { console.log(e); } })();
but at least the really ugly stuff is hidden away, inside of node.
SQL databases are designed to return rows of columns; any query where you route everything through JSON instead is not going to be "the best method" you ask for.
Your first try at least uses rows, so is better than the second.
That said, if you are just modifying existing code to work with Knex rather than rewriting it, whatever requires minimum changes will lead to fewer bugs and should be preferred.
To extract data from a MariaDB database using Knex, you can use the .select() method on a Knex query builder object. This method allows you to specify the columns that you want to retrieve from the database, as well as any conditions or filters using the .where() method.

Rails/SQL: combine where conditions for a Rails scope

I want to get the total count of items with language_id other than [30, 54], as well as items where language_id is nil.
LOCALES = {
  non_arabic_languages: {
    id: [30, 54]
  }
}
scope :non_arabic_languages, -> { where.not(language_id: LOCALES[:non_arabic_languages][:id]) || where(language_id: nil) }
This example predictably returns only the first part, so I only get non-Arabic items. && works incorrectly as well. How can I combine them? We'd be thankful for the advice!
You're falling into a common trap where you confuse logical operations in Ruby with actually creating SQL via the ActiveRecord query interface.
Using || will return the first truthy value:
where.not(language_id: LOCALES[:non_arabic_languages][:id]) || where(language_id: nil)
That is the ActiveRecord relation returned by where.not(language_id: LOCALES[:non_arabic_languages][:id]), since everything except false and nil is truthy in Ruby. The || where(language_id: nil) part is never actually evaluated.
Support for .or was added in Rails 5. In previous versions the most straightforward solution is to use Arel or a SQL string:
scope :non_arabic_languages, -> {
  non_arabic_languages_ids = LOCALES[:non_arabic_languages][:id]
  where(arel_table[:language_id].not_in(non_arabic_languages_ids).or(arel_table[:language_id].eq(nil)))
}
I would leave a tagging comment (like #fixme) so that you fix this after upgrading to 5.0, or consider a better solution that doesn't involve hardcoding database IDs in the first place, for example using a natural key.

How to make safe SELECT from sqlite3 DB in Python passing table name as parameter?

Hi folks.
I'm trying to figure out how to safely pass a table name as a parameter in an SQL request to a sqlite3 DB using aiosqlite in Python 3.9.
A request like this:
t = '2c2c33d6-6eb2-4040-959f-08821942e1af'
cursor = await dbconn.execute("""SELECT obstmstmp FROM '{}' ORDER BY obstmstmp DESC LIMIT 1;""".format(t))
obstmstmp = (await cursor.fetchone())[0]
does work, but may be prone to SQL injection. Even though in this particular case the UUIDs come from the DB itself and not from a user, I'm still wondering how to safely pass a table name as a parameter.
I have tried something like this:
t = '2c2c33d6-6eb2-4040-959f-08821942e1af'
cursor = await dbconn.execute("""SELECT obstmstmp FROM '?' ORDER BY obstmstmp DESC LIMIT 1;""", (t, ))
obstmstmp = (await cursor.fetchone())[0]
but it seems that the ? placeholder is not recognized as a table name, resulting in a no such table: ? exception.
Any ideas?
As you've seen, you can't dynamically bind object names (a table name in this case), only values, so you'll probably need to resort to some string concatenation.
One approach could be to try and sanitize the table name yourself:
if "'" in t:
raise ValueError('hacking attempt')
A slightly stronger approach may be to check that the input is a valid UUID:
from uuid import UUID
UUID(t) # Will raise a ValueError if t isn't a valid UUID

How to correctly pass arguments to an SQL request via the sqlx package in Golang?

In my Golang (1.15) application I use the sqlx package to work with a PostgreSQL database (PostgreSQL 12.5).
When I try to execute an SQL statement with arguments, the PostgreSQL database raises an error:
ERROR: could not determine data type of parameter $1 (SQLSTATE 42P18):
PgError null
According to the official documentation, this error means that an INDETERMINATE DATATYPE was passed.
The organizationId has a value. It's not null/nil or empty. Also, its data type is a simple built-in type, *string.
Code snippet with Query method:
rows, err := cr.db.Query(`
    select
        channels.channel_id::text,
        channels.channel_name::text
    from
        channels
    left join organizations on
        channels.organization_id = organizations.organization_id
    where
        organizations.tree_organization_id like concat( '%', '\', $1, '%' );`, *organizationId)
if err != nil {
    fmt.Println(err)
}
I also tried to use NamedQuery, but it also raises an error:
ERROR: syntax error at or near ":" (SQLSTATE 42601): PgError null
Code snippet with NamedQuery method:
args := map[string]interface{}{"organization_id": *organizationId}
rows, err := cr.db.NamedQuery(`
    select
        channels.channel_id::text,
        channels.channel_name::text
    from
        channels
    left join organizations on
        channels.organization_id = organizations.organization_id
    where
        organizations.tree_organization_id like concat( '%', '\', :organization_id, '%' );`, args)
if err != nil {
    fmt.Println(err)
}
In all likelihood, the arguments are not being passed correctly to my request. Can someone explain how to fix this strange behavior?
P.S. I must say right away that I do not want to form the SQL query through concatenation or through fmt.Sprintf. It's not safe.
Well, I found the solution to this problem.
I found a discussion in the GitHub repository of the sqlx package.
The first option is to do the concatenation of the search string outside of the query. This should still be safe from injection attacks.
The second option is to write concat( '%', '\', $1::text, '%' ). As Bjarni Ragnarsson said in the comment, PostgreSQL cannot deduce the type of $1 without the cast.
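For illustration, here is a minimal sketch of the first option applied to the query from the question (cr.db and organizationId come from the question; the rest simply mirrors it and is an assumption, not an authoritative fix):
// Build the search pattern in Go and pass it as a single value,
// so the parameter's type is unambiguous.
pattern := `%\` + *organizationId + `%`
rows, err := cr.db.Query(`
    select
        channels.channel_id::text,
        channels.channel_name::text
    from
        channels
    left join organizations on
        channels.organization_id = organizations.organization_id
    where
        organizations.tree_organization_id like $1;`, pattern)
if err != nil {
    fmt.Println(err)
}
The second option keeps the original query and only changes the where clause to like concat( '%', '\', $1::text, '%' ), so that the cast tells PostgreSQL the parameter's type.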

SQL table name as variable to query

I am using the pgx library to populate a Postgres database in Go.
Following e.g. the tutorial here and this question, I construct my query like so:
// this works
tblinsert = `INSERT into tablename (id, body) VALUES ($1, $2) RETURNING id`
var id string
err := Connector.QueryRow(context.Background(), tblinsert, "value1", "value2").Scan(&id)
Question: I would like to supply the tablename as a variable to the query as well, e.g. change tblinsert to INSERT into $1 (id, body) VALUES ($2, $3)
Issue: the above code errors out and returns a "syntax error at or near "$1" (SQLSTATE 42601)" when I run:
//this errors out
err := Connector.QueryRow(context.Background(), tblinsert, "tablename", "value1", "value2").Scan(&id)
I do not fully understand why the error message even references the $1 placeholder - I expected the query to do the string substitution here, just as it does for the VALUES.
I found similar questions in pure SQL here and here, so I am not sure if this is even possible.
Any pointers on where I am going wrong, or where I can learn more about the $x syntax (I got the above to work using Sprintf, but this appears discouraged), are much appreciated - I am pretty new to both SQL and Go.
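No answer is recorded here, but the first answer above applies: $1-style placeholders only ever stand in for values, never for identifiers, so the table name has to be validated and then spliced into the query string. A minimal sketch of that approach with pgx (the allowedTables map and the insertRow helper are illustrative, not from the original post; it assumes "context", "fmt", and the pgx package are imported; pgx.Identifier is used to double-quote the name):
// Hypothetical allowlist of tables this code is allowed to insert into.
var allowedTables = map[string]bool{"tablename": true}

func insertRow(ctx context.Context, conn *pgx.Conn, table, id, body string) (string, error) {
    if !allowedTables[table] {
        return "", fmt.Errorf("table %q is not allowed", table)
    }
    // The quoted table name is interpolated; the values still use placeholders.
    query := fmt.Sprintf(`INSERT INTO %s (id, body) VALUES ($1, $2) RETURNING id`,
        pgx.Identifier{table}.Sanitize())
    var returnedID string
    err := conn.QueryRow(ctx, query, id, body).Scan(&returnedID)
    return returnedID, err
}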