Why is QueryRow().Scan() returning an empty string when it is not empty in table? - sql

I am trying to Query a single row from a PostgreSQL database table.
func getPrefix(serverID int64, db *sql.DB) string {
    var prefix string
    err := db.QueryRow("SELECT prefix FROM servers WHERE serverid = 1234").Scan(&prefix)
    if err != nil {
        fmt.Println(err.Error())
    }
    spew.Dump(prefix)
    fmt.Println("Prefix is " + prefix)
    return prefix
}
Apparently, the variable prefix ends up as an empty string, but when I query the table directly, the value is not empty:
You are now connected to database "mewbot" as user "postgres".
mewbot=# select * from servers;
 serverid | prefix
----------+--------
     1234 | ;
(1 row)
mewbot=#
My question is: why is it returning an empty string when it should be ;?
I've checked everything I can think of; I've made sure I'm connected to the same database, etc.

Apparently it was not working because SSL mode was disabled on my server but I was trying to connect without specifying it was disabled.
Changing postgres://postgres:7890@localhost:5432/mewbot to postgres://postgres:7890@localhost:5432/mewbot?sslmode=disable solved my issue.
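For anyone hitting this from Go, here is a minimal sketch of what the working connection might look like with database/sql and lib/pq (the helper function name is mine; the credentials come from the question). Calling Ping makes the connection error show up immediately instead of every later query quietly failing:
import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func openDB() *sql.DB {
    // sslmode=disable matches a server with SSL turned off; adjust to your setup.
    db, err := sql.Open("postgres", "postgres://postgres:7890@localhost:5432/mewbot?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    // sql.Open does not actually connect; Ping forces a connection and surfaces
    // SSL/authentication problems up front.
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
    return db
}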

Related

GORM not returning results, but when I run the SQL query in my database client, records come back

My GORM query looks a little like this:
selectCallSQL = "SELECT * from callautomation_schedule WHERE id = ?"
testSelect = "SELECT * FROM callautomation_schedule WHERE next_planned_call > date_trunc('minute', now())"

func SelectCall(id int) *CallSchedule {
    var result CallSchedule
    connection.Raw(selectCallSQL, id).Scan(&result)
    return &result
}

func SelectCall2() *CallSchedule {
    var result CallSchedule
    connection.Raw(testSelect).Scan(&result)
    return &result
}
The first function returns a result as expected, however, the second function does not.
If I run the testSelect SQL in my database client, I do get a result. Why is this happening?
The issue I discovered was with my connection string and column setup. I was using TIMESTAMP WITHOUT TIME ZONE in my table schema, but my connection string was connecting with an Asia time zone, so the stored timestamps and now() were being compared in different zones.
Annoying bug, but fixed now!
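If you run into the same mismatch, one way to pin the session time zone is in the DSN. A minimal sketch, assuming GORM v2 with the postgres driver (host, user and dbname are placeholders):
import (
    "log"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

func openConnection() *gorm.DB {
    // TimeZone should match the zone the TIMESTAMP WITHOUT TIME ZONE values were written in.
    dsn := "host=localhost user=postgres dbname=callautomation sslmode=disable TimeZone=UTC"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        log.Fatal(err)
    }
    return db
}
With the connection and the stored values in the same time zone, the date_trunc('minute', now()) comparison behaves the same way from the application as it does in your database client.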

PostgreSQL import from CSV NULL values are text - Need null

I had exported a bunch of tables (>30) as CSV files from a MySQL database using phpMyAdmin. These CSV files contain NULL values like:
"id","sourceType","name","website","location"
"1","non-commercial","John Doe",NULL,"California"
I imported many such CSV files into a PostgreSQL database with TablePlus. However, the NULL values in the columns are appearing as text rather than null.
When my application fetches the data from these columns it actually retrieves the text 'NULL' rather than a null value.
SQL queries using IS NULL also do not retrieve these rows, presumably because the values are stored as the text 'NULL' rather than as actual null values.
Is there a SQL command I can do to convert all text NULL values in all the tables to actual NULL values? This would be the easiest way to avoid re-importing all the tables.
PostgreSQL's COPY command has the NULL 'some_string' option that allows you to specify any string as the NULL value: https://www.postgresql.org/docs/current/sql-copy.html
This would of course require re-importing all your tables.
Example with your data:
The CSV:
"id","sourceType","name","website","location"
"1","non-commercial","John Doe",NULL,"California"
"2","non-commercial","John Doe",NULL,"California"
The table:
CREATE TABLE import_with_null (id integer, source_type varchar(50), name varchar(50), website varchar(50), location varchar(50));
The COPY statement:
COPY import_with_null (id, source_type, name, website, location) from '/tmp/import_with_NULL.csv' WITH (FORMAT CSV, NULL 'NULL', HEADER);
Test of the correct import of NULL strings as SQL NULL:
SELECT * FROM import_with_null WHERE website IS NULL;
 id |  source_type   |   name   | website |  location
----+----------------+----------+---------+------------
  1 | non-commercial | John Doe |         | California
  2 | non-commercial | John Doe |         | California
(2 rows)
The important part that transforms NULL strings into SQL NULL values is NULL 'NULL'; the value could be any other string, e.g. NULL 'whatever string'.
UPDATE For whoever comes here looking for a solution
See answers for two potential solutions
One of the solutions uses the SQL COPY method and has to be applied as part of the import itself. That solution, provided by Michal T and marked as the accepted answer, is the better way to prevent this from happening in the first place.
My solution below uses a script in my application (Built in Laravel/PHP) which can be done after the import is already done.
Note: see the comments in the code; you could potentially figure out a similar solution in other languages/frameworks.
Thanks to @BjarniRagnarsson's suggestion in the comments above, I came up with a short PHP Laravel script to perform update queries on all columns (of type 'string' or 'text') to replace the 'NULL' text with NULL values.
public function convertNULLStringToNULL()
{
    $tables = DB::connection()->getDoctrineSchemaManager()->listTableNames(); // Get list of all tables
    $results = []; // an array to store the output results
    foreach ($tables as $table) { // Loop through each table
        $columnNames = DB::getSchemaBuilder()->getColumnListing($table); // Get list of all columns
        $columnResults = []; // array to store the results per column
        foreach ($columnNames as $column) { // Loop through each column
            $columnType = DB::getSchemaBuilder()->getColumnType($table, $column); // Get the column type
            if (
                $columnType == 'string' || // check if column type is string or text
                $columnType == 'text'
            ) {
                $query = "update " . $table . " set \"" . $column . "\"=NULL where \"" . $column . "\"='NULL'"; // Build the update query as mentioned in comments above
                $r = DB::update($query); // perform the update query
                array_push($columnResults, [
                    $column => $r
                ]); // Push the column results
            }
        }
        array_push($results, [
            $table => $columnResults
        ]); // push the table results
    }
    dd($results); // Output the results
}
Note I was using Laravel 8 for this.
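For anyone not on Laravel, the same idea can be sketched in plain Go with database/sql and lib/pq: walk information_schema for the text-like columns of the public schema and rewrite the literal text 'NULL'. The connection string and schema are placeholders, and the sketch assumes no column legitimately stores the word 'NULL':
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open("postgres", "postgres://postgres:secret@localhost:5432/mydb?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Collect every varchar/text column in the public schema.
    rows, err := db.Query(`
        SELECT table_name, column_name
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND data_type IN ('character varying', 'text')`)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    type col struct{ table, name string }
    var cols []col
    for rows.Next() {
        var c col
        if err := rows.Scan(&c.table, &c.name); err != nil {
            log.Fatal(err)
        }
        cols = append(cols, c)
    }

    // Replace the literal text 'NULL' with a real NULL, column by column.
    for _, c := range cols {
        query := fmt.Sprintf(`UPDATE %q SET %q = NULL WHERE %q = 'NULL'`, c.table, c.name, c.name)
        if _, err := db.Exec(query); err != nil {
            log.Fatal(err)
        }
        fmt.Println("updated", c.table, c.name)
    }
}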

Export BigQuery temporary table into multiple files based on value in column

I have the following problem.
Starting from a query like this:
SELECT lp_id, MOD(ABS(FARM_FINGERPRINT(lp_id)), 10) AS bucket FROM dataset.table
I run the query and save the result as CSV in Google Cloud Storage.
// defined in scope:
// ctx context.Context
// bucket string
// folderName string
// queryString string
query := bqClient.Query(queryString)
job, err := query.Run(ctx)
conf, err := job.Config()
table := conf.(*bigquery.QueryConfig).Dst
gcsURI := fmt.Sprintf("gs://%s/%s/*.%s", bucket, folderName, "csv")
gcsRef := bigquery.NewGCSReference(gcsURI)
gcsRef.FieldDelimiter = ","
extractor := table.ExtractTo(gcsRef)
// run job...
What I want to do is split the result into multiple files based on the bucket each user falls into (users from bucket n go into file {{ n_filename }}), within a single job, to avoid increasing the data processing costs.
Is it possible?
Thanks for your help.

My ASP.NET Website is Attacked With SQL Injection

A hacker reached my database's User list and other tables.
First of all, I use parameterized commands in all of my transactions, by using
command.Parameters.Add("@Parameter1", SqlDbType.NVarChar).Value
All transactions are stored procedures.
I insert every single site navigation into the database. The relevant database table is as follows:
ID int (PK)
UserID int (null)
URL nvarchar(500)
IPAddress nvarchar(25)
CreatedAt datetime
The project fills UserID from the session if the user is logged in; otherwise it is null.
CreatedAt is DateTime.UtcNow.
The IPAddress value comes from the following code:
public static string GetIPAddress(HttpContext context)
{
    string ipAddress = context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
    if (!string.IsNullOrEmpty(ipAddress))
    {
        string[] addresses = ipAddress.Split(',');
        if (addresses.Length != 0)
            return addresses[0];
    }
    return context.Request.ServerVariables["HTTP_CLIENT_IP"] ?? context.Request.ServerVariables["REMOTE_ADDR"];
}
The URL column, however, is filled from the current request URL including the full query string (Request.RawUrl).
Normally, when a user visits the site, a log row is inserted into the database as described above, and the records look normal. Example data looks like this:
ID  UserID  URL                     IPAddress    CreatedAt
1   NULL    /User                   1.22.33.444  2019-12-12 16:22:33.441
2   NULL    /User/MyOrders          1.22.33.444  2019-12-12 16:24:33.441
3   NULL    /User?utm_source=email  1.22.33.444  2019-12-12 16:29:33.441
The hacker somehow inserted a record into database as follows:
ID  UserID  URL                     IPAddress                  CreatedAt
4   NULL    /User                   (select(0)from(select(sle  2019-12-12 17:22:33.441
5   NULL    /User/MyOrders          -1; waitfor delay '0:0:9'  2019-12-12 17:24:33.441
6   NULL    /User?utm_source=email  prvNA0R6'; waitfor delay   2019-12-12 17:29:33.441
7   NULL    /User?utm_source=email  -1' OR 2+198-198-1=0+0+0+  2019-12-12 17:29:33.441
As you can see, the IPAddress column contains the SQL injection payloads. The IPAddress field is restricted to 25 characters, so the rest of the injected SQL text was truncated.
My assumption is that the attacker is reading database records via SQL injection by supplying SQL scripts in place of the URL or IPAddress values.
Any idea how the attacker reached my database and how to prevent this kind of attack from now on?
EDIT
Stored procedure is as follows:
create procedure SP_InsertLogNavigation
    @URL nvarchar(150),
    @UserID int,
    @IPAddress nvarchar(25),
    @CreatedAt datetime
as
insert into LogNavigation (URL, UserID, IPAddress, CreatedAt)
values (@URL, @UserID, @IPAddress, @CreatedAt)
Usage of the stored procedure is as follows:
public bool Save(LogNavigation logNavigation)
{
    int affectedRows = 0;
    InitializeSqlFields("SP_InsertLogNavigation");
    command.Parameters.Add("@URL", SqlDbType.NVarChar).Value = logNavigation.URL;
    command.Parameters.Add("@UserID", SqlDbType.Int).Value = Validation.IsNull(logNavigation.UserID);
    command.Parameters.Add("@IPAddress", SqlDbType.NVarChar).Value = logNavigation.IPAddress;
    command.Parameters.Add("@CreatedAt", SqlDbType.DateTime).Value = logNavigation.CreatedAt;
    try
    {
        Connect();
        affectedRows = command.ExecuteNonQuery();
    }
    catch (SqlException)
    {
    }
    finally
    {
        Disconnect();
    }
    return affectedRows != 0;
}
So I would assert that you actually have not succumbed to the SQL injection attack. If you are using only parameterised queries then the attacker has tried to gain access but failed.
However, the reason why your table has their attack attempts lodged is to do with these lines of code:
string ipAddress = context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
return context.Request.ServerVariables["HTTP_CLIENT_IP"] ?? context.Request.ServerVariables["REMOTE_ADDR"];
You must understand that the client has almost total control over the headers submitted to your website. The attacker can modify the headers to be whichever values they desire.
These parameters are supplied by the client in their request:
HTTP_X_FORWARDED_FOR
REMOTE_ADDR
HTTP_CLIENT_IP
In your case, the attacker has provided spoofed headers that contain SQL Injection code, which you have faithfully placed into your database in the IP Address column.
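As an illustration (written in Go here, but any HTTP client or a curl invocation can do the same), sending a spoofed X-Forwarded-For value takes only a couple of lines; the host is a placeholder and the payload mirrors the rows in your table:
import (
    "log"
    "net/http"
)

func sendSpoofedRequest() {
    req, err := http.NewRequest("GET", "https://example.com/User/MyOrders", nil)
    if err != nil {
        log.Fatal(err)
    }
    // The "IP address" your logging code reads is just this header, so it can say anything.
    req.Header.Set("X-Forwarded-For", "-1; waitfor delay '0:0:9' --")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    resp.Body.Close()
}
If you still want to record a best-effort client IP, validate the header value (for example with net.ParseIP) and fall back to the connection's remote address when it does not parse as an address.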
Edit following OP query in comments
OP asked:
Excellent, but I have only one question. How did she/he pass more than
25 characters to my server side?
Request headers have no specified size limit, although there are practical limits applied by various implementations (e.g. 8 KB in Apache). The client can send a request header of any length up to whatever your web server software allows.
However, as your SP is configured with a parameter whose maximum length is 25 characters, the overflowing text is being truncated when persisted to the database.

"Operator does not exist: integer =?" when using Postgres

I have a simple SQL query called within the QueryRow method provided by go's database/sql package.
import (
    "github.com/codegangsta/martini"
    "github.com/martini-contrib/render"
    "net/http"
    "database/sql"
    "fmt"
    _ "github.com/lib/pq"
)
type User struct {
    Name string
}

func Show(db *sql.DB, params martini.Params) {
    id := params["id"]
    row := db.QueryRow(
        "SELECT name FROM users WHERE id=?", id)
    u := User{}
    err := row.Scan(&u.Name)
    fmt.Println(err)
}
However, I'm getting the error pq: operator does not exist: integer =? It looks like the code doesn't understand that the ? is just a placeholder. How can I fix this?
PostgreSQL works with numbered placeholders ($1, $2, ...) natively rather than the usual positional question marks. The documentation for the Go interface also uses numbered placeholders in its examples:
rows, err := db.Query("SELECT name FROM users WHERE age = $1", age)
It seems the Go interface isn't translating the question marks to numbered placeholders the way many interfaces do, so the question mark travels all the way to the database and confuses it.
You should be able to switch to numbered placeholders instead of question marks:
row := db.QueryRow(
    "SELECT name FROM users WHERE id = $1", id)