How to scan SQL array to []int64 in Go?

I am scanning an array of integers from a Postgres DB, and it comes back as []uint8. I need them as []int64: how can I convert them, or how can I return them from the DB as []int64 in the first place? In my query I am selecting using the Array function in Postgres, Array(col1), where col1 is a serial column.
The error I am getting is:
unsupported Scan, storing driver.Value type []uint8 into type []int64

If you're using github.com/lib/pq, just use pq.Int64Array, which implements sql.Scanner:

    arr := pq.Int64Array{}
    err := rows.Scan(&arr)
    // handle err ...
    col1arr := []int64(arr)
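If you'd rather not add the lib/pq dependency, note that the []uint8 the driver hands back is just the Postgres array literal (e.g. {1,2,3}) as raw bytes, so you can parse it yourself. A minimal sketch; parseInt64Array is my own helper name, and it assumes a flat, one-dimensional array with no NULL elements:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseInt64Array converts a Postgres array literal such as "{1,2,3}"
// (delivered by the driver as []uint8) into a []int64.
func parseInt64Array(src []byte) ([]int64, error) {
	s := strings.Trim(string(src), "{}")
	if s == "" {
		return []int64{}, nil // the empty array "{}"
	}
	parts := strings.Split(s, ",")
	out := make([]int64, 0, len(parts))
	for _, p := range parts {
		n, err := strconv.ParseInt(strings.TrimSpace(p), 10, 64)
		if err != nil {
			return nil, err
		}
		out = append(out, n)
	}
	return out, nil
}

func main() {
	arr, err := parseInt64Array([]byte("{1,2,3}"))
	fmt.Println(arr, err) // [1 2 3] <nil>
}
```

This is essentially what pq.Int64Array does for you, with more robust handling of quoting and NULLs, so prefer the library when you can.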

Related

Go with sqlx: NamedQuery timestamp works with a date but not with a datetime; NamedQuery vs Query

rows, err := db.NamedQuery(`SELECT ts FROM test_table WHERE ts > '1999-01-08 04:05:06';`, map[string]interface{}{})
The code above gave me the following error:
unexpected `:` while reading named param at 74
panic: runtime error: invalid memory address or nil pointer dereference
This is strange, as the following snippet,
rows, err := db.NamedQuery(`SELECT ts FROM test_table WHERE ts > '1999-01-08';`, map[string]interface{}{})
runs without fault.
The difference between the two is the time component in the literal: sqlx sees the colons in 04:05:06 and tries to read them as a named parameter.
I resorted to using db.Query instead of the sqlx method db.NamedQuery which solved my problem.
I now see that I should have passed my input to NamedQuery as a parameter.
How does one typically write such a query and why would you use NamedQuery rather than Query?
why would you use NamedQuery rather than Query?
Queries that use named parameters are easier for the human to parse.
How does one typically write such a query
    layout := "2006-01-02 15:04:05"
    ts, err := time.Parse(layout, "1999-01-08 04:05:06")
    if err != nil {
        return err
    }
    arg := map[string]interface{}{"ts": ts}
    rows, err := db.NamedQuery(`SELECT ts FROM test_table WHERE ts > :ts`, arg)
    if err != nil {
        return err
    }

Scan unstructured JSON BYTEA into map[string]string

This seems like a common problem and may have been posted somewhere already, but I can't find any threads about it, so here is the problem:
I have a Postgres table with a column of type BYTEA.
    CREATE TABLE foo (
        id VARCHAR PRIMARY KEY,
        json_data BYTEA
    );
The column json_data is really just JSON stored as BYTEA (it's not ideal, I know). It is unstructured, but guaranteed to be string-to-string JSON.
When I query this table, I need to scan the query SELECT * FROM foo WHERE id = $1 into the following struct:
    type JSONData map[string]string

    type Foo struct {
        ID   string   `db:"id"`
        Data JSONData `db:"json_data"`
    }
I'm using sqlx's Get method. When I execute a query I'm getting the error message sql: Scan error on column index 1, name "json_data": unsupported Scan, storing driver.Value type []uint8 into type *foo.JSONData.
Obviously, the scanner is having trouble scanning the JSON BYTEA into a map. I can implement my own scanner and call my custom scanner on the json_data column, but I'm wondering if there are better ways to do this. Could my JSONData type implement an existing interface to do this automatically?
As suggested by @iLoveReflection, implementing the Scanner interface on *JSONData worked. Here is the actual implementation:
    func (j *JSONData) Scan(src interface{}) error {
        b, ok := src.([]byte)
        if !ok {
            return errors.New("invalid data type")
        }
        return json.Unmarshal(b, j)
    }
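The Scanner can be exercised without a database, since the driver simply hands Scan the raw []byte from the column:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type JSONData map[string]string

// Scan implements sql.Scanner so database/sql (and therefore sqlx's Get)
// can decode the BYTEA column straight into the map.
func (j *JSONData) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return errors.New("invalid data type")
	}
	return json.Unmarshal(b, j)
}

func main() {
	var d JSONData
	// Simulate the driver.Value the database driver would pass in.
	err := d.Scan([]byte(`{"a":"1","b":"2"}`))
	fmt.Println(d["a"], d["b"], err) // 1 2 <nil>
}
```

Because Scan is defined on the pointer receiver, the Foo struct's Data field satisfies the interface when sqlx takes its address during StructScan/Get.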

Control flow over query results in SQLX (lazy/eager)

I'm implementing a messages table in Postgres (AWS RDS), and I'm using Go on the backend to query it.
CREATE TABLE:
    CREATE TABLE IF NOT EXISTS msg.Messages(
        id SERIAL PRIMARY KEY,
        content BYTEA,
        timestamp DATE
    );
Here is the INSERT query:
INSERT INTO msg.Messages (content,timestamp) VALUES ('blob', 'date')
RETURNING id;
Now I want to be able to fetch a specific message, like this:
specific SELECT query:
SELECT id, content,timestamp
FROM msg.Messages
WHERE id = $1
Now let's say a user was offline for a long time and needs to fetch a lot of messages from this table, say 10M. I don't want to return all the results at once, because that might blow up the app's memory.
Each user saves the last message.id he fetched, so the query becomes:
SELECT id, content, timestamp
FROM msg.Messages
WHERE id > $1
Implementing paging in this query feels like reinventing the wheel; there must be an out-of-the-box solution for this.
I'm using sqlx, here is a rough example of my code:
    query := `
        SELECT id, content, timestamp
        FROM msg.Messages
        WHERE id > $1
    `
    args := []interface{}{5}
    query = ado.db.Rebind(query)
    rows, err := ado.db.Queryx(query, args...)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var res []Message
    for rows.Next() {
        msg := Message{}
        if err = rows.StructScan(&msg); err != nil {
            return nil, err
        }
        res = append(res, msg)
    }
    return res, nil
How can I convert this code to lazy loading, so that each rows.Next() call fetches only the next item instead of loading everything in advance? And will the garbage collector release the memory on each iteration of rows.Next()?
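database/sql already streams rows from the server-side result rather than materializing everything up front; what exhausts memory here is appending all 10M rows into res. The usual fix is keyset pagination: fetch fixed-size batches with WHERE id > $1 ORDER BY id LIMIT $2 and advance the cursor after each batch. The loop can be sketched against a stand-in fetch function (fetchBatch and drain are hypothetical names standing in for the real sqlx query):

```go
package main

import "fmt"

type Message struct {
	ID      int64
	Content []byte
}

// fetchBatch stands in for a query like:
//   SELECT id, content FROM msg.Messages WHERE id > $1 ORDER BY id LIMIT $2
// Here it slices an in-memory dataset so the loop logic is testable.
func fetchBatch(all []Message, afterID int64, limit int) []Message {
	var out []Message
	for _, m := range all {
		if m.ID > afterID {
			out = append(out, m)
			if len(out) == limit {
				break
			}
		}
	}
	return out
}

// drain walks the whole table in batches, processing each message while
// keeping only one batch in memory at a time.
func drain(all []Message, batchSize int, process func(Message)) {
	var lastID int64 // the client's saved cursor (last message.id fetched)
	for {
		batch := fetchBatch(all, lastID, batchSize)
		if len(batch) == 0 {
			return // no more rows
		}
		for _, m := range batch {
			process(m)
		}
		lastID = batch[len(batch)-1].ID // advance the cursor
	}
}

func main() {
	all := []Message{{1, nil}, {2, nil}, {3, nil}, {4, nil}, {5, nil}}
	count := 0
	drain(all, 2, func(Message) { count++ })
	fmt.Println(count) // 5
}
```

Note that the real query needs the ORDER BY id for the cursor to be correct. As for the garbage collector: once a batch slice is no longer referenced after an iteration, its memory becomes eligible for collection, so peak usage stays around one batch rather than the full result set.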

Database Table Column is taken as String Value in Oracle Function in JSP

Select decrypt(PRODUCT_NUMBER,'123456789') as PRODUCT_NUMBER FROM Test
PRODUCT_NUMBER is a column in the Test table and contains encrypted data. decrypt() is a function I created, and it works fine.
When I run this SQL in Oracle SQL Developer it gives the correct result, but when I run the same query from JSP it gives me an error on the function.
In JSP I call it like this:
    String sql = "Select decrypt(PRODUCT_NUMBER,'123456789') as PRODUCT_NUMBER FROM Test";
    rs = conn.executeQuery(sql);
I think it takes PRODUCT_NUMBER as the string 'PRODUCT_NUMBER' and not as the column name, and so gives this error:
java.sql.SQLException: ORA-01465: invalid hex number
This is the Decrypt Function
    create or replace FUNCTION decrypt(p_raw IN RAW, p_key IN VARCHAR2) RETURN VARCHAR2 IS
        v_retval RAW(255);
        p_key2   RAW(255);
    BEGIN
        p_key2 := utl_raw.cast_to_raw(p_key);
        dbms_obfuscation_toolkit.DES3Decrypt(
            input          => p_raw,
            key            => p_key2,
            which          => 1,
            decrypted_data => v_retval
        );
        RETURN RTRIM(utl_raw.cast_to_varchar2(v_retval), CHR(0));
    END decrypt;
Solved!
The query and the function both work perfectly.
It turned out my co-developer had pointed the connection at another replica instance of the DB, one that contained -1 in some columns, and that's why it was giving the error.
I reverted that and the query worked like a charm.
Thanks for your time, everyone :)

Setting a Clob value in a native query

Oracle DB.
Spring JPA using Hibernate.
I am having difficulty inserting a Clob value into a native SQL query.
The code calling the query is as follows:
    @SuppressWarnings("unchecked")
    public List<Object[]> findQueryColumnsByNativeQuery(String queryString, Map<String, Object> namedParameters)
    {
        List<Object[]> result = null;
        final Query query = em.createNativeQuery(queryString);
        if (namedParameters != null)
        {
            Set<String> keys = namedParameters.keySet();
            for (String key : keys)
            {
                final Object value = namedParameters.get(key);
                query.setParameter(key, value);
            }
        }
        query.setHint(QueryHints.HINT_READONLY, Boolean.TRUE);
        result = query.getResultList();
        return result;
    }
The query string is of the format
SELECT COUNT ( DISTINCT ( <column> ) ) FROM <Table> c where (exact ( <column> , (:clobValue), null ) = 1 )
where "(exact ( , (:clobValue), null ) = 1 )" is a function and "clobValue" is a Clob.
I can adjust the query to work as follows:
SELECT COUNT ( DISTINCT ( <column> ) ) FROM <Table> c where (exact ( <column> , to_clob((:stringValue)), null ) = 1 )
where "stringValue" is a String, but obviously this only works up to the maximum SQL string size (4000), and I need to pass in much more than that.
I have tried to pass the Clob value as a java.sql.Clob using the method
final Clob clobValue = org.hibernate.engine.jdbc.ClobProxy.generateProxy(stringValue);
This results in a java.io.NotSerializableException: org.hibernate.engine.jdbc.ClobProxy
I have tried to Serialize the Clob using
final Clob clob = org.hibernate.engine.jdbc.ClobProxy.generateProxy(stringValue);
final Clob clobValue = SerializableClobProxy.generateProxy(clob);
But this appears to provide the wrong type of argument to the "exact" function resulting in (org.hibernate.engine.jdbc.spi.SqlExceptionHelper:144) - SQL Error: 29900, SQLState: 99999
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:146) - ORA-29900: operator binding does not exist
ORA-06553: PLS-306: wrong number or types of arguments in call to 'EXACT'
After reading some post about using Clobs with entities I have tried passing in a byte[] but this also provides the wrong argument type (org.hibernate.engine.jdbc.spi.SqlExceptionHelper:144) - SQL Error: 29900, SQLState: 99999
(org.hibernate.engine.jdbc.spi.SqlExceptionHelper:146) - ORA-29900: operator binding does not exist
ORA-06553: PLS-306: wrong number or types of arguments in call to 'EXACT'
I can also just pass in the value as a String as long as it doesn't break the max string value
I have seen a post (Using function in where clause with clob parameter) which seems to suggest that the only way is to use "plain old JDBC". This is not an option.
I am up against a hard deadline so any help is very welcome.
I'm afraid your assumptions about CLOBs in Oracle are wrong. In Oracle, a CLOB locator is something like a file handle, and such a handle can be created only by the database. So you cannot simply pass a CLOB as a bind variable: the CLOB must be related to database storage, because it can occupy up to 176 TB, and something that size cannot be held on the Java heap.
So the usual approach is to call either the DB function empty_clob() or dbms_lob.createtemporary (in some form). Then you get a CLOB from the database, even if you think of it as an "IN" parameter. You can then write as much data as you want into that locator (handle, CLOB) and use the CLOB as a parameter for a query.
If you do not follow this pattern, your code will not work, no matter whether you use JPA, Spring Batch, or plain JDBC. This constraint is imposed by the database.
It seems that it's required to set the type of the parameter explicitly for Hibernate in such cases. The following code worked for me:
    Clob clob = entityManager
        .unwrap(Session.class)
        .getLobHelper()
        .createClob(reader, length);
    int inserted = entityManager
        .unwrap(org.hibernate.Session.class)
        .createSQLQuery("INSERT INTO EXAMPLE ( UUID, TYPE, DATA) VALUES (:uuid, :type, :data)")
        .setParameter("uuid", java.util.UUID.randomUUID(), org.hibernate.type.UUIDBinaryType.INSTANCE)
        .setParameter("type", "example", org.hibernate.type.StringType.INSTANCE)
        .setParameter("data", clob, org.hibernate.type.ClobType.INSTANCE)
        .executeUpdate();
A similar workaround is available for Blob.
THE ANSWER: Thank you both for your answers. I should have updated this when I solved the issue some time ago. In the end I used JDBC, and the problem disappeared in a puff of smoke!