The following code:
ret = SQLTables( m_hstmt, (SQLWCHAR *) SQL_ALL_CATALOGS, SQL_NTS, (SQLWCHAR *) SQL_ALL_SCHEMAS, SQL_NTS, (SQLWCHAR *) SQL_ALL_TABLE_TYPES, SQL_NTS, L"", SQL_NTS );
if( ret != SQL_SUCCESS && ret != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1 );
result = 1;
}
else
{
for( ret = SQLFetch( m_hstmt ); ( ret == SQL_SUCCESS || ret == SQL_SUCCESS_WITH_INFO ); ret = SQLFetch( m_hstmt ) )
{
if( catalog[0].StrLen_or_Ind != SQL_NULL_DATA )
catalogName = (SQLWCHAR *) catalog[0].TargetValuePtr;
if( catalog[1].StrLen_or_Ind != SQL_NULL_DATA )
schemaName = (SQLWCHAR *) catalog[1].TargetValuePtr;
if( catalog[2].StrLen_or_Ind != SQL_NULL_DATA )
tableName = (SQLWCHAR *) catalog[2].TargetValuePtr;
}
}
returns SQL_NO_DATA for the SQLTables call, whereas the following code:
ret = SQLTables( m_hstmt, (SQLWCHAR *) SQL_ALL_CATALOGS, SQL_NTS, L"", SQL_NTS, L"", SQL_NTS, L"", SQL_NTS );
if( ret != SQL_SUCCESS && ret != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1 );
result = 1;
}
else
{
for( ret = SQLFetch( m_hstmt ); ( ret == SQL_SUCCESS || ret == SQL_SUCCESS_WITH_INFO ); ret = SQLFetch( m_hstmt ) )
{
if( catalog[0].StrLen_or_Ind != SQL_NULL_DATA )
catalogName = (SQLWCHAR *) catalog[0].TargetValuePtr;
if( catalog[1].StrLen_or_Ind != SQL_NULL_DATA )
schemaName = (SQLWCHAR *) catalog[1].TargetValuePtr;
if( catalog[2].StrLen_or_Ind != SQL_NULL_DATA )
tableName = (SQLWCHAR *) catalog[2].TargetValuePtr;
}
}
gives me just the catalog names; the schema and table names are blank.
Does this mean I can't retrieve everything in one shot?
Thank you.
Apparently, the following code works:
ret = SQLTables( m_hstmt, NULL, 0, NULL, 0, NULL, 0, NULL, 0 );
which is kind of a weird way to call this function.
Microsoft needs to mention this case somewhere in the documentation, because if a developer sees the SQL_ALL_CATALOGS, SQL_ALL_SCHEMAS, and SQL_ALL_TABLE_TYPES parameters, (s)he will presume that those values need to be passed to get all the info from the server.
The solution was found on the Easysoft site.
Thank you all for reading.
I'm updating my answer; below you see the old content (starting with the deleted paragraph). As Igor has shown in his answer, it is possible to list everything in one shot.
On the documentation site about SQLTables() there is a link to: Arguments in Catalog Functions
There is an explicit entry at the very beginning of the article, stating that calling SQLTables(hstmt1, NULL, 0, NULL, 0, NULL, 0, NULL, 0); will
[..] return a result set containing information about all tables
There are also extensive explanations about the influence of the SQL_ATTR_METADATA_ID attribute and how the arguments in the catalog functions can be used:
Catalog function string arguments fall into four different types: ordinary argument (OA), pattern value argument (PV), identifier argument (ID), and value list argument (VL).
I added the link above as a reference.
Yes, I think you cannot list all catalogs, all schemas, and all table types in one shot. From the documentation at Microsoft:
To support enumeration of catalogs, schemas, and table types, the following special semantics are defined for the CatalogName, SchemaName, TableName, and TableType arguments of SQLTables:

If CatalogName is SQL_ALL_CATALOGS and SchemaName and TableName are empty strings, the result set contains a list of valid catalogs for the data source. (All columns except the TABLE_CAT column contain NULLs.)

If SchemaName is SQL_ALL_SCHEMAS and CatalogName and TableName are empty strings, the result set contains a list of valid schemas for the data source. (All columns except the TABLE_SCHEM column contain NULLs.)

If TableType is SQL_ALL_TABLE_TYPES and CatalogName, SchemaName, and TableName are empty strings, the result set contains a list of valid table types for the data source. (All columns except the TABLE_TYPE column contain NULLs.)
Url: https://msdn.microsoft.com/en-us/library/ms711831%28v=vs.85%29.aspx
If I understand this right, you cannot combine these values: catalogs are only enumerated if CatalogName is SQL_ALL_CATALOGS and all other parameters are empty strings, the same for SchemaName, and so on.
SQL_ALL_CATALOGS, SQL_ALL_SCHEMAS, and SQL_ALL_TABLE_TYPES are all defined as "%" on my system here.
So, if you query with all three parameters set to these SQL_ALL_* values, you query using "%" for all strings, which is not the empty string expected for the other two parameters in each case, and because of that you get no result.
The first code snippet is wrong because it mixes up the TableName and TableType parameters: it asks for any table in any catalog and schema but with an empty type, so you receive an empty set because no table in the target database has an empty type.
This combination is not clearly described in the MS documentation, so most likely it is driver-specific.
Related
In my app, I am using a SQLite database to store some data. One of the integer fields is optional. I have the following CREATE statement:
CREATE TABLE IF NOT EXISTS Test (id INTEGER PRIMARY KEY AUTOINCREMENT, timestamp FLOAT NOT NULL, testProperty INT);
I can get the float property using sqlite3_column_double. For the int column I could use sqlite3_column_int, but this always returns a value (0) even if the row does not contain one.
How can I check if the row actually has a value for this property?
I have the following code to get all rows:
var statement: OpaquePointer? = nil
let sql = "SELECT * FROM Test;"
sqlite3_prepare_v2(self.connection, sql, -1, &statement, nil)
while sqlite3_step(statement) == SQLITE_ROW
{
let testProperty = sqlite3_column_int(statement, 2) // always returns 0
}
sqlite3_finalize(statement)
You can use sqlite3_column_type() to check whether the value in the current row is SQLITE_INTEGER or SQLITE_NULL before calling sqlite3_column_int().
I have a table "articles" with "id" and "slug" columns, among other things. On an HTML page I have a list of links to articles. A link can contain either an "id" or a "slug" in it.
But even if a URL contains only a number, that still doesn't mean it's an id; therefore, casting to int to determine whether it's a slug or an id won't work.
/articles/my_article
/articles/35
/articles/666 --> may still be a slug
I have this SQL query:
import (
"github.com/jackc/pgx/v4"
//.........
)
// [..........]
vars := mux.Vars(req)
q1 := `
SELECT
ar.id,
[.........]
FROM
articles AS ar
WHERE ar.slug = $1 OR ar.id = $1`
ar := Article{}
row := db.QueryRow(context.Background(), q1, vars["id_or_slug"])
switch err := row.Scan(&ar.Id, /*[.......]*/); err {
case pgx.ErrNoRows:
wrt.WriteHeader(http.StatusNotFound)
wrt.Write([]byte("article not found"))
case nil:
// good, article found
I get:
ERROR: operator does not exist: bigint = text (SQLSTATE 42883)
You can "attempt" to convert the value to an integer and if the conversion fails just ignore the error and provide an id value known to not be present in the db.
Doing the conversion with Go:
slug := mux.Vars(req)["id_or_slug"]
// option 1:
id, err := strconv.ParseInt(slug, 10, 64)
if err != nil {
id = -1 // provide a value that you're certain will not be present in the db
}
// option 2:
// if id 0 is good enough, you can skip error checking
// and use the following instead of the above.
id, _ := strconv.ParseInt(slug, 10, 64)
query := `SELECT ... FROM articles AS a
WHERE a.slug = $1
OR a.id = $2`
row := db.QueryRow(query, slug, id)
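The fallback logic in option 1 can be exercised on its own. Here's a minimal, self-contained sketch; idOrSentinel is a hypothetical helper name, and -1 is assumed not to occur in your id column:

```go
package main

import (
	"fmt"
	"strconv"
)

// idOrSentinel tries to parse the path segment as an int64 id.
// If parsing fails, it returns a sentinel id (-1) that is assumed
// to be absent from the database, so "a.id = $2" can never match.
func idOrSentinel(slug string) int64 {
	id, err := strconv.ParseInt(slug, 10, 64)
	if err != nil {
		return -1
	}
	return id
}

func main() {
	fmt.Println(idOrSentinel("35"))         // numeric segment: usable as id
	fmt.Println(idOrSentinel("my_article")) // non-numeric: sentinel -1
	fmt.Println(idOrSentinel("666"))        // numeric, but may still be a slug;
	// the query checks both columns, so that case is still covered
}
```

Because the query keeps `a.slug = $1 OR a.id = $2`, the "number that is actually a slug" case still matches on the slug column.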
Doing the conversion with Postgres (the following snippet was taken from here):
-- first create a postgres function that will do the conversion / cast
create or replace function cast_to_int(text, integer) returns integer as $$
begin
return cast($1 as integer);
exception
when invalid_text_representation then
return $2;
end;
$$ language plpgsql immutable;
... and then utilizing that in Go:
slug := mux.Vars(req)["id_or_slug"]
query := `SELECT ... FROM articles AS a
WHERE a.slug = $1
OR a.id = cast_to_int($1::text, -1)` // use the postgres function in the go query string
row := db.QueryRow(query, slug)
I'm trying to fix this problem I'm having with the API I'm building.
db:
DROP TABLE IF EXISTS contacts CASCADE;
CREATE TABLE IF NOT EXISTS contacts (
uuid UUID UNIQUE PRIMARY KEY,
first_name varchar(150)
);
DROP TABLE IF EXISTS workorders CASCADE;
CREATE TABLE IF NOT EXISTS workorders (
uuid UUID UNIQUE PRIMARY KEY,
work_date timestamp WITH time zone,
requested_by UUID REFERENCES contacts (uuid) ON UPDATE CASCADE ON DELETE CASCADE
);
struct (using https://gopkg.in/guregu/null.v3):
type WorkorderNew struct {
UUID string `json:"uuid"`
WorkDate null.Time `json:"work_date"`
RequestedBy null.String `json:"requested_by"`
}
api code:
workorder := &models.WorkorderNew{}
if err := json.NewDecoder(r.Body).Decode(workorder); err != nil {
log.Println("decoding fail", err)
}
// fmt.Println(NewUUID())
u2, err := uuid.NewV4()
if err != nil {
log.Fatalf("failed to generate UUID: %v", err)
}
q := `
INSERT
INTO workorders
(uuid,
work_date,
requested_by
)
VALUES
($1,$2,$3)
RETURNING uuid;`
statement, err := global.DB.Prepare(q)
global.CheckDbErr(err)
fmt.Println("requested by", workorder.RequestedBy)
lastInsertID := ""
err = statement.QueryRow(
u2,
workorder.WorkDate,
workorder.RequestedBy,
).Scan(&lastInsertID)
global.CheckDbErr(err)
json.NewEncoder(w).Encode(lastInsertID)
When I send an API request with null as the value, it works as expected, but when I try to send "" as the value for the null.String or the null.Time, it fails.
works:
{
"work_date":"2016-12-16T19:00:00Z",
"requested_by":null
}
not working:
{
"work_date":"2016-12-16T19:00:00Z",
"requested_by":""
}
Basically, when I call QueryRow and save to the database, the workorder.RequestedBy value should be NULL and not the "" I'm getting.
thanks
If you want to treat empty strings as nulls you have at least two options.
"Extend" null.String:
type MyNullString struct {
    null.String
}

func (ns *MyNullString) UnmarshalJSON(data []byte) error {
    if string(data) == `""` {
        ns.Valid = false
        return nil
    }
    return ns.String.UnmarshalJSON(data)
}
Or use NULLIF in the query:
INSERT INTO workorders (
uuid
, work_date
, requested_by
) VALUES (
$1
, $2
, NULLIF($3, '')
)
RETURNING uuid
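If you prefer to normalize on the Go side instead of in SQL, a third option is a small helper built on the standard library's sql.NullString; nullIfEmpty is an assumed helper name, and since your code uses guregu/null you would adapt the types accordingly:

```go
package main

import (
	"database/sql"
	"fmt"
)

// nullIfEmpty mirrors the SQL NULLIF($3, '') on the Go side: an empty
// string becomes an invalid (NULL) sql.NullString before it ever
// reaches the driver.
func nullIfEmpty(s string) sql.NullString {
	return sql.NullString{String: s, Valid: s != ""}
}

func main() {
	fmt.Println(nullIfEmpty("").Valid)        // false -> stored as NULL
	fmt.Println(nullIfEmpty("someone").Valid) // true  -> stored as 'someone'
}
```

You would then pass something like nullIfEmpty(workorder.RequestedBy.String) as the $3 query argument instead of the raw value.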
Update:
To extend null.Time, you have to understand that the type of null.Time.Time is a struct. The builtin len function works on slices, arrays, pointers to arrays, maps, channels, and strings, not on structs. So in this case you can check the data argument, which is a byte slice, by converting it to a string and comparing it to `""`, i.e. a string that contains two double quotes and nothing else.
type MyNullTime struct {
    null.Time
}

func (ns *MyNullTime) UnmarshalJSON(data []byte) error {
    if string(data) == `""` {
        ns.Valid = false
        return nil
    }
    return ns.Time.UnmarshalJSON(data)
}
When trying to store a string of more than 100 chars in an Oracle SQL table, while the field limit is 1000 bytes, which I understood to be ~1000 English chars, I'm getting an out of bounds exception:
StringIndexOutOfBoundsException: String index out of range: -3
What might be the cause for this low limitation?
Thanks!
EDIT :
The code where the error occurs is (see chat):
// Commenting the existing code, because for sensitive information
// toString returns masked data
int nullSize = 5;
int i = 0;
// removing '[' and ']', signs and fields with 'null' value and also add
// ';' as delimiter.
while (i != -1) {
    int index1 = str.indexOf('[', i);
    int index2 = str.indexOf(']', i + 1);
    i = index2;
    if (index2 != -1 && index1 != -1) {
        int index3 = str.indexOf('=', index1);
        if (index3 + nullSize > str.length() || !str.substring(index3 + 1, index3 + nullSize).equals("null")) {
            String str1 = str.substring(index1 + 1, index2);
            concatStrings = concatStrings.append(str1);
            concatStrings = concatStrings.append(";");
        }
    }
}
Generally, when the string to store in a varchar field is too long, it is cropped silently. Anyway, when there is an error message, it is generally specific. The error seems to be related to an operation on a string (String.substring()?).
Furthermore, even when the string is encoded in UTF-8, the ratio of characters to bytes shouldn't be that low.
You really should put the code sample where your error occurs in your question, along with the string causing it, and also have a closer look at the stack trace to see where the error occurs.
From the code you posted in your chat, I can see this line of code :
String str1 = str.substring(index1 + 1, index2);
You check that index1 and index2 are different than -1 but you don't check if (index1 + 1) >= index2 which makes your code crash.
Try this with str = "*]ab=null[" (whose length is under 100 characters), but you can also get the error with a longer string such as "osh]] [ = null ]Clipers: RRR was removed by user and customer called in to have it since it was an RRT".
Once again, the size of the string doesn't matter, only the content.
You can reproduce your problem with a closing square bracket (]) before an opening one ([) and, between them, an equals sign (=) followed (directly or not) by the "null" string.
I agree with Jonathon Ogden that "limitations of 1000 bytes does not necessarily mean 1000 characters as it depends on character encoding".
I recommend altering the column in your Oracle table from VARCHAR2(1000 BYTE) to VARCHAR2(1000 CHAR).
I am trying to drop records that contain at least one null in any of the fields. For example, if the data has 3 fields, then:
filtered = FILTER data BY ($0 is not null) AND ($1 is not null) AND ($2 is not null);
Is there any cleaner way to do this, without having to write out 3 boolean expressions?
If all of the fields are of numeric types, you could simply do something like
filtered = FILTER data BY $0*$1*$2 is not null;
In Pig, if any terms in an arithmetic expression are null, the result is null.
You could also write a UDF to take an arbitrary number of arguments and return null (or 0, or false, whatever you find most convenient) if any of the arguments are null.
filtered = FILTER data BY NUMBER_OF_NULLS($0, $1, $2) == 0;
where NUMBER_OF_NULLS is defined elsewhere, e.g.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class NUMBER_OF_NULLS extends EvalFunc<Integer> {
    public Integer exec(Tuple input) throws IOException {
        if (input == null) { return 0; }
        int c = 0;
        for (int i = 0; i < input.size(); i++) {
            if (input.get(i) == null) c++;
        }
        return c;
    }
}
Note: I have not tested the above UDF, and I don't claim it adheres to any best practices for writing clear, robust UDFs. You should add exception-handling code, for example.
I was thinking there is a better way of doing this without using a UDF, i.e., using SPLIT in Pig.
emp = load '/Batch1/pig/emp' using PigStorage(',') as (id:chararray, name:chararray, salary:int, dept:chararray);
SPLIT emp INTO emptyDept IF dept == '', nonemptyDept IF dept != '';
DUMP nonemptyDept;
The resulting relation nonemptyDept would display all the non-empty Department values of the emp relation.