libpqxx prepared statements nonnull checks for pointer types - libpqxx

I'm a newbie to SQL overall and to libpqxx as well. I'm trying to build a basic application where I just need prepared statements to execute very simple jobs.
I started with libpqxx 4.0 and wrote code like this:
pqxx::work txn( *conn );
auto result = txn.prepared( "my_insert" )( x->getId( ), x->isIdSet( ) )( x->getUser( ), x->isUserSet( ) )(x->getCreatedAt( ), x->isCreatedAtSet( ) ).exec( );
txn.commit();
Now I had to change to version 6.4 and realized the prepared function is deprecated and I should use the exec_prepared function. Okay. BUT I'm really missing the "nonnull" condition. For certain reasons I'm working with a lot of pointers and I need a convenient way to pass these values on to the database API. I could write something like:
auto result = txn.exec_prepared( "my_insert", (x->isIdSet() ? std::to_string(x->getId()) : "null"), ....);
This works in some cases, but when I try to insert "null" as a string into a smallint field I get an SQL error (which is reasonable).
As the types differ, I can't use the ?: operator to return a string/int/etc. on the true branch and nullptr on the false branch.
I couldn't find proper documentation for the library. They have a doc here, but if you want to find out more about a certain function there's literally nothing there.
Honestly, even the deprecated .prepared(...) function doesn't work properly with 6.4 for me. I tried the txn.prepared("whatever")(y->z->getA(), y->z != nullptr).exec() form and got a segmentation fault when y was a valid non-null pointer and z was a null pointer. I expected the function not to touch the value before checking the condition, but apparently that's not the case.
I have a prepared statement with 8 parameters, 6 of them pointer types (shared_ptr), and it would be extremely messy if I had to come up with a ridiculous solution to check all the parameters one by one, writing 200 lines just to be able to call this function properly.
Anyone out there with a proper solution? As I mentioned, I'm a newbie, so I might be missing an important part. Help me out please ^^

I know this question is old. But here's what you do:
const auto& id{ x->isIdSet()
    ? std::optional<std::string>{ std::to_string(x->getId()) } : std::nullopt };
auto result{ txn.exec_prepared("my_insert", id, ....) };
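If many parameters need the same treatment (the question mentions 8 parameters with 6 pointer types), a small helper keeps the call readable. This is a minimal sketch, not part of libpqxx's API; it assumes C++17, a libpqxx build that accepts std::optional arguments (which the snippet above already relies on), and the getter/isSet names from the question:
#include <optional>
#include <utility>

// Hypothetical helper: wrap an (is-set, value) pair in std::optional so that
// an unset field is passed to exec_prepared as SQL NULL.
template <typename T>
std::optional<T> opt_if(bool is_set, T value)
{
    if (is_set)
        return std::optional<T>{std::move(value)};
    return std::nullopt;
}

// Sketch of the call site:
// auto result = txn.exec_prepared("my_insert",
//     opt_if(x->isIdSet(),        x->getId()),
//     opt_if(x->isUserSet(),      x->getUser()),
//     opt_if(x->isCreatedAtSet(), x->getCreatedAt()));
// txn.commit();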

Generating Random String of Numbers and Letters Using Go's "testing/quick" Package

I've been breaking my head over this for a few days now and can't seem to be able to figure it out. Perhaps it's glaringly obvious, but I don't seem to be able to spot it. I've read up on all the basics of unicode, UTF-8, UTF-16, normalisation, etc, but to no avail. Hopefully somebody's able to help me out here...
I'm using Go's Value function from the testing/quick package to generate random values for the fields in my data structs, in order to implement the Generator interface for the structs in question. Specifically, given a Metadata struct, I've defined the implementation as follows:
func (m *Metadata) Generate(r *rand.Rand, size int) (value reflect.Value) {
value = reflect.ValueOf(m).Elem()
for i := 0; i < value.NumField(); i++ {
if t, ok := quick.Value(value.Field(i).Type(), r); ok {
value.Field(i).Set(t)
}
}
return
}
Now, in doing so, I'll end up with both the receiver and the return value being set with random generated values of the appropriate type (strings, ints, etc. in the receiver and reflect.Value in the returned reflect.Value).
Now, the implementation for the Value function states that it will return something of type []rune converted to type string. As far as I know, this should allow me to then use the functions in the runes, unicode and norm packages to define a filter which filters out everything which is not part of 'Latin', 'Letter' or 'Number'. I defined the following filter which uses a transform to filter out letters which are not in those character rangetables (as defined in the unicode package):
func runefilter(in reflect.Value) (out reflect.Value) {
out = in // Make sure you return something
if in.Kind() == reflect.String {
instr := in.String()
t := transform.Chain(norm.NFD, runes.Remove(runes.NotIn(rangetable.Merge(unicode.Letter, unicode.Latin, unicode.Number))), norm.NFC)
outstr, _, _ := transform.String(t, instr)
out = reflect.ValueOf(outstr)
}
return
}
Now, I think I've tried just about anything, but I keep ending up with a series of strings which are far from the Latin range, e.g.:
𥗉똿穊
𢷽嚶
秓䝏小𪖹䮋
𪿝ท솲
𡉪䂾
ʋ𥅮ᦸ
堮𡹯憨𥗼𧵕ꥆ
𢝌𐑮𧍛併怃𥊇
鯮
𣏲𝐒
⓿ꐠ槹𬠂黟
𢼭踁퓺𪇖
俇𣄃𔘧
𢝶
𝖸쩈𤫐𢬿詢𬄙
𫱘𨆟𑊙
欓
So, can anybody explain what I'm overlooking here and how I could instead define a transformer which removes/replaces non-letter/number/latin characters so that I can use the Value function as intended (but with a smaller subset of 'random' characters)?
Thanks!
Confusingly, the Generate interface needs a method on the type, not on the pointer to the type. You want your type signature to look like
func (m Metadata) Generate(r *rand.Rand, size int) (value reflect.Value)
You can play with this here. Note: the most important thing to do in that playground is to switch the receiver of the Generate function from m Metadata to m *Metadata and see that Hi Mom! never prints.
In addition, I think you would be better served using your own type and writing a generate method for that type using a list of all of the characters you want to use. For example:
type LatinString string
const latin = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
and then use the generator
func (l LatinString) Generate(rand *rand.Rand, size int) reflect.Value {
var buffer bytes.Buffer
for i := 0; i < size; i++ {
buffer.WriteString(string(latin[rand.Intn(len(latin))]))
}
s := LatinString(buffer.String())
return reflect.ValueOf(s)
}
playground
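For what it's worth, a hedged usage sketch (mine, not from the playground above), assuming the LatinString type and Generate method shown above; quick.Check builds its arguments through that Generate method because LatinString satisfies the quick.Generator interface:
import (
    "testing"
    "testing/quick"
    "unicode/utf8"
)

func TestLatinStringStaysInCharset(t *testing.T) {
    // Each s is produced by LatinString.Generate, so it only contains
    // characters from the latin constant and is therefore valid UTF-8.
    f := func(s LatinString) bool {
        return utf8.ValidString(string(s))
    }
    if err := quick.Check(f, nil); err != nil {
        t.Error(err)
    }
}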
Edit: also this library is pretty cool, thanks for showing it to me
The answer to my own question is, it seems, a combination of the answers provided in the comments by @nj_ and @jimb and the answer provided by @benjaminkadish.
In short, the answer boils down to:
"Not such a great idea as you thought it was", or "Bit of an ill-posed question"
"You were using the union of 'Letter', 'Latin' and 'Number' (Letter || Number || Latin), instead of the intersection of 'Latin' with the union of 'Letter' and 'Number' ((Letter || Number) && Latin))
Now for the longer version...
The idea behind me using the testing/quick package is that I wanted random data for (fuzzy) testing of my code. In the past, I've always written the code for doing things like that myself, again and again. This meant a lot of the same code across different projects. Now, I could of course have written my own package for it, but it turns out that, even better than that, there's actually a standard package which does just about exactly what I want.
Now, it turns out the package does exactly what I want very well. The codepoints in the strings which it generates are actually random and not just restricted to what we're accustomed to using in everyday life. Now, this is of course exactly the thing which you want in doing fuzzy testing in order to test the code with values outside the usual assumptions.
In practice, that means I'm running into two problems:
There are some limits on what I would consider reasonable input for a string. Meaning that, in testing the processing of a Name field or a URL field, I can reasonably assume there's not going to be a value like 'James Mc⌢' (let alone 'James Mc🙁') or 'www.🕸site.com', but just 'James McFrown' and 'www.website.com'. Hence, I can't expect a reasonable system to be able to support it. Of course, things shouldn't completely break down, but it also can't be expected to handle the former examples without any problems.
When I filter the generated string on values which one might consider reasonable, the chance of ending up with a valid string is very small. The set of possible characters used by testing/quick is just so large (0x10FFFF code points) and the set of reasonable characters so small that you end up with empty strings most of the time.
So, what do we need to take away from this?
So, whilst I hoped to use the standard testing/quick package to replace my often repeated code to generate random data for fuzzy testing, it does this so well that it provides data outside the range of what I would consider reasonable for the code to be able to handle. It seems that the choice, in the end, is to:
Either be able to actually handle all fuzzy options, meaning that if somebody's name is 'Arnold 💰💰' ('Arnold Moneybags'), it shouldn't go arse over end. Or...
Use custom/derived types with their own Generator. This means you're going to have to use the derived type instead of the basic type throughout the code. (Comparable to defining a string as wchar_t instead of char in C++ and working with those by default.). Or...
Don't use testing/quick for fuzzy testing, because as soon as you run into a generated string value, you can (and should) expect a very random string.
As always, further comments are of course welcome, as it's quite possible I overlooked something.

Generation of ABAP report in runtime possible?

Is there any function module that can generate ABAP code?
For example, an FM that takes table names and join conditions as input and generates the corresponding ABAP code.
Thanks
You should consider using SAPQuery. SAP documentation here: https://help.sap.com/saphelp_erp60_sp/helpdata/en/d2/cb3efb455611d189710000e8322d00/content.htm
1. Generic reports are possible.
Your problem is that you will have to draw a strict frame around what is generic and what is not. This means some parts MUST be so generic that they deal with WHATEVER you want to do before (mostly the selection), during (mostly manipulation; I would offer a BAdI for that), and in the output. In other words, there is at least the output step, which must be valid for ALL data resulting from the steps before.
Consider a generic ALV table output; there are a lot of examples in the repository. If you want the result printed as a simple list, this might involve more work (how big is the structure, when do you wrap a line, and so on); consider using a flag which allows toggling the type of output.
2. Generic reports are a transportable object.
This refers to point one. Define clear stages and limits: what does the report do, and what is it not able to do? Even if it is in the customer namespace, each modification will still go through transport layers. Therefore a strict definition of features/limits is necessary so that the number of transports due to "oh, but we also need that" statements does not become infinite.
3. Generic reports are strict.
What does that mean? You might want to parse the passed data (table names, join bindings, selection parameter values) and throw exceptions if they are not properly set. That is a lot of work; you should offer a BAdI for that as well. If you do not, expect a dump, and let it dump. In the end the user of your report API should know (from the documentation, perhaps) how to call it. If not, a dynamic SQL dump will be the result.
4. Generic reports might benefit from BAdIs/exits.
This is self-explanatory, I think. Especially the generic/dynamic selection, modification and display of data should be extendable through custom modifications. When you inspect how an F4 search-help exit works, you will understand what I mean.
5. Generic coding is hard to debug, mostly a black box.
Self-explanatory; in the code section below I can mark some of those sections.
6. Generic coding has some best-practice examples in the repository.
Do not reinvent the wheel. Check how SE16N works by debugging it, check how SE11 works by debugging it, and check what the SQL Query builder looks like in the debugger. You will get the idea very soon, and the copy-paste should be the simplest part of your work.
7. These are the basic parts of what you might use.
Where-clause determination and setting the parameters:
data lt_range type rsds_trange.
data ls_range_f type rsds_frange.
data lt_where type rsds_twhere.
data ls_where like line of lt_where.
ls_range_f = value #( sign = _sign
option = _option
low = _low
high = _high ).
.
.
.
append ls_range_f to lt_range.
.
.
.
call function 'FREE_SELECTIONS_RANGE_2_WHERE'
exporting
field_ranges = lt_range
importing
where_clauses = lt_where.
You have the parameters; now let us create the select-result table.
data(lt_key) = value abap_keydescr_tab( for line in _joinfields
                                        ( name = line-fieldname ) ).
data(lo_structdescr) = cast cl_abap_structdescr( cl_abap_structdescr=>describe_by_name( _struct_name ) ).
data(lo_tabledescr) = cl_abap_tabledescr=>create( p_line_type = lo_structdescr p_key = lt_key ).
create data ro_data type handle lo_tabledescr.
.
.
.
select (sel_st)
from (sel_bind)
into corresponding fields of table t_data
where (dyn_where).
Then assign the select-result table reference to the generic table of this select.
Do you need more hints?
Yes, such a possibility exists, but not by means of function modules. The INSERT REPORT statement allows generating a report by populating its code from an internal text table:
INSERT REPORT prog FROM itab
[MAXIMUM WIDTH INTO wid]
{ [KEEPING DIRECTORY ENTRY]
| { [PROGRAM TYPE pt]
[FIXED-POINT ARITHMETIC fp]
[UNICODE ENABLING uc] }
| [DIRECTORY ENTRY dir] }.

How to execute SQL Script which may or may not return data?

This is an extension of an old question of mine where the answer wasn't quite what I was asking. What I'm doing is executing SQL script on an MS SQL Server database. This script may or may not return any recordsets. The problem is that, the way ADO components work (at least to my knowledge), I can only explicitly request one or the other.
If I know a query will return data, I use TADOQuery.Open
If I know a query will not return data, I use TADOConnection.Execute
If I don't know whether the query will return data or not... ???
How can I execute any query and read the response to determine whether it has any recordsets or not so I can read that recordset?
What I've tried:
Calling TADOQuery.Open, but raises exception if there's no recordset
Calling TADOQuery.ExecSql, but never returns any data
Calling TADOConnection.Execute, but never returns any data
Using Option 3 and reverting to Option 1 on exceptions, but this is double the work (script files over 38,000 lines) and kinda nasty.
Using TADOCommand.Execute, but keeps raising "Parameter object is improperly defined. Inconsistent or incomplete information was provided" on creating some stored procedures (which otherwise don't happen when using TADOConnection.Execute).
Calling TADOConnection.Execute overload which returns _Recordset, but then the TADOConnection.Errors returns empty (and I depend on this).
Just as some background, the purpose is to implement something like the SQL Query tool in the SQL Server Management Studio. It allows you to execute any SQL script, and if it returns any recordsets, it displays them, otherwise it just displays output messages. The tool I'm working on automatically breaks SQL Script at GO statements, and executes each individual block separately. Some of those blocks might return data, and others might not. It's already obvious that I cannot make this determination prior to execution, so I'm looking for a way to go ahead with the execution and observe the result. TADOConnection.Execute provides some useful information, including the Errors (or output messages).
As of now, the only option I have is to supply an option in the user interface to allow the user to choose which type of execution to use - but this is what I'm trying to eliminate.
EDIT
The TADOCommand.Execute method is the closest to what I want. However, it fails on some bits of script which otherwise work perfectly fine using TADOConnection.Execute. See #5 above in "What I've tried". I almost wrote that as my answer, until I found this error happens on almost everything.
EDIT
After posting my answer below, I then came to learn that the Errors property no longer returns anything when I use this other overload of Execute. See #6 above in "What I've tried".
Calling...
ADOConnection1.Execute('select * from something', cmdText, []);
...does not return anything in ADOConnection1.Errors, whereas...
var
R: Integer;
begin
ADOConnection1.Execute('select * from something', R);
...does return messages in ADOConnection1.Errors, which is what I need, but yet, doesn't return any recordsets.
EDIT: Not the right solution
I discovered my solution finally after digging even deeper. The answer is to use the TADOConnection.Execute() overload which supports returning the recordset:
function TADOConnection.Execute(const CommandText: WideString;
const CommandType: TCommandType = cmdText;
const ExecuteOptions: TExecuteOptions = []): _Recordset;
Then, just assign the resulting _Recordset to the Recordset property of supported dataset components.
var
RS: _Recordset;
begin
RS := ADOConnection1.Execute('select * from something', cmdText, []);
if Assigned(RS) then begin
ADODataset1.Recordset:= RS;
...
end;
end;
The downside is that you cannot use the other overload which supports returning the RowsAffected. Also, nothing is returned in the Errors property of the TADOConnection when using this overload version of Execute, whereas the other one does. But that other doesn't return a recordset.

calling script_execute with a variable

I'm using GameMaker:Studio Pro and trying to execute a script stored in a variable as below:
script = close_dialog;
script_execute(script);
It doesn't work. It's obviously looking for a script named "script". Anyone know how I can accomplish this?
This question's quite old now, but in case anyone else ends up here via google (as I did), here's something I found that worked quite well and avoids the need for any extra data structures as reference:
scriptToCall = asset_get_index("scr_scriptName");
script_execute(scriptToCall);
The first line here creates the variable scriptToCall and then assigns to it Game Maker's internal ID number for the script you want to call. This allows script_execute to correctly find the script from the ID, which doesn't work if you try to pass it a string containing the script name.
I'm using this to define which scripts should be called in a particular situation from an included txt file, hence the need to convert a string into an addressable script ID!
You seem to have some confusion over how Game Maker works, so I will try to address this before I get around to the actual question.
GML is a rather simple-minded beast: it only knows two data types, strings and numbers. Everything else (objects, sprites, scripts, data structures, instances and so on) is represented with a number in your GML code.
For example, you might have an object called "Player" which has all kinds of fancy events, but to the code Player is just a constant number which you can (e.g.) print out with show_message(string(Player));
Now, the function script_execute(script) takes as argument the ID of the script that should be executed. That ID is just a normal number. script_execute will find the script with that ID in some internal table and then run the script.
In other words, instead of calling script_execute(close_dialog) you could just as well call script_execute(14) if you happened to know that the ID of close_dialog is 14 (although that is bad practice, since it make the code difficult to understand and brittle against ID changes).
Now it should be obvious that assigning the numeric value of close_dialog to a variable first and then calling script_execute on that variable is perfectly OK. In the end, script_execute only cares about the number that is passed, not about the name of the variable that this number comes from.
If you are thinking ahead a bit, you might wonder whether you need script_execute at all then, or if you could instead just do this:
script = close_dialog;
script();
In my opinion, it would be perfectly fine to allow this in the language, but it does not work - the function call operator actually does care about the name of the thing you try to call.
Now with that background out of the way, on to your actual question. If close_dialog is actually a script, your suggested code will work fine. If it is an extension function (or a built-in function -- I don't own Studio so what do I know) then it does not actually have an ID, and you can't call it with script_execute. In fact, you can't even assign close_dialog to a variable then, because it does not have any value in GML -- all you can do with it is call it. To work around this though, you could create a script (say, close_dialog_script) which only calls close_dialog, and which you can then use just as above.
Edit: Since it does not seem to work anyway, check whether you have a different resource by the name of close_dialog (perhaps a button sprite). This kind of conflict could mean that close_dialog gives you the ID of the sprite, not of the script, while calling the script directly would still work.
After much discussion on the forums, I ended up going with this method.
I wrote a script called script_id()
var sid;
sid = 6; //6 = scriptnotfound script :)
switch (argument0) {
case "load_room":
sid = 0;
break;
case "show_dialog":
sid = 1;
break;
case "close_dialog":
sid = 3;
break;
case "scrExample":
sid = 4;
break;
}
return sid;
So now I can call script_execute(script_id("close_dialog"));
I hate it, but it's better than keeping a spreadsheet... in my opinion.
There's also another way, with execute_string();
Should look like this:
execute_string(string(scriptName) + "();");

How to quote values for LuaSQL?

LuaSQL, which seems to be the canonical library for most SQL database systems in Lua, doesn't seem to have any facilities for quoting/escaping values in queries. I'm writing an application that uses SQLite as a backend, and I'd love to use an interface like the one specified by Python's DB-API:
c.execute('select * from stocks where symbol=?', t)
but I'd even settle for something even dumber, like:
conn:execute("select * from stocks where symbol=" .. luasql.sqlite.quote(t))
Are there any other Lua libraries that support quoting for SQLite? (LuaSQLite3 doesn't seem to.) Or am I missing something about LuaSQL? I'm worried about rolling my own solution (with regexes or something) and getting it wrong. Should I just write a wrapper for sqlite3_snprintf?
I haven't looked at LuaSQL in a while but last time I checked it didn't support it. I use Lua-Sqlite3.
require("sqlite3")
db = sqlite3.open_memory()
db:exec[[ CREATE TABLE tbl( first_name TEXT, last_name TEXT ); ]]
stmt = db:prepare[[ INSERT INTO tbl(first_name, last_name) VALUES(:first_name, :last_name) ]]
stmt:bind({first_name="hawkeye", last_name="pierce"}):exec()
stmt:bind({first_name="henry", last_name="blake"}):exec()
for r in db:rows("SELECT * FROM tbl") do
print(r.first_name,r.last_name)
end
LuaSQLite3, as well as any other low-level binding to SQLite, offers prepared statements with variable parameters; these use methods to bind values to the statement parameters. Since SQLite does not interpret the bound values, there is simply no possibility of SQL injection. This is by far the safest (and best-performing) approach.
uroc shows an example of using the bind methods with prepared statements.
By the way, in LuaSQL there is an undocumented escape function for the sqlite3 driver, conn:escape, where conn is a connection variable.
For example with the code
print ("con:escape works. test'test = "..con:escape("test'test"))
the result is:
con:escape works. test'test = test''test
I actually tried that to see what it'd do. Apparently there is such a function for their postgres driver too. I found this by looking at the tests they had.
Hope this helps.