I've got a PL/pgSQL function and I'm connecting to Postgres 12.x using a Scala library called doobie, which uses JDBC underneath. I'd like to understand whether the whole call to the function will be treated as a single transaction. I'm using the default autocommit setting.
The call to the function is simply:
select * from next_work_item();
All functions in PostgreSQL always run inside a single transaction, no matter what.
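A minimal sketch of that behaviour (the function body and the work_log table below are made up for illustration; only the call matches the question): if anything inside the function raises an error, every change the function made is rolled back together with it, because the whole call runs in one transaction.

-- Hypothetical table and function body, for illustration only
CREATE TABLE work_log (note text);

CREATE OR REPLACE FUNCTION next_work_item() RETURNS text AS $$
BEGIN
    INSERT INTO work_log VALUES ('claimed an item');
    RAISE EXCEPTION 'simulated failure';  -- rolls back the INSERT above as well
    RETURN 'never reached';
END;
$$ LANGUAGE plpgsql;

SELECT * FROM next_work_item();  -- the statement fails and work_log stays empty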
From a great reply to a related question:
in PostgreSQL, CREATE FUNCTION is indeed a "SQL statement", but it is merely a
"wrapper" to specify a block of code that is executed by something
different than the SQL query "engine". Postgres (unlike other DBMS)
supports multiple "runtime engines" that can execute the block of code
that was passed to the "CREATE FUNCTION" statement - one artifact of
that is that the code is actually a string so CREATE FUNCTION only
sees a string, nothing else.
What are the consequences of "the code is actually a string so CREATE FUNCTION only sees a string, nothing else"?
Is that considered dynamic SQL? Does it prevent or introduce SQL injection risk, compared to dynamic SQL?
How is that different from other RDBMS (if any?) where "the code is not a string"?
Thanks.
PostgreSQL is highly extensible, and you can for example define your own procedural language to write functions in.
PostgreSQL knows nothing about the language except that it has to call a certain language handler to execute the function.
The way that was chosen to implement this is to simply pass the code as a string.
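To make "the body is a string" concrete, here is a minimal sketch (the function itself is made up): everything between the dollar quotes is just a string literal that CREATE FUNCTION stores and later hands to the plpgsql language handler.

CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
BEGIN
    RETURN i + 1;  -- parsed by the plpgsql handler, not by CREATE FUNCTION itself
END;
$$ LANGUAGE plpgsql;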
This is just an implementation detail and does not make PostgreSQL functions any more or less vulnerable to SQL injection than other RDBMS.
There are several levels on which you have to defend yourself against injection:
The function arguments: Here you should choose non-string data types whenever possible.
The SQL statements within the function: Here you should avoid dynamic SQL whenever possible, and if you have to use dynamic SQL, you should insert variables using the %L format specifier of the format function (see the sketch after this list).
Again, this is the same if function bodies are specified as strings or not.
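As a sketch of that second point (the customers table and the function name are made up), format with %L quotes the value as a literal, so the dynamically built statement is safe to execute:

-- Hypothetical example of safely building dynamic SQL inside a function
CREATE OR REPLACE FUNCTION count_by_name(p_name text) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    EXECUTE format('SELECT count(*) FROM customers WHERE name = %L', p_name)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;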
All 3GL+ code is basically a string. The "parameter" passed to CREATE FUNCTION is code (to be executed outside the core SQL engine), which is a string (that's not SQL).
Other RDBMSs only support SQL as the function/procedure body.
I have seen many functions, and every function I see has parentheses () at its end, like:
SELECT SCOPE_IDENTITY();
SELECT IDENT_CURRENT('TableName');
But why do some functions not use parentheses, like:
SELECT @@IDENTITY;
@@xxxxx are system functions without parameters, which should be treated as read-only variables.
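A small T-SQL illustration of the two syntaxes side by side (the temp table is made up):

CREATE TABLE #demo (id int IDENTITY(1, 1), val varchar(10));
INSERT INTO #demo (val) VALUES ('a');

SELECT SCOPE_IDENTITY();  -- function syntax: parentheses required
SELECT @@IDENTITY;        -- variable-like syntax: no parentheses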
I believe that these used to be referred to as "server variables" and so had a "variable-like" syntax rather than a function syntax. They have since been re-defined as functions, but maintain the older syntax for backwards-compatibility.
Unfortunately, I cannot find any online resources presently to back-up this claim, and I think the "server variables" definition had been retired by the 2000 release of the product.
If we take the example of @@ROWCOUNT: in SQL Server 2000 BOL, it's described as:
Returns the number of rows affected by the last statement.
...
This variable is set...
(My emphasis).
Compare that with the current documentation, which refers to it having its value set but now avoids referring to it as a variable (or a function). And, of course, its modern, enhanced sibling ROWCOUNT_BIG, which doesn't have to support backwards compatibility, has normal function syntax and is explicitly referred to as a function.
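To see the contrast in syntax (the UPDATE target is hypothetical; both expressions below reflect the row count of that UPDATE):

UPDATE dbo.SomeTable SET SomeCol = SomeCol;  -- hypothetical statement that affects some rows

SELECT @@ROWCOUNT AS rows_int,        -- variable-style syntax, returns int
       ROWCOUNT_BIG() AS rows_bigint; -- normal function syntax, returns bigint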
I do not believe that Microsoft have introduced any new functions using the @@ variable syntax since before SQL Server 2000 was released.
At my work we are limited in which functions are available in our HiveQL environment. Is there a statement that can be run that will list all of the available functions? For example:
SELECT * FROM all_available_functions;
You can use the SHOW FUNCTIONS command. It will list all Hive functions and operators.
hive> SHOW FUNCTIONS;
hive> DESCRIBE FUNCTION <function_name>;
hive> DESCRIBE FUNCTION EXTENDED <function_name>;
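For example, to check whether a particular built-in is available and how it behaves (using upper here just as a sample function name):

hive> DESCRIBE FUNCTION upper;
hive> DESCRIBE FUNCTION EXTENDED upper;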
I am trying to configure MicroStrategy to work with MongoDB. The MicroStrategy-advised way is to use the Simba ODBC driver. The simple connection works fine. The problems start when I want to use functions, e.g. getting only the hour out of a timestamp.
The other approach I tried is to use Apache drill and I face exactly the same problem.
Select code, name from offer
Code and name are attributes of some documents in collection called offer. This works fine.
Select date(interactionDateTime) from interactionrecord
This fails. I tried different syntaxes: date_part from Postgres, to_date from Oracle, another one from MySQL, EXTRACT, etc.
You should be able to use the scalar functions listed here: https://msdn.microsoft.com/en-us/library/ms714639(v=vs.85).aspx
To extract the hour out of a time, use the HOUR() scalar function.
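Putting that together with the column and collection from the question, a sketch of what the query could look like; whether the driver needs the ODBC {fn ...} escape or accepts the bare call may depend on the driver:

SELECT {fn HOUR(interactionDateTime)} FROM interactionrecord
-- or, if the driver resolves scalar functions without the escape:
SELECT HOUR(interactionDateTime) FROM interactionrecord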
I am wondering if anyone can explain why I get different results for the same query string when using the ExecuteSQL function in FM versus querying the database through a database browser (I'm using DBVisualizer).
Specifically, if I run
SELECT COUNT(DISTINCT IMV_ItemID) FROM IMV
in DBVis, I get 2802. In FileMaker, if I evaluate the expression
ExecuteSQL ( "SELECT COUNT(DISTINCT IMV_ItemID) FROM IMV"; ""; "")
then I get 2898. This makes me distrust the ExecuteSQL function. Inside of FM, the IMV table is an ODBC shadow, connected to the central MSSQL database. In DBVis, the application connects via JDBC. However, I don't think that should make any difference.
Any ideas why I get a different count for each method?
Actually, it turns out that when FM executes the SQL, it factors in whitespace, whereas DBVisualizer (not sure about other database browser apps, but I would assume it's the same) does not. Also, since the TRIM() function isn't supported by MSSQL (from what I've seen, at least), it is necessary to make the query inside of the ExecuteSQL statement something like:
SELECT COUNT(DISTINCT(LTRIM(RTRIM(IMV_ItemID)))) FROM IMV
Weird, but it works!
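So inside FileMaker the full call would look something like this (same table and field as above, simply combining the trimmed query with the ExecuteSQL wrapper from the question):

ExecuteSQL ( "SELECT COUNT(DISTINCT(LTRIM(RTRIM(IMV_ItemID)))) FROM IMV"; ""; "" )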
FM keeps a cache of the shadow table's records (for internal field-id-mapping). I'm not sure if the ExecuteSQL() function causes a re-creation of the cache. In other words: maybe the ESS shadow table is out of sync. Try to delete the cache by closing and restarting the FM client or perform a native find first.
You can also try a re-connect to the database server via the Open File script step.
HTH