Calling a stored function (that returns an array of a user-defined type) in Oracle across a database link

Normally, I call my function like so:
SELECT *
FROM TABLE(
package_name.function(parameters)
)
I'm trying to call this function across a database link. My intuition is that the following is the correct syntax, but I haven't gotten it to work:
SELECT *
FROM TABLE(
package_name.function@DBLINK(parameters)
)
> ORA-00904: "PACKAGE_NAME"."FUNCTION": invalid identifier
I've tried moving the database link around to no effect: after the parameter list, after the last parenthesis, after the package name... I've also tried all of the above permutations with the schema name before the package name. I'm running out of ideas.
This is Oracle 10g. I suspect the issue may be that the return type of the function is not defined in the schema from which I'm calling it, but I feel like I should be getting a different error if that were the case.
Thanks for your help!

What you're trying is the correct syntax as far as I know, but in any case it would not work due to the return type being user-defined, as you suspect.
Here's an example with a built-in pipelined function. Calling it locally works, of course:
SELECT * FROM TABLE(dbms_xplan.display_cursor('a',1,'ALL'));
Returns:
SQL_ID: a, child number: 1 cannot be found
Calling it over a database link:
SELECT * FROM TABLE(dbms_xplan.display_cursor@core('a',1,'ALL'));
fails with this error:
ORA-30626: function/procedure parameters of remote object types are not supported
Possibly you are getting the ORA-904 because the link goes to a specific schema that does not have access to the package. But in any case, this won't work, even if you define an identical type with the same name in your local schema, because they're still not the same type from Oracle's point of view.
You can of course query a view remotely, so if there is a well-defined set of possible parameters, you could create one view for each parameter combination and then query that, e.g.:
CREATE VIEW display_cursor_a_1_all AS
SELECT * FROM TABLE(dbms_xplan.display_cursor('a',1,'ALL'))
;
If the range of possible parameter values is too large, you could create a procedure that creates the needed view dynamically given any set of parameters. Then you have a two-step process every time you want to execute the query:
EXECUTE package.create_view@remote(parameters)
SELECT * FROM created_view@remote;
You then have to think about whether multiple sessions might call this in parallel and, if so, how to prevent them from stepping on each other.
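A minimal sketch of what such a procedure could look like on the remote database, reusing the dbms_xplan example above (the procedure name, view name, and link name are made up for illustration, not taken from the original setup):
CREATE OR REPLACE PROCEDURE create_display_cursor_view (
    p_sql_id   IN VARCHAR2,
    p_child_no IN NUMBER
) AS
BEGIN
    -- Bake the parameters into a view so a remote caller can simply
    -- SELECT from it over the database link. In real code, validate or
    -- whitelist the inputs, since they are concatenated into DDL, and
    -- note that the procedure owner needs the CREATE VIEW privilege
    -- granted directly (not via a role) for the dynamic DDL to succeed.
    EXECUTE IMMEDIATE
        'CREATE OR REPLACE VIEW display_cursor_v AS ' ||
        'SELECT * FROM TABLE(dbms_xplan.display_cursor(''' ||
        p_sql_id || ''', ' || p_child_no || ', ''ALL''))';
END;
/
The two-step call from the local side would then look something like:
EXECUTE create_display_cursor_view@remote('a', 1)
SELECT * FROM display_cursor_v@remote;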

Related

TIMESTAMP_FORMAT not working with OFFSET in DB2

I'm trying to do pagination in DB2. I would prefer not to do it with a subquery, but OFFSET is not working with TIMESTAMP_FORMAT.
I've found the question "Use of function TIMESTAMP_FORMAT in QSYS2 not valid. Data mapping error on member", but it seems the problem there is with the content of a column, which is not my case: my values are fine and TIMESTAMP_FORMAT works without OFFSET.
I haven't looked for another way that avoids TIMESTAMP_FORMAT, as I need to add pagination to queries that were written by the client, not by me.
The query looks like this:
SELECT DATE(TIMESTAMP_FORMAT(CHAR("tablename"."date"),'YYMMDD'))
FROM tableName
OFFSET 10 ROWS
I get
"[SQL0583] Use of function TIMESTAMP_FORMAT in QSYS2 not valid."
I'm not sure how OFFSET can relate to TIMESTAMP_FORMAT, but when I replace the select list with select * it works fine.
I wonder why there is a conflict between OFFSET and TIMESTAMP_FORMAT, and whether there is a way to bypass this without a subquery.
From Listing of SQL Messages:
SQL0583
Function &1 in &2 cannot be invoked where specified because it is defined to be not deterministic or contains an external action. Functions that are not deterministic cannot be specified in a GROUP BY clause or in a JOIN clause, or in the default clause for a global variable. Functions that are not deterministic or contain an external action cannot be specified in a PARTITION BY clause or an ORDER BY clause for an OLAP function and cannot be specified in the select list of a query that contains an OFFSET clause. The RAISE_ERROR function cannot be specified in a GROUP BY or HAVING clause.
I don't know how to check these properties for the QSYS2.TIMESTAMP_FORMAT function (its definition does not appear in the QSYS2.SYSROUTINES table), but this looks like an improper definition of the function: there is no reason to create it as not deterministic or with an external action.
You can "deceive" DB2 like this:
CREATE FUNCTION MYSCHEMA.TIMESTAMP_FORMAT(str VARCHAR(4000), fmt VARCHAR(128))
RETURNS TIMESTAMP
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN QSYS2.TIMESTAMP_FORMAT(str, fmt);
SELECT
DATE(MYSCHEMA.TIMESTAMP_FORMAT(CHAR(tablename.date), 'YYMMDD'))
--DATE(QSYS2.TIMESTAMP_FORMAT(CHAR(tablename.date), 'YYMMDD'))
FROM table(values '190412') tableName(date)
OFFSET 10 ROWS;
And use this function instead. It works on my 7.3 at least.
It's a harmless deception, and you may ask IBM support to clarify such a "feature" of QSYS2.TIMESTAMP_FORMAT...
I suspect your problem is bad data...
The default for the IBM interactive tools, STRSQL and ACS Run SQL Scripts, is OPTIMIZE(*FIRSTIO), meaning get the first few rows back as quickly as possible.
With the OFFSET 10 clause you're probably accessing rows up front that you didn't touch before.
Try the following
create table mytest as (
SELECT DATE(TIMESTAMP_FORMAT(CHAR("tablename"."date"),'YYMMDD')) as mydate
FROM tableName
) with data
If that doesn't error, then yes you've found a bug, open a PMR.
Otherwise, you can see how far along the DB got by looking at the rows in the new table and track down the record with bad data.

Call a SQL function in Nifi using ExecuteSQL or another processor

I am currently using a function in SQL Server to get the max value of a certain column. I need this value to generate a specific number of dummy entries for flowfiles that are created later on.
Is there a way of calling this function via a NiFi processor?
With ExecuteSQL I always get errors like "unable to execute SQL select query" or "the column 'ab' was not found" when using select ab.functionname() (ab is the login name of the DB).
In SQL Server I can just use select ab.functionname() and get the desired results.
If there is no way of calling this function, is there another way to create one dummy entry per flowfile, to reserve those ids in the DB so that no one else can insert or use them while the flowfiles are being processed (auto-increment is not an option here)?
I tried using $flowfile.count and the Counter processor, but this did not solve the problem.
It should look like INSERT INTO table (id, nr) VALUES (max(id)+1, anynumber) for every flowfile; unfortunately ExecuteSQL is not able to do this.
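Just as an illustration of that intent (table and column names are placeholders, and this is not something confirmed to work in the poster's setup), the reservation could in principle be expressed as a single statement that computes the next id inside the INSERT itself:
-- Hypothetical sketch: derive the next id and insert in one statement.
-- Without a suitable isolation level or table lock, two concurrent
-- sessions could still compute the same id.
INSERT INTO mytable (id, nr)
SELECT COALESCE(MAX(id), 0) + 1, 123
FROM mytable;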
I think this conversation can help you:
https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html
Gist:
You can use ExecuteScript or ExecuteProcess to call an appropriate script. For example, for ExecuteProcess just call the sqlplus command. Choose command type "sqlplus". In the command arguments set something like: user_id/password@dbname @"script_path/someScript.sql". In someScript.sql you put something like:
execute spname(param)
You can also write your own processor :) Of course that's more difficult and often unnecessary.

What does "SELECT INTO" do?

I'm reading sql code which has a line that looks like this:
SELECT INTO _user tag FROM login.user WHERE id = util.uuid_to_int(_user_id)::oid;
What exactly does this do? The usual way to use SELECT INTO requires specifying the columns to select after the SELECT token, e.g.
SELECT * INTO _my_new_table WHERE ...;
The database is postgresql.
This line must appear inside a PL/pgSQL function. In that context, the value from column tag is assigned to the variable _user.
According to the documentation:
Tip: Note that this interpretation of SELECT with INTO is quite different from PostgreSQL's regular SELECT INTO command, wherein the INTO target is a newly created table.
and
The INTO clause can appear almost anywhere in the SQL command. Customarily it is written either just before or just after the list of select_expressions in a SELECT command, or at the end of the command for other command types. It is recommended that you follow this convention in case the PL/pgSQL parser becomes stricter in future versions.
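For illustration, a minimal sketch of the kind of function that line could come from (the function skeleton and parameter type are assumed; only the SELECT itself is taken from the question):
CREATE OR REPLACE FUNCTION lookup_user_tag(_user_id text) RETURNS text AS $$
DECLARE
    _user text;  -- a plain variable, not a table
BEGIN
    -- INTO here assigns the selected column to the variable _user;
    -- no table is created, unlike the regular SQL "SELECT INTO".
    SELECT INTO _user tag FROM login.user WHERE id = util.uuid_to_int(_user_id)::oid;
    RETURN _user;
END;
$$ LANGUAGE plpgsql;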

Check whether field exists in SQLite without fetching them all

I am writing a database abstraction layer that also abstracts some of the different query types. One of them is called "field_exists" - its purpose should be pretty self-explanatory.
And I want to implement that for SQLite.
The problem I am having is that I need to use one query that either returns a row confirming that the field exists or none if it doesn't. Thus, I cannot use the PRAGMA approach.
So, what query can I use to check whether a field exists in SQLite, that fulfills the above criteria?
EDIT: I should add that the query needs to be able to run in PHP code (using PDO).
Also, the query should look something like this (which only works with MySQL):
SHOW COLUMNS FROM table LIKE 'field'
Trying to select a field that doesn't exist will raise an exception; you can catch it and return nothing.
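For example (table and column names here are placeholders), the probe can be as cheap as:
-- If the column does not exist, SQLite reports "no such column: field",
-- which PDO surfaces as an error/exception you can catch; LIMIT 0 avoids
-- fetching any rows. Leave the column name unquoted: SQLite may treat an
-- unknown double-quoted identifier as a string literal, which would make
-- the probe succeed silently.
SELECT field FROM mytable LIMIT 0;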
Use the .schema TABLENAME command. It will tell you the command that was issued to create the table. For more info, check out the SQLite command shell documentation.
If you don't have access to the sqlite command line, you can always query the sqlite_master table. Let's say you want to know the command used to create the table MyTable. You'd issue this:
select sql from sqlite_master where name='MyTable';
This then gives you the sql command that was used to create the table. Then just grep through that output and see if the column you're looking for is in the command used to create the table.
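If it has to be a single query, one (admittedly crude) variation is to push the check into the WHERE clause; this is a plain substring match against the CREATE TABLE text, so it can produce false positives (the column name 'field' is a placeholder):
select sql from sqlite_master where name='MyTable' and sql like '%field%';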
UPDATE 2:
Actually, better than the SQL I posted above, you can use this:
PRAGMA table_info(table_name)
This will show you all the columns in a given table along with their types and other info.
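If the SQLite version in use is recent enough (3.16.0 or later, which is an assumption about the environment here), the same pragma is also available as a table-valued function. That gives exactly the "one row if the field exists, none if it doesn't" behaviour asked for, and it can be run through PDO like any other query (table and column names are placeholders):
SELECT name
FROM pragma_table_info('mytable')
WHERE name = 'field';
-- returns one row if the column exists, no rows otherwise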

SQL Server reports 'Invalid column name', but the column is present and the query works through management studio

I've hit a bit of an impasse. I have a query that is generated by some C# code. The query works fine in Microsoft SQL Server Management Studio when run against the same database.
However when my code tries to run the same query I get the same error about an invalid column and an exception is thrown. All queries that reference this column are failing.
The column in question was recently added to the database. It is a date column called Incident_Begin_Time_ts.
An example that fails is:
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';
Other queries like Select MAX(Incident_Begin_Time_ts) also fail when run in code because it thinks the column is missing.
Any ideas?
Just press Ctrl + Shift + R and see...
In SQL Server Management Studio, Ctrl+Shift+R refreshes the local cache.
I suspect that you have two tables with the same name. One is owned by the schema 'dbo' (dbo.PerfDiag), and the other is owned by the default schema of the account used to connect to SQL Server (something like userid.PerfDiag).
When you have an unqualified reference to a schema object (such as a table) — one not qualified by schema name — the object reference must be resolved. Name resolution occurs by searching in the following sequence for an object of the appropriate type (table) with the specified name. The name resolves to the first match:
Under the default schema of the user.
Under the schema 'dbo'.
The unqualified reference is bound to the first match in the above sequence.
As a general recommended practice, one should always qualify references to schema objects, for performance reasons:
An unqualified reference may invalidate a cached execution plan for the stored procedure or query, since the schema to which the reference was bound may change depending on the credentials executing the stored procedure or query. This results in recompilation of the query/stored procedure, a performance hit. Recompilations cause compile locks to be taken out, blocking others from accessing the needed resource(s).
Name resolution slows down query execution as two probes must be made to resolve to the likely version of the object (that owned by 'dbo'). This is the usual case. The only time a single probe will resolve the name is if the current user owns an object of the specified name and type.
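A quick way to check for this (PerfDiag is the table name from the question; the queries themselves are just a sketch) is to see which schemas actually contain a table by that name, and then qualify the reference explicitly:
-- Which schemas own a table called PerfDiag?
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.name = 'PerfDiag';
-- Qualify the schema explicitly instead of relying on name resolution
SELECT *
FROM dbo.PerfDiag
WHERE Incident_Begin_Time_ts > '2010-01-01 00:00:00';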
[Edited to further note]
The other possibilities are (in no particular order):
You aren't connected to the database you think you are.
You aren't connected to the SQL Server instance you think you are.
Double check your connect strings and ensure that they explicitly specify the SQL Server instance name and the database name.
In my case I restarted Microsoft SQL Server Management Studio and that worked well for me.
If you are running this inside a transaction and an earlier SQL statement drops or alters the table, you can also get this message.
I eventually shut down and restarted Microsoft SQL Server Management Studio, and that fixed it for me. But at other times, just opening a new query window was enough.
If you are using variables with the same name as your column, it could be that you forgot the '@' variable marker. In an INSERT statement it will be detected as a column.
Just had the exact same problem. I renamed some aliased columns in a temporary table which is further used by another part of the same code. For some reason, this was not captured by SQL Server Management Studio and it complained about invalid column names.
What I did was simply create a new query window, copy-paste the SQL code from the old query into it, and run it again. This seemed to refresh the environment correctly.
In my case I was trying to get the value from the wrong ResultSet when querying multiple SQL statements.
In my case it seems the problem was a weird caching issue, and the solutions above didn't work.
If your code was working fine, you added a column to one of your tables, and you now get the 'invalid column name' error, and the solutions above don't work, try this: first run only the section of code that creates the modified table, and then run the whole code.
Including this answer because this was the top result for "invalid column name sql" on Google and I didn't see it mentioned here. In my case, I was getting Invalid Column Name, Id1 because I had used the wrong id in my .HasForeignKey statement in my Entity Framework C# code. Once I changed it to match the .HasOne() object's id, the error was gone.
I've gotten this error when running a scalar function using a table value, but the Select statement in my scalar function RETURN clause was missing the "FROM table" portion. :facepalms:
Also happens when you forget to change the connection string and are querying a table that has no idea about the changes you're making locally.
I had this problem with a View, but the exact same SQL code worked perfectly as a query. In fact SSMS actually threw up a couple of other problems with the View, that it did not have with the query. I tried refreshing, closing the connection to the server and going back in, and renaming columns - nothing worked. Instead I created the query as a stored procedure, and connected Excel to that rather than the View, and this solved the problem.