How can I programmatically run arbitrary SQL statements against my Hibernate/HSQL database? - sql

I'm looking for a way to programmatically execute arbitrary SQL commands against my DB.
(Hibernate, JPA, HSQL)
Query.createNativeQuery() doesn't work for things like CREATE TABLE.
After doing LOTS of searching, I thought I could use Hibernate's Session.doWork().
Using the deprecated Configuration.buildSessionFactory() seems to show that doWork() won't work.
I get "use lacks privilege or object not found" for all the CREATE TABLE statements.
So, what other technique is there for executing arbitrary SQL statements?
There were some notes on using the underlying JDBC Statement, but I haven't figured out how to get a JDBC Connection object from Hibernate to try that.
Note that the hibernate.hbm2ddl.auto=create setting will NOT work for me, as I have ARRAY[] columns which it chokes on.

I don't think there is any problem executing a create table statement with a Hibernate native query. Just make sure to use Query.executeUpdate(), and not Query.list() or Query.uniqueResult().
If it doesn't work, please tell us what happens when you execute it, and attach the full stack trace of the exception and the SQL query you're executing.
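For example, a minimal sketch with a plain JPA EntityManager (the entity manager variable "em" and the table definition are just placeholders for your setup):
em.getTransaction().begin();
// createNativeQuery() is on EntityManager; use executeUpdate() for DDL, not getResultList()
em.createNativeQuery("CREATE TABLE test_table (id INTEGER PRIMARY KEY, name VARCHAR(100))")
  .executeUpdate();
em.getTransaction().commit();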

"use lacks privilege or object not found" in HSQL may mean anything, for example existence of a table with the same name. Error messages in HSQL are completely misleading. Try listing your tables using DatabaseMetadata - you have probably already created the table.

Related

FM ExecuteSQL returns different results than direct database query

I am wondering if anyone can explain why I get different results for the same query string when using the ExecuteSQL function in FM versus querying the database through a database browser (I'm using DBVisualizer).
Specifically, if I run
SELECT COUNT(DISTINCT IMV_ItemID) FROM IMV
in DBVis, I get 2802. In FileMaker, if I evaluate the expression
ExecuteSQL ( "SELECT COUNT(DISTINCT IMV_ItemID) FROM IMV"; ""; "")
then I get 2898. This makes me distrust the ExecuteSQL function. Inside of FM, the IMV table is an ODBC shadow, connected to the central MSSQL database. In DBVis, the application connects via JDBC. However, I don't think that should make any difference.
Any ideas why I get a different count for each method?
Actually, it turns out that when FM executes the SQL, it factors in whitespace, whereas DBVisualizer (not sure about other database browser apps, but I would assume it's the same) does not. Also, since the TRIM() function isn't supported by MSSQL (from what I've seen, at least), it is necessary to make the query inside of the ExecuteSQL statement something like:
SELECT COUNT(DISTINCT(LTRIM(RTRIM(IMV_ItemID)))) FROM IMV
Weird, but it works!
FM keeps a cache of the shadow table's records (for internal field-id-mapping). I'm not sure if the ExecuteSQL() function causes a re-creation of the cache. In other words: maybe the ESS shadow table is out of sync. Try to delete the cache by closing and restarting the FM client or perform a native find first.
You can also try reconnecting to the database server via the Open File script step.
HTH

SQL Parameters - where does expansion happen

I'm getting a little confused about using parameters with SQL queries, and seeing some things that I can't immediately explain, so I'm just after some background info at this point.
First, is there a standard format for parameter names in queries, or is this database/middleware dependent? I've seen both this:
DELETE * FROM #tablename
and...
DELETE * FROM :tablename
Second - where (typically) does the parameter replacement happen? Are parameters replaced/expanded before the query is sent to the database, or does the database receive params and query separately, and perform the expansion itself?
Just as background, I'm using the DevArt UniDAC toolkit from a C++Builder app to connect via ODBC to an Excel spreadsheet. I know this is almost pessimal in a few ways... (I'm trying to understand why a particular command works only when it doesn't use parameters)
With data access libraries such as UniDAC or FireDAC, you can use macros. They allow you to use special markers (called macros) in the places of a SQL command where parameters are disallowed. I don't know the UniDAC API, but will provide a sample for FireDAC:
// the &tablename macro is expanded client-side, before the SQL text is sent to the driver
ADQuery1.SQL.Text := 'DELETE * FROM &tablename';
ADQuery1.MacroByName('tablename').AsRaw := 'MyTab';
ADQuery1.ExecSQL;
Second - where (typically) does the parameter replacement happen?
It doesn't. That's the whole point. Data elements in your query stay data items. Code elements stay code elements. The two never intersect, and thus there is never an opportunity for malicious data to be treated as code.
connect via ODBC to an Excel spreadsheet... I'm trying to understand why a particular command works only when it doesn't use parameters
Excel isn't really a database engine, but even if it were, you still couldn't use a parameter for the name of a table.
SQL parameters are sent to the database. The database performs the expansion itself. That allows the database to set up a query plan that will work for different values of the parameters.
Microsoft always uses @parname for parameters. Oracle uses :parname. Other databases are different.
No database I know of allows you to specify the table name as a parameter. You have to expand that client side, like:
command.CommandText = string.Format("DELETE FROM {0}", tableName);
P.S. A * is not allowed after a DELETE. After all, you can only delete whole rows, not a set of columns.
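To illustrate the split in plain JDBC (the names here are only for the example, and "connection" is assumed to be an open java.sql.Connection): the table name has to be baked into the SQL text client-side, while the value travels separately as a bound parameter:
String tableName = "MyTab";  // expanded client-side - validate/whitelist it yourself
String sql = "DELETE FROM " + tableName + " WHERE id = ?";
try (java.sql.PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setInt(1, 42);        // the parameter value is sent to the database separately
    ps.executeUpdate();
}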

SQL Server reports 'Invalid column name', but the column is present and the query works through management studio

I've hit a bit of an impasse. I have a query that is generated by some C# code. The query works fine in Microsoft SQL Server Management Studio when run against the same database.
However when my code tries to run the same query I get the same error about an invalid column and an exception is thrown. All queries that reference this column are failing.
The column in question was recently added to the database. It is a date column called Incident_Begin_Time_ts.
An example that fails is:
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';
Other queries like Select MAX(Incident_Begin_Time_ts); also fail when run in code because it thinks the column is missing.
Any ideas?
Just press Ctrl + Shift + R and see...
In SQL Server Management Studio, Ctrl+Shift+R refreshes the local cache.
I suspect that you have two tables with the same name. One is owned by the schema 'dbo' (dbo.PerfDiag), and the other is owned by the default schema of the account used to connect to SQL Server (something like userid.PerfDiag).
When you have an unqualified reference to a schema object (such as a table) — one not qualified by schema name — the object reference must be resolved. Name resolution occurs by searching in the following sequence for an object of the appropriate type (table) with the specified name. The name resolves to the first match:
Under the default schema of the user.
Under the schema 'dbo'.
The unqualified reference is bound to the first match in the above sequence.
As a general recommended practice, one should always qualify references to schema objects, for performance reasons:
An unqualified reference may invalidate a cached execution plan for the stored procedure or query, since the schema to which the reference was bound may change depending on the credentials executing the stored procedure or query. This results in recompilation of the query/stored procedure, a performance hit. Recompilations cause compile locks to be taken out, blocking others from accessing the needed resource(s).
Name resolution slows down query execution as two probes must be made to resolve to the likely version of the object (that owned by 'dbo'). This is the usual case. The only time a single probe will resolve the name is if the current user owns an object of the specified name and type.
[Edited to further note]
The other possibilities are (in no particular order):
You aren't connected to the database you think you are.
You aren't connected to the SQL Server instance you think you are.
Double check your connect strings and ensure that they explicitly specify the SQL Server instance name and the database name.
In my case I restarted Microsoft SQL Server Management Studio and that worked for me.
If you are running this inside a transaction and a SQL statement before this drops/alters the table you can also get this message.
I eventually shut-down and restarted Microsoft SQL Server Management Studio; and that fixed it for me. But at other times, just starting a new query window was enough.
If you are using variables with the same name as your column, it could be that you forgot the '@' variable marker. In an INSERT statement it will be detected as a column.
Just had the exact same problem. I renamed some aliased columns in a temporary table which is further used by another part of the same code. For some reason, this was not captured by SQL Server Management Studio and it complained about invalid column names.
What I simply did is create a new query, copy paste the SQL code from the old query to this new query and run it again. This seemed to refresh the environment correctly.
In my case I was trying to get the value from the wrong ResultSet when querying multiple SQL statements.
In my case it seems the problem was a weird caching problem. The solutions above didn't work.
If your code was working fine and you added a column to one of your tables and it gives the 'invalid column name' error, and the solutions above don't work, try this: first run only the section of code that creates that modified table, and then run the whole code.
Including this answer because this was the top result for "invalid column name sql" on google and I didn't see this answer here. In my case, I was getting Invalid Column Name, Id1 because I had used the wrong id in my .HasForeignKey statement in my Entity Framework C# code. Once I changed it to match the .HasOne() object's id, the error was gone.
I've gotten this error when running a scalar function using a table value, but the Select statement in my scalar function RETURN clause was missing the "FROM table" portion. :facepalms:
Also happens when you forget to change the ConnectionString and query a table that has no idea about the changes you're making locally.
I had this problem with a View, but the exact same SQL code worked perfectly as a query. In fact SSMS actually threw up a couple of other problems with the View, that it did not have with the query. I tried refreshing, closing the connection to the server and going back in, and renaming columns - nothing worked. Instead I created the query as a stored procedure, and connected Excel to that rather than the View, and this solved the problem.

Getting the SQL from a Doctrine Migration

I have been researching a way to get the SQL statements that are built by a generated Migration file. These extend Doctrine_Migration_Base. Essentially I would like to save the SQL as change scripts.
The execution path leads me to Doctrine_Export which has methods that build the SQL statement and executes them. I have found no way of asking for just them. The export methods found in Doctrine_Export only operate on Doctrine_Record models and not Migration scripts.
From the command line './doctrine migrate version#' the path goes:
Doctrine_Cli::run(cmd)
Doctrine_Task_Migrate::setArguments(args)
Doctrine_Task_Migrate::execute()
Doctrine_Migration::migrate(to)
Doctrine_Migration_Process::Doctrine_Export::various create, drop, alter methods with SQL equivalents.
Has anyone tackled this before? I really would not like to change Doctrine base files. Any help is greatly appreciated.
Could you make a dev server, and do the migration on that, storing a SQL Trace as you go? You don't have to keep the results, but you would get a list of every command.
Taking into account Rob Farley's suggestion, I modified:
Doctrine_Core::migrate
Doctrine_Task_Migrate::execute
When the execute method is called, the optional argument 'dryRun' is checked. If true, a 'Doctrine_Connection_Profiler' instance is created. The 'dryRun' value is then passed on to the 'Doctrine_Core::migrate' method. A 'dryRun' value of true allows the changes to roll back when done executing the SQL statements. When the method returns, the profiler is parsed, and non-empty SQL statements not containing 'migration_version' are saved and displayed to the terminal.

SQL Command ISNULL for ODBC Connection

I'm connected to an OpenEdge DataServer via ODBC (not our product, we are just accessing their database, I hardly have any information and certainly no help from the other side).
Anyhow, I just need to execute a simple Select, add a couple of rows and I need the equivalent of an IsNull statement.
Basically I'd like to execute
SELECT ISNULL(NULL,'test')
This fails with a Syntax Error. I've looked around at something they misleadingly call "documentation", but there are only references to SP_SQL_ISNULL and I can't get that to work either. I'm comfortable with T-SQL, so any pointers in any direction are appreciated, even if it's just an RTFM with a link to TFM :)
Thanks
Thanks to Catalin and this question I got on the right track. I kept thinking I needed an OpenEdge-specific function, but actually I needed to use only ODBC SQL syntax.
To get what
ISNULL(col,4)
does you can use
COALESCE(col,4)
which "returns the data type of expression with the highest data type precedence. If all expressions are nonnullable, the result is typed as nonnullable."MSDN
Basically, it falls back to 4 if the column value is null.
I am not 100% sure, but I think the ODBC driver expects standard ODBC SQL, and not a DBMS-specific SQL statement like the one you provided.