How do I get the SQL Schema creation statements with Play! Framework?

This question shows how to get Play! to show SQL statements. I followed the accepted solution (jpa.debugSQL=true), but I still don't see the SQL statements that are used to create the tables themselves in the log.
How can I get those statements? (I'm currently using the in-memory database that comes with Play!, all default settings)
Note - if one of the SQL Schema statements goes wrong, it is displayed as an error in the log.

Check the value of this property in your application.conf:
application.log=INFO
If it is set to INFO or higher, it may be hiding the debug output.
If you are using a log4j.properties file you may want, as Zenklys says, to check the appenders set up in there.

You should use a log4j.properties file. If you define a logger at DEBUG level on the Hibernate package, you should be able to get the SQL statements.
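As a minimal sketch, a log4j.properties along those lines could look like this (the category names assume the Hibernate 3.x packages bundled with Play 1.x, and STDOUT is just an illustrative appender name):
log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%d{ABSOLUTE} %5p %c - %m%n
log4j.rootLogger=INFO, STDOUT
# SQL statements issued by Hibernate
log4j.logger.org.hibernate.SQL=DEBUG
# DDL emitted by the schema export tool (the CREATE TABLE statements)
log4j.logger.org.hibernate.tool.hbm2ddl=DEBUG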

Related

Mulesoft not able to pass dynamic SQL queries based on environments

Hello, for demonstration purposes I have trimmed down my actual SQL query.
I have a SQL query
SELECT *
FROM dbdev.training.courses
where dbdev is my DEV database name. When I migrate to the TEST environment, I want my query to change dynamically to
SELECT *
FROM dbtest.training.courses
I tried using input parameters like {env: p('db_name')} and using them in the query as
SELECT * FROM :env.training.courses
or
SELECT * FROM (:env).training.courses
but neither of them worked. I don't want to keep my SQL query in a properties file.
Can you please suggest a way to write my SQL query dynamically based on environment?
The only alternative I can think of is to deploy separate JARs for different environments with different code.
You can set the value of the property to a variable and then use the variable with string interpolation.
Warning: creating dynamic SQL queries using any kind of string manipulation may expose your application to SQL injection security vulnerabilities.
Example:
#['SELECT * FROM $(vars.database default "dbtest").training.courses']
Actually, you can do a completely dynamic or partially dynamic query using the MuleSoft DB connector.
Please see this repo:
https://github.com/TheComputerClassroom/dynamicSQLGETandPATCH
Also, I'm about to post an update that allows joins.
At a high level, this is a "Query Builder" where the code that builds the query is written in DataWeave 2. I'm working on another version that allows joins between entities, too.
If you have questions, feel free to reply.
One way to do it is:
Create a variable before the DB connector:
getTableName = ${env}.training.courses
Then write the SQL query as:
SELECT * FROM $(getTableName)
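A rough Mule 4 sketch of that idea (db_schema is an illustrative property key; note that a flow variable is referenced through vars in DataWeave):
Set Variable (before the Database connector):
getTableName = #[p('db_schema') ++ '.training.courses']
Query in the Database connector:
#["SELECT * FROM $(vars.getTableName)"]
As with the warning above, concatenating names into SQL is only reasonable because the value comes from your own configuration, not from user input.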

How to use this weird .sql file?

I have a very strange 'reload.sql' file that I need to use to build a database.
It references about 200 XXX.dat files with straight-up readable data (although useless without explanations regarding the meaning of the fields).
I have tried MS SQL Server, MySQL Workbench (on a server locally hosted with WAMP), and direct access through DBeaver and IBConsole, but I cannot manage to execute/build it.
It uses a weird syntax. There are elements like
begin
...
end
go
that hinted at T-SQL, but running sqlcmd on it gave me thousands upon thousands of errors about keywords.
Specifically, the very first batch of executable lines says
SET OPTION date_order = 'YMD'
go
SET OPTION PUBLIC.preserve_source_format = 'OFF'
go
SET TEMPORARY OPTION tsql_outer_joins = 'ON'
go
SET TEMPORARY OPTION st_geometry_describe_type = 'binary'
go
SET TEMPORARY OPTION st_geometry_on_invalid = 'Ignore'
go
SET TEMPORARY OPTION non_keywords = 'attach,compressed,detach,kerberos,nchar,nvarchar,refresh,varbit'
go
which on its own generates about 150 'Incorrect syntax near OPTION keyword' errors, and according to Google is part of a 'rexx' procedure, but 'date_order' should then be 'DATFMT', right?
Another lead is Sybase, but I cannot for the life of me get it to work (through my trials I did manage to build a .db file which, well, is useless to me since I can't open it either...).
I have tried accessing it through ODBC drivers as well, but none worked (the Paradox ODBC driver did not crash, but reported an error with a FROM clause, which is generated automatically...).
I need to know a way to build a database from this file, or to directly access the data it references, which I can't really post since it contains private medical data.
Also, what madman came up with this?
The very first Google result (for me anyway) for 'st-geometry-describe-option' shows that this is a SAP SQL Anywhere database, i.e. http://dcx.sybase.com/1200/en/dbadmin/st-geometry-describe-option.html
So I would suggest starting from the SQL Anywhere documentation; you will need to install the database software beforehand.
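If it is indeed a SQL Anywhere reload.sql (typically produced by dbunload), the usual pattern is roughly the following; treat it as a hedged sketch, since the exact utility names and flags depend on the SQL Anywhere version (these are from version 12, and DBA/sql are the historical default credentials):
dbinit mydb.db
dbisql -c "UID=DBA;PWD=sql;DBF=mydb.db" reload.sql
The reload.sql normally expects the .dat files to sit next to it, since it loads them with LOAD TABLE statements.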

SQL71501 - How to get rid of this error?

We're using two schemas in our project (dbo + kal).
When we try to create a view with the following SQL statement, Visual Studio shows an error in the error list.
CREATE VIEW [dbo].[RechenketteFuerAbkommenOderLieferantenView]
AS
SELECT
r.Id as RechenkettenId,
r.AbkommenId,
r.LieferantId,
rTerm.GueltigVon,
rTerm.GueltigBis,
rs.Bezeichnung,
rs.As400Name
FROM
[kal].[Rechenkette] r
JOIN
[kal].[RechenketteTerm] rTerm ON rTerm.RechenketteId = r.Id
JOIN
[kal].[Basisrechenkette] br ON rTerm.BasisrechenketteId = br.Id
JOIN
[kal].[Rechenkettenschema] rs ON rs.Id = br.Id
WHERE
r.RechenkettenTyp = 0
The error message looks like this:
SQL71501: Computed Column: [dbo].[RechenketteFuerAbkommenOderLieferantenView].[AbkommenId] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects:
[kal].[Basisrechenkette].[r]::[AbkommenId], [kal].[Rechenkette].[AbkommenId], [kal].[Rechenkette].[r]::[AbkommenId], [kal].[Rechenkettenschema].[r]::[AbkommenId] or [kal].[RechenketteTerm].[r]::[AbkommenId].
Publishing the view works just fine, but it's quite annoying to see the error message every time we build the project, and all the serious errors get lost in the shuffle of those SQL errors.
Do you have any idea what the problem might be?
I just found the solution. I can't read your SQL (which appears to be German) well enough to know whether you're referring to system views; if so, a database reference to master must be provided. Otherwise, adding any other required database references should solve the problem.
This is described here for system views: Resolve reference to object information schema tables
and for other database references.
Additional information is provided here: Resolving ambiguous references in SSDT project for SQL Server
For me, I was seeing SQL71501 on a user-defined table type. It turned out that the table type's .sql file in my solution wasn't set to build. As soon as I changed the build action from None to Build, the error disappeared.
I know this is an old question but it was the first one that popped up when searching for the error.
In my case the errors were preventing me from executing the SqlSchemaCompare in Visual Studio 2017. The error, however, was for a table/index of a table that was no longer part of the solution. A simple clean/rebuild did not help.
A reload of the visual studio solution did the trick.
We have a project that contains a view that references a table-valued function in another database. After adding the database reference that is required to resolve the fields used from the remote database, we were still getting this error.
I found that the table-valued function was defined using "SELECT * FROM ...", which was old code created by someone not familiar with good coding practices. I replaced the "*" with the enumerated fields needed and compiled that function, then re-created the dacpac for that database to capture the resulting schema, and incorporated the new dacpac as the database reference. Woo hoo! The ambiguous references went away!
It seems the SSDT engine cannot (or does not) always reach down into the bowels of the referenced dacpac to come back with all the fields. For sure, the projects I work on are normally quite large, so I think it makes sense to give the tools all the help you can when asking them to validate your code.
Although this is an old topic, it is highly ranked on search engines, so I will share the solution that worked for me.
I faced the same error code with a CREATE TYPE statement, which was in a script file in my Visual Studio 2017 SQL Server project, because I couldn't find how to add a user-defined type specifically from the interface.
The solution is that, in Visual Studio, there are many programmability file types, other than the ones you can see through a right-click > Add. Just select New Element and use the search field to find the element you are trying to create.
From the last paragraph of the blog post Resolving ambiguous references in SSDT project for SQL Server, which was linked in the answer https://stackoverflow.com/a/33225020/15405769:
In my case, when I double clicked the file and opened it I found that one of the references to ColumnX was not using the two-part name, and thus SSDT was unable to determine which table it belonged to, and furthermore whether the column existed in the table. Once I added the two-part name, bingo! I was down to no errors!
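To illustrate the two-part name fix with the tables from the question above (a simplified, hypothetical pair of queries):
-- ambiguous: SSDT cannot tell which joined table AbkommenId belongs to
SELECT AbkommenId
FROM [kal].[Rechenkette] r
JOIN [kal].[RechenketteTerm] rTerm ON rTerm.RechenketteId = r.Id
-- two-part name: the alias removes the ambiguity
SELECT r.AbkommenId
FROM [kal].[Rechenkette] r
JOIN [kal].[RechenketteTerm] rTerm ON rTerm.RechenketteId = r.Id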
In my case, I got this error when I was trying to export the data-tier application. The error was related to the link to a database user. To solve the problem, you need to log in to the server with read rights on system users.
In my case I just double-clicked the error and it took me to the exact spot in the procedure, and I noticed that a table column had been deleted or renamed but the stored procedure was still using the old column name.
If you build an SSDT project you can get an error which says:
“SQL71502: Function: [XXX].[XXX] has an unresolved reference to object [XXX].[XXX].”
If the code that is failing is trying to use something in the “sys” schema or the “INFORMATION_SCHEMA” schema then you need to add a database reference to the master dacpac:
Add a database reference to master:
Under the project, right-click References.
Select Add database reference….
Select System database.
Ensure master is selected.
Press OK.
Note that it might take a while for VS to update.
(Note this was copied verbatim from the stack overflow question with my screenshots added: https://stackoverflow.com/questions/18096029/unresolved-reference-to-obj… - I will explain more if you get past the tldr but it is quite exciting! )
NOT TLDR:
I like this question on stack overflow as it has a common issue that anyone who has a database project that they import into SSDT has faced. It might not affect everyone, but a high percentage of databases will have some piece of code that references something that doesn't exist.
The question has a few little gems in it that I would like to explore in a little more detail because I don't feel that a comment on stack overflow really does them justice.
If we look at the question it starts like this:
If you're doing this from within Visual Studio, make sure that the file is set to "Build" within the properties.
I've had this numerous times and it really gets me every time. SQL build is case-sensitive even though your collation isn't. Check that the casing agrees with the object and schema names that are referenced!

How can I programmatically run arbitrary SQL statements against my Hibernate/HSQL database?

I'm looking for a way to programmatically execute arbitrary SQL commands against my DB.
(Hibernate, JPA, HSQL)
Query.createNativeQuery() doesn't work for things like CREATE TABLE.
After doing LOTS of searching, I thought I could use Hibernate's Session.doWork().
Using the deprecated Configuration.buildSessionFactory() seems to show that doWork() won't work.
I get "use lacks privilege or object not found" for all the CREATE TABLE statements.
So, what other technique is there for executing arbitrary SQL statements?
There were some notes on using the underlying JDBC Statement, but I haven't figured out how to get a JDBC Connection object from Hibernate to try that.
Note that the hibernate.hbm2ddl.auto=create setting will NOT work for me, as I have ARRAY[] columns which it chokes on.
I don't think there is any problem executing a create table statement with a Hibernate native query. Just make sure to use Query.executeUpdate(), and not Query.list() or Query.uniqueResult().
If it doesn't work, please tell us what happens when you execute it, and include the full stack trace of the exception and the SQL query you're executing.
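For reference, a hedged sketch of both routes, assuming a plain JPA EntityManager backed by Hibernate (the table names are illustrative):
import java.sql.Statement;
import javax.persistence.EntityManager;
import org.hibernate.Session;

public class SchemaDdlExample {
    public static void createTables(EntityManager em) {
        // Native query route: executeUpdate() also works for DDL, but needs an active transaction
        em.getTransaction().begin();
        em.createNativeQuery("CREATE TABLE example_table (id INTEGER PRIMARY KEY)")
          .executeUpdate();
        em.getTransaction().commit();

        // doWork() route: unwrap the Hibernate Session to reach the raw JDBC Connection
        Session session = em.unwrap(Session.class);
        session.doWork(connection -> {
            try (Statement st = connection.createStatement()) {
                st.executeUpdate("CREATE TABLE example_table2 (id INTEGER PRIMARY KEY)");
            }
        });
    }
}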
"use lacks privilege or object not found" in HSQL may mean anything, for example existence of a table with the same name. Error messages in HSQL are completely misleading. Try listing your tables using DatabaseMetadata - you have probably already created the table.

Getting the SQL from a Doctrine Migration

I have been researching a way to get the SQL statements that are built by a generated Migration file. These extend Doctrine_Migration_Base. Essentially I would like to save the SQL as change scripts.
The execution path leads me to Doctrine_Export, which has methods that build the SQL statements and execute them. I have found no way of asking for just the SQL. The export methods found in Doctrine_Export only operate on Doctrine_Record models and not Migration scripts.
From the command line './doctrine migrate version#' the path goes:
Doctrine_Cli::run(cmd)
Doctrine_Task_Migrate::setArguments(args)
Doctrine_Task_Migrate::execute()
Doctrine_Migration::migrate(to)
Doctrine_Migration_Process::Doctrine_Export (various create, drop, and alter methods with SQL equivalents)
Has anyone tackled this before? I really would not like to change Doctrine base files. Any help is greatly appreciated.
Could you make a dev server and do the migration on that, storing a SQL trace as you go? You don't have to keep the results, but you would get a list of every command.
Taking into account Rob Farley's suggestion, I modified:
Doctrine_Core::migrate
Doctrine_Task_Migrate::execute
When the execute method is called, the optional argument 'dryRun' is checked. If it is true, a Doctrine_Connection_Profiler instance is created, and the 'dryRun' value is passed on to the Doctrine_Core::migrate method. A 'dryRun' value of true allows the changes to roll back once the SQL statements have been executed. When the method returns, the profiler is parsed, and the non-empty SQL statements that do not contain 'migration_version' are saved and displayed in the terminal.
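For reference, the profiler side of that approach looks roughly like this in Doctrine 1.x (a sketch only; the dryRun plumbing through Doctrine_Task_Migrate::execute and Doctrine_Core::migrate is the part that requires the modifications described above):
$profiler = new Doctrine_Connection_Profiler();
$conn = Doctrine_Manager::connection();
$conn->setListener($profiler);

// ... run the migration here, inside a transaction that is rolled back for a dry run ...

foreach ($profiler as $event) {
    $sql = trim((string) $event->getQuery());
    // keep only real schema changes, skip the version bookkeeping
    if ($sql !== '' && strpos($sql, 'migration_version') === false) {
        echo $sql . ";\n";
    }
}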