How to confirm that JdbcTemplate executed a query successfully - HSQLDB

I am using HSQLDB for the database and JdbcTemplate for my SQL queries. I just want to know how I can confirm that JdbcTemplate executed a query successfully, as I can't see the result in the database, because my database is HSQLDB.
Thanks in advance.

JdbcTemplate.update(..) returns the number of affected rows as an integer. Check whether it is greater than zero:
if (jdbcTemplate.update("insert into mytable..") > 0) {
    // all OK
} else {
    // nothing was inserted
}

Instead of using HSQLDB as a pure in-memory DB, you can have it write its contents to disk by initializing HSQLDB with a file URL like the following:
jdbc:hsqldb:file:/opt/db/testdb
I presume you are using a "memory" URL like this (as you have noticed, all contents are gone after the JVM shuts down):
jdbc:hsqldb:mem:mycooldb
When you shut down the database after the test, you can either view the resulting script file using a text editor, or start the HSQLDB manager contained in the main HSQLDB jar:
java -jar hsqldb-version.jar
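If it helps, here is a minimal sketch of opening a file-backed connection from plain JDBC; the path is just an example, and SA with an empty password is simply the usual HSQLDB default:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HsqldbFileDemo {
    public static void main(String[] args) throws Exception {
        // File URL: the data is persisted under /opt/db/testdb.script etc.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:file:/opt/db/testdb", "SA", "");
             Statement st = conn.createStatement()) {
            st.execute("DROP TABLE demo IF EXISTS"); // allow re-runs
            st.execute("CREATE TABLE demo (id INTEGER)");
            st.execute("INSERT INTO demo VALUES (1)");
        }
    }
}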

Related

Is it possible to load a pre-populated database from a local resource using SQLDelight?

I have a relatively large DB that may take 1 to 2 minutes to initialise. Is it possible to load a pre-populated DB when using SQLDelight (Kotlin Multiplatform) instead of initialising the DB on app launch?
Yes, but it can be tricky, and not just for Multiplatform. You need to copy the DB into the database folder before trying to initialize SQLDelight. That probably means I/O on the main thread when the app starts.
There is no standard way to do this right now. You'll need to put the DB file in assets on Android and in a bundle on iOS and copy them to their respective folders before initializing SQLDelight (a sketch of the Android side follows below). Obviously you'll want to check whether the DB already exists first, or have some other way of knowing this is your first app run.
If you're planning on shipping updates that will contain newer databases, you'll need to manage versions beyond a simple check for the existence of the DB.
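A minimal sketch of that Android copy step in plain Java; the asset name prepopulated.db, the target name app.db, and the buffer size are all made up for illustration:

import android.content.Context;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class DbBootstrap {

    // Copies a pre-populated DB out of the APK's assets on first run only.
    public static void copyDatabaseIfMissing(Context context) throws IOException {
        File target = context.getDatabasePath("app.db");
        if (target.exists()) {
            return; // not the first run, nothing to do
        }
        target.getParentFile().mkdirs();
        try (InputStream in = context.getAssets().open("prepopulated.db");
             OutputStream out = new FileOutputStream(target)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}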
Although not directly answering your question, 1 to 2 minutes is really, really long for SQLite. What are you doing? I would first make sure you're using transactions properly. 1-2 minutes of inserting data would (probably) result in a huge DB file.
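For reference, "using transactions properly" for a bulk load usually means wrapping all the INSERTs in a single transaction. A hedged sketch in plain Java with the xerial sqlite-jdbc driver (not part of the original answer; the file and table names are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public final class BulkInsertDemo {
    public static void main(String[] args) throws Exception {
        // Requires the org.xerial:sqlite-jdbc driver on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:demo.db")) {
            conn.createStatement().execute(
                "create table if not exists item (id integer primary key, name text)");
            conn.setAutoCommit(false); // one transaction for the whole bulk load
            try (PreparedStatement ps =
                     conn.prepareStatement("insert into item (name) values (?)")) {
                for (int i = 0; i < 100_000; i++) {
                    ps.setString(1, "name-" + i);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit(); // without this, each insert is its own transaction
        }
    }
}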
Sorry, but I can't add any comments yet, which would be more appropriate...
Although not directly answering your question, 1 to 2 minutes is really, really long for SQLite. What are you doing? I would first make sure you're using transactions properly. 1-2 minutes of inserting data would (probably) result in a huge DB file.
In my case, the reason I had to use a pre-populated database was the large size of the .sq files (more than 30 MB of INSERT statements per table); SQLDelight silently aborted its code generation without displaying any error messages.
You'll need to put the DB file in assets on Android and in a bundle on iOS and copy them to their respective folders before initializing SQLDelight.

Having to load a DB from resources on both Android and iOS feels like a lot of work + it means the shared project won't be the only place where the data is initialised.
The Kotlin Multiplatform library moko-resources solves the issue of a single source for the database in the shared module, and it works the same way for Android and iOS in KMM.
Unfortunately, this use case is barely covered in the library's samples. I added a second method (getDriver) to the expect class DatabaseDriverFactory to open the prepared database, and implemented it on each platform. For example, for androidMain:
import android.content.Context
import com.squareup.sqldelight.android.AndroidSqliteDriver
import com.squareup.sqldelight.db.SqlDriver
import java.io.File
import java.io.FileOutputStream
import java.io.InputStream

actual class DatabaseDriverFactory(private val context: Context) {

    actual fun createDriver(schema: SqlDriver.Schema, fileName: String): SqlDriver {
        return AndroidSqliteDriver(schema, context, fileName)
    }

    actual fun getDriver(schema: SqlDriver.Schema, fileName: String): SqlDriver {
        val database: File = context.getDatabasePath(fileName)
        if (!database.exists()) {
            // MR is the class generated by moko-resources (see below)
            val inputStream = context.resources.openRawResource(MR.files.dbfile.rawResId)
            val outputStream = FileOutputStream(database.absolutePath)
            inputStream.use { input: InputStream ->
                outputStream.use { output: FileOutputStream ->
                    input.copyTo(output)
                }
            }
        }
        return AndroidSqliteDriver(schema, context, fileName)
    }
}
MR.files.dbfile is the FileResource from the class generated by the library; it is associated with the name of the file located in the resources/MR/files directory of the commonMain module. Its rawResId property represents the platform-side resource ID.
The only thing you need is to specify the path to the DB file when creating the driver.
Let's assume your DB lies in /mnt/my_best_app_dbs/super.db. Now pass that path as the name property of the driver. Something like this:
val sqlDriver: SqlDriver = AndroidSqliteDriver(Schema, context, "/mnt/my_best_app_dbs/super.db")
Keep in mind that you might need permissions that allow you to read the given storage location.

Changing the GemFire query ResultSender batch size

I am experiencing a performance issue related to the default batch size of the query ResultSender when using a client/server config. I believe the default value is 100.
If I run a simple query to get keys (with some ORDER BY columns due to the PARTITION Region type), this default batch size causes too many chunks to be sent back, even for 1000 records. In my tests, the total query time is less than 100 ms, yet the app takes more than 10 seconds to process those chunks.
Reading between the lines in your problem statement, it seems you are:
1. Executing an OQL query on a PARTITION Region (PR).
2. Running the query inside a Function, as recommended when executing queries on a PR.
3. Sending batch results (as opposed to streaming the results).
I also assume, since you posted exclusively in the #spring-data-gemfire channel, that you are using Spring Data GemFire (SDG) to:
- Execute the query (e.g. by using the SDG GemfireTemplate; of course, you could also be using the GemFire Query API inside your Function directly)?
- Implement the server-side Function using SDG's Function annotation support?
- And possibly (indirectly) use SDG's BatchingResultSender, as described in the documentation?
NOTE: The default batch size in SDG is 0, NOT 100. Zero means stream the results individually.
Regarding #2 & #3, your implementation might look something like the following:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction", batchSize = "1000")
    public List<SomeApplicationType> myFunction(FunctionContext functionContext) {

        RegionFunctionContext regionFunctionContext =
            (RegionFunctionContext) functionContext;

        Region<?, ?> region = regionFunctionContext.getDataSet();

        if (PartitionRegionHelper.isPartitionRegion(region)) {
            region = PartitionRegionHelper.getLocalDataForContext(regionFunctionContext);
        }

        GemfireTemplate template = new GemfireTemplate(region);

        String OQL = "...";

        SelectResults<?> results = template.query(OQL); // or `template.find(OQL, args);`

        List<SomeApplicationType> list = ...;

        // process results, convert to SomeApplicationType, add to list

        return list;
    }
}
NOTE: Since you are most likely executing this Function "on Region", the FunctionContext will actually be a RegionFunctionContext in this case.
The batchSize attribute on SDG's @GemfireFunction annotation (used on Function "implementations") allows you to control the batch size.
Of course, instead of using SDG's GemfireTemplate to execute queries, you can use the GemFire Query API directly inside the Function, as mentioned above.
If you need even more fine-grained control over "result sending", then you can simply have the ResultSender provided by GemFire injected into the Function, even when the Function is implemented with SDG, as shown above. For example:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction")
    public void myFunction(FunctionContext functionContext, ResultSender resultSender) {

        ...

        SelectResults<?> results = ...;

        // now process the results and use the `resultSender` directly
    }
}
This allows you to "send" the results however you see fit, as required by your application.
You can batch/chunk results, stream, whatever.
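To make that concrete, here is a hedged sketch of manual chunking through the injected ResultSender; the helper and chunk size are made up, and the package name assumes Apache Geode / newer GemFire versions:

import java.util.ArrayList;
import java.util.List;
import org.apache.geode.cache.execute.ResultSender;

public final class ChunkedSending {

    // Hypothetical helper: sends results in fixed-size chunks; the final
    // call must be lastResult(..) to terminate the stream to the client.
    public static <T> void sendInChunks(ResultSender<List<T>> resultSender,
            List<T> results, int chunkSize) {

        if (results.isEmpty()) {
            resultSender.lastResult(new ArrayList<>());
            return;
        }
        for (int i = 0; i < results.size(); i += chunkSize) {
            List<T> chunk = new ArrayList<>(
                results.subList(i, Math.min(i + chunkSize, results.size())));
            if (i + chunkSize >= results.size()) {
                resultSender.lastResult(chunk);  // final chunk
            } else {
                resultSender.sendResult(chunk);  // intermediate chunk
            }
        }
    }
}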
Although, you should be mindful of the "receiving" side in this case!
The one thing that might not be apparent to the average GemFire user is that GemFire's default ResultCollector implementation collects all the results first before returning them to the application. This means the receiving side does not support streaming or batching/chunking of the results, which would allow them to be processed as soon as the server sends them (streamed, batched/chunked, or otherwise).
Once again, SDG helps you out here since you can provide a custom ResultCollector on the Function "execution" (client-side), for example:
@OnRegion("SomePartitionRegion", resultCollector = "myResultCollector")
interface MyApplicationFunctionExecution {

    void myFunction();
}
In your Spring configuration, you would then have:
@Configuration
class ApplicationGemFireConfiguration {

    @Bean
    ResultCollector myResultCollector() {
        return ...;
    }
}
Your "custom" ResultCollector could return results as a stream, a batch/chunk at a time, etc.
In fact, I have prototyped a "streaming" ResultCollector implementation that will eventually be added to SDG, here.
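For a rough sense of the shape of such a collector, here is a hedged sketch against the ResultCollector interface (package names assume Apache Geode / newer GemFire; the queue-based buffering is just one possible design, not SDG's actual prototype):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.apache.geode.cache.execute.FunctionException;
import org.apache.geode.cache.execute.ResultCollector;
import org.apache.geode.distributed.DistributedMember;

// Buffers results as they arrive so a consumer can drain them incrementally
// instead of waiting for the entire result set to be collected.
public class QueueingResultCollector implements ResultCollector<Object, BlockingQueue<Object>> {

    private final BlockingQueue<Object> results = new LinkedBlockingQueue<>();

    @Override
    public void addResult(DistributedMember member, Object result) {
        results.offer(result); // called for each chunk a member sends
    }

    @Override
    public BlockingQueue<Object> getResult() throws FunctionException {
        return results; // consumer drains the queue while results stream in
    }

    @Override
    public BlockingQueue<Object> getResult(long timeout, TimeUnit unit) throws FunctionException {
        return results;
    }

    @Override
    public void endResults() {
        // all members are done; a real implementation might enqueue a sentinel
    }

    @Override
    public void clearResults() {
        results.clear();
    }
}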
Anyway, this should give you some ideas on how to handle the performance problem you seem to be experiencing. 1000 results is not a lot of data, so I suspect your problem is mostly self-inflicted.
Hope this helps!
John,
Just to clarify, I use a client/server topology (actually WAN, but that is not important here). My client is a Spring Boot web app which has a Kendo grid as its UI. Users can filter/sort on any combination of the columns, and those choices are passed to the Spring Boot app to generate dynamic OQL and create the pagination. So far, except for being dynamic, my OQL queries are quite straightforward. I do not want to introduce server-side Functions due to the complexity of our global deployment process, but I can if you think that is something I have to do.
Again, thanks for your answers.

Calling Java code from Apache Derby

I've written a simple method in Java:
package com.fidel.extensions;

public class Extensions {

    public static String capitalize(String input) {
        return input.toUpperCase();
    }
}
I then registered it as a function in Apache Derby.
create function capitalize(inputString varchar(255))
returns varchar(255)
parameter style JAVA
no sql language JAVA
external name 'com.fidel.extensions.Extensions.capitalize'
In order to give the database access to that code, this page suggests I have two choices:
1. Install the jar into the database
2. Add the jar to the CLASSPATH
This is the text from that article:
The compiled Java for a procedure (or function) may be stored in the database using the standard SQL procedure SQLJ.INSTALL_JAR or may be stored outside the database in the class path of the application.
If I use the INSTALL_JAR approach to embed the jar into the database, my queries work fine. For example:
select capitalize('hello') from SYSIBM.SYSDUMMY1
However, I don't actually want to store the jar in the database. I would like Derby to look in my CLASSPATH variable to find it.
So I've added it to my CLASSPATH using the following:
export CLASSPATH=${CLASSPATH}:/home/fidel/dev/DbExtensions/extensions.jar
But when I run the same query, I get this error message:
The class 'com.fidel.extensions.Extensions' does not exist or is inaccessible.
I'm using NetBeans' SQL editor, which I assumed would pick up the CLASSPATH I've set.
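One sanity check to rule the editor out: run the same query from Derby's ij tool, started with the jar explicitly on the JVM classpath (the Derby jar locations below are just examples):
java -cp /opt/derby/lib/derby.jar:/opt/derby/lib/derbytools.jar:/home/fidel/dev/DbExtensions/extensions.jar org.apache.derby.tools.ij
If the function resolves under ij but not in the editor, then the editor's JVM simply isn't picking up the exported CLASSPATH.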
Has anyone managed to reference code in an external jar, via the CLASSPATH?
ps. I know I can use the built-in UCASE/UPPER functions, but the code above is just an example.
pps. I am able to get the query to work by adding the jar to the driver list in NetBeans, but I don't think that's the correct thing to do:
Services -> Drivers -> Java DB (Embedded) -> Customize -> Add

SQL request on a newly saved Grails object

I don't understand why my query returns an empty array with the code below.
I am using Grails and an H2 database.
Animal lion = new Animal()
lion.save()
println lion.id
println sql.rows("select * from animal")
The outputs are
1
[]
Why do I get an empty array?
If I go and check the in-memory database at localhost/Zoo/dbconsole, I can see the row as expected. Is there some kind of time limit I have to wait before making my SQL query?
Is this in Grails? If so, try:
lion.save(flush: true)
It's probably that Hibernate hasn't flushed the changes to the database before you run your select (especially as it looks like the above code all runs in the same transaction).

Grails transactions (not GORM based but using Groovy Sql)

My Grails application is not using GORM; instead, it uses my own SQL and DML code to read and write the database (the database is a huge, normalized legacy one, and this was the only viable option).
So I use the Groovy Sql class to do the job. The database calls are done in services that are called from my controllers.
Furthermore, my datasource is declared via DBCP in Tomcat, so it is not declared in DataSource.groovy.
My problem is that I need to write some transaction code, that means to open a transaction and commit after a series of successful DML calls or rollback the whole thing back in case of an error.
I thought that it would be enough to use groovy.sql.Sql#commit() and groovy.sql.Sql#rollback() respectively.
But in these methods' Javadocs, the Groovy Sql documentation clearly states:
If this SQL object was created from a DataSource then this method does nothing.
So, I wonder: What is the suggested way to perform transactions in my context?
Even disabling autocommit in the datasource declaration seems to be irrelevant, since those two methods "...do nothing".
The Groovy Sql class has withTransaction
http://docs.groovy-lang.org/latest/html/api/groovy/sql/Sql.html#withTransaction(groovy.lang.Closure)
public void withTransaction(Closure closure) throws java.sql.SQLException
Performs the closure within a transaction using a cached connection. If the closure takes a single argument, it will be called with the connection, otherwise it will be called with no arguments.
Give it a try.
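For context, withTransaction essentially wraps the classic JDBC pattern shown below. A hedged sketch in plain Java, with made-up table/values and the DataSource assumed to be the one Tomcat's DBCP exposes:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public final class TransactionSketch {

    // Roughly what Sql#withTransaction does under the hood: turn off
    // autocommit, commit if the work succeeds, roll back if it throws.
    public static void runInTransaction(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            boolean previousAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "insert into mytable1 (col1, col2) values (?, ?)")) {
                ps.setString(1, "val1");
                ps.setString(2, "val2");
                ps.executeUpdate();
                conn.commit();   // all statements in the unit succeed together
            } catch (SQLException e) {
                conn.rollback(); // any failure undoes the whole unit
                throw e;
            } finally {
                conn.setAutoCommit(previousAutoCommit);
            }
        }
    }
}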
Thanks James. I also found the following solution while reading http://grails.org/doc/latest/guide/services.html:
I declared my service as transactional:
static transactional = true
This way, if an Error occurs, the previously performed DML statements are rolled back.
For each DML statement I throw an Error describing the problem. For example:
try {
    sql.executeInsert("""
        insert into mytable1 (col1, col2) values (${val1}, ${val2})
    """)
} catch (e) {
    throw new Error("you can't enter empty val1 or val2")
}
try {
    sql.executeInsert("""
        insert into mytable2 (col1, col2) values (${val1}, ${val2})
    """)
} catch (e) {
    throw new Error("you can't enter empty val1 or val2. The previous insert is rolled back!")
}
Final gotcha! The service, when called from the controller, must be wrapped in a try/catch, as follows:
try {
    myService.myMethod(params)
} catch (e) {
    // http://jts-blog.com/?p=9491
    Throwable t = e instanceof UndeclaredThrowableException ? e.undeclaredThrowable : e

    // use t.toString() to send info to the user (use in view)
    // redirect / forward / render etc.
}