Is any facility like prepared statements supported in the IgniteCache API to avoid parsing the query each time? I saw that a Jira issue was raised for this, and it says it was resolved in version 1.5.0-final,
https://issues.apache.org/jira/browse/IGNITE-1856 , but I could not find any documentation for this on the Apache Ignite site. I know that we can use a prepared statement by connecting via a JDBC connection, but that does not suit my use case.
My code looks like the snippet below; this query will be called again and again with different parameters:
IgniteCache<Integer, Subscriber> subscriberCache = rocCachemanager.getCache("subscriberCache");
SqlQuery<Integer, Subscriber> sql = new SqlQuery<>(Subscriber.class,
        "from Subscriber where Subscriber.MSISDNNo = ? and Subscriber.status = 'Active'");
sql.setArgs("SomeNumber");
QueryCursor<Entry<Integer, Subscriber>> cursor = subscriberCache.query(sql);
Statements are cached automatically; no action is required. If your query text does not change and only the parameters do, Ignite will not parse the query again.
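For example, a minimal sketch of this pattern, reusing the cache from the snippet above (the MSISDN values and the loop are made up purely for illustration, and the usual Ignite and java.util imports are assumed):

// Build the query once; only the arguments change between executions,
// so Ignite can reuse the cached parsed statement for the same query text.
SqlQuery<Integer, Subscriber> activeByMsisdn = new SqlQuery<>(Subscriber.class,
        "from Subscriber where Subscriber.MSISDNNo = ? and Subscriber.status = 'Active'");

for (String msisdn : Arrays.asList("111111", "222222", "333333")) { // sample numbers
    activeByMsisdn.setArgs(msisdn);
    try (QueryCursor<Entry<Integer, Subscriber>> cursor = subscriberCache.query(activeByMsisdn)) {
        for (Entry<Integer, Subscriber> entry : cursor) {
            // process each matching subscriber here
        }
    }
}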
We are using the new DataStream feature introduced in NetScaler 9 (we're on v10) to do content switching (described here: http://support.citrix.com/proddocs/topic/netscaler/ns-dbproxy-wrapper-con.html). We have a read-only virtual server that balances across several read-only MySQL slaves. We use content switching to send all SELECTs over to the read-only server.
The policy is configured as such:
mysql.req.query.command.contains("select")
Our users send multi-part queries to our database server. Most often they are simple, like:
use database;
select col1 from table1;
Sometimes they will put comments at the head of the query, for example:
-- this is my query
select col1 from table1;
What we've found is that if the query simply starts with a select, everything works swimmingly. However, in cases where a use statement or comments precede the query, the content switcher fails to detect that it is a select query and bypasses our read-only virtual server.
I am about to tell all of our developers that they must fully qualify every table in every query and avoid use statements (yes, this is a good thing anyway), and also that they cannot use comments in their SQL (that's just silly).
Does anyone know how I can configure my NetScaler DataStream Content Switching to ignore comments and use statements?
The decision on where to send the query is made based on the first line received after successful authentication... so ignoring the comment won't work.
You could set up a responder policy which sends back an error message saying "Please don't use SQL comments in commands sent to the load-balanced VIP". A bit draconian, but your devs would get the message fairly quickly. There is no way to ignore the comment and still base a decision on the select statement. However, I was under the impression that the select statement runs up to the first semicolon... so in your example above, it should (in theory) still find the select statement. I'd need to test that to be certain of the behaviour, however.
Also - the USE statement is critical. This is the DB on which all subsequent commands are issued.
It would be best practice to NOT use the USE statement, but instead, change the select statement to:
select col1 from database.table1;
Once the USE statement is seen, it prevents any subsequent commands from being pipelined down the same connection... So if there are a lot of USE statements, you will not get to enjoy the connection multiplexing functionality that comes with DataStream.
We learned that Block Level comments are acceptable, but single line comments are not.
This is properly ignored:
/* my comment */
These comment styles are treated as part of the query:
-- my comment
# my comment
Kind of ridiculous, when having SET autocommit=0 is perfectly reasonable. What about that situation?
I am wondering if anyone can explain why I get different results for the same query string when using the ExecuteSQL function in FM versus querying the database through a database browser (I'm using DBVisualizer).
Specifically, if I run
SELECT COUNT(DISTINCT IMV_ItemID) FROM IMV
in DBVis, I get 2802. In FileMaker, if I evaluate the expression
ExecuteSQL ( "SELECT COUNT(DISTINCT IMV_ItemID) FROM IMV"; ""; "")
then I get 2898. This makes me distrust the ExecuteSQL function. Inside of FM, the IMV table is an ODBC shadow, connected to the central MSSQL database. In DBVis, the application connects via JDBC. However, I don't think that should make any difference.
Any ideas why I get a different count for each method?
Actually, it turns out that when FM executes the SQL, it factors in whitespace, whereas DBVisualizer (not sure about other database browser apps, but I would assume it's the same) does not. Also, since the TRIM() function isn't supported by MSSQL (from what I've seen, at least), it is necessary to make the query inside the ExecuteSQL statement something like:
SELECT COUNT(DISTINCT(LTRIM(RTRIM(IMV_ItemID)))) FROM IMV
Weird, but it works!
FM keeps a cache of the shadow table's records (for internal field-ID mapping). I'm not sure whether the ExecuteSQL() function causes the cache to be re-created; in other words, maybe the ESS shadow table is out of sync. Try clearing the cache by closing and restarting the FM client, or perform a native find first.
You can also try a re-connect to the database server via the Open File script step.
HTH
I'm looking for a way to programmatically execute arbitrary SQL commands against my DB.
(Hibernate, JPA, HSQL)
Query.createNativeQuery() doesn't work for things like CREATE TABLE.
After doing LOTS of searching, I thought I could use Hibernate's Session.doWork().
Using the deprecated Configuration.buildSessionFactory() seems to show that doWork() won't work.
I get "use lacks privilege or object not found" for all the CREATE TABLE statements.
So, what other technique is there for executing arbitratry SQL statements?
There were some notes on using the underlying JDBC Statement, but I haven't figure out how to get a JDBC Connection object from Hibernate to try that.
Note that the hibernate.hbm2ddl.auto=create setting will NOT work for me, as I have ARRAY[] columns which it chokes on.
I don't think there is any problem executing a create table statement with a Hibernate native query. Just make sure to use Query.executeUpdate(), and not Query.list() or Query.uniqueResult().
If it doesn't work, please tell us what happens when you execute it, and join the full stack trace of the exception and the SQL query you're executing.
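As a sketch of that suggestion (assuming a standard JPA EntityManager; the table name and columns below are made up for illustration):

// Hypothetical example: run DDL through a JPA native query and executeUpdate().
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
em.createNativeQuery("CREATE TABLE my_table (id INTEGER PRIMARY KEY, name VARCHAR(100))")
  .executeUpdate();
em.getTransaction().commit();
em.close();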
"use lacks privilege or object not found" in HSQL may mean anything, for example existence of a table with the same name. Error messages in HSQL are completely misleading. Try listing your tables using DatabaseMetadata - you have probably already created the table.
This question shows how to get Play! to show SQL statements. I followed the accepted solution (jpa.debugSQL=true), but I still don't see the SQL statements that are used to create the tables themselves in the log.
How can I get those statements? (I'm currently using the in-memory database that comes with Play!, all default settings)
Note - if one of the SQL Schema statements goes wrong, it is displayed as an error in the log.
Check the value of this property in application.conf:
application.log=INFO
It may be hiding the output.
If you are using a log4j.properties file, you may want, as Zenklys says, to check the appenders set up in there.
You should use a log4j.properties file. If you define a logger at debug level on the Hibernate package, you should be able to get the SQL statements.
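A minimal sketch of such log4j.properties entries (these logger names are the ones Hibernate 3.x uses, which is what Play 1.x ships with; verify against your version):

# Log the SQL statements Hibernate issues
log4j.logger.org.hibernate.SQL=DEBUG
# Log schema export / DDL (table creation) statements
log4j.logger.org.hibernate.tool.hbm2ddl=DEBUG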
I would like to send updates to a remote endpoint over http.
I have found that joseki serves as such an endpoint.
However, how do I send update queries to this endpoint if I only know the URI of the endpoint?
// To do a select-query you can use this:
QueryExecution qe = QueryExecutionFactory.sparqlService(serviceURI, query);
// (Sidenote:) does the next line have the desired effect of setting the binding?
// After all, sparqlService has no variant that takes an initialBinding parameter
qe.setInitialBinding(initialBinding);
result = qe.execSelect();
// But updates do not support this kind of sparqlService method
// Illegal:
// UpdateAction.sparqlServiceExecute(serviceURI, query);
// I can use the following:
UpdateAction.parseExecute(updateQuery, dataset, initialBinding);
// But dataset is a Dataset object, not the uri.
// I don't believe this is the correct way to overcome this:
Dataset dataset = DatasetFactory.create(serviceURI);
Otherwise I would like to hear how to do remote update queries to endpoints for which only the URI is known.
Update:
I resorted to a local Jena in the end. This kind of RDF endpoint accepts insert and delete statements.
I did not succeed in finding a remote RDF endpoint that would accept modifying queries.
Otherwise I would like to hear how to do remote update queries to endpoints for which only the URI is known.
This is handled a little differently depending on the endpoint server. There is a draft SPARQL Update protocol, but as it is a draft and fairly new, support varies.
Generally you can write SPARQL Update queries a little like you would write SQL insert or update statements.
The update commands are Modify, Insert, Delete, Load and Clear, but not every implementation supports all of these.
As endpoints are often public, there is usually some authentication needed before the action is allowed; this is not defined in the spec, so it is implementation specific.
It is advised that a different URL is used for update statements so that HTTP authentication can be used. 4store, for example, uses /sparql for queries and /update for update queries.
The W3C draft has some examples of how to construct SPARQL Update queries.
Joseki doesn't support remote updates. You probably should have a look at its successor, Fuseki, which supports SPARQL Update.
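With a SPARQL 1.1 Update capable endpoint like Fuseki, ARQ can send the update over HTTP given just the service URI. A minimal sketch (the endpoint URL and the inserted triple are placeholders, not values from the question):

// Sketch: send a SPARQL Update request to a remote endpoint identified only by its URI.
String serviceURI = "http://localhost:3030/dataset/update"; // placeholder endpoint
UpdateRequest request = UpdateFactory.create(
        "PREFIX ex: <http://example.org/> " +
        "INSERT DATA { ex:subject ex:predicate \"object\" }");
UpdateProcessor processor = UpdateExecutionFactory.createRemote(request, serviceURI);
processor.execute();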