GraphDB running queries not showing up in the Monitor tab in Workbench and not executing

I am using GraphDB 10.0 and trying to run an INSERT statement in the Workbench. The query starts running. When I switch to the Monitor tab, it shows no records, so I never see the query there. When I return to the SPARQL tab containing my INSERT statement, the progress circle is gone and I see the following info message instead:
'No results from previous run. Click Run or press Ctrl/Cmd-Enter to
execute the current query or update.'
The query never actually gets executed.
In the logs, the query turns up in the query-log, but not in the slow-query-log, where I do see the other, successful queries.
Can someone suggest why I am seeing this behaviour?
Here is the query:
PREFIX dsr: <https://sigir.com/model/>
INSERT {
  GRAPH <https://sigir.com/datafactory/measures> {
    ?ntwgiri a dsr:NetWeight ;
             dsr:value ?ntwg_float .
  }
}
WHERE {
  SERVICE <repository:measures> {
    SELECT DISTINCT ?ntwgiri ?ntwg_float
    WHERE {
      ?ntwgiri a dsr:NetWeight ;
               dsr:value ?ntwg_float .
    }
  }
}

Related

Snowflake Query Killed: "SQL execution canceled"

I've got a Talend job with a couple of dataflows running in parallel against a Snowflake database. An update statement against Table A is causing an update on Table B to fail with the following error:
Transaction 'uuid-of-transaction', id 'a-very-long-integer-id', is being committed, SQL execution canceled.
Call END_OPERATION(999,'String1','String2','String3','String4','Success','0')
UPDATE TableB SET BATCH_KEY = 1234, LOAD_DT = current_timestamp::timestamp_ntz, KEY_HASH = MD5(TO_VARCHAR(ARRAY_CONSTRUCT(col1))), ROW_HASH = MD5(TO_VARCHAR(ARRAY_CONSTRUCT(col2, col3))) WHERE BATCH_KEY = -1 OR BATCH_KEY IS NULL;
The code for END_OPERATION is here:
// First fragment: call the END_OPERATION logging procedure.
var cmd = "CALL END_OPERATION(:1,:2,:3,:4,:5,:6,null);";
try {
    snowflake.execute({
        sqlText: cmd,
        binds: [BATCH_KEY, ENTITY, LAYER, SRC, OPERATION, OPERATION_STATUS]
            .map(function (param) { return param === undefined ? null : param; })
    });
    return "Succeeded.";
} catch (err) {
    return "Failed: " + err;
}

// Second fragment: update the operation-status row in TableA.
var cmd = "UPDATE TableA SET OPERATION_STATUS=:6,END_DT=current_timestamp,ROW_COUNT=IFNULL(:7,ROW_COUNT) WHERE BATCH_KEY=:1 AND ENTITY_NAME=:2 AND LAYER_NAME=:3 AND SRC=:4 AND OPERATION_NAME=:5";
try {
    snowflake.execute({
        sqlText: cmd,
        binds: [BATCH_KEY, ENTITY, LAYER, SRC, OPERATION, OPERATION_STATUS, ROW_COUNT]
            .map(function (param) { return param === undefined ? null : param; })
    });
    return "Succeeded.";
} catch (err) {
    return "Failed: " + err;
}
I'm failing to understand why the UPDATE statement against TableB is getting killed. It's getting killed nearly immediately.
To diagnose this, we need to review the flow of all SQL statements coming from the Talend job within the session in which the failing SQL command runs, as well as the statements coming from the other, parallel dataflow.
From the Query History we can get the SessionID of the session. In the History section of the Snowflake UI we can then search on that SessionID, which lists all the commands run through that particular session.
We can review all the commands in chronological order by sorting on the start_date column and try to observe the sequence of SQL statements.
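A hedged sketch of pulling that per-session history over JDBC (it assumes the Snowflake JDBC driver is on the classpath and the INFORMATION_SCHEMA.QUERY_HISTORY_BY_SESSION table function; the account URL, credentials, database name, and session id are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class SessionHistory {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "MY_USER");         // placeholder
        props.setProperty("password", "MY_PASSWORD"); // placeholder
        Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://myaccount.snowflakecomputing.com/", props);

        long sessionId = 1234567890L; // taken from the Query History page
        Statement stmt = conn.createStatement();
        // List every statement of the session in chronological order.
        ResultSet rs = stmt.executeQuery(
                "SELECT start_time, execution_status, query_text " +
                "FROM TABLE(MYDB.INFORMATION_SCHEMA.QUERY_HISTORY_BY_SESSION(" +
                "SESSION_ID => " + sessionId + ")) ORDER BY start_time");
        while (rs.next()) {
            System.out.println(rs.getTimestamp("start_time") + "  "
                    + rs.getString("execution_status") + "  "
                    + rs.getString("query_text"));
        }
        conn.close();
    }
}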
Your point is valid that an update on TableA should not affect an update on TableB. However, after reviewing all the statements of both sessions (the Talend job runs a couple of dataflows in parallel), we may come across a SQL statement in one session that took a lock on TableB before the UPDATE command against it was submitted from the other session.
Another thing to review is how the workflow manages transactions. Within the same list of SQL statements for the session, check for any statement that sets the AUTOCOMMIT parameter at the session level (e.g. ALTER SESSION SET AUTOCOMMIT = FALSE). If AUTOCOMMIT is set to FALSE at the start of the session, the session will not release any table locks until an explicit COMMIT is submitted.
Since the situation sounds a bit unusual and complex, we may have to dig deeper into the execution logs of both queries, and for that we may have to contact Snowflake support.

Apache Ignite SQL query returns only cache contents, not complete results from database

My Ignite nodes (2 server nodes - let's call them A and B) are configured as follows:
ccfg.setCacheMode(CacheMode.PARTITIONED);
// Note: the atomicity mode constant lives in CacheAtomicityMode, not CacheMode.
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindBatchSize(10000);
Node A is started first, from command line as follows:
apache-ignite-fabric-2.2.0-bin>bin/ignite.bat config/default-config.xml
Node B is started from Java code by running:
public static void main(String[] args) throws Exception {
    Ignite ignite = Ignition.start(ServerConfigurationFactory.createConfiguration());
    ignite.cache("MyCache").loadCache(null);
    ...
}
(The jar containing ServerConfigurationFactory is placed in the apache-ignite-fabric-2.2.0-bin\libs directory so that nodes A and B join the same cluster; otherwise there is an error.)
I have a query that is supposed to return 9061 results from the database. After the cache loading process in Node B, I went to the Web Console and ran a simple count SQL statement against the caches. There is a button "Execute on selected node" that allows you to choose a specific cache to query. I queried Node A and got a count of 2341, and on Node B I get a count of 2064. If I just use the "Execute" button I get 4405 which is just the total of node A and B. Obviously they are missing 4656 records (9061 total records in db - 4405 in nodes A and B). I also ran the same count query in Java code using SqlFieldsQuery and I also get 4405.
Since readThrough is set to true, I expected Ignite to also return results that are not in memory, but it just returns whatever is in the cache. Am I doing something wrong here? Thank you.
Read-through works only for the key-value APIs, so the SQL engine assumes that all required data has been preloaded from the database before a query runs.
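To illustrate the contrast (a hypothetical sketch; the cache name comes from the question, while the MyValue type/table is a placeholder):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class ReadThroughContrast {
    static void demo(Ignite ignite) {
        IgniteCache<Long, Object> cache = ignite.cache("MyCache");

        // Key-value API: a cache miss goes through the configured CacheStore,
        // so readThrough loads the row from the database on demand.
        Object v = cache.get(42L);

        // SQL API: only rows already loaded into memory are visible, which is
        // why the counts reflect just the cached subset.
        List<List<?>> rows =
                cache.query(new SqlFieldsQuery("SELECT COUNT(*) FROM MyValue")).getAll();
    }
}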
If your data set doesn't fit in memory and you can't preload all of it, you can use Ignite's native persistence store: https://apacheignite.readme.io/docs/distributed-persistent-store
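A minimal sketch of turning native persistence on (this uses the DataStorageConfiguration API introduced in Ignite 2.3; in the 2.2 release from the question the equivalent class was PersistentStoreConfiguration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable disk persistence for the default data region so SQL can see
        // the full data set even when it does not fit in memory.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // With persistence enabled, the cluster starts inactive and must be
        // activated before caches can be used.
        ignite.cluster().active(true);
    }
}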

Issue raising an error in an after trigger

I'm having an issue raising an error in an after trigger, and I don't see any reason why I can raise an error one way, but not the other. Let me give you an example.
The following trigger will fail and raise the following error:
Error: Apex trigger tstTrigger2 caused an unexpected exception, contact
your administrator: tstTrigger2: execution of AfterUpdate caused by:
System.FinalException: SObject row does not allow errors:
Trigger.tstTrigger2: line 19, column 1
trigger tstTrigger2 on Account (after update)
{
    Set<Id> AccountIds = Trigger.newMap.keySet();
    List<Account> accountsToProcess = [SELECT Id, Name FROM Account WHERE Id IN :AccountIds];
    for (Account act : accountsToProcess)
    {
        // Note: the quote inside the string must be escaped to compile.
        act.addError('doesn\'t work');
    }
}
However, raising an error this way works. Note that there is only ever one record in the keyset, at least in this test scenario.
trigger tstTrigger on Account (after update)
{
    Set<Id> AccountIds = Trigger.newMap.keySet();
    List<Account> accountsToProcess = [SELECT Id, Name FROM Account WHERE Id IN :AccountIds];
    Trigger.new[0].addError('However, this works?');
}
Any explanation of why the first one fails and the second one does not would be greatly appreciated. Also, if you could point me to the best way to implement this so that it's bulkified, that would be great. Thanks!
addError() doesn't roll back your insertion as such; it just prevents further execution of the operation, so the data is never committed when you throw the error, and the message is shown in the UI.
By doing this:
Trigger.new[0].addError('However, this works?');
you're simply flagging an error on the first record in Trigger.new, thereby stopping the whole operation from processing.
Something like this will fix your first code snippet:
trigger tstTrigger2 on Account (after update)
{
    Map<Id, Account> accountMap = Trigger.newMap;
    for (Id act : accountMap.keySet())
    {
        accountMap.get(act).addError('doesnt work');
    }
}
You were querying the account records back out, and by that time they were already committed to the database; errors cannot be flagged on such queried records, only on the sObjects in Trigger.new/Trigger.newMap.

Translate JPQL query to plain SQL

I have a problem in my EJB Java application, and while trying to solve it I stumbled on this issue.
In my EJB class, I have a finder method:
/* #ejb.finder signature="java.util.Collection findByAllMultiPlan(<list of arguments>)"
 *
 * query="SELECT OBJECT(e)
 *        FROM ElementTbl AS e
 *        WHERE <LOTS OF STUFF>"
 */
That method fails sometimes, not all the time (mostly when run in a loop over large data inputs). It can fail on the 8th, 10th, or 35th iteration, etc.
It fails with a FinderException:
javax.ejb.EJBException: nested exception is: javax.ejb.FinderException:
Exception in findByCriteria while preparing or executing query:
statement: 'weblogic.jdbc.wrapper.PreparedStatement_com_inet_tds_r#bd44a1'
java.sql.SQLException: The transaction is no longer active - status:
'Rolling Back. [Reason=weblogic.transaction.internal.TimedOutException:
Transaction timed out after 33 seconds
BEA1-655E9CABEFC3D5D9F078]'. No further JDBC access is allowed within this
transaction.
In the log files, I can see the query as WebLogic generated it, with positional placeholders:
SELECT WL0.mreUidID, WL0.mreDtmCreated, WL0.mreCdeCustType, WL0.mreIn...
       ... many, many fields ...
FROM tblElement WL0
WHERE ( ( ( ( ( ( ( ( ( WL0.elVchID = ? ) AND (
      ((((((((WL0.elVchAnotherId = ? ) ... and many more fields of the
      form "WL0.field = ?"
There is no output of the values that are bound in place of the ? placeholders.
So the question is: is it possible to translate that JPQL query into plain SQL, with the values filled in, so it can be run in a query analyzer?
The query is far too large to fill in by hand every time I need to test a particular scenario; it would take me an hour or more to fill in everything.
There must be a Java method or utility to get access to this from within my EJB class.
Please help!
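One possible workaround is a small helper that splices the argument values passed to the finder into the logged SQL string, so the result can be pasted into a query analyzer. A naive, purely illustrative sketch (the SqlFiller class is hypothetical, and it ignores ? characters that occur inside string literals):

public final class SqlFiller {
    // Substitute literal values for the '?' placeholders, left to right.
    public static String fill(String sql, Object... params) {
        StringBuilder out = new StringBuilder();
        int paramIdx = 0;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '?' && paramIdx < params.length) {
                out.append(toLiteral(params[paramIdx++]));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    private static String toLiteral(Object p) {
        if (p == null) return "NULL";
        if (p instanceof Number) return p.toString();
        // Quote everything else and escape embedded single quotes.
        return "'" + p.toString().replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        // Column names are taken from the logged query above; values are made up.
        System.out.println(fill(
                "SELECT * FROM tblElement WL0 WHERE WL0.elVchID = ? AND WL0.elVchAnotherId = ?",
                "ABC-123", "XYZ-9"));
    }
}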

Redshift/Java: SQL execute hangs and never returns

The application that I'm working on runs a sequence of queries on AWS Redshift. Some of the queries take longer to execute due to the data volume.
The queries seem to finish on Redshift when I check the execution details on the server. However, the Java application seems to hang indefinitely without throwing any exception or even terminating.
Here's the code that executes the query.
private void execSQLStrings(String[] queries, String dataset, String dbType) throws Exception {
    Connection conn = null;
    if (dbType.equals("redshift")) {
        conn = getRedshiftConnection();
    } else if (dbType.equals("rds")) {
        conn = getMySQLConnection();
    }
    Statement stmt = conn.createStatement();
    String qry = null;
    debug("Query Length: " + queries.length);
    for (int ii = 0; ii < queries.length; ++ii) {
        qry = queries[ii];
        if (dataset != null) {
            qry = qry.replaceAll("DATASET", dataset);
        }
        debug(qry);
        stmt.execute(qry);
    }
    stmt.close();
    conn.close();
}
I can't post the query I'm running at the moment, but it has multiple table joins and GROUP BY conditions, and it runs against an 800M-row table. The query takes about 7-8 minutes to complete on the server.
You need to update the DSN timeout and/or keepalive settings to make sure your connection stays alive. A long-running query looks idle from the network's point of view, and an intermediate firewall can silently drop the connection, leaving the client waiting forever for a response that never arrives.
Refer: http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
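On the JDBC side, keepalive can also be requested when the connection is created. A hedged sketch (this assumes the Amazon Redshift JDBC driver; verify the TCPKeepAlive property name against your driver version, and note that the host, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RedshiftKeepAlive {
    static Connection getRedshiftConnection() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "MY_USER");         // placeholder
        props.setProperty("password", "MY_PASSWORD"); // placeholder
        // Ask the driver to send TCP keepalive probes so idle firewalls
        // do not silently drop the connection mid-query.
        props.setProperty("TCPKeepAlive", "true");
        return DriverManager.getConnection(
                "jdbc:redshift://mycluster.example.us-east-1.redshift.amazonaws.com:5439/mydb",
                props);
    }
}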