Can anyone please let me know the syntax for the DELETE batch operation using SAP UI5? I have done the insert batch operation and it's working fine, but the delete batch operation is not working.
My INSERT batch operation syntax:
update_entry.YEAR = year;
update_entry.COUNTRY_ID = country;
update_entry.CUSTOMER = name;
var batch_single = insert_model.createBatchOperation('/customers',"POST",update_entry);
batch_changes.push(batch_single) ;
insert_model.addBatchChangeOperations(batch_changes);
insert_model.submitBatch(function() {
    update_success = "successful";   // '=' (assignment); '==' would be a comparison with no effect
}, function() {
    update_success = "unsuccessful";
}, true);
insert_model.refresh();
I have modified the above code for the DELETE batch operation as below:
var batch_single = insert_model.createBatchOperation('/customers',"DELETE",update_entry);
But the above syntax is not working. Could anyone help me with the issue?
Thanks
Sathish
As opposed to the create operation, you'll need to pass the entity key to the delete operation, not an "entry" object:
var batch_single = insert_model.createBatchOperation('/customers(1234)',"DELETE");
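For illustration, a minimal sketch of the full delete flow, assuming a hypothetical customerId variable holding the key (note that no entry payload is passed for DELETE):
var customerId = 1234; // hypothetical key value
var batch_single = insert_model.createBatchOperation("/customers(" + customerId + ")", "DELETE");
batch_changes.push(batch_single);
insert_model.addBatchChangeOperations(batch_changes);
insert_model.submitBatch(fnSuccess, fnError, true); // fnSuccess/fnError: your existing callbacks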
I think changing from "POST" to "DELETE" will work, because you need a DELETE request to remove the data from your backend.
First, check out this thread: SAPUI5 - Batch Operations - how to do it right?
I think the main difference is the entity path in your createBatchOperation, "/customers": I think you have to change it to your service path (e.g. "/sap/opu/odata/sap/MY_SERVICE/?$batch"). I found that the batch is then triggered in this sequence:
1) /IWBEP/IF_MGW_CORE_SRV_RUNTIME~CHANGESET_BEGIN: SAP proposal EXIT.
2) /iwbep/if_mgw_appl_srv_runtime~delete_entity (n times).
3) /iwbep/if_mgw_core_srv_runtime~changeset_end: SAP proposal COMMIT WORK.
Second, don't mix '' and "" quotes in the same statement (in createBatchOperation); always use the same style if possible:
insert_model.createBatchOperation("/customers","POST",update_entry);
Regards,
zY
I am attempting to use clairvoyant's db-cleanup dag to clear metadata in our xcom table, but when I run it, I receive the following warning, printed thousands of times before I manually stop the job in order to not take down our mysql instance:
SAWarning: Loading context for <BaseXCom at 0x7f26f789b370> has changed within a load/refresh handler, suggesting a row refresh operation took place. If this event handler is expected to be emitting row refresh operations within an existing load or refresh operation, set restore_load_context=True when establishing the listener to ensure the context remains unchanged when the event handler completes.
The other cleanup tasks work fine, but it is the xcom table in particular I am having trouble with. We have hundreds or thousands of active dags, so the xcom table is being written to nearly every second or two. I think that is what is causing this error: the data is continually changing while it is being queried.
I have been unable to find the cause of this or any examples of how it can be resolved. I tried adding a "restore_load_context": True entry as per the SQLAlchemy docs, but it did not work (see the sketch after the snippets below).
Here are the snippets I attempted to add to the database object and the cleanup task:
{
    "airflow_db_model": XCom,
    "age_check_column": XCom.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None,
    "restore_load_context": True,
},
....
def cleanup_function(**context):
    logging.info("Retrieving max_execution_date from XCom")
    max_date = context["ti"].xcom_pull(
        task_ids=print_configuration.task_id, key="max_date"
    )
    max_date = dateutil.parser.parse(max_date)  # stored as iso8601 str in xcom
    airflow_db_model = context["params"].get("airflow_db_model")
    state = context["params"].get("state")
    age_check_column = context["params"].get("age_check_column")
    keep_last = context["params"].get("keep_last")
    keep_last_filters = context["params"].get("keep_last_filters")
    keep_last_group_by = context["params"].get("keep_last_group_by")
    restore_load_context = context["params"].get("restore_load_context")
To avoid pasting too much code here: I am otherwise using the same code as the db-cleanup dag. Has anyone encountered this and found a way to resolve it?
I am very inexperienced with sqlalchemy and am entirely unsure where else to place this code or how to go about it.
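For reference, the warning text refers to setting the flag when the event listener itself is registered, not passing it in a parameters dictionary. A minimal sketch of that pattern, assuming a hypothetical listener on the XCom model:
from sqlalchemy import event
from airflow.models import XCom

# the flag goes on the listener registration, per the SQLAlchemy warning text
@event.listens_for(XCom, "load", restore_load_context=True)
def receive_load(target, context):
    # row-refresh work done here no longer disturbs the outer
    # load/refresh context that triggered the warning
    pass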
I need to check whether my res from dbSendQuery() has finished.
My code is like this:
db <- dbConnect(drv=SQLite(),flags=SQLITE_RW,dbname="db.sqlite",synchronous = "off")
dbBegin(db)
res <- dbSendQuery(db,"Update Operation SET Name = 'teste' where Id = 1")
if("my SendQuery is over"){
dbClearResult(res)
dbCommit(db)
dbDisconnect(db)
}
I need to know when it is over to send this to commit and then disconnect.
UPDATE 1
In my system, the example above can happen with more than one simultaneous user.
So when the first connection to the db is finished, I need to complete its request and let the other connection run its query.
dbSendQuery() always awaits completion. You can double-check by calling dbGetRowsAffected(res).
For SQL statements that are run for the side effect and do not return a value, dbSendStatement() is preferred.
The synchronous = "off" argument to dbConnect() is a misnomer: it defines when and how the data is written to disk; no multi-threading is involved here.
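For illustration, a minimal sketch of that suggestion applied to the question's UPDATE (same db handle as above):
res <- dbSendStatement(db, "UPDATE Operation SET Name = 'teste' WHERE Id = 1")
dbGetRowsAffected(res)  # returns once the statement has completed
dbClearResult(res)
dbCommit(db)
dbDisconnect(db)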
I have some questions about building a JanusGraph mixed index.
This is my code:
mgmt = graph.openManagement();
idx = mgmt.getGraphIndex('zhh1_index');
prop = mgmt.getPropertyKey('zhang');
mgmt.addIndexKey(idx, prop);
prop = mgmt.getPropertyKey('uri');
mgmt.addIndexKey(idx, prop);
prop = mgmt.getPropertyKey('age');
mgmt.addIndexKey(idx, prop);
mgmt.commit();
mgmt.awaitGraphIndexStatus(graph, 'zhh1_index').status(SchemaStatus.REGISTERED).call();
mgmt = graph.openManagement();
mgmt.updateIndex(mgmt.getGraphIndex('zhh1_index'),SchemaAction.ENABLE_INDEX).get();
mgmt.commit();
vertex2=graph.addVertex(label,'zhh1');
vertex2.property('zhang','male');
vertex2.property('uri','/zhh1/zhanghh');
vertex2.property('age','18');
vertex3=graph.addVertex(label,'zhh1');
vertex3.property('zhang','male');
vertex3.property('uri','/zhh1/zhangheng');
When the program executes this line:
mgmt.awaitGraphIndexStatus(graph, 'zhh1_index').status(SchemaStatus.REGISTERED).call();
the log prints the following (and about 30 seconds later, an exception like "the sleep was interrupted"):
GraphIndexStatusReport[success=false, indexName='zhh1_index', targetStatus=ENABLED, notConverged={jiyq=INSTALLED, zhang=INSTALLED, uri=INSTALLED, age=INSTALLED}, converged={}, elapsed=PT1M0.096S]
I was so confused about this!
It keeps printing this repeatedly for all the indexes I have. Am I doing anything wrong? How can I avoid this message?
When I execute the following statement separately:
mgmt.updateIndex(mgmt.getGraphIndex('zhh1_index'),SchemaAction.ENABLE_INDEX).get();
this exception is reported:
java.util.concurrent.ExecutionException: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: Cannot invoke method get() on null object
Your index seems to be stuck in the INSTALLED state, which may happen for a few reasons: please see this post and look at my answer, specifically bullets 2, 3, and 5.
When did you call buildMixedIndex()?
A REINDEX procedure may be required.
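For reference, a minimal sketch of the REINDEX step, following the same management-API style as the question (run it once the index keys have at least reached REGISTERED):
mgmt = graph.openManagement();
mgmt.updateIndex(mgmt.getGraphIndex('zhh1_index'), SchemaAction.REINDEX).get();
mgmt.commit();
mgmt.awaitGraphIndexStatus(graph, 'zhh1_index').status(SchemaStatus.ENABLED).call();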
I'm using hsqldb to create cached tables and indexed tables.
The data being stored has pretty high frequency so I need to use a connection pool.
Also because there is a lot of data I do not call checkpoint on every commit, but rather expect the data to be flushed after 50,000 rows are inserted.
So the thing is, I can see the .data file growing, but when I connect with an hsqldb client I don't see the tables or the data.
So I ran two simple tests: one inserted a single row and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any hsqldb client.
(Note that I use shutdown=true.)
When I add a checkpoint after each commit, it solves the problem.
Also, specifying in the connection string to use the log solves the problem (I don't want the log in production, though). Not using a pooled connection also solved the problem, as did using a pooled data source and explicitly closing it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the db from committing the changes and making them available to the client. But then, why couldn't I see the result even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scenes?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true); // registers itself with connectionPool
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this Pooled data source to create table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close(); // close the statement before the connection; 'c.close' without parentheses would not compile
And insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert); // PreparedStatement, not Statement
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
c.commit();
stmt.close(); // close once, before the connection
c.close();
So the above seems to add data, but it cannot be seen.
When I explicitly called
connectionPool.close();
Then, and only then, could I see the result.
I also tried to use JDBCDataSource and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one java process is supposed to connect to the file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL (a sketch follows the link below). Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
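For illustration, a minimal sketch of an in-process server, assuming the same database path as the JDBC URL in the question (the port and alias are illustrative):
org.hsqldb.server.Server server = new org.hsqldb.server.Server();
server.setDatabaseName(0, "db"); // alias external clients will use
server.setDatabasePath(0, "file:" + m_dbRoot + dbName + "/db"); // same path as the in-process URL
server.setPort(9001); // HSQLDB default port
server.start();
// external clients can now connect via jdbc:hsqldb:hsql://localhost:9001/db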
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work (see the sketch after the link below). You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations
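For illustration, a hedged sketch of that explicit step, reusing m_dataSource from the question:
// hsqldb.log_data=false disables the log, so persist explicitly before exit
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
st.execute("SHUTDOWN"); // or "CHECKPOINT" to persist and keep the database open
st.close();
c.close();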
I am using STS + Grails 1.3.7 and doing the batch insertion for thousands instances of a domain class.
It is very slow because Hibernate sends each SQL statement in its own JDBC call instead of combining the statements into one batch.
How can I make them into one large statement?
What you can do is flush the Hibernate session every 20 inserts, like this:
int cpt = 0
mycollection.each {
    cpt++
    if (cpt % 20 == 0) {
        it.save(flush: true)  // save the element ('it'), flushing every 20th insert
    } else {
        it.save()
    }
}
Flushing the Hibernate session executes the pending SQL statements every 20 inserts.
This is the easiest method, but you can find more interesting approaches in Tomas Lin's blog. He explains exactly what you want to do: http://fbflex.wordpress.com/2010/06/11/writing-batch-import-scripts-with-grails-gsql-and-gpars/
Using the withTransaction() method on the domain classes makes the inserts much faster for batch scripts. You can build up all of the domain objects in one collection, then insert them in one block.
For example:
Player.withTransaction {
    for (p in players) {
        p.save()
    }
}
You can see this line in the Hibernate docs:
Hibernate disables insert batching at the JDBC level transparently if you use an identity identifier generator.
When I changed the type of generator, it worked.
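For illustration, a hedged sketch of that change in a Grails domain class (the sequence name is hypothetical; any non-identity generator such as 'sequence' or 'hilo' keeps JDBC batching available):
class Player {
    static mapping = {
        // the default identity generator forces Hibernate to disable
        // JDBC-level insert batching; a sequence generator does not
        id generator: 'sequence', params: [sequence: 'player_seq']
    }
}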