I'm trying to run a raw SQL query because I need to insert a particular id when injecting data into the SQL database.
I set the IDENTITY_INSERT flag from C#: SET IDENTITY_INSERT [MyDb-Dev].[dbo].[companies] ON
but when I run the insert, it complains that the flag is not set:
Exception thrown: 'Microsoft.Data.SqlClient.SqlException' in Microsoft.EntityFrameworkCore.Relational.dll
Cannot insert explicit value for identity column in table 'companies' when IDENTITY_INSERT is set to OFF.
I tried:
removing [MyDb-Dev]. from the first query, but I still get the same error
running setIdentityInsert("companies", "ON"); twice, and it never triggers any exception
Here is my code (it never throws any exception, so I guess it works):
private void setIdentityInsert(string table, string value)
{
    try
    {
        var sql = "SET IDENTITY_INSERT [MyDb-Dev].[dbo].[" + table + "] " + value;
        _context.Database.ExecuteSqlRaw(sql);
        _logger.LogInformation(sql);
    }
    catch (Exception e)
    {
        _logger.LogWarning(e.Message);
    }
}
How can I tell whether the SET IDENTITY_INSERT query worked correctly?
Why would that query run without affecting the flag on the SQL Server side?
Thanks for your help.
The reason this fails is a different session. Each request runs in its own session, and SET IDENTITY_INSERT ... ON is valid only for the current session. You will have to rework this using string construction, so that your INSERT is sent in the same batch alongside the SET IDENTITY_INSERT ... ON statement.
OK, Entity Framework is really not the best partner in this story.
You have to call
_context.Database.OpenConnection();
before the first query, so that all subsequent raw SQL statements run on the same open connection and therefore in the same session.
I've got a Talend job with a couple of dataflows running in parallel against a Snowflake database. An update statement against Table A is causing an update on Table B to fail with the following error:
Transaction 'uuid-of-transaction', id 'a-very-long-integer-id', is being committed, SQL execution canceled.
Call END_OPERATION(999,'String1','String2','String3','String4','Success','0')
UPDATE TableB SET BATCH_KEY = 1234, LOAD_DT = current_timestamp::timestamp_ntz, KEY_HASH = MD5(TO_VARCHAR(ARRAY_CONSTRUCT(col1))), ROW_HASH = MD5(TO_VARCHAR(ARRAY_CONSTRUCT(col2, col3))) WHERE BATCH_KEY = -1 OR BATCH_KEY IS NULL;
The code for END_OPERATION is here:
var cmd =
    "CALL END_OPERATION(:1,:2,:3,:4,:5,:6,null);";
try {
    snowflake.execute(
        {sqlText: cmd, binds: [BATCH_KEY, ENTITY, LAYER, SRC, OPERATION, OPERATION_STATUS].map(function (param) { return param === undefined ? null : param; })}
    );
    return "Succeeded.";
}
catch (err) {
    return "Failed: " + err;
}
var cmd =
    "UPDATE TableA SET OPERATION_STATUS=:6,END_DT=current_timestamp,ROW_COUNT=IFNULL(:7,ROW_COUNT) WHERE BATCH_KEY=:1 AND ENTITY_NAME=:2 AND LAYER_NAME=:3 AND SRC=:4 AND OPERATION_NAME=:5";
try {
    snowflake.execute(
        {sqlText: cmd, binds: [BATCH_KEY, ENTITY, LAYER, SRC, OPERATION, OPERATION_STATUS, ROW_COUNT].map(function (param) { return param === undefined ? null : param; })}
    );
    return "Succeeded.";
}
catch (err) {
    return "Failed: " + err;
}
I'm failing to understand why the UPDATE statement against TableB is getting killed. It's getting killed nearly immediately.
Here we need to review the flow of all SQL statements coming from the Talend job within the session in which the failing command runs, as well as all the statements coming from the other, parallel dataflow.
From the Query History we can get the session ID of the failing query. In the History section of the Snowflake UI we can search on that session ID, which lists all the commands run through that particular session.
We can then review the commands in chronological order by sorting on the start date column and observe the sequence of SQL statements.
Your point is valid that an update on TableA should not affect an update on TableB, but after reviewing all the statements of both sessions (the Talend job runs a couple of dataflows in parallel), we may come across a SQL statement in one session that has taken a lock on TableB before the UPDATE against it is submitted from the other session.
Another thing to review is how the workflow manages transactions. Within the same list of SQL queries in that session, we need to check for any statements that set the AUTOCOMMIT parameter at the session level. If AUTOCOMMIT is set to FALSE at the start of the session, the session will not release any of its table locks until an explicit commit is submitted.
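To make the lock-holding behaviour concrete, here is a minimal, generic JDBC sketch (not Talend- or Snowflake-specific; the driver URL, credentials, and class name are placeholders): with autocommit disabled, the locks taken by an UPDATE are held until an explicit commit, and a second session updating the same table can be blocked or cancelled in the meantime.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AutocommitLockSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute the driver/URL for your database.
        try (Connection conn = DriverManager.getConnection("jdbc:example://host/db", "user", "password");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false); // behaves like AUTOCOMMIT = FALSE at the session level
            stmt.executeUpdate("UPDATE TableB SET BATCH_KEY = 1234 WHERE BATCH_KEY IS NULL");
            // Until this commit runs, the lock on TableB is held by this session, and a
            // concurrent UPDATE on TableB from another session may be blocked or cancelled.
            conn.commit();
        }
    }
}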
Since the situation sounds a bit unusual and complex, we may have to dig a little deeper into the execution logs of both queries, and for that we may have to contact Snowflake support.
I am trying to insert data into a Hive (non-ACID) table using a hive-jdbc connection. It works if I execute a single SQL query in a 'statement'. If I try to batch the SQL using 'addBatch', I get the error 'method not supported'. I am using hive-jdbc 2.1 and HDP 2.3. Is there a way to batch multiple SQL statements into a single 'statement' using hive-jdbc?
As Ben mentioned, the addBatch() method is not supported in Hive JDBC.
You can insert multiple rows in one statement, for example:
String batchInsertSql = "insert into name_age values (?,?),(?,?)";
PreparedStatement preparedStatement = connection.prepareStatement(batchInsertSql);
preparedStatement.setString(1, "tom");
preparedStatement.setInt(2, 10);
preparedStatement.setString(3, "sam");
preparedStatement.setInt(4, 20);
preparedStatement.execute();
Unfortunately, Hive JDBC only declares the addBatch method; there is no implementation:
public void addBatch() throws SQLException {
// TODO Auto-generated method stub
throw new SQLException("Method not supported");
}
Try this; it works for me:
INSERT INTO <table_name> VALUES ("Hello", "World"), ("This", "works"), ("Be", "Awesome")
This will run as a single MapReduce job, so it saves time as well.
It will create three rows with the values shown.
Use a StringBuilder to loop over the values, keep appending to the query string, and then execute that string.
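A minimal sketch of that StringBuilder approach, assuming an open Hive JDBC Connection named conn and the same name_age table as above (the helper method name and the value lists are made up for illustration):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

// Builds a single multi-row INSERT string and runs it as one statement,
// since addBatch() is not implemented in Hive JDBC.
public static void insertNameAges(Connection conn, List<String> names, List<Integer> ages) throws SQLException {
    StringBuilder sql = new StringBuilder("insert into name_age values ");
    for (int i = 0; i < names.size(); i++) {
        if (i > 0) {
            sql.append(", ");
        }
        // Values are concatenated directly for brevity; with untrusted input,
        // prefer a PreparedStatement with (?,?) placeholders as shown above.
        sql.append("('").append(names.get(i)).append("', ").append(ages.get(i)).append(")");
    }
    try (Statement stmt = conn.createStatement()) {
        stmt.execute(sql.toString());
    }
}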
When I was trying to delete all records from an Oracle database using the following code, I got this exception:
QUERYY:: delete from DMUSER.CAMERA_DATA1
java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
Actually, I wanted to create a data mining application using Oracle SQL Developer and the NetBeans IDE. My workflow looks as follows in Oracle SQL Developer.
The code I use to delete records from the database is as follows:
public void deleteData() throws SQLException {
    Statement stmt = null;
    String query = "delete from DMUSER.CAMERA_DATA1";
    System.out.println("QUERYY:: " + query);
    try {
        stmt = getConnection().createStatement();
        int rs = stmt.executeUpdate(query);
        if (rs > 0) {
            System.out.println("<-------------------Record Deleted--------------->");
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (stmt != null) {
            stmt.close();
        }
    }
}
I'm very new to this environment and have searched many related questions, even on Stack Overflow, but couldn't find an exact answer that solves this. Please help me figure it out.
QUERYY:: delete from DMUSER.CAMERA_DATA1
java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
You need to check whether the CAMERA_DATA1 table/view exists in the DMUSER schema.
Try connecting to the same database and schema and check whether the table exists. If it does not, you need to create this table/view in that schema.
Referring to the screenshot you provided, I can see a CAMERA_DATA table rather than CAMERA_DATA1. So you can either correct the SQL query as below, or create the missing CAMERA_DATA1 table as described above:
String query = "delete from DMUSER.CAMERA_DATA";
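If you want to verify the table from Java before running the DELETE, here is a small sketch, assuming the same getConnection() helper used in your code; it queries Oracle's ALL_TABLES dictionary view (the helper method name is just an example):

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Returns true if the given table is visible to the current user in the DMUSER schema.
public boolean tableExists(String tableName) throws SQLException {
    String sql = "SELECT COUNT(*) FROM ALL_TABLES WHERE OWNER = 'DMUSER' AND TABLE_NAME = ?";
    try (PreparedStatement ps = getConnection().prepareStatement(sql)) {
        ps.setString(1, tableName.toUpperCase());
        try (ResultSet rs = ps.executeQuery()) {
            return rs.next() && rs.getInt(1) > 0;
        }
    }
}

If tableExists("CAMERA_DATA1") returns false while tableExists("CAMERA_DATA") returns true, the fix is simply to use the correct table name in the DELETE statement.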
The application that I'm working on runs a sequence of queries on AWS Redshift. Some of the queries take longer to execute due to the data volume.
The queries seem to finish on Redshift when I check the execution details on the server. However, the Java application seems to hang indefinitely without throwing any exception or even terminating.
Here's the code that executes the query.
private void execSQLStrings(String[] queries, String dataset, String dbType) throws Exception {
    Connection conn = null;
    if (dbType.equals("redshift")) {
        conn = getRedshiftConnection();
    } else if (dbType.equals("rds")) {
        conn = getMySQLConnection();
    }
    Statement stmt = conn.createStatement();
    String qry = null;
    debug("Query Length: " + queries.length);
    for (int ii = 0; ii < queries.length; ++ii) {
        qry = queries[ii];
        if (dataset != null) {
            qry = qry.replaceAll("DATASET", dataset);
        }
        debug(qry);
        stmt.execute(qry);
    }
    stmt.close();
    conn.close();
}
I can't post the query I'm running at the moment, but it has multiple table joins and GROUP BY conditions against an 800M-row table. The query takes about 7-8 minutes to complete on the server.
You need to update the DSN timeout and/or keepalive settings to make sure that your connections stay alive.
Refer: http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
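If you connect over JDBC rather than an ODBC DSN, the equivalent is to enable TCP keepalives on the connection so an idle socket is not silently dropped by a firewall or NAT while the long-running query executes. A hedged sketch, assuming the PostgreSQL JDBC driver is used against Redshift and using placeholder host/credentials (the Amazon Redshift JDBC driver has similar options, e.g. TCPKeepAlive, but the exact property names depend on the driver):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Connection helper with TCP keepalive enabled so the connection survives
// long-running queries behind firewalls/NAT that drop idle sockets.
private Connection getRedshiftConnection() throws Exception {
    Properties props = new Properties();
    props.setProperty("user", "myuser");            // placeholder credentials
    props.setProperty("password", "mypassword");
    props.setProperty("tcpKeepAlive", "true");      // PostgreSQL JDBC driver property
    props.setProperty("socketTimeout", "0");        // 0 = no client-side read timeout
    String url = "jdbc:postgresql://my-cluster.example.us-east-1.redshift.amazonaws.com:5439/mydb";
    return DriverManager.getConnection(url, props);
}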
I am trying to write to a PostgreSQL database table from MATLAB. I have got the connection working using JDBC and created the table, but I am getting a BatchUpdateException when I try to insert a record.
The MATLAB query to insert the data is:
user_table = 'rm_user';
colNames = {user_id};
data = {longRecords(iterator)};
fastinsert(conn, user_table, colNames, data);
The exception says:
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO rm_user (user_id) VALUES ( '4') was aborted. Call getNextException to see the cause.
But I don't know how to call getNextException from MATLAB.
Any ideas what's causing the problem or how I can find out more about the exception?
EDIT
Turns out I was looking at documentation for a newer version of MATLAB than mine. I have changed from fastinsert to insert and it is now working. However, I'm still interested in knowing if there is a way I could use getNextException from MATLAB.
This should work:
try
    user_table = 'rm_user';
    colNames = {user_id};
    data = {longRecords(iterator)};
    fastinsert(conn, user_table, colNames, data);
catch err
    err.getNextException()
end
Alternatively, just look at the caught error; it should contain the same information.
Also, MATLAB has a function lasterr which will give you the last error without a catch statement. The function is deprecated, but you can find the documentation for replacements at the link provided.