Is there any way to get the values of affected rows using RETURNING INTO?
I have to insert the same row x times and get the IDs of the inserted rows.
The query looks like below:
public static final String QUERY_FOR_SAVE =
    "DECLARE " +
    "  resultId NUMBER; " +
    "BEGIN " +
    "  INSERT INTO x " +
    "    (a, b, c, d, e, f, g, h, i, j, k, l, m) " +
    "  VALUES (sequence.nextVal, :a, :b, :c, :d, :e, :f, :g, :h, :i, :j, :k, :l) " +
    "  RETURNING a INTO :resultId; " +
    "END;";
Now I can add this query to a batch in a Java loop using addBatch:
IntStream.range(0, count)
    .forEach(index -> {
        try {
            // ... set the bind parameters for this row ...
            cs.addBatch();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    });
cs.executeBatch();
Is there any way to return an array or a list from a batch like this?
I could execute those inserts x times using plain SQL, but in that case I am also wondering how to return an array of IDs.
Thanks in advance
I'm assuming this is about Oracle. To my knowledge, this isn't possible, but you can run a bulk insertion using FORALL in your anonymous PL/SQL block, as described in this article I wrote recently:
https://blog.jooq.org/2018/05/02/how-to-run-a-bulk-insert-returning-statement-with-oracle-and-jdbc/
This is a self-contained JDBC example from the article that inserts an array of values and bulk collects the results back into the JDBC client:
try (Connection con = DriverManager.getConnection(url, props);
     Statement s = con.createStatement();

     // The statement itself is much simpler as we can
     // use OUT parameters to collect results into, so no
     // auxiliary local variables and cursors are needed
     CallableStatement c = con.prepareCall(
         "DECLARE "
       + "  v_j t_j := ?; "
       + "BEGIN "
       + "  FORALL j IN 1 .. v_j.COUNT "
       + "    INSERT INTO x (j) VALUES (v_j(j)) "
       + "    RETURNING i, j, k "
       + "    BULK COLLECT INTO ?, ?, ?; "
       + "END;")) {

    try {
        // Create the table and the auxiliary types
        s.execute(
            "CREATE TABLE x ("
          + "  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,"
          + "  j VARCHAR2(50),"
          + "  k DATE DEFAULT SYSDATE"
          + ")");
        s.execute("CREATE TYPE t_i AS TABLE OF NUMBER(38)");
        s.execute("CREATE TYPE t_j AS TABLE OF VARCHAR2(50)");
        s.execute("CREATE TYPE t_k AS TABLE OF DATE");

        // Bind input and output arrays
        c.setArray(1, ((OracleConnection) con).createARRAY(
            "T_J", new String[] { "a", "b", "c" })
        );
        c.registerOutParameter(2, Types.ARRAY, "T_I");
        c.registerOutParameter(3, Types.ARRAY, "T_J");
        c.registerOutParameter(4, Types.ARRAY, "T_K");

        // Execute, fetch, and display output arrays
        c.execute();
        Object[] i = (Object[]) c.getArray(2).getArray();
        Object[] j = (Object[]) c.getArray(3).getArray();
        Object[] k = (Object[]) c.getArray(4).getArray();

        System.out.println(Arrays.asList(i));
        System.out.println(Arrays.asList(j));
        System.out.println(Arrays.asList(k));
    }
    finally {
        try {
            s.execute("DROP TYPE t_i");
            s.execute("DROP TYPE t_j");
            s.execute("DROP TYPE t_k");
            s.execute("DROP TABLE x");
        }
        catch (SQLException ignore) {}
    }
}
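One caveat about the cast in this example: if the Connection comes from a connection pool, the (OracleConnection) con cast can fail because the pool hands you a proxy object. Unwrapping the vendor interface should be safer; a minimal sketch of the alternative binding:

// If con is a pooled/proxied connection, unwrap the vendor
// interface instead of casting directly
OracleConnection oracle = con.unwrap(OracleConnection.class);
c.setArray(1, oracle.createARRAY("T_J", new String[] { "a", "b", "c" }));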
I am using the SqliteModernCpp library. I have a data access object pattern, including the following function:
void movie_data_access_object::update_movie(movie to_update)
{
    // connect to the database
    sqlite::database db(this->connection_string);

    // execute the query
    std::string query = "UPDATE movies SET title = " + to_update.get_title() +
        " WHERE rowid = " + std::to_string(to_update.get_id());
    db << query;
}
Essentially, I want to update the record in the database whose rowid (the primary key) matches the value returned by to_update.get_id().
This code yields an SQL logic error. What is the cause of this?
It turned out that single quotes (') were missing within the query string being built. The line should be:
std::string query = "UPDATE movies SET title = '" + to_update.get_title() + "' WHERE rowid = " + std::to_string(to_update.get_id());
Since there is no UPDATE example in the official docs on GitHub, this is how UPDATE queries can be implemented with prepared statements and binding:
#define MODERN_SQLITE_STD_OPTIONAL_SUPPORT
#include "sqlite_modern_cpp.h"

#include <string>
#include <utility>

using std::string;

struct Book {
    int id;
    string title;
    string details;

    Book(int id_, string title_, string details_) :
        id(id_),
        title(std::move(title_)),
        details(std::move(details_)) {}
};

int main() {
    Book book = Book(0, "foo", "bar");
    sqlite::database db("stackoverflow.db");

    // Assuming there is a record in table `book` that we want to `update`
    db <<
        " UPDATE book SET "
        "   title = ?, "
        "   details = ? "
        " WHERE id = ?; "
        << book.title
        << book.details
        << book.id;

    return 0;
}
I have about 20 columns in one row, and not all of them are required to be filled in when the row is created. I also don't want to hardcode the name of every column in the SQL query and in the http.post request on the frontend. All values come from a form. My code:
var colNames, values []string
for k, v := range formData {
    colNames = append(colNames, k)
    values = append(values, v)
}
Now I have two slices: one with the column names and a second with the values to be inserted. I want to do something like this:
db.Query("insert into views (?,?,?,?,?,?) values (?,?,?,?,?,?)", colNames..., values...)
or like this:
db.Query("insert into views " + colNames + " values" + values)
Any suggestions?
Thanks!
I assume your code examples are just pseudocode, but I'll state the obvious just in case.
db.Query("insert into views (?,?,?,?,?,?) values (?,?,?,?,?,?)", colNames..., values...)
This is invalid Go, since you can only "unpack" the last argument to a function, and also invalid MySQL, since you cannot use placeholders (?) for column names.
db.Query("insert into views " + colNames + " values" + values)
This is also invalid Go since you cannot concatenate strings with slices.
You could format the slices into strings that look like this:
colNamesString := "(col1, col2, col3)"
valuesString := "(val1, val2, val3)"
and now your second code example becomes valid Go and would compile, but don't do this. If you do, your app becomes vulnerable to SQL injection, and that's something you definitely don't want.
Instead do something like this:
// this can be a package level global and you'll need
// one for each table. Keep in mind that Go maps that
// are only read from are safe for concurrent use.
var validColNames = map[string]bool{
    "col1": true,
    "col2": true,
    "col3": true,
    // ...
}
// ...
var colNames []string
var values []interface{} // db.Query takes variadic interface{} arguments
var phs string           // placeholders for values

for k, v := range formData {
    // check that the column is valid
    if !validColNames[k] {
        return ErrBadColName
    }

    colNames = append(colNames, k)
    values = append(values, v)
    phs += "?,"
}
if len(phs) > 0 {
    phs = phs[:len(phs)-1] // drop the last comma
}
phs = "(" + phs + ")"

colNamesString := "(" + strings.Join(colNames, ",") + ")"

// note the "values" keyword between the column list and the placeholders
query := "insert into views " + colNamesString + " values " + phs
db.Query(query, values...)
I have a MERGE INTO statement like this:
private static final String UPSERT_STATEMENT = "MERGE INTO " + TABLE_NAME + " tbl1 " +
"USING (SELECT ? as KEY,? as DATA,? as LAST_MODIFIED_DATE FROM dual) tbl2 " +
"ON (tbl1.KEY= tbl2.KEY) " +
"WHEN MATCHED THEN UPDATE SET DATA = tbl2.DATA, LAST_MODIFIED_DATE = tbl2.LAST_MODIFIED_DATE " +
"WHEN NOT MATCHED THEN " +
"INSERT (DETAILS,KEY, DATA, CREATION_DATE, LAST_MODIFIED_DATE) " +
"VALUES (SEQ.NEXTVAL,tbl2.KEY, tbl2.DATA, tbl2.LAST_MODIFIED_DATE,tbl2.LAST_MODIFIED_DATE)";
This is the execution method:
public void mergeInto(final JavaRDD<Tuple2<Long, String>> rows) {
    if (rows != null && !rows.isEmpty()) {
        rows.foreachPartition((Iterator<Tuple2<Long, String>> iterator) -> {
            JdbcTemplate jdbcTemplate = jdbcTemplateFactory.getJdbcTemplate();
            LobCreator lobCreator = new DefaultLobHandler().getLobCreator();

            while (iterator.hasNext()) {
                Tuple2<Long, String> row = iterator.next();
                String details = row._2();
                Long key = row._1();
                java.sql.Date lastModifiedDate = Date.valueOf(LocalDate.now());

                Boolean isSuccess = jdbcTemplate.execute(UPSERT_STATEMENT, (PreparedStatementCallback<Boolean>) ps -> {
                    ps.setLong(1, key);
                    lobCreator.setBlobAsBytes(ps, 2, details.getBytes());
                    ps.setObject(3, lastModifiedDate);
                    return ps.execute();
                });
                System.out.println(row + "_" + isSuccess);
            }
        });
    }
}
I need to run these upserts in bulk inside PL/SQL, in batches of 10K if possible.
What is the most efficient way to save time: executing 10K statements at once, or executing 10K statements in the same transaction?
How should I change the method to support this?
Thanks,
Me
The most efficient way would be one that bulk-loads your data into the database. Compared to one-by-one uploads (as in your example), I'd expect performance gains of at least one or two orders of magnitude ("bigger" data means less to be gained by bulk-inserting).
You could use a technique as described in this answer: bulk-insert your records into a temporary table first, and then perform a single MERGE statement using that temporary table, as sketched below.
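Here is a minimal JDBC sketch of that approach, assuming a global temporary table named TMP_UPSERT that mirrors the KEY/DATA/LAST_MODIFIED_DATE columns of the target table (TMP_UPSERT and MY_TABLE are placeholder names, not from your code):

import java.sql.*;
import java.time.LocalDate;
import java.util.List;

public class BulkMerge {
    // Sketch only: batch-insert the whole bulk into a temporary table,
    // then run one set-based MERGE instead of 10K single-row merges.
    static void bulkUpsert(Connection con, List<Object[]> rows) throws SQLException {
        con.setAutoCommit(false);

        // Step 1: stage all rows in the temporary table in one batch
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO TMP_UPSERT (KEY, DATA, LAST_MODIFIED_DATE) VALUES (?, ?, ?)")) {
            for (Object[] row : rows) {
                ins.setLong(1, (Long) row[0]);
                ins.setBytes(2, ((String) row[1]).getBytes());
                ins.setDate(3, Date.valueOf(LocalDate.now()));
                ins.addBatch();
            }
            ins.executeBatch(); // one round trip for the whole bulk
        }

        // Step 2: a single MERGE that processes the staged rows as a set
        try (Statement st = con.createStatement()) {
            st.execute(
                "MERGE INTO MY_TABLE tbl1 " +
                "USING TMP_UPSERT tbl2 ON (tbl1.KEY = tbl2.KEY) " +
                "WHEN MATCHED THEN UPDATE SET DATA = tbl2.DATA, " +
                "LAST_MODIFIED_DATE = tbl2.LAST_MODIFIED_DATE " +
                "WHEN NOT MATCHED THEN " +
                "INSERT (DETAILS, KEY, DATA, CREATION_DATE, LAST_MODIFIED_DATE) " +
                "VALUES (SEQ.NEXTVAL, tbl2.KEY, tbl2.DATA, tbl2.LAST_MODIFIED_DATE, tbl2.LAST_MODIFIED_DATE)");
        }
        con.commit();
    }
}

If the temporary table is declared ON COMMIT DELETE ROWS, the staged rows disappear automatically at the commit, so both steps naturally run in a single transaction.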
We are using serializable session isolation in our application. The intended behavior is that when a user is going to insert a new row, we should check for the presence of a row with the same key and update that row if it is found. But I have found multiple rows created for the same key in SQL Server. Is this an issue with the isolation level or with the way we are handling the case?
The following is the code I am using:
private int getNextNumber(String objectName, Connection sqlConnection) throws SQLException {
    int number = 0;
    int printNumber = 0;
    try {
        sqlConnection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        System.out.println("##### Transaction isolation set : " + sqlConnection.getTransactionIsolation());

        Statement stmt = sqlConnection.createStatement();
        ResultSet rs = stmt.executeQuery("select * from [dbo].[db] where DocumentNumber = '" + objectName + "' FOR UPDATE");
        while (rs.next()) {
            printNumber = rs.getInt("PrintNumber");
        }
        System.out.println("#### Print number found from sql is : " + printNumber);

        if (printNumber == 0) {
            printNumber = printNumber + 1;
            stmt.execute("INSERT INTO [dbo].[db] (number, DocumentNumber) VALUES (1, '" + objectName + "')");
        } else {
            number = number + 1;
            stmt.execute("UPDATE [dbo].[db] SET Number = " + number + " WHERE DocumentNumber = '" + objectName + "'");
        }
        //sqlConnection.commit();
    } catch (Exception e) {
        sqlConnection.rollback();
        e.printStackTrace();
    } finally {
        sqlConnection.commit();
    }
    return number;
}
Thanks,
Kishor Koli
It's an issue with the way your database is set up. You need a unique constraint to enforce uniqueness. You can check at insert time all you like, but a unique constraint is the only way it's going to work 100% of the time, so selecting before inserting in the hope of preventing a duplicate is just a waste of time. Insert, then catch the exception/error if one occurs, and proceed.
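For example, a minimal sketch reusing the question's sqlConnection and objectName (the constraint name is illustrative, and the SQL Server error codes 2627/2601 for unique violations are worth verifying against your driver):

// One-time DDL, so the database itself enforces uniqueness:
// ALTER TABLE [dbo].[db] ADD CONSTRAINT uq_document_number UNIQUE (DocumentNumber);

// Then insert optimistically and fall back to UPDATE on a duplicate key
try (PreparedStatement ins = sqlConnection.prepareStatement(
        "INSERT INTO [dbo].[db] (number, DocumentNumber) VALUES (1, ?)")) {
    ins.setString(1, objectName);
    ins.executeUpdate();
} catch (SQLException e) {
    // SQL Server reports unique violations as 2627 (constraint) or 2601 (index)
    if (e.getErrorCode() != 2627 && e.getErrorCode() != 2601) {
        throw e;
    }
    try (PreparedStatement upd = sqlConnection.prepareStatement(
            "UPDATE [dbo].[db] SET number = number + 1 WHERE DocumentNumber = ?")) {
        upd.setString(1, objectName);
        upd.executeUpdate();
    }
}

Using bind parameters instead of string concatenation also closes the SQL injection hole in the original snippet.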
I am inserting data from a CSV file into a MySQL database, specifically into one table.
I use CSVReader, and the CSV file format is:
ts,val
2013-03-31T23:45:00-04:00 New_York,10
And the table is hisdata(ts, val).
Here is my code:
try {
    try {
        CSVReader reader = new CSVReader(new FileReader(csvFile));
        List<String[]> csvList;
        csvList = reader.readAll();
        System.out.println("Start: size is " + csvList.size());

        for (int i = 0; i < csvList.size(); i++) {
            String[] eachStr = csvList.get(i);
            int j = 0;

            // insert (ts, val) into hisdata
            String sql = "INSERT INTO hisdata" + " (ts, val)" + " VALUES"
                    + " ('" + eachStr[j] + "', '" + eachStr[j + 1] + "')";
            Statement st = (Statement) conn.createStatement();
            count = st.executeUpdate(sql);
        }
        System.out.println("access table is inserted: " + count + " records");
        reader.close();
    } catch (IOException e) {
        System.out.println("Error: " + e.getMessage());
    }
    conn.close();
} catch (SQLException e) {
    System.out.println("insert is failure " + e.getMessage());
}
I think the imported data is probably too large. When I called size(), it was 8835.
Basically, I set up the connection, then read the CSV file and insert the data line by line. Finally, I close the reader and the connection.
Here is the Console print out:
Sql Connection starts
Driver loaded
Database connected
Start: size is 8835
insert is failure Data truncated for column 'val' at row 1
Is the problem that the data is too large?
Please help me solve this problem.
Add System.out.println(sql); and execute the printed statement in MySQL to see exactly which row fails.
If the data is too long for the column, increase the column length, or get the column length and then shorten your data with substring, as sketched below.
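For instance, a small sketch of both options against the question's hisdata table (the VARCHAR sizes are assumptions; check the real definition with DESCRIBE hisdata first):

// Option 1: widen the column so the value fits
st.executeUpdate("ALTER TABLE hisdata MODIFY val VARCHAR(100)");

// Option 2: shorten the value before inserting, inside the question's loop,
// assuming the column is defined as VARCHAR(10)
String val = eachStr[j + 1];
if (val.length() > 10) {
    val = val.substring(0, 10);
}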