Does SQL batch query execution involve multiple exchanges of data between server and client? - sql

From what I have read in multiple sources online, batch query execution lets you group multiple statements together and execute them at once, thereby eliminating multiple back-and-forth communications between the client and the server.
Some sources that claim this are:
https://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm#:~:text=Batch%20Processing%20allows%20you%20to,communication%20overhead%2C%20thereby%20improving%20performance.
http://tutorials.jenkov.com/jdbc/batchupdate.html
https://www.baeldung.com/jdbc-batch-processing
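For context, the client-side batching these sources describe is the standard JDBC addBatch()/executeBatch() API, used roughly like this (a minimal sketch with the java.sql imports omitted; the table, columns, and the dataSource/users variables are made up for illustration):
// Minimal JDBC batching sketch; the INSERT target and dataSource are illustrative only.
try (Connection conn = dataSource.getConnection()) {
    conn.setAutoCommit(false);
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO users (id, name) VALUES (?, ?)")) {
        for (User u : users) {
            ps.setLong(1, u.getId());
            ps.setString(2, u.getName());
            ps.addBatch();                      // queue the statement on the client side
        }
        int[] updateCounts = ps.executeBatch(); // hand the whole batch to the driver at once
    }
    conn.commit();
}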
All of these sources talk about a single network trip, etc. However, going through the source code of H2 and SQLite, it looks like each statement is executed one by one, albeit with autocommit disabled.
E.g. SQLite:
final synchronized int[] executeBatch(long stmt, int count, Object[] vals, boolean autoCommit) throws SQLException {
    if (count < 1) {
        throw new SQLException("count (" + count + ") < 1");
    }

    final int params = bind_parameter_count(stmt);

    int rc;
    int[] changes = new int[count];

    try {
        for (int i = 0; i < count; i++) {
            reset(stmt);
            for (int j = 0; j < params; j++) {
                rc = sqlbind(stmt, j, vals[(i * params) + j]);
                if (rc != SQLITE_OK) {
                    throwex(rc);
                }
            }

            rc = step(stmt);
            if (rc != SQLITE_DONE) {
                reset(stmt);
                if (rc == SQLITE_ROW) {
                    throw new BatchUpdateException("batch entry " + i + ": query returns results", changes);
                }
                throwex(rc);
            }

            changes[i] = changes();
        }
    }
    finally {
        ensureAutoCommit(autoCommit);
    }

    reset(stmt);
    return changes;
}
E.g. H2:
public int[] executeBatch() throws SQLException {
    try {
        debugCodeCall("executeBatch");
        if (batchParameters == null) {
            // Empty batch is allowed, see JDK-4639504 and other issues
            batchParameters = Utils.newSmallArrayList();
        }
        batchIdentities = new MergedResult();
        int size = batchParameters.size();
        int[] result = new int[size];
        SQLException first = null;
        SQLException last = null;
        checkClosedForWrite();
        for (int i = 0; i < size; i++) {
            Value[] set = batchParameters.get(i);
            ArrayList<? extends ParameterInterface> parameters =
                    command.getParameters();
            for (int j = 0; j < set.length; j++) {
                Value value = set[j];
                ParameterInterface param = parameters.get(j);
                param.setValue(value, false);
            }
            try {
                result[i] = executeUpdateInternal();
                // Cannot use own implementation, it returns batch identities
                ResultSet rs = super.getGeneratedKeys();
                batchIdentities.add(((JdbcResultSet) rs).result);
            } catch (Exception re) {
                SQLException e = logAndConvert(re);
                if (last == null) {
                    first = last = e;
                } else {
                    last.setNextException(e);
                }
                result[i] = Statement.EXECUTE_FAILED;
            }
        }
        batchParameters = null;
        if (first != null) {
            throw new JdbcBatchUpdateException(first, result);
        }
        return result;
    } catch (Exception e) {
        throw logAndConvert(e);
    }
}
From the above code, I see that there are multiple calls to the database, each with its own result set. How does batch execution actually work?

Related

Rate limiting with Redis

I'm using ElastiCache Redis for rate limiting with Redisson as the client; the relevant code is:
public CompletableFuture<List<Long>> incrementSingleKeys(
        List<String> keys, List<Long> increments, List<Long> ttls) {
    RBatch batch = redissonClient.createBatch(BatchOptions.defaults());
    for (int i = 0; i < keys.size(); i++) {
        batch.getAtomicLong(keys.get(i)).addAndGetAsync(increments.get(i));
    }
    return batch
        .executeAsync()
        .thenCompose(
            (counters) -> {
                List<String> keysToSet = Lists.newArrayList();
                List<Long> TTLsToSet = Lists.newArrayList();
                for (int i = 0; i < counters.size(); i++) {
                    if (counters.get(i) == increments.get(i)) { // only set ttl for new keys
                        keysToSet.add(keys.get(i));
                        TTLsToSet.add(ttls.get(i));
                    }
                }
                if (!keysToSet.isEmpty()) { // Call setTTLs
                    return setTTLs(keysToSet, TTLsToSet)
                        .thenApply(
                            (r) -> counters
                        );
                } else {
                    return CompletableFuture.completedFuture(counters);
                }
            });
}
public CompletableFuture<List<Boolean>> setTTLs(List<String> keys, List<Long> TTLs) {
    CompletableFuture<List<Boolean>> future = new CompletableFuture<>();
    Stopwatch timer = Stopwatch.createStarted();
    RBatch batch = redissonClient.createBatch(BatchOptions.defaults());
    for (int i = 0; i < keys.size(); i++) {
        batch.getBucket(keys.get(i)).expireAsync(TTLs.get(i), TimeUnit.MILLISECONDS);
    }
    batch
        .executeAsync()
        .whenComplete(
            (list, ex) -> {
                if (ex != null) {
                    future.complete(
                        Collections.nCopies(keys.size(), false)); // fail open
                } else {
                    future.complete(
                        list.stream()
                            .map(entry -> (entry instanceof Boolean ? (Boolean) entry : false))
                            .collect(Collectors.toList()));
                }
            });
    return future;
}
Basically, I set the TTL only for new keys. The issue is that sometimes the increment batch call succeeds but the setTTLs call times out, which results in a permanent key and can lead to incorrect rate limiting. One workaround is to always get and set the TTL whenever an increment happens, but this would affect performance. Is there any other solution?
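For reference, the workaround mentioned above (always setting the TTL alongside the increment, so a key can never be left without an expiry) could be sketched roughly like this, reusing the same Redisson RBatch calls as the code above. The method name is hypothetical, and the exact shape of the executeAsync() result differs between Redisson versions; this follows the list-style result the code above already assumes:
// Sketch of the "always set TTL" workaround: both commands go into one batch,
// so no key can end up permanent. The cost is one extra command per key.
public CompletableFuture<List<Long>> incrementAndExpireInOneBatch(
        List<String> keys, List<Long> increments, List<Long> ttls) {
    RBatch batch = redissonClient.createBatch(BatchOptions.defaults());
    for (int i = 0; i < keys.size(); i++) {
        batch.getAtomicLong(keys.get(i)).addAndGetAsync(increments.get(i));
        batch.getBucket(keys.get(i)).expireAsync(ttls.get(i), TimeUnit.MILLISECONDS);
    }
    return batch
        .executeAsync()
        .thenApply(
            (responses) -> {
                // Responses arrive in command order: counter, expire result, counter, ...
                List<Long> counters = Lists.newArrayList();
                for (int i = 0; i < responses.size(); i += 2) {
                    counters.add((Long) responses.get(i));
                }
                return counters;
            })
        .toCompletableFuture();
}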

NullPointerException in Processing (ldrValues)

My code involves both Processing and Arduino. Five different photocells trigger five different sounds, and a sound file plays only when its ldrValue is above the threshold.
The NullPointerException is highlighted on this line:
for (int i = 0; i < ldrValues.length; i++) {
I am not sure which part of my code should be changed so that I can run it.
import processing.serial.*;
import processing.sound.*;

SoundFile[] soundFiles = new SoundFile[5];
Serial myPort; // Create object from Serial class
int[] ldrValues;
int[] thresholds = {440, 490, 330, 260, 450};
int i = 0;
boolean[] states = {false, false, false, false, false};

void setup() {
  size(200, 200);
  println((Object[])Serial.list());
  String portName = Serial.list()[3];
  myPort = new Serial(this, portName, 9600);
  soundFiles[0] = new SoundFile(this, "1.mp3");
  soundFiles[1] = new SoundFile(this, "2.mp3");
  soundFiles[2] = new SoundFile(this, "3.mp3");
  soundFiles[3] = new SoundFile(this, "4.mp3");
  soundFiles[4] = new SoundFile(this, "5.mp3");
}

void draw()
{
  background(255);
  //serial loop
  while (myPort.available() > 0) {
    String myString = myPort.readStringUntil(10);
    if (myString != null) {
      //println(myString);
      ldrValues = int(split(myString.trim(), ','));
      //println(ldrValues);
    }
  }

  for (int i = 0; i < ldrValues.length; i++) {
    println(states[i]);
    println(ldrValues[i]);
    if (ldrValues[i] > thresholds[i] && !states[i]) {
      println("sensor " + i + " is activated");
      soundFiles[i].play();
      states[i] = true;
    }
    if (ldrValues[i] < thresholds[i]) {
      println("sensor " + i + " is NOT activated");
      soundFiles[i].stop();
      states[i] = false;
    }
  }
}
Your approach is, shall we say, optimistic? :)
It always assumes there was a message from Serial, that it was formatted the right way so it could be parsed, and that there were absolutely no issues buffering the data (incomplete strings, etc.).
The simplest thing you could do is check whether the parsing was successful; otherwise the ldrValues array will still be null:
void draw()
{
  background(255);
  //serial loop
  String myString = null; // declared outside the loop so the error message below can reference it
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);
    if (myString != null) {
      //println(myString);
      ldrValues = int(split(myString.trim(), ','));
      //println(ldrValues);
    }
  }

  // double check parsing int values from the string was successful as well, not just buffering the string
  if (ldrValues != null) {
    for (int i = 0; i < ldrValues.length; i++) {
      println(states[i]);
      println(ldrValues[i]);
      if (ldrValues[i] > thresholds[i] && !states[i]) {
        println("sensor " + i + " is activated");
        soundFiles[i].play();
        states[i] = true;
      }
      if (ldrValues[i] < thresholds[i]) {
        println("sensor " + i + " is NOT activated");
        soundFiles[i].stop();
        states[i] = false;
      }
    }
  } else {
    // print a helpful debugging message otherwise
    println("error parsing ldrValues from string: " + myString);
  }
}
(Didn't know you could parse an int[] with int(): nice!)

Reading 3rd row of data from Excel using HSSF

I am trying to read Excel data. There are 2 rows of data in the sheet, but my program tries to read a 3rd row, which is null. Can someone please help me with this? Below is my code.
public static Object[][] readExcel(String filePath, String sheetName)
        throws IOException {
    String[][] sheetData = null;
    FileInputStream inputStream = new FileInputStream(filePath);
    workBook = new HSSFWorkbook(inputStream);
    sheet = workBook.getSheet(sheetName);
    int k, l;
    int rowCount = sheet.getLastRowNum() - sheet.getFirstRowNum();
    System.out.println(rowCount);
    int readRowCount = sheet.getFirstRowNum();
    Row r = sheet.getRow(1);
    int totalCol = r.getLastCellNum();
    System.out.println(totalCol);
    sheetData = new String[rowCount + 1][totalCol];
    k = 0;
    for (int i = readRowCount + 1; i <= rowCount; i++, k++) {
        l = 0;
        for (int j = 0; j < totalCol; j++, l++) {
            try {
                sheetData[k][l] = getCellData(i, j);
                System.out.println(sheetData[k][l]);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    return (sheetData);
}

public static String getCellData(int RowNum, int ColNum) throws Exception {
    try {
        Cell = sheet.getRow(RowNum).getCell(ColNum);
        String CellData = Cell.getStringCellValue();
        return CellData;
    } catch (Exception e) {
        return "";
    }
}

MS Access [Microsoft][ODBC Driver Manager] Invalid cursor state

I get the error in this code snippet:
private String[][] connectToDB(String query) throws ClassNotFoundException {
    String[][] results = null;
    try {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        String db = "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=E:/EACA_AgroVentures1.accdb";
        conn = DriverManager.getConnection(db);
        stmt = conn.prepareStatement(query);
        ResultSet rs = stmt.executeQuery();
        ResultSetMetaData rsm = rs.getMetaData();
        rs.beforeFirst();
        int columns = rsm.getColumnCount();
        int rows = getRowCount(rs);
        //int rows = rs.getFetchSize();
        int rowCount = 0;
        results = new String[rows][columns];
        //System.out.println(rows+" "+columns);
        while ((rs != null) && (rs.next())) {
            for (int i = 1; i < columns; i++) {
                results[rowCount][i-1] = rs.getString(i); // --> ERROR SHOWS HERE
                //System.out.println(rowCount+","+i+" = "+rs.getString(i));
            }
            rowCount++;
        }
        rs.getStatement().close();
        conn.close();
    } catch (SQLException ex) {
        Logger.getLogger(MainFrame.class.getName()).log(Level.SEVERE, null, ex);
    }
    return results;
}
My query consists of the following:
private void loadMR() {
    try {
        String query = "SELECT dealerCode, SUM(kg) AS totalKG, SUM(price) AS totalPrice, returnDate, BID FROM meatReturns GROUP BY BID, dealerCode, returnDate;";
        Object[][] result = connectToDB(query);
        // some more code below..
I tried the first method with a different query used in another method:
private void loadDealers() {
    try {
        String query = "SELECT * FROM Dealers";
        Object[][] result = connectToDBWithRows(
            query);
        // some more code..
and it runs perfectly well. What is going on here? How can I fix this problem?
UPDATE: the only difference between connectToDBWithRows and connectToDB is the while loop that manages the ResultSet:
// Snippet from connectToDBWithRows()
while ((rs != null) && (rs.next())) {
    for (int i = 0; i < columns; i++) {
        if (i == 0) {
            // Do nothing
        } else {
            results[rowCount][i] = rs.getString(i);
            //System.out.println(rowCount+","+i+" = "+rs.getString(i));
        }
    }
    rowCount++;
}
and this is my getRowCount() method
private int getRowCount(ResultSet resultSet) {
    int size = 0;
    try {
        resultSet.last();
        size = resultSet.getRow();
        resultSet.beforeFirst();
    }
    catch (Exception ex) {
        return 0;
    }
    return size;
}
I've noticed that sometimes Access needs you to qualify column names with the table name in SQL statements. Try the following:
private void loadMR() {
    try {
        String query = "SELECT meatReturns.dealerCode, SUM(meatReturns.kg) AS totalKG, SUM(meatReturns.price) AS totalPrice, meatReturns.returnDate, meatReturns.BID FROM meatReturns GROUP BY meatReturns.BID, meatReturns.dealerCode, meatReturns.returnDate";
        Object[][] result = connectToDBWithRows(query);

Get a list of artifacts in an Ivy repository

I manage an Ivy repository with an extensive number of artifacts, and I have been asked to list all third-party libraries, of which we have a hundred odd. Does anyone know of a way to retrieve a list of artifacts from an Ivy repo?
I found no built-in way to do this, so I wrote a Java program to get the results. I thought I might share the answer in case anyone wants it in the future; the output is also formatted to be copied straight into an Excel document.
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

public class ListArtifacts {

    public static void main(String[] args) {
        Collection<File> all = new ArrayList<File>();
        addTree(new File("."), all);
        String delimeter = "\\.";
        List<String> remove = new ArrayList<String>();
        List<String> everything = new ArrayList<String>();
        remove.add("pom");
        remove.add("jar");
        remove.add("xml");
        remove.add("txt");
        remove.add("sha1");
        remove.add("md5");
        remove.add("metadata");
        remove.add("tar");
        remove.add("gz");
        remove.add("zip");
        remove.add("rar");
        FileWriter fWriter = null;
        BufferedWriter writer = null;
        try {
            fWriter = new FileWriter("info.txt");
            writer = new BufferedWriter(fWriter);
            Iterator itr = all.iterator();
            while (itr.hasNext() == true) {
                String[] split;
                String temp = itr.next().toString();
                split = temp.split(delimeter);
                int i = 0;
                int j = 0;
                boolean flag = false;
                while (i < split.length) {
                    while (j < remove.size()) {
                        if (split[i].equals(remove.get(j))) {
                            flag = true;
                        }
                        j++;
                    }
                    j = 0;
                    i++;
                }
                if (flag == false) {
                    String output = "";
                    int k = 0;
                    boolean flag2 = false;
                    boolean hasVersion = false;
                    while (k < split.length) {
                        if (flag2 == true) {
                            output += ".";
                            flag2 = false;
                        }
                        output = output + split[k].toString();
                        boolean lastInt = false;
                        try {
                            String last = split[k].substring(split[k].length() - 1);
                            if (isInteger(last) == true)
                                lastInt = true;
                        } catch (Exception e) {}
                        if ((isInteger(split[k].toString()) == true) || (lastInt == true)) {
                            flag2 = true;
                            hasVersion = true;
                        }
                        k++;
                    }
                    if (hasVersion == true) {
                        everything.add(output.substring(1));
                        writer.append(output.substring(1));
                        writer.newLine();
                    }
                }
            }
            int i = 0;
            String delim = "\\\\";
            String finalOutput = "";
            String toSplit = "";
            while (i < everything.size()) {
                toSplit = everything.get(i);
                String[] split2 = toSplit.split(delim);
                finalOutput = split2[0] + "\t";
                int j = 1;
                while (j < split2.length - 2) {
                    finalOutput += split2[j] + ".";
                    j++;
                }
                finalOutput += split2[split2.length - 2] + "\t";
                finalOutput += split2[split2.length - 1];
                writer.append(finalOutput);
                writer.newLine();
                i++;
            }
            writer.close();
        } catch (Exception e) {
        }
        System.out.println(all);
    }

    public static boolean isInteger(String input)
    {
        try
        {
            Integer.parseInt(input);
            return true;
        }
        catch (Exception e)
        {
            return false;
        }
    }

    static void addTree(File file, Collection<File> all) {
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                all.add(child);
                addTree(child, all);
            }
        }
    }
}
I'm sure this can be done much more cleanly, but I didn't put much thought into it; I just did the first thing I thought of, with no revision.
If you are looking to do this with Ivy during a build, the report task should help you get a report of all the JARs you are using.
If you are trying to fetch these details from the repository manager (covering all possible users), could you answer the question from #oers? Repository managers often offer some API that you can use to get reports about the artifacts that they store.