How to iterate through a ResultSet

My SELECT_QUERY_RETURNS_LIST returns 5 results, but the following while loop prints only 4:
jdbcTemplate.query(SELECT_QUERY_RETURNS_LIST, new RowCallbackHandler() {
    public void processRow(ResultSet resultSet) throws SQLException {
        int count = 1;
        while (resultSet.next()) {
            String payload = resultSet.getString(1);
            LOGGER.info("My result {}...", count++);
        }
    }
});
Logically this is correct, since the Spring JDBC RowCallbackHandler documentation says:
rs - the ResultSet to process (pre-initialized for the current row)
On the very first line we call resultSet.next(), so processing starts from the second record, which is why only 4 records are printed. The following code works as I expect:
jdbcTemplate.query(SELECT_QUERY_RETURNS_LIST, new RowCallbackHandler() {
    public void processRow(ResultSet resultSet) throws SQLException {
        int count = 1;
        String payload = resultSet.getString(1);
        LOGGER.info("My result {}...", count++);
        while (resultSet.next()) {
            payload = resultSet.getString(1);
            LOGGER.info("My result {}...", count++);
        }
    }
});
So please suggest a way to avoid duplicating the code before the while loop.

The issue has been resolved by using a do-while loop instead of a while loop:
jdbcTemplate.query(SELECT_QUERY_RETURNS_LIST, new RowCallbackHandler() {
    public void processRow(ResultSet resultSet) throws SQLException {
        int count = 1;
        do {
            String payload = resultSet.getString(1);
            LOGGER.info("My result {}...", count++);
        } while (resultSet.next());
    }
});

You don't need to write a loop at all with the RowCallbackHandler interface. It is used inside JdbcTemplate's RowCallbackHandlerResultSetExtractor, where there is already a while loop over the ResultSet entries, so processRow is called once per row.
The only problem is that there is no row index in the RowCallbackHandler.processRow signature, so you have to rely on an external counter, I suppose.
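For illustration, a minimal sketch of that loop-free style (my own sketch, not code from the question; the external counter is a java.util.concurrent.atomic.AtomicInteger since processRow is invoked once per row):

// Sketch: JdbcTemplate drives the iteration; no next() calls needed here.
final AtomicInteger count = new AtomicInteger(1);
jdbcTemplate.query(SELECT_QUERY_RETURNS_LIST, new RowCallbackHandler() {
    public void processRow(ResultSet resultSet) throws SQLException {
        String payload = resultSet.getString(1);
        LOGGER.info("My result {}...", count.getAndIncrement());
    }
});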


What happens if no records exist when I use Room to query?

I use Room in my Android Studio app. Code A crashes when no record exists for the query.
What happens if no records exist when I query with Code B?
@Dao
interface RecordDao {
    // Code A
    @Query("SELECT * FROM record_table where id=:id")
    fun getByID(id: Int): Flow<RecordEntity>

    // Code B
    @Query("SELECT * FROM record_table")
    fun getAll(): Flow<List<RecordEntity>>
}
If you look at the code Room generates for the RecordDao_Impl implementation of RecordDao, you'll notice several things:
The getByID function returns null when no matching record exists in the table; since the declared return type of Code A is non-null, Kotlin throws a NullPointerException.
The getAll function returns a Flow carrying an ArrayList: if any records are found they are added to the list, otherwise the empty ArrayList is emitted to the Flow. You therefore always get a Flow with a list inside it, whether or not Room found matching records, so no exception is thrown.
You can understand this a bit more by looking at the code generated for the two functions:
@Override
public Flow<RecordEntity> getByID(final int id) {
    final String _sql = "SELECT * FROM record_table where id=?";
    final RoomSQLiteQuery _statement = RoomSQLiteQuery.acquire(_sql, 1);
    int _argIndex = 1;
    _statement.bindLong(_argIndex, id);
    return CoroutinesRoom.createFlow(__db, false, new String[]{"record_table"}, new Callable<RecordEntity>() {
        @Override
        public RecordEntity call() throws Exception {
            final Cursor _cursor = DBUtil.query(__db, _statement, false, null);
            try {
                final int _cursorIndexOfId = CursorUtil.getColumnIndexOrThrow(_cursor, "id");
                final RecordEntity _result;
                if (_cursor.moveToFirst()) {
                    final int _tmpId;
                    _tmpId = _cursor.getInt(_cursorIndexOfId);
                    _result = new RecordEntity(_tmpId);
                } else {
                    _result = null;
                }
                return _result;
            } finally {
                _cursor.close();
            }
        }

        @Override
        protected void finalize() {
            _statement.release();
        }
    });
}

@Override
public Flow<List<RecordEntity>> getAll() {
    final String _sql = "SELECT * FROM record_table";
    final RoomSQLiteQuery _statement = RoomSQLiteQuery.acquire(_sql, 0);
    return CoroutinesRoom.createFlow(__db, false, new String[]{"record_table"}, new Callable<List<RecordEntity>>() {
        @Override
        public List<RecordEntity> call() throws Exception {
            final Cursor _cursor = DBUtil.query(__db, _statement, false, null);
            try {
                final int _cursorIndexOfId = CursorUtil.getColumnIndexOrThrow(_cursor, "id");
                final List<RecordEntity> _result = new ArrayList<RecordEntity>(_cursor.getCount());
                while (_cursor.moveToNext()) {
                    final RecordEntity _item;
                    final int _tmpId;
                    _tmpId = _cursor.getInt(_cursorIndexOfId);
                    _item = new RecordEntity(_tmpId);
                    _result.add(_item);
                }
                return _result;
            } finally {
                _cursor.close();
            }
        }

        @Override
        protected void finalize() {
            _statement.release();
        }
    });
}
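A common way to make Code A safe, shown as a sketch of the changed DAO signature (the only change is the nullable element type, which lets Room emit the null visible in the generated code instead of crashing):

// Declaring the type nullable lets Room return null when no row matches.
@Query("SELECT * FROM record_table where id=:id")
fun getByID(id: Int): Flow<RecordEntity?>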

Chronicle Queue performance when using ByteBuffer

I'm using Chronicle Queue as a data store that is written once but read many, many times. I'm trying to get the best performance (time to read x number of records).
My data set (for my test) is about 3 million records, where each record consists of a bunch of longs and doubles. I initially started with the "highest-level" API, which was obviously slow, then with "self-describing" data as mentioned in the Chronicle documentation, and finally with "raw data", which gave the best performance.
Code as below (the corresponding write() code is omitted for brevity):
public List<DomainObject> read() {
    final ExcerptTailer tailer = _cq.createTailer();
    List<DomainObject> result = new ArrayList<>();
    for (;;) {
        try (final DocumentContext ctx = tailer.readingDocument()) {
            Wire wire = ctx.wire();
            if (wire != null) {
                wire.readBytes(in -> {
                    final long var1 = in.readLong();
                    final int var2 = in.readInt();
                    final double var3 = in.readDouble();
                    final int var4 = in.readInt();
                    final double var5 = in.readDouble();
                    final int var6 = in.readInt();
                    final double var7 = in.readDouble();
                    result.add(DomainObject.create(var1, var2, var3, var4, var5, var6, var7));
                });
            } else {
                return result;
            }
        }
    }
}
However, to improve my application's performance, I started using a ByteBuffer instead of a DomainObject and modified my read method as below:
public List<ByteBuffer> read() {
    final ExcerptTailer tailer = _cq.createTailer();
    List<ByteBuffer> result = new ArrayList<>();
    for (;;) {
        try (final DocumentContext ctx = tailer.readingDocument()) {
            Wire wire = ctx.wire();
            if (wire != null) {
                ByteBuffer bb = ByteBuffer.allocate(56);
                wire.readBytes(in -> in.read(bb));
                result.add(bb);
            } else {
                return result;
            }
        }
    }
}
The listing above took an average of 550 ms vs. 270 ms for the first listing.
I also tried using Bytes.elasticByteBuffer as mentioned in this post, but it was way slower.
I'm guessing the second listing is slower because it has to loop through the entire byte array.
So my question is: is there a more performant way to read bytes from Chronicle Queue into a ByteBuffer? My data will always be 56 bytes, with 8 bytes for each data item.
I suggest you use Chronicle-Bytes instead of a raw ByteBuffer. Chronicle's Bytes class is a wrapper on top of ByteBuffer but much easier to use.
The problem with your code is that you create a bunch of objects instead of stream-processing. I suggest you read with something like:
public void read(Consumer<Bytes> consumer) {
    final ExcerptTailer tailer = _cq.createTailer();
    for (;;) {
        try (final DocumentContext ctx = tailer.readingDocument()) {
            if (ctx.isPresent()) {
                consumer.accept(ctx.wire().bytes());
            } else {
                break;
            }
        }
    }
}
And your writing method could look like:
public void write(BytesMarshallable o) {
    try (DocumentContext dc = _cq.acquireAppender().writingDocument()) {
        o.writeMarshallable(dc.wire().bytes());
    }
}
And then your consumer could look like:

private BytesMarshallable reusable = new BusinessObject(); // your class here

public void accept(Bytes b) {
    reusable.readMarshallable(b);
    // your business logic here
    doSomething(reusable);
}
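Putting the pieces together, a usage sketch (reader, BusinessObject, and doSomething are placeholder names from this answer, not Chronicle API):

// Reuse one mutable instance so no objects are allocated per record.
BusinessObject reusable = new BusinessObject();
reader.read(bytes -> {
    reusable.readMarshallable(bytes); // decode in place
    doSomething(reusable);            // process before the next record overwrites it
});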

NetBeans, SQL query after connection

After a successful connection to the SQL server (using WAMP), I've added a simple table with one column and one value, so I'm trying to take that value and update the JList. It shows no error and it doesn't work; forgive me if I've done something stupid. (Also, I'm writing this inside a JComboBox event handler.)
private void jComboBox1ActionPerformed(java.awt.event.ActionEvent evt) {
    String a = null;
    String que = "SELECT * FROM game";
    Connection dn = null;
    Statement st = null;
    ResultSet sm = null;
    String db = "jdbc:mysql://localhost:3306/project";
    try {
        dn = DriverManager.getConnection(db, "user", "");
        st = dn.prepareStatement(que);
        sm = st.executeQuery(que);
        while (sm.next()) {
            a = sm.getString(1);
        }
        DefaultListModel n = new DefaultListModel();
        n.addElement(a);
        jList1.setModel(n);
    } catch (SQLException e) {
    }
}
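Two details in this code are worth flagging: dn.prepareStatement(que) returns a PreparedStatement, and the JDBC spec forbids calling executeQuery(String) on a PreparedStatement, so it throws an SQLException; the empty catch block then swallows it silently, which matches "it shows no error and it doesn't work". A minimal corrected sketch (assuming the same table, URL, and credentials):

try (Connection dn = DriverManager.getConnection(db, "user", "");
     PreparedStatement st = dn.prepareStatement(que);
     ResultSet sm = st.executeQuery()) {        // no-arg executeQuery on PreparedStatement
    DefaultListModel<String> n = new DefaultListModel<>();
    while (sm.next()) {
        n.addElement(sm.getString(1));          // add every row, not just the last one
    }
    jList1.setModel(n);
} catch (SQLException e) {
    e.printStackTrace();                        // surface the error instead of hiding it
}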

Pig - passing a DataBag to a UDF constructor

I have a script which is loading some data about venues:
venues = LOAD 'venues_extended_2.csv' USING org.apache.pig.piggybank.storage.CSVLoader() AS (Name:chararray, Type:chararray, Latitude:double, Longitude:double, City:chararray, Country:chararray);
Then I want to create a UDF whose constructor accepts the venues relation.
So I tried to define the UDF like this:
DEFINE GenerateVenues org.gla.anton.udf.main.GenerateVenues(venues);
And here is the actual UDF:
public class GenerateVenues extends EvalFunc<Tuple> {
    TupleFactory mTupleFactory = TupleFactory.getInstance();
    BagFactory mBagFactory = BagFactory.getInstance();
    private static final String ALLCHARS = "(.*)";
    private ArrayList<String> venues;
    private String regex;

    public GenerateVenues(DataBag venuesBag) {
        Iterator<Tuple> it = venuesBag.iterator();
        venues = new ArrayList<String>((int) (venuesBag.size() + 1)); // possible fails!!!
        String current = "";
        regex = "";
        while (it.hasNext()) {
            Tuple t = it.next();
            try {
                current = "(" + ALLCHARS + t.get(0) + ALLCHARS + ")";
                venues.add((String) t.get(0));
            } catch (ExecException e) {
                throw new IllegalArgumentException("VenuesRegex: requires tuple with at least one value");
            }
            regex += current + (it.hasNext() ? "|" : "");
        }
    }

    @Override
    public Tuple exec(Tuple tuple) throws IOException {
        // expect one string
        if (tuple == null || tuple.size() != 2) {
            throw new IllegalArgumentException(
                    "BagTupleExampleUDF: requires two input parameters.");
        }
        try {
            String tweet = (String) tuple.get(0);
            for (String venue : venues) {
                if (tweet.matches(ALLCHARS + venue + ALLCHARS)) {
                    Tuple output = mTupleFactory.newTuple(Collections.singletonList(venue));
                    return output;
                }
            }
            return null;
        } catch (Exception e) {
            throw new IOException(
                    "BagTupleExampleUDF: caught exception processing input.", e);
        }
    }
}
When executed, the script fires an error at the DEFINE part, just before (venues);:
2013-12-19 04:28:06,072 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: <file script.pig, line 6, column 60> mismatched input 'venues' expecting RIGHT_PAREN
Obviously I'm doing something wrong; can you help me figure out what?
Is it that the UDF cannot accept the venues relation as a parameter? Or is the relation not represented by a DataBag, as in public GenerateVenues(DataBag venuesBag)?
Thanks!
PS I'm using Pig version 0.11.1.1.3.0.0-107.
As @WinnieNicklaus already said, you can only pass strings to UDF constructors.
Having said that, the solution to your problem is to use the distributed cache: you need to override public List<String> getCacheFiles() to return a list of filenames that will be made available via the distributed cache. With that, you can read the file as a local file and build your table.
The downside is that Pig has no initialization function, so you have to implement something like

private void init() {
    if (!this.initialized) {
        // read table
    }
}

and then call that as the first thing from exec.
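A sketch of that approach (the HDFS path, symlink name, and CSV layout are assumptions; getCacheFiles() ships the file to each node, where it is readable as a local file under the name after the #):

public class GenerateVenues extends EvalFunc<Tuple> {
    private boolean initialized = false;
    private List<String> venues;

    @Override
    public List<String> getCacheFiles() {
        // HDFS path before '#', local symlink name after it
        return Collections.singletonList("/data/venues_extended_2.csv#venues.csv");
    }

    private void init() throws IOException {
        if (!initialized) {
            venues = new ArrayList<String>();
            BufferedReader reader = new BufferedReader(new FileReader("./venues.csv"));
            String line;
            while ((line = reader.readLine()) != null) {
                venues.add(line.split(",")[0]); // first column holds the venue name
            }
            reader.close();
            initialized = true;
        }
    }

    @Override
    public Tuple exec(Tuple tuple) throws IOException {
        init(); // lazy one-time initialization
        // ... matching logic as in the original exec ...
        return null;
    }
}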
You can't use a relation as a parameter in a UDF constructor. Only strings can be passed as arguments, and if they really represent another type, you have to parse them in the constructor.

Get next sequence value from database using Hibernate

I have an entity with a non-ID field that must be set from a sequence.
Currently, I fetch the first value of the sequence, store it on the client side, and compute from that value.
However, I'm looking for a "better" way of doing this. I have implemented a way to fetch the next sequence value:
public Long getNextKey() {
    Query query = session.createSQLQuery("select nextval('mySequence')");
    Long key = ((BigInteger) query.uniqueResult()).longValue();
    return key;
}
However, this approach reduces performance significantly (creation of ~5000 objects is slowed down by a factor of about 3, from 5740 ms to 13648 ms).
I have tried to add a "fake" entity:
@Entity
@SequenceGenerator(name = "sequence", sequenceName = "mySequence")
public class SequenceFetcher {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequence")
    private long id;

    public long getId() {
        return id;
    }
}
However, this approach didn't work either (all the IDs returned were 0).
Can someone advise me how to fetch the next sequence value using Hibernate efficiently?
Edit: Upon investigation, I have discovered that calling Query query = session.createSQLQuery( "select nextval('mySequence')" ); is far less efficient than using @GeneratedValue, because Hibernate somehow manages to reduce the number of fetches when accessing the sequence described by @GeneratedValue.
For example, when I create 70,000 entities (thus with 70,000 primary keys fetched from the same sequence), I get everything I need.
HOWEVER, Hibernate only issues 1404 select nextval ('local_key_sequence') commands. Note: on the database side, the sequence cache is set to 1.
If I fetched all the values manually, it would take 70,000 selects, a huge difference in performance. Does anyone know the internal functioning of Hibernate, and how to reproduce it manually?
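A plausible explanation (my assumption, not from the source): @SequenceGenerator defaults to allocationSize = 50, and Hibernate's sequence optimizer hands out that many IDs per database round trip, which matches the observed ratio of roughly 70,000 / 50 = 1,400 nextval calls. Making the allocation explicit shows where the batching comes from:

// Hypothetical sketch: Hibernate calls nextval once, then assigns the next
// 50 IDs in memory before touching the database sequence again.
@SequenceGenerator(name = "sequence", sequenceName = "mySequence", allocationSize = 50)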
You can use the Hibernate Dialect API for database independence, as follows:
class SequenceValueGetter {
    private SessionFactory sessionFactory;

    // For Hibernate 3
    public Long getId(final String sequenceName) {
        final List<Long> ids = new ArrayList<Long>(1);
        sessionFactory.getCurrentSession().doWork(new Work() {
            public void execute(Connection connection) throws SQLException {
                DialectResolver dialectResolver = new StandardDialectResolver();
                Dialect dialect = dialectResolver.resolveDialect(connection.getMetaData());
                PreparedStatement preparedStatement = null;
                ResultSet resultSet = null;
                try {
                    preparedStatement = connection.prepareStatement(dialect.getSequenceNextValString(sequenceName));
                    resultSet = preparedStatement.executeQuery();
                    resultSet.next();
                    ids.add(resultSet.getLong(1));
                } catch (SQLException e) {
                    throw e;
                } finally {
                    if (preparedStatement != null) {
                        preparedStatement.close();
                    }
                    if (resultSet != null) {
                        resultSet.close();
                    }
                }
            }
        });
        return ids.get(0);
    }

    // For Hibernate 4
    public Long getID(final String sequenceName) {
        ReturningWork<Long> maxReturningWork = new ReturningWork<Long>() {
            @Override
            public Long execute(Connection connection) throws SQLException {
                DialectResolver dialectResolver = new StandardDialectResolver();
                Dialect dialect = dialectResolver.resolveDialect(connection.getMetaData());
                PreparedStatement preparedStatement = null;
                ResultSet resultSet = null;
                try {
                    preparedStatement = connection.prepareStatement(dialect.getSequenceNextValString(sequenceName));
                    resultSet = preparedStatement.executeQuery();
                    resultSet.next();
                    return resultSet.getLong(1);
                } catch (SQLException e) {
                    throw e;
                } finally {
                    if (preparedStatement != null) {
                        preparedStatement.close();
                    }
                    if (resultSet != null) {
                        resultSet.close();
                    }
                }
            }
        };
        Long maxRecord = sessionFactory.getCurrentSession().doReturningWork(maxReturningWork);
        return maxRecord;
    }
}
Here is what worked for me (specific to Oracle, but using addScalar seems to be the key):
Long getNext() {
    Query query =
        session.createSQLQuery("select MYSEQ.nextval as num from dual")
               .addScalar("num", StandardBasicTypes.BIG_INTEGER);
    return ((BigInteger) query.uniqueResult()).longValue();
}
Thanks to the posters here: springsource_forum
I found the solution:
public class DefaultPostgresKeyServer {
    private Session session;
    private Iterator<BigInteger> iter;
    private long batchSize;

    public DefaultPostgresKeyServer(Session sess, long batchFetchSize) {
        this.session = sess;
        batchSize = batchFetchSize;
        iter = Collections.<BigInteger>emptyList().iterator();
    }

    @SuppressWarnings("unchecked")
    public Long getNextKey() {
        if (!iter.hasNext()) {
            Query query = session.createSQLQuery(
                    "SELECT nextval( 'mySchema.mySequence' ) FROM generate_series( 1, " + batchSize + " )");
            iter = (Iterator<BigInteger>) query.list().iterator();
        }
        return iter.next().longValue();
    }
}
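Usage is straightforward (a sketch; the batch size of 1000 is an arbitrary choice that trades skipped sequence values for fewer round trips):

DefaultPostgresKeyServer keyServer = new DefaultPostgresKeyServer(session, 1000);
Long firstKey = keyServer.getNextKey();  // one SELECT fetches 1000 values
Long secondKey = keyServer.getNextKey(); // served from the in-memory batch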
If you are using Oracle, consider specifying a cache size for the sequence. If you routinely create objects in batches of 5K, you can set it to 1000 or 5000. We did this for the sequence backing the surrogate primary key and were amazed that execution times for an ETL process hand-written in Java dropped by half.
I could not paste formatted code into a comment, so here's the sequence DDL:
create sequence seq_mytable_sid
minvalue 1
maxvalue 999999999999999999999999999
increment by 1
start with 1
cache 1000
order
nocycle;
To get the new ID, all you have to do is flush the entity manager. See the getNext() method below:
@Entity
@SequenceGenerator(name = "sequence", sequenceName = "mySequence")
public class SequenceFetcher {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequence")
    private long id;

    public long getId() {
        return id;
    }

    public static long getNext(EntityManager em) {
        SequenceFetcher sf = new SequenceFetcher();
        em.persist(sf);
        em.flush();
        return sf.getId();
    }
}
POSTGRESQL
String psqlAutoincrementQuery = "SELECT NEXTVAL(CONCAT(:psqlTableName, '_id_seq')) as id";
Long psqlAutoincrement = (Long) YOUR_SESSION_OBJ.createSQLQuery(psqlAutoincrementQuery)
.addScalar("id", Hibernate.LONG)
.setParameter("psqlTableName", psqlTableName)
.uniqueResult();
MYSQL
String mysqlAutoincrementQuery = "SELECT AUTO_INCREMENT as id FROM information_schema.tables WHERE table_name = :mysqlTableName AND table_schema = DATABASE()";
Long mysqlAutoincrement = (Long) YOUR_SESSION_OBJ.createSQLQuery(mysqlAutoincrementQuery)
.addScalar("id", Hibernate.LONG)
.setParameter("mysqlTableName", mysqlTableName)
.uniqueResult();
Interesting that it works for you. When I tried your solution, an error came up saying "Type mismatch: cannot convert from SQLQuery to Query". My solution therefore looks like:
SQLQuery query = session.createSQLQuery("select nextval('SEQUENCE_NAME')");
Long nextValue = ((BigInteger)query.uniqueResult()).longValue();
With that solution I didn't run into performance problems.
And don't forget to reset the value if you only queried it for informational purposes:
--nextValue;
query = session.createSQLQuery("select setval('SEQUENCE_NAME'," + nextValue + ")");
Spring 5 has some built-in helper classes for this:
org/springframework/jdbc/support/incrementer
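For example, a minimal sketch using the PostgreSQL variant (assuming a configured DataSource; the package offers a sibling class per database):

import org.springframework.jdbc.support.incrementer.PostgresSequenceMaxValueIncrementer;

PostgresSequenceMaxValueIncrementer incrementer =
        new PostgresSequenceMaxValueIncrementer(dataSource, "mySequence");
long nextId = incrementer.nextLongValue(); // issues select nextval('mySequence')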
Here is the way I do it:
@Entity
public class ServerInstanceSeq {
    @Id // mysql bigint(20)
    @SequenceGenerator(name = "ServerInstanceIdSeqName", sequenceName = "ServerInstanceIdSeq", allocationSize = 20)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "ServerInstanceIdSeqName")
    public Long id;
}

ServerInstanceSeq sis = new ServerInstanceSeq();
session.beginTransaction();
session.save(sis);
session.getTransaction().commit();
System.out.println("sis.id after save: " + sis.id);
Your idea with the SequenceGenerator fake entity is good.
@Id
@GenericGenerator(name = "my_seq", strategy = "sequence", parameters = {
        @org.hibernate.annotations.Parameter(name = "sequence_name", value = "MY_CUSTOM_NAMED_SQN")
})
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "my_seq")
It is important to use the parameter with the key name "sequence_name". Run a debugging session on the Hibernate class SequenceStyleGenerator, in the configure(...) method at the line final QualifiedName sequenceName = determineSequenceName( params, dialect, jdbcEnvironment );, to see more details about how the sequence name is computed by Hibernate. There are some defaults in there you could also use.
After the fake entity, I created a CrudRepository:
public interface SequenceRepository extends CrudRepository<SequenceGenerator, Long> {}
In the JUnit test, I call the save method of the SequenceRepository:
SequenceGenerator sequenceObject = new SequenceGenerator();
SequenceGenerator result = sequenceRepository.save(sequenceObject);
If there is a better way to do this (maybe support for a generator on any type of field instead of just the ID), I would be more than happy to use it instead of this "trick".