NullPointerException with PlayOrm 1.4.1 when persisting entity

I have mapped entities in PlayOrm and my project was running fine with my entities mapped the way they were. However, after installing PlayOrm 1.4.1, the latest version released in Maven, I got the NullPointerException below.
I want to find the error, but have no clue of where to start looking.
Any hint?
INFO: found meta=User locally
2012-11-09 17:32:22,918 com.alvazan.orm.layer9z.spi.db.cassandra.ColumnFamilyHelper waitForNodesToBeUpToDate
INFO: LOOP until all nodes have same schema version OR timeout in 300000 milliseconds
2012-11-09 17:32:22,939 com.alvazan.orm.layer9z.spi.db.cassandra.ColumnFamilyHelper tryToLoadColumnFamilyImpl
INFO: Well, we did NOT find any column family=User to load in cassandra(from virt=User)
2012-11-09 17:32:22,939 com.alvazan.orm.layer9z.spi.db.cassandra.ColumnFamilyHelper tryToLoadColumnFamilyVirt
INFO: Total time to LOAD column family meta from cassandra=21
java.lang.NullPointerException
at com.alvazan.orm.impl.meta.data.MetaEmbeddedSimple.translateToColumnImpl(MetaEmbeddedSimple.java:105)
at com.alvazan.orm.impl.meta.data.MetaEmbeddedSimple.translateToColumn(MetaEmbeddedSimple.java:93)
at com.alvazan.orm.impl.meta.data.MetaClassSingle.translateToRow(MetaClassSingle.java:82)
at com.alvazan.orm.layer0.base.BaseEntityManagerImpl.putImpl(BaseEntityManagerImpl.java:102)
at com.alvazan.orm.layer0.base.BaseEntityManagerImpl.put(BaseEntityManagerImpl.java:68)
at com.s1mbi0se.dmp.da.dao.UserDao.insertOrUpdateUser(UserDao.java:23)
at com.s1mbi0se.dmp.module.UserModule.persistData(UserModule.java:116)
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:60)
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
17:32:22,946 WARN Thread-3 mapred.LocalJobRunner:298 - job_local_0001
java.lang.InterruptedException
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:63)
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
2012-11-09 17:32:27,237 com.s1mbi0se.dmp.processor.main.DmpProcessorRunner run

EDIT: This is fixed in master branch and soon to be released. 11/27/12
The log formatting seems a bit off, but this is the important part:
java.lang.NullPointerException at com.alvazan.orm.impl.meta.data.MetaEmbeddedSimple.translateToColumnImpl(MetaEmbeddedSimple.java:105)
line 105 finds this code...
for (T val : toBeAdded) {
    byte[] name = formTheName(val);
    Column c = new Column();
    c.setName(name);
    row.getColumns().add(c);
}
Specifically, line 105 is the first line, so toBeAdded is null for some reason. Looking at who called this method...
Hmmm, it turns out ONE of your entities has a null list of something. We need to add code here so that if your entity has a null list, we create an empty one instead. Can you file a ticket and link to this URL? We can fix this one easily.
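For example, a minimal sketch of that kind of guard (not necessarily the exact fix that shipped):

// Tolerate entities that leave their list fields null.
List<T> values = (toBeAdded != null) ? toBeAdded : new ArrayList<T>();
for (T val : values) {
    byte[] name = formTheName(val);
    Column c = new Column();
    c.setName(name);
    row.getColumns().add(c);
}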
NOTE: I have a habit with every entity that has a field like so:
private List something;
I 100% always define it like this:
private List something = new ArrayList();
That avoids NullPointerExceptions all over the place, which is why I missed this one :( :( ... anyway, we will fix it to allow null lists.
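For illustration, that habit in a minimal, hypothetical entity (the class and field names are made up):

import java.util.ArrayList;
import java.util.List;

public class User {
    // Initialized at declaration, so the field is never null even if
    // nothing is added before the entity is persisted.
    private List<String> emails = new ArrayList<String>();

    public List<String> getEmails() {
        return emails;
    }
}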
thanks,
Dean

This is fixed in release 1.4.2, which is available in the Maven repo.

Related

Migration script fails with IllegalStateException due to SHADOW_TABLE_NAME_SUFFIXES

I've updated the Room version from 2.4.3 to 2.5.0-alpha03, and after the last migration, the generated JSON will once in a while fail with:
Caused by: java.lang.IllegalStateException: Cannot parse existing schema file: C:\mypath\com.example.MyDatabase\74.json. If you've modified the file, you might've broken the JSON format, try deleting the file and re-running the compiler.
If you've not modified the file, please file a bug at
https://issuetracker.google.com/issues/new?component=413107&template=1096568
with a sample app to reproduce the issue.
at androidx.room.vo.Database.exportSchema(Database.kt:111)
at androidx.room.DatabaseProcessingStep.process(DatabaseProcessingStep.kt:123)
at androidx.room.compiler.processing.CommonProcessorDelegate.processRound(XBasicAnnotationProcessor.kt:123)
at androidx.room.compiler.processing.javac.JavacBasicAnnotationProcessor.process(JavacBasicAnnotationProcessor.kt:71)
at org.jetbrains.kotlin.kapt3.base.incremental.IncrementalProcessor.process(incrementalProcessors.kt:90)
at org.jetbrains.kotlin.kapt3.base.ProcessorWrapper.process(annotationProcessing.kt:197)
at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:985) ... 44 more
After checking out the differences between the last schema file 73.json and the new one, 74.json, apart from the changes I've wanted to make, there's this block:
"SHADOW_TABLE_NAME_SUFFIXES": [
"_content",
"_segdir",
"_segments",
"_stat",
"_docsize"
],
"shadowTableNames$delegate": {
"initializer": {},
"_value": {}
},
inside the only ftsVersion block I have. Whatever I write in the migration script doesn't matter; I always get the same issue. What I've found is that SHADOW_TABLE_NAME_SUFFIXES is a static variable from androidx.room.migration.bundle.FtsEntityBundle, and if I delete this block from 74.json, I don't get the issue anymore.
Can anyone help me with more info on this and why it could pop up in the schema file?
I've posted a bug report as per the stack trace's advice, and it seems to be an issue in Room 2.5.0-alpha02 and 2.5.0-alpha03 which they will fix: https://issuetracker.google.com/issues/246751839

Using Optaplanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a
variable (previousStandstill) with value (MUNO) which has a
sourceVariableName variable (nextVisit) with a value (WERBOMONT) which
is not null. Verify the consistency of your input problem for that
sourceVariableName variable.
I have not made any changes; I have only cloned and executed it. I import and solve, and it throws this error.
Do you know what could be happening?
I am applying it in the development of a variant of VRP with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill, and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming @CollectionPlanningVariable etc.). That being said, we have multiple users and customers who used optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, B.next is not A. So either the dataset importer didn't import it correctly before calling solve() (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
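For illustration, a minimal consistency check you could run on the imported solution before calling solve(). The Visit/Standstill types and getter names mirror the experiment's domain but are assumptions here:

// Throws on the same corrupt-chain condition the solver reports.
static void assertChainConsistent(java.util.List<Visit> visitList) {
    for (Visit visit : visitList) {
        Standstill previous = visit.getPreviousStandstill();
        if (previous != null && previous.getNextVisit() != visit) {
            throw new IllegalStateException("Chain corrupt: " + previous
                    + ".nextVisit is " + previous.getNextVisit()
                    + " but " + visit + ".previousStandstill is " + previous);
        }
    }
}

If this fails on the freshly imported dataset, the importer is the culprit; if it only fails mid-solve, suspect a custom move.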

Ontotext GraphDB Repository cannot be used for queries

I am getting an error message while trying to run a SPARQL query in a particular repository.
Error:
The currently selected repository cannot be used for queries due to an error:
Page [id=7, ref=1,private=false,deprecated=false] from pso has size of 206 != 820 which is written in the index: PageIndex#244 [OPENED] ref:3 (parent=null freePages=1 privatePages=0 deprecatedPages=0 unusedPages=0)
So I tried to recreate the repository by uploading a new RDF file, but the issue still persists. Any solution? Thanks in advance.
The error indicates an inconsistency between what is written in the index (pso.index) and the actual page (pso). Is there any chance that the binary files were modified/over-written/partially merged? Under normal operation, you should never get this error.
The only way to hide this error is to start GraphDB with ./graphdb -Dthrow.exception.on.index.inconsistency=false. I would recommend doing this only to dump the repository content into an RDF file, then drop the repository and recreate it.

Invalid operation result set is closed errorcode 4470 sqlstate null - DB2 data extract

I am running a very simple query and trying to extract the results to a text file. The entire query is essentially what is below: I am selecting everything from one single table, with one piece of where criteria limiting the data to one month's worth. After it has extracted around 1.2 GB, this error shows up. Is there any way I can work around this other than extracting smaller date ranges? I am trying to pull a couple of years' worth of data, so if I can only get it a few days at a time, it will take a lot of manual work.
I am currently using the free trial of a DB2 query tool (RazorSQL), if that makes a difference; I can probably purchase different software if it would help. I am trying to get IBM's tool, but for some reason it freezes during the download, so I am still working on that. I have searched for this error, but everything I see seems much more complex than what I am doing, and I can't tell whether it applies or not. Thanks in advance.
select *
from MyTable
where date_col between date '2014-01-01' and date '2014-01-31'
I stumbled on this error too and found out it is related to the db2jcc.jar (type 4) driver.
Excerpt: if there are no items left in the result set (or none to begin with), the result set is closed automatically, hence the exception. The suggestion is to handle it in the application; in my case, I started checking if (rs.next()), but otherwise there is a workaround. Check out the source link below for how you can set some properties on the data source and avoid the exception.
Source :
"Invalid operation: result set is closed" error with Data Server Driver for JDBC
In my case, I was missing some properties in WAS; after adding allowNextOnExhaustedResultSet the issue was fixed.
1. Log in to the WebSphere Application Server administration console.
2. Select Resources > JDBC > Data sources > Application Center DataSource name > Custom properties and click New.
3. In the Name field, enter allowNextOnExhaustedResultSet.
4. In the Value field, type 1.
5. Change the type to java.lang.Integer.
6. Click OK.
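If you would rather set it on the driver than in the console, the JCC driver also accepts it as a URL property; a minimal sketch (host, database, and credentials are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;

public class JccUrlProperty {
    public static Connection connect() throws Exception {
        // allowNextOnExhaustedResultSet=1 (YES), appended in the JCC
        // "property=value;" URL syntax.
        return DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB:allowNextOnExhaustedResultSet=1;",
                "user", "password");
    }
}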
Sometimes you also need to check whether the resultSetHoldability property exists. For details, refer to here.
I also encountered this failure when upgrading from the JDBC type 2 driver (db2java.zip) to the JDBC type 4 driver (db2jcc4.jar):
Statement statement = results.getStatement();
if (statement != null)
{
    connection = statement.getConnection(); // ** failed here
    statement.close();
}
The solution was to check whether the statement is closed, as follows. Changed to:
Statement statement = results.getStatement();
if (statement != null && !statement.isClosed())
{
    connection = statement.getConnection();
    statement.close();
}
Creating the property below with type Integer worked for me:
allowNextOnExhaustedResultSet
I had the same issue on WAS 7, so I had to add and change a few things in the Admin Console.
This TeamWorksRuntimeException should be fixed by applying APAR JR50863, which is available on top of BPM V8.5.5 or included in BPM V8.5 refresh pack 6.
In case the APAR does not solve the problem, try the following workaround:
Log in to the WebSphere Application Server admin console
Select Resources > JDBC > Data sources > DataSource name (TeamWorksDB) > Custom properties and click New
In the Name field, enter downgradeHoldCursorsUnderXa
In the Value field, type true
Change the type to java.lang.Boolean
Click OK to save your changes
Select custom property resultSetHoldability
In the Value field, type 1
Click OK to save your changes
Source of the answer: https://developer.ibm.com/answers/questions/194821/invalid-operation-result-set-is-closed-errorcode-4/
Restarting the app may fix the problem if the connection pool lost its session to Db2. If using Tomcat, the connection pool property 'testOnBorrow' may reestablish the connection to Db2.
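A hedged sketch of that setting using the Tomcat JDBC pool programmatically (URL and credentials are hypothetical):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class Db2PoolConfig {
    public static DataSource dataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:db2://dbhost:50000/MYDB");
        p.setDriverClassName("com.ibm.db2.jcc.DB2Driver");
        p.setUsername("user");
        p.setPassword("password");
        // Validate each connection before handing it out, so stale
        // Db2 sessions are replaced instead of surfacing as errors.
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1");
        return new DataSource(p);
    }
}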

QuickFix Trouble - Repeating Groups

My FIX engine keeps rejecting messages and I was hoping someone could help me figure out why... I'm receiving the following sample message:
8=FIXT.1.1 9=518 35=AE 34=4 1128=8 49=XXXXXXX 56=YYYYYYY 52=20130322-17:58:37 552=1 54=1 37=Z00097H4ON 11=NOREF 826=0 78=1 79=NOT SPECIFIED 80=100000.000000 5967=129776.520000 453=5 448=BCART6 452=3 447=D 448=BARX 452=1 447=D 448=BARX 452=16 447=D 448=bcart6 452=11 447=D 448=ABCDEFGHI 452=12 447=D 571=6611540 150=F 17=Z00097H4ON 32=100000.000000 38=100000.000000 15=EUR 1056=129776.520000 31=1.2977652 194=1.298120 195=-3.5480 64=20130409 63=W2 60=20130322-17:26:50 75=20130322 1057=Y 460=4 167=FOR 65=OR 55=EUR/USD 10=121
8=FIXT.1.1 9=124 35=3 34=4 49=XXXXXXX 52=20130322-17:58:37.917 56=YYYYYYY 45=4 58=Tag appears more than once 371=448 372=AE 373=13 10=216
But as you can see, it's being rejected by the QuickFIX engine. I am using the 5.0 SP1 data dictionary and have configured it in my config file:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ReconnectInterval=10
SocketReuseAddress=Y
FileStorePath=D:\XXX\Interface\ReutersStore
FileLogPath=D:\XXX\Interface\ReutersLog
[SESSION]
BeginString=FIXT.1.1
SenderCompID=XXXXX
TargetCompID=YYYYY
DefaultApplVerId=FIX.5.0
UseDataDictionary=Y
AppDataDictionary=FIX50SP1.xml
StartDay=sunday
StartTime=20:55:00
EndTime=06:05:00
EndDay=saturday
SocketConnectHost=A.B.C.D
SocketConnectPort=123
Does anyone have any idea why the engine would be rejecting this message? I know that QuickFIX is normally able to handle messages with repeating groups; is it a config thing? Any help would be greatly appreciated!
Your message seems to be in order. Try putting this in your config file:
ValidateFieldsOutOfOrder=N
QuickFIX by default sets that to Y, and the underlying structure storing the tag and field values is unable to see the count before the group fields (453 before 448).
As a side note, always check these fields; they should point you to the source of the problem:
58=Tag appears more than once
371=448
Maybe it's a shot in the dark, but I had a similar problem when using a 5.0 SP2 dictionary.
I resolved it by using an updated version of the QuickFIX library compiled from the library's SVN repository. If I remember correctly, this was the bug.
It seems that the QuickFIX library has not been updated in a long time, and for newer versions of FIX I suggest you use the trunk of the repo.
I had the same problem and resolved it by tweaking my DataDictionary in message AE TradeCaptureReport.
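A sketch of that kind of dictionary tweak, declaring the party repeating group inside the AE message definition in FIX50SP1.xml (field names follow the standard FIX 5.0 SP1 dictionary; the exact edit may differ, and depending on where the parties appear, the group may belong inside the NoSides group instead):

<message name="TradeCaptureReport" msgtype="AE" msgcat="app">
    <!-- ... existing fields ... -->
    <group name="NoPartyIDs" required="N">
        <field name="PartyID" required="N"/>
        <field name="PartyIDSource" required="N"/>
        <field name="PartyRole" required="N"/>
    </group>
    <!-- ... existing fields ... -->
</message>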