GlassFish EclipseLink HashSet error

I am using GlassFish and JPA (EclipseLink, multi-tenant).
In OneToOneMapping, many objects are added to the insertableFields map, maintaining one instance per tenant.
But after the system has been running for a while, the HashSet insertableFields in OneToOneMapping reports a size of zero while its internal table still contains a value (see the second image). When the system then calls insertableFields.contains, the field is not found and therefore is not added to the query, which causes the following error.
See the image:
http://i.imgur.com/kAlfgII.png?1 "HashSet error"
The error is below. Any suggestions?
Internal Exception: org.postgresql.util.PSQLException: ERROR: null value
in column "id_user" violates not-null constraint
Detail: Failing row contains (null, 0, 1).
INSERT INTO "tenant".cfguser (column1,
column2) VALUES (?, ?)
bind => [1, 0]
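
For context on the symptom described above: java.util.HashSet is not thread-safe, so unsynchronized concurrent modification from several threads (for example, one per tenant request) can leave it internally inconsistent, with the size counter and the backing table getting out of sync. The sketch below only illustrates that general behaviour; it is not EclipseLink code, and the class and field names are made up.

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration only: a plain HashSet shared by several threads without synchronization.
// The outcome is non-deterministic, but the reported size and the actual contents can
// disagree, and contains() can miss elements that were added.
public class HashSetRaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Set<String> fields = new HashSet<>();   // shared, unsynchronized
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            final int tenant = t;
            pool.submit(() -> {
                for (int i = 0; i < 100_000; i++) {
                    String field = "field-" + tenant + "-" + (i % 10);
                    fields.add(field);
                    fields.remove(field);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("size = " + fields.size() + ", contents = " + fields);
    }
}

If concurrent access turns out to be the cause, the usual remedy is to wrap the set with Collections.synchronizedSet or to use a concurrent collection.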

Related

Azure Data Flow not deleting the row in AlterRow

I am not sure what is wrong; the data flow is not deleting the rows and gives this error: "Activity dataflow38 failed:".
In the Preview tab it shows the rows which I want to delete, but they are not deleted; there is no relationship with the table.
The error:
"message": "Job 'c688a5bd-34dd-44e2-8292-724f0ea5f98a failed due to reason: DF-EXEC-1 Conversion failed when converting date and/or time from character string.\ncom.microsoft.sqlserver.jdbc.SQLServerException: Conversion failed when converting date and/or time from character string.\n\tat com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)\n\tat com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:256)\n\tat com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:108)\n\tat com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:28)\n\tat com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1611)\n\tat com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$200(SQLServerBulkCopy.java:58)\n\tat com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:709)\n\tat com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)\n\tat com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)\n\tat com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkLoadBCP(SQLServerBulkCopy.java:739)\n\tat com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1684)\n\tat com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:669)\n\tat com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:127)\n\tat com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)\n\tat com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)\n\tat org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)\n\tat org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)\n\tat org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2284)\n\tat org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2284)\n\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)\n\tat org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)\n\tat org.apache.spark.scheduler.Task.run(Task.scala:112)\n\tat org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)\n\tat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1526)\n\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n",
"failureType": "UserError",
"target": "dataflow38"
}
This Alter Row delete policy will delete all rows because you are using true() in your expression. Are you sure this is what you want?
This error is likely coming from your Sink field mapping. If you just want to delete rows, then don't set any sink mapping.
Just map your key column, and make sure the data type matches for your key mapping. If it doesn't, cast it in a Derived Column.
To make this better, I don't think we should default to auto-mapping if all you're doing is deleting.

Phoenix: using JdbcTemplate batchUpdate while a column contains a sequence

I'm using Phoenix combined with JdbcTemplate to insert data into HBase.
Phoenix 4.7.0
HBase 1.1.2
One of the columns (id) in the table is generated monotonically with a sequence. In general I use the jdbcTemplate.execute(sql) method to insert data.
For example: upsert into table(row, id) values('row', NEXT VALUE FOR table.id_sequence). The second column (id) is generated automatically by the sequence, and that works fine. But when I use the jdbcTemplate.batchUpdate method with the same SQL to batch insert (sketched below), something goes wrong:
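
For illustration, the batch variant looks roughly like this (a sketch only; the column names row_key/id and the rowKeys list are placeholders, jdbcTemplate is an ordinary Spring JdbcTemplate, and java.util.List/ArrayList are assumed to be imported):

String sql = "UPSERT INTO AQMDATA_ALL (row_key, id) "
           + "VALUES (?, NEXT VALUE FOR AQMDATA_ALL.id_sequence)";

// One Object[] per row; only the first column is bound, the id comes from the sequence.
List<Object[]> batchArgs = new ArrayList<>();
for (String rowKey : rowKeys) {
    batchArgs.add(new Object[] { rowKey });
}

// Works when executed as single statements, but this batch call fails with
// ERROR 203 (22005): Type mismatch on the NEXT VALUE FOR expression.
jdbcTemplate.batchUpdate(sql, batchArgs);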
Caused by: org.apache.phoenix.exception.BatchUpdateExecution: ERROR 1106 (XCL06): Exception while executing batch.
at org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1226)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
at org.springframework.jdbc.core.JdbcTemplate$4.doInPreparedStatement(JdbcTemplate.java:905)
at org.springframework.jdbc.core.JdbcTemplate$4.doInPreparedStatement(JdbcTemplate.java:890)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:589)
... 45 more
Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. BIGINT and VARCHAR for NEXT VALUE FOR AQMDATA_ALL.id_sequence
How can I fix it?
Please add hbase-protocol.jar to the classpath, so that it is available to the program.
Please reply if there is any issue.

CSV file input not working together with set field value step in Pentaho Kettle

I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV, adding a field Y, setting Y=X and finally writing it back to another CSV.
Here you can see the steps and the configuration for them:
You can also download the ktr file from here. The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited it out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the Set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the Set field value step that causes the problem.
If I replace the CSV file input with a data frame containing the same data (1, 2, 3), everything works just fine.
If I replace the file output step with a dummy step, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error and the field Y has the value <null> on all three rows.
Before I created this MCVE I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B@49e96951
2;[B@7b016abf
3;[B@1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion. Turn it off. This is behaving exactly as designed, although admittedly the error and the user experience could be improved.
Lazy conversion must not be used when you need to access the field value in your transformation, and that is exactly what the Set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use it and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed, but then use a Select values step to "un-lazify" the fields you want to access, while the remainder stay lazy.
Cunning huh?
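
As an aside, the "[B@..." strings in the output above are simply what Java prints for a raw byte[]: with lazy conversion on, the field reaches the output step as an unconverted byte array, and a byte array's default toString() is "[B@" followed by a hash code. A tiny standalone illustration, independent of Kettle:

public class ByteArrayToStringDemo {
    public static void main(String[] args) {
        // A byte[] has no readable toString(): it prints "[B@" plus an identity hash code.
        byte[] raw = "1".getBytes();
        System.out.println(raw);              // prints something like [B@49e96951
        System.out.println(new String(raw));  // prints 1
    }
}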

Lucene: limit file size while saving the index into a database

I'm new to Lucene and I want to save the index files into a database, but I get the exception below. I do not want to change max_allowed_packet; instead, I want to limit the size of the files.
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into search_lucene (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1286944 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:309)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:286)
Caused by: org.apache.lucene.store.jdbc.JdbcStoreException: Failed to execute sql [insert into search_lucene (name_, value_, size_, lf_, deleted_) values ( ?, ?, ?, current_timestamp, ? )]; nested exception is com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1286944 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:185)
at org.apache.lucene.store.jdbc.index.AbstractJdbcIndexOutput.close(AbstractJdbcIndexOutput.java:47)
at org.apache.lucene.store.jdbc.index.RAMAndFileJdbcIndexOutput.close(RAMAndFileJdbcIndexOutput.java:81)
at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:203)
at org.apache.lucene.index.SegmentMerger.createCompoundFile(SegmentMerger.java:204)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4263)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3884)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:205)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:260)
Caused by: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (1286944 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3915)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2598)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2825)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2156)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2459)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2376)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2360)
at org.apache.lucene.store.jdbc.support.JdbcTemplate.executeUpdate(JdbcTemplate.java:175)
From the Lucene FAQ (http://wiki.apache.org/lucene-java/LuceneFAQ#Can_I_store_the_Lucene_index_in_a_relational_database.3F):
Can I store the Lucene index in a relational database?
Lucene does not support that functionality out of the box, but several people have implemented JdbcDirectory's. The reports we have seen so far indicate that performance with such implementations is not great, but it is doable.
To limit the file size you'll need to implement a Directory yourself. The trick is to split every file into parts. Perhaps you can borrow some code from lucene-appengine, which splits the file into multiple SegmentHunks.
I hope you know what you're doing, because keeping Lucene indexes in a database will be much slower than using the usual memory-mapped files.
What I have tried:
indexWriter.setUseCompoundFile(false);
LogByteSizeMergePolicy aLogByteSizeMergePolicy = new LogByteSizeMergePolicy();
aLogByteSizeMergePolicy.setMaxMergeMB(2);               // cap the size of segments produced by normal merges
aLogByteSizeMergePolicy.setMaxMergeMBForForcedMerge(4); // cap the size of segments produced by forced merges
aLogByteSizeMergePolicy.setUseCompoundFile(false);
// see also setMaxCFSSegmentSizeMB, setMaxMergeDocs, setMergeFactor
indexWriter.setMergePolicy(aLogByteSizeMergePolicy);
// Deprecated: use IndexWriterConfig.setMergePolicy(MergePolicy) instead.
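
As the last comment notes, IndexWriter.setMergePolicy is deprecated. Below is a hedged sketch of the IndexWriterConfig route; it assumes a Lucene 3.x-era API, and the analyzer and directory variables are placeholders (directory would be the JdbcDirectory in this setup). Note that a merge policy only caps the size of merged segments; it does not guarantee that every stored file stays under a given size, which is why the answer above suggests a custom Directory that splits files.

// Sketch (Lucene 3.x-era API assumed): configure the merge policy through
// IndexWriterConfig instead of the deprecated IndexWriter setters.
LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();
mergePolicy.setMaxMergeMB(2);            // segments larger than this are not merged further
mergePolicy.setUseCompoundFile(false);   // write separate files instead of one large .cfs file

IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36, analyzer);
config.setMergePolicy(mergePolicy);
IndexWriter indexWriter = new IndexWriter(directory, config);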

Customize error message for NHibernate

When deleting an entity in NHibernate I get an exception with this error message:
delete statement conflicted with column reference constraint ..etc
Of course the exception is wrapped in a long series of exceptions.
The error message is normal, but can I make NHibernate show a more polite error message to the user?
In other words:
is there any convention with which I can customize the exception?
I'm using an Oracle 11g database.
Yes, you can implement ISQLExceptionConverter to customize the exceptions thrown by NHibernate.
Here's a complete example.