Migration script fails with IllegalStateException due to SHADOW_TABLE_NAME_SUFFIXES - kotlin

I've updated Room from 2.4.3 to 2.5.0-alpha03, and after the last migration the generated schema JSON will once in a while fail to parse with
Caused by: java.lang.IllegalStateException: Cannot parse existing schema file: C:\mypath\com.example.MyDatabase\74.json. If you've modified the file, you might've broken the JSON format, try deleting the file and re-running the compiler.
If you've not modified the file, please file a bug at
https://issuetracker.google.com/issues/new?component=413107&template=1096568
with a sample app to reproduce the issue.
at androidx.room.vo.Database.exportSchema(Database.kt:111)
at androidx.room.DatabaseProcessingStep.process(DatabaseProcessingStep.kt:123)
at androidx.room.compiler.processing.CommonProcessorDelegate.processRound(XBasicAnnotationProcessor.kt:123)
at androidx.room.compiler.processing.javac.JavacBasicAnnotationProcessor.process(JavacBasicAnnotationProcessor.kt:71)
at org.jetbrains.kotlin.kapt3.base.incremental.IncrementalProcessor.process(incrementalProcessors.kt:90)
at org.jetbrains.kotlin.kapt3.base.ProcessorWrapper.process(annotationProcessing.kt:197)
at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:985) ... 44 more
After diffing the previous schema file, 73.json, against the new one, 74.json, apart from the changes I intended to make, there's this block:
"SHADOW_TABLE_NAME_SUFFIXES": [
"_content",
"_segdir",
"_segments",
"_stat",
"_docsize"
],
"shadowTableNames$delegate": {
"initializer": {},
"_value": {}
},
inside the only ftsVersion block I have. Whatever I write in the migration script doesn't matter; I always get the same issue. What I've found is that SHADOW_TABLE_NAME_SUFFIXES is a static field of androidx.room.migration.bundle.FtsEntityBundle, and if I delete this block from 74.json, the issue goes away.
Can anyone help me with more info on this and why it could pop up in the schema file?

I've filed a bug report as the stack trace advises, and it turns out to be an issue introduced in Room 2.5.0-alpha02 and 2.5.0-alpha03, which the Room team will fix: https://issuetracker.google.com/issues/246751839
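Until the fixed release lands, one workaround (my suggestion, not something from the issue thread) is to delete the broken 74.json, pin Room back to the last stable version, and regenerate the schema. A minimal build.gradle.kts sketch:

dependencies {
    // 2.4.3 is the version the question migrated from, so it is a
    // known-good fallback until a fixed 2.5.0 build ships.
    implementation("androidx.room:room-runtime:2.4.3")
    implementation("androidx.room:room-ktx:2.4.3")
    kapt("androidx.room:room-compiler:2.4.3") // kapt, as in the stack trace
}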

Related

Using Optaplanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a
variable (previousStandstill) with value (MUNO) which has a
sourceVariableName variable (nextVisit) with a value (WERBOMONT) which
is not null. Verify the consistency of your input problem for that
sourceVariableName variable.
I have not made any changes; I only cloned and executed it. I import a dataset and solve, and it throws this error.
Do you know what could be happening?
I am applying it to the development of a VRP variant with multiple deliveries and pickups, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill, and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming @CollectionPlanningVariable etc.). That being said, we have multiple users and customers who used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, B.next is not A. So either the dataset importer didn't import it correctly before calling solve() (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
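To make that consistency rule concrete, here is a minimal Kotlin sketch; the Visit class is a hypothetical simplification of the experiment's chained model (previousStandstill as the genuine planning variable, nextVisit as its inverse shadow), not the actual domain classes:

// Hypothetical, simplified stand-in for the experiment's planning entity.
class Visit(val name: String) {
    var previousStandstill: Visit? = null // genuine planning variable
    var nextVisit: Visit? = null          // inverse relation shadow variable
}

// Run this over the imported dataset before calling solve(): for every
// visit A with A.previousStandstill == B, B.nextVisit must point back to A.
fun checkChainConsistency(visits: List<Visit>) {
    for (a in visits) {
        val b = a.previousStandstill ?: continue
        check(b.nextVisit === a) {
            "Inconsistent chain: ${a.name}.previousStandstill is ${b.name}, " +
                "but ${b.name}.nextVisit is ${b.nextVisit?.name}"
        }
    }
}

If a check like this fails right after import, the importer is the culprit; if it only fails mid-solve, suspect a custom move.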

Ontotext GraphDB Repository cannot be used for queries

I am getting an error message while trying to run a SPARQL query against a particular repository.
Error :
The currently selected repository cannot be used for queries due to an error:
Page [id=7, ref=1,private=false,deprecated=false] from pso has size of 206 != 820 which is written in the index: PageIndex#244 [OPENED] ref:3 (parent=null freePages=1 privatePages=0 deprecatedPages=0 unusedPages=0)
So I tried to recreate the repository by uploading a new RDF file, but the issue persists. Any solution? Thanks in advance.
The error indicates an inconsistency between what is written in the index (pso.index) and the actual page (pso). Is there any chance that the binary files were modified, over-written, or partially merged? Under normal operation, you should never get this error.
The only way to suppress this error is to start GraphDB with ./graphdb -Dthrow.exception.on.index.inconsistency=false. I recommend doing this only to dump the repository content into an RDF file, then drop the repository and recreate it.
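For the dump step, GraphDB speaks the RDF4J REST protocol, so the whole repository can be fetched from the statements endpoint. A minimal Kotlin sketch, assuming the default port 7200 and a placeholder repository id "myrepo":

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.file.Path

fun main() {
    // GET /repositories/{id}/statements returns every statement in the
    // format requested via the Accept header. Requires JDK 11+.
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:7200/repositories/myrepo/statements"))
        .header("Accept", "text/turtle") // ask for a Turtle dump
        .build()
    HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofFile(Path.of("dump.ttl")))
    // Reload dump.ttl into a freshly created repository afterwards.
}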

How to edit XML column in SQL Developer?

I would like to edit an XML column which SQL Developer displays as (XMLTYPE), using its editor (I get there by double-clicking the field, editing, then saving).
After that, the displayed value changes to sqldev.xml:/home/myuser/.sqldeveloper/tmp/XMLType8226206531089284015.xml and the log shows:
Build after save retrieving next build context...
Build after save building project 1 of 1 queued projects
Compiling...
Ignoring /home/username/.sqldeveloper/tmp/XMLType5691884284875805681.xml; not on source path
[11:45:33 AM] Compilation complete: 0 errors, 1 warnings.
Build after save finished
and when I try to commit:
UPDATE "USERNAME"."TABLENAME" SET WHERE ROWID = 'AABWNKAAEAAABSbAAB' AND ORA_ROWSCN = '6951979'
One error saving changes to table "USERNAME"."TABLENAME":
Row 1: Illegal format in column NEXTCOLUMN.
I tried to look for this error and found other people who have had it, but without a solution.
If you have advice on how to report it to Oracle, that would also be helpful.
Hope this will be of help to you:
UPDATE table_name
SET table_column =
    UPDATEXML(table_column,
              '/sampleTag1/sampleTag2/text()', 'value2')
WHERE some_column = some_value --<< this part is where you put your condition
Here is where you can find more about it:
https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions205.htm
-------------------------
If your problem is with editing manually through SQL Developer's integrated editor, then as far as my testing and research can tell, it is caused by the SQL Developer version.
You noted in your comment that you use version 4.1.x, and I have found a few places where people confirm they had the same problem with this version.
I also have the 4.1.x version and have successfully reproduced your error, where SQL Developer refers to my .xml file in the ...\sqldeveloper\tmp folder as not being on its source path:
Compiling... Ignoring C:\Users\trle\AppData\Roaming\SQL
Developer\tmp\XMLType6413036461637067751.xml; not on source path
[4:33:29 PM] Compilation complete: 0 errors, 1 warnings.
I then downloaded version 19.2.x, where there is no such problem and everything works just fine.
So my answer to your problem is to download a newer version of SQL Developer. In my case, 19.2.x works.
-------------------------
UPDATE: Just tested on version 4.2.x; it also works.

Blazegraph INSERT DATA crashes with NoSuchMethodError

In Blazegraph I attempted the following query:
INSERT DATA {
  <http://my.site/User/instances/1>
    :comment
    <http://my.site/Comment/instances/16> .
}
and it crashes with the following exception trace:
java.lang.NoSuchMethodError: com.bigdata.rdf.sail.sparql.SPARQLStarUpdateDataBlockParser.read()I
at com.bigdata.rdf.sail.sparql.SPARQLStarUpdateDataBlockParser.checkSparqlStarSyntax(SPARQLStarUpdateDataBlockParser.java:107)
at com.bigdata.rdf.sail.sparql.SPARQLStarUpdateDataBlockParser.parseValue(SPARQLStarUpdateDataBlockParser.java:100)
at org.openrdf.rio.trig.TriGParser.parseGraph(TriGParser.java:158)
at org.openrdf.repository.sail.helpers.SPARQLUpdateDataBlockParser.parseGraph(SPARQLUpdateDataBlockParser.java:87)
at org.openrdf.rio.trig.TriGParser.parseStatement(TriGParser.java:128)
at org.openrdf.rio.turtle.TurtleParser.parse(TurtleParser.java:214)
at com.bigdata.rdf.sail.sparql.UpdateExprBuilder.doUnparsedQuadsDataBlock(UpdateExprBuilder.java:746)
at com.bigdata.rdf.sail.sparql.UpdateExprBuilder.visit(UpdateExprBuilder.java:161)
at com.bigdata.rdf.sail.sparql.UpdateExprBuilder.visit(UpdateExprBuilder.java:119)
at com.bigdata.rdf.sail.sparql.ast.ASTInsertData.jjtAccept(ASTInsertData.java:23)
at com.bigdata.rdf.sail.sparql.Bigdata2ASTSPARQLParser.parseUpdate2(Bigdata2ASTSPARQLParser.java:289)
at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.prepareNativeSPARQLUpdate(BigdataSailRepositoryConnection.java:278)
at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.prepareUpdate(BigdataSailRepositoryConnection.java:182)
at org.openrdf.repository.base.RepositoryConnectionBase.prepareUpdate(RepositoryConnectionBase.java:180)
However, normal DELETE/INSERT WHERE queries work fine.
Any ideas how to solve this?
I simply removed the following line from my build.gradle:
compile('org.openrdf.sesame:sesame-runtime:2.8.6')
and refreshed the dependencies. You simply don't need to declare a Sesame dependency if you're using bigdata-core, since Sesame is already a transitive dependency of it.
The reason for the crash was that bigdata-core relies on Sesame 2.7.12, which was being superseded by my 2.8.6 declaration; 2.8.6 doesn't have an implementation of the method being called, hence the crash.
Thanks to @AKSW for the guidance behind this solution.
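For reference, a minimal build.gradle.kts sketch of the corrected dependency block; the com.blazegraph:bigdata-core coordinates and the 2.1.4 version are my assumptions, so verify them against the Blazegraph release you actually use:

// Declare only bigdata-core; Gradle then resolves the Sesame 2.7.x that
// Blazegraph was compiled against instead of a newer, incompatible one.
dependencies {
    implementation("com.blazegraph:bigdata-core:2.1.4") // version is illustrative
    // no explicit org.openrdf.sesame:sesame-runtime entry
}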

QuickFix Trouble - Repeating Groups

My FIX engine keeps rejecting messages, and I was hoping someone could help me figure out why. I'm receiving the following sample message:
8=FIXT.1.1 9=518 35=AE 34=4 1128=8 49=XXXXXXX 56=YYYYYYY 52=20130322-17:58:37 552=1 54=1 37=Z00097H4ON 11=NOREF 826=0 78=1 79=NOT SPECIFIED 80=100000.000000 5967=129776.520000 453=5 448=BCART6 452=3 447=D 448=BARX 452=1 447=D 448=BARX 452=16 447=D 448=bcart6 452=11 447=D 448=ABCDEFGHI 452=12 447=D 571=6611540 150=F 17=Z00097H4ON 32=100000.000000 38=100000.000000 15=EUR 1056=129776.520000 31=1.2977652 194=1.298120 195=-3.5480 64=20130409 63=W2 60=20130322-17:26:50 75=20130322 1057=Y 460=4 167=FOR 65=OR 55=EUR/USD 10=121
8=FIXT.1.1 9=124 35=3 34=4 49=XXXXXXX 52=20130322-17:58:37.917 56=YYYYYYY 45=4 58=Tag appears more than once 371=448 372=AE 373=13 10=216
But as you can see, it's being rejected by the QuickFIX engine. I am using the 5.0 SP1 data dictionary and have configured it in my config file:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ReconnectInterval=10
SocketReuseAddress=Y
FileStorePath=D:\XXX\Interface\ReutersStore
FileLogPath=D:\XXX\Interface\ReutersLog
[SESSION]
BeginString = FIXT.1.1
SenderCompID = XXXXX
TargetCompID= YYYYY
DefaultApplVerId = FIX.5.0
UseDataDictionary=Y
AppDataDictionary=FIX50SP1.xml
StartDay=sunday
StartTime=20:55:00
EndTime=06:05:00
EndDay=saturday
SocketConnectHost= A.B.C.D
SocketConnectPort= 123
Does anyone have any idea why the engine would be rejecting this message? I know that QuickFIX is normally able to handle messages with repeating groups; is it a config thing? Any help would be greatly appreciated!
Your message seems to be in order. Try putting this in your config file.
ValidateFieldsOutOfOrder=N
QuickFIX defaults that setting to Y, and the underlying structure storing the tag and field values is unable to handle the group unless it sees the count tag first: 453 before the 448 entries.
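For example, placed in the [DEFAULT] section of the config shown above so it applies to every session (putting it under [SESSION] instead would scope it to that one session; that placement detail is my reading of QuickFIX config layering, not something stated in the answer):

[DEFAULT]
...
ValidateFieldsOutOfOrder=N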
As a side note, always check these fields in the reject message; they should point you to the source of the problem:
58=Tag appears more than once
371=448
Maybe it's a shot in the dark, but I had a similar problem when using a 5.0 SP2 dictionary.
I resolved it by using an updated version of the QuickFIX library compiled from the project's SVN repository. If I remember correctly, this was the bug.
It seems that the QuickFIX library has not been updated in a long time, and for newer versions of FIX I suggest you use the trunk of the repo.
I had the same problem, and I resolved it by tweaking my DataDictionary like the following, in message AE (TradeCaptureReport):
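(The XML snippet was lost from the original answer. Going by how QuickFIX data dictionaries declare repeating groups, and by the 453/448 party tags in the rejected message, the tweak presumably declared the party-IDs group inside the message's sides, roughly as sketched below; the fields and required flags are illustrative, not the author's actual dictionary.)

<message name="TradeCaptureReport" msgtype="AE" msgcat="app">
  ...
  <group name="NoSides" required="Y">
    <field name="Side" required="Y"/>
    ...
    <group name="NoPartyIDs" required="N">
      <field name="PartyID" required="N"/>
      <field name="PartyIDSource" required="N"/>
      <field name="PartyRole" required="N"/>
    </group>
  </group>
  ...
</message>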