NullPointerException when deleting a JanusGraph vertex

Problem
I have an org.apache.tinkerpop.gremlin.structure.Vertex that I call v, and a JanusGraph instance that I call graph.
I attempt to delete v with:
v.remove();
graph.traversal().tx().commit();
but I get a NullPointerException:
Caused by: java.lang.NullPointerException
at org.janusgraph.graphdb.database.EdgeSerializer.parseRelation(EdgeSerializer.java:128)
at org.janusgraph.graphdb.database.EdgeSerializer.readRelation(EdgeSerializer.java:73)
at org.janusgraph.graphdb.transaction.RelationConstructor.readRelation(RelationConstructor.java:75)
at org.janusgraph.graphdb.transaction.RelationConstructor$1$1.next(RelationConstructor.java:60)
at org.janusgraph.graphdb.transaction.RelationConstructor$1$1.next(RelationConstructor.java:48)
at org.janusgraph.graphdb.vertices.AbstractVertex.remove(AbstractVertex.java:101)
at org.janusgraph.graphdb.vertices.StandardVertex.remove(StandardVertex.java:101)
at com.brein.common.graph.TinkerPopGraphDatabase.lambda$deleteVertex$4(TinkerPopGraphDatabase.java:500)
at com.brein.common.graph.TinkerPopGraphDatabase.retry(TinkerPopGraphDatabase.java:616)
... 33 more
The following line in the janusgraph-core library is where the NullPointerException occurs:
if (multiplicity.isConstrained()) {
So it would appear that while the library is reading through all of the vertex's data in order to delete it, it comes across an edge for which there is no multiplicity?
My question is: what might be the reason I'm unable to delete the vertex here, and what are some ideas for how I might explore this issue?
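One way to explore the first half of that hypothesis is to dump the schema and confirm that every edge label really has the multiplicity you expect. This is only a minimal sketch, assuming graph is the open JanusGraph instance; the wrapper class and method names are illustrative, not part of my actual code:

import org.janusgraph.core.EdgeLabel;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.schema.JanusGraphManagement;

public class SchemaInspector {
    // Print every edge label and its multiplicity so a missing or unexpected
    // definition stands out.
    public static void printEdgeLabelMultiplicities(JanusGraph graph) {
        JanusGraphManagement mgmt = graph.openManagement();
        try {
            for (EdgeLabel label : mgmt.getEdgeLabels()) {
                System.out.println(label.name() + " -> " + label.multiplicity());
            }
        } finally {
            mgmt.rollback(); // read-only inspection, nothing to commit
        }
    }
}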
Notes
I am using JanusGraph version 0.2.0
I am using Cassandra as the backend data store for the graph database
My graph contains edge labels with Simple and One-to-one multiplicities
I also get a NullPointerException if I try to read the GraphSON representation of the vertex
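Another way to explore it is to read the vertex's edges one at a time in a throwaway transaction and log which edge fails to deserialize, rather than letting v.remove() iterate over them all at once. A minimal sketch, assuming vertexId is the id of the problematic vertex (again, the wrapper class name is illustrative):

import java.util.Iterator;
import org.apache.tinkerpop.gremlin.structure.Direction;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphTransaction;

public class EdgeProbe {
    public static void probe(JanusGraph graph, Object vertexId) {
        JanusGraphTransaction tx = graph.newTransaction();
        try {
            Vertex v = tx.vertices(vertexId).next();
            Iterator<Edge> edges = v.edges(Direction.BOTH);
            while (true) {
                try {
                    if (!edges.hasNext()) {
                        break;
                    }
                    Edge e = edges.next(); // the relation is deserialized here
                    System.out.println("OK: " + e.label() + " " + e.id());
                } catch (Exception ex) {
                    // This is roughly where EdgeSerializer.parseRelation hits the NPE,
                    // so the last "OK" line tells you where to look.
                    System.out.println("Failed while reading an edge: " + ex);
                    break;
                }
            }
        } finally {
            tx.rollback(); // inspection only, do not modify anything
        }
    }
}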

Related

Using OptaPlanner for VRPPD

I am trying to run the "optaplanner-mixedvrp-experiment" example developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a
variable (previousStandstill) with value (MUNO) which has a
sourceVariableName variable (nextVisit) with a value (WERBOMONT) which
is not null. Verify the consistency of your input problem for that
sourceVariableName variable.
I have not made any changes; I have only cloned and run the project. I import the dataset and solve it, and it throws this error.
Do you know what could be happening?
I am applying it to develop a VRP variant with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill, and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming #CollectionPlanningVariable etc.). That being said, we have multiple users and customers who have used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, then B.next is not A. So either the dataset importer didn't build the model consistently before calling solve() (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
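Put differently, the assertion enforces the invariant that the two directions of the chain agree: if A.previousStandstill is B, then B.nextVisit must be A. A minimal sketch of that check over a hypothetical, simplified domain model (the Visit/Standstill classes and field names below are illustrative, not the experiment's actual code):

import java.util.List;

// Hypothetical, stripped-down domain types just to show the invariant.
class Standstill {
    Visit nextVisit;                // shadow variable: who comes after me
}

class Visit extends Standstill {
    Standstill previousStandstill;  // genuine planning variable: who comes before me
}

public class ChainConsistencyCheck {
    // If A.previousStandstill == B, then B.nextVisit must be A; otherwise the
    // model is corrupt (bad import before solve(), or a broken custom move).
    static void assertChainsConsistent(List<Visit> visits) {
        for (Visit visit : visits) {
            Standstill previous = visit.previousStandstill;
            if (previous != null && previous.nextVisit != visit) {
                throw new IllegalStateException("The entity (" + visit
                        + ") has previousStandstill (" + previous
                        + ") whose nextVisit (" + previous.nextVisit
                        + ") is not that entity.");
            }
        }
    }
}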

Ontotext GraphDB Repository cannot be used for queries

I am getting an error message while trying to run a SPARQL query against a particular repository.
Error :
The currently selected repository cannot be used for queries due to an error:
Page [id=7, ref=1,private=false,deprecated=false] from pso has size of 206 != 820 which is written in the index: PageIndex#244 [OPENED] ref:3 (parent=null freePages=1 privatePages=0 deprecatedPages=0 unusedPages=0)
So I tried to recreate the repository by uploading a new RDF file, but the issue still persists. Any solution? Thanks in advance.
The error indicates an inconsistency between what is written in the index (pso.index) and the actual page (pso). Is there any chance that the binary files were modified/overwritten/partially merged? Under normal operation, you should never get this error.
The only way to hide this error is to start GraphDB with ./graphdb -Dthrow.exception.on.index.inconsistency=false. I would recommend doing this only to dump the repository content into an RDF file, then drop the repository and recreate it.

Issues importing data from a multi-value D3 database into SQL

I'm trying to use mv.NET by Bluefinity. I made some integration packages with it for importing data from a D3 multi-value database into MS SQL 2012, but I seem to be having some trouble with the mapping.
For the VOYAGES table, there are some commentX fields in the D3 application that are quite unwieldy, and the INSERT fails after a certain number of rows with the following message:
>Error: 0xC0047062 at INSERT, mvNET Source[354]: System.Exception: Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(The value is too large to fit in the column data area of the buffer.))
at mvNETDataSource.mvNETSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
Error: 0xC0047038 at INSERT, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.The PrimeOutput method on mvNET Source returned error code 0x80131500.The component returned a failure code when the pipeline engine called PrimeOutput().The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.There may be error messages posted before this with more information about the failure.
For "The value is too large to fit in the column data area of the buffer" I tried changing the input/output types, but I can't seem to get it right.
In the SQL table the columns are of type ntext.
In the .dtsx job, the data type for the columns is Unicode String [DT_WSTR] with length 4000; I guess these are auto-detected.
The import worked for other D3 files like this, so I'm not sure why it fails for these comment fields.
Running the query in the mv.NET Data Manager (on the D3 server) times out after 240 seconds, so maybe this is the underlying issue?
Any ideas on how to proceed? Thank you.
The most likely reason is that the COMMGROUP column does not have the correct data type, or some record in the source does not fit in the output type.
To find the record causing the error, use the "Redirect row" option on the error output of the failing component, write the redirected rows to a .txt/.csv/.tsv file, and then check the data.
The exception is being thrown from mv.NET, so I suggest you call (or ask your reseller to call) Bluefinity support and ask them about this. You're paying for support, so you might as well use it. Those programs shouldn't be allowed to throw exceptions like that.
D3 doesn't export Unicode, which might be one issue. But if the Data Manager times out, then I suspect something is wrong with the connectivity into D3. Open a Connection Monitor from the Session Monitor and watch the connection when you make the request. I'm guessing it's either hanging or, more probably, falling into BASIC Debug.
Make sure the D3-side programs related to this are either all Flash-compiled or all not Flashed. Your app code will fall into Debug if it's not Flashed but MVNET.BP is.
If it's your program that's in Debug, fix it. If you're not sure which program it is, run LIST-RUNTIME-ERRORS in DM.
If it's an MVNET.BP program, again, work with Bluefinity. If you are using MVSP for connectivity, the Connection Monitor may be useless; you'll need to change to an IP (Telnet) connection to see the raw data exchange.

ADLA/U-SQL Error: Vertex user code error

I have a simple U-SQL script that extracts a CSV using Extractors.Csv(encoding:Encoding.[Unicode]); and outputs it into a Data Lake Store table. The file size is small, around 600 MB, and the file is Unicode-encoded. The number of rows is 700K+.
These are the columns:
UserId int,
Email string,
AltEmail string,
CreatedOn DateTime,
IsDeleted bool,
UserGuid Guid,
IFulfillmentContact bool,
IsBillingContact bool,
LastUpdateDate DateTime,
IsTermsOfUse string,
UserTypeId string
When I run this job locally, it works great without any issues. Once I submit it to ADLA, I get the following error:
Vertex failure triggered quick job abort. Vertex failed: SV1_Extract_Partition[0][0] with error: Vertex user code error.
Vertex failed with a fail-fast error
Vertex SV1_Extract_Partition[0][0].v1 {BA7B2378-597C-4679-AD69-07413A143E47} failed
Error:
Vertex user code error
exitcode=CsExitCode_StillActive Errorsnippet=An error occurred while processing adl://lakestore.azuredatalakestore.net/Data/User.csv
Any help is appreciated!
Since the file is larger than 250 MB, you need to make sure that you upload it as a row-oriented file and not as a binary file.
Also, please check the reply to the following question to see how you can currently find more details on the error: Debugging u-sql Jobs

What is going wrong with my ETL process?

I'm using GoodData's CloudConnect (based on CloverETL) to read a massive JSON file and write certain elements to a .csv.
Unfortunately, I'm seeing the error pasted below in the console log. Is the out-of-memory condition a side effect of some other error, or is insufficient memory itself the actual error?
ERROR [WatchDog_0] - Component [JSONReader:JSONREADER1] finished with status ERROR.
Java heap space
ERROR [WatchDog_0] - Error details:
org.jetel.exception.JetelRuntimeException: Component [JSONReader:JSONREADER1] finished with status ERROR.
at org.jetel.graph.Node.createNodeException(Node.java:543)
at org.jetel.graph.Node.run(Node.java:522)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.jetel.component.TreeReader$StreamConvertingXPathProcessor.checkThrownException(TreeReader.java:766)
at org.jetel.component.TreeReader$StreamConvertingXPathProcessor.manageThread(TreeReader.java:757)
at org.jetel.component.TreeReader$StreamConvertingXPathProcessor.processInput(TreeReader.java:732)
at org.jetel.component.TreeReader.execute(TreeReader.java:412)
at org.jetel.graph.Node.run(Node.java:493)
... 1 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at net.sf.saxon.tinytree.TinyTree.condense(TinyTree.java:379)
at net.sf.saxon.tinytree.TinyBuilder.close(TinyBuilder.java:177)
at net.sf.saxon.event.ReceivingContentHandler.endDocument(ReceivingContentHandler.java:219)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endDocument(AbstractSAXParser.java:745)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:515)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
at net.sf.saxon.event.Sender.sendSAXSource(Sender.java:404)
at net.sf.saxon.event.Sender.send(Sender.java:193)
at net.sf.saxon.event.Sender.send(Sender.java:50)
at net.sf.saxon.Configuration.buildDocument(Configuration.java:2973)
at net.sf.saxon.sxpath.XPathExpression.evaluate(XPathExpression.java:154)
at org.jetel.component.tree.reader.xml.XmlXPathEvaluator.iterate(XmlXPathEvaluator.java:79)
at org.jetel.component.tree.reader.XPathPushParser.handleContext(XPathPushParser.java:104)
at org.jetel.component.tree.reader.XPathPushParser.parse(XPathPushParser.java:84)
at org.jetel.component.TreeReader$StreamConvertingXPathProcessor$PipeParser.work(TreeReader.java:827)
at org.jetel.graph.runtime.CloverWorker.run(CloverWorker.java:87)
... 1 more
This looks like the second case: this error is caused by insufficient memory for your task.
The error occurred while evaluating (one of) your JSONReader component(s).
The JSON seems to be really huge, and you should consider splitting the task into smaller ones if possible.
Did you run your transformation locally or on the GoodData server?
It is really hard to advise something specific without knowing the details.
Try using JSONExtract instead of JSONReader; it uses less memory but also reads JSON files.
From the respective help documents:
JSONReader uses DOM, so the whole input is stored in memory and therefore the component can be memory-greedy.
JSONExtract uses SAX instead of DOM, so it uses less memory than JSONReader
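This is the usual DOM-versus-streaming tradeoff. A minimal illustration using Jackson rather than CloverETL's internals (the class and method names below are illustrative): the tree approach materializes the whole document in memory, while the streaming approach visits one token at a time.

import java.io.File;
import java.io.IOException;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DomVsStreaming {
    // DOM-style: the whole document is parsed into an in-memory tree, so memory
    // use grows with file size (the same kind of behaviour the JSONReader docs describe).
    static JsonNode readWholeTree(File json) throws IOException {
        return new ObjectMapper().readTree(json);
    }

    // Streaming: tokens are visited one at a time, so memory use stays roughly
    // constant regardless of file size (the SAX-style behaviour JSONExtract relies on).
    static long countFieldNames(File json) throws IOException {
        long count = 0;
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            while (parser.nextToken() != null) {
                if (parser.getCurrentToken() == JsonToken.FIELD_NAME) {
                    count++;
                }
            }
        }
        return count;
    }
}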