Neo4j Spatial index error duplicates - indexing

I'm getting an error when inserting a spatial object into Neo4j with Spring Data:
Caused by: java.lang.RuntimeException: Error adding element 20 wkt POINT(-0.131483 51.513861) to index LocationIndex
at org.neo4j.rest.graphdb.ExecutingRestAPI.addToIndex(ExecutingRestAPI.java:470)
at org.neo4j.rest.graphdb.RestAPIFacade.addToIndex(RestAPIFacade.java:168)
at org.neo4j.rest.graphdb.index.RestIndex.add(RestIndex.java:60)
That's basically because a different entry with the same location existed before (it no longer exists). The uniqueness constraint means two different objects cannot have the same location, but when the previous entry is deleted it seems to remain in the index and causes the collision.
Is there any way of rebuilding the spatial index in Neo4j, i.e. deleting everything from the index and re-indexing only the existing data?
This is the field in my Spring Data entity class:
@Indexed(indexType = IndexType.POINT, indexName = "LocationIndex")
private String wkt;
Should I add something to remove the entry from the index when the object is deleted? Or must it be done manually, and if so, how?
EDIT
{
"message": "GeometryNode not indexed with an RTree: 21",
"exception": "RuntimeException",
"fullname": "java.lang.RuntimeException",
"stacktrace": [
"org.neo4j.gis.spatial.rtree.RTreeIndex.findLeafContainingGeometryNode(RTreeIndex.java:794)",
"org.neo4j.gis.spatial.rtree.RTreeIndex.remove(RTreeIndex.java:111)",
"org.neo4j.gis.spatial.rtree.RTreeIndex.remove(RTreeIndex.java:100)",
"org.neo4j.gis.spatial.EditableLayerImpl.update(EditableLayerImpl.java:56)",
"org.neo4j.gis.spatial.indexprovider.LayerNodeIndex.add(LayerNodeIndex.java:143)",
"org.neo4j.gis.spatial.indexprovider.LayerNodeIndex.add(LayerNodeIndex.java:41)",
"org.neo4j.server.rest.web.DatabaseActions.addToNodeIndex(DatabaseActions.java:686)",
"org.neo4j.server.rest.web.RestfulGraphDatabase.addToNodeIndex(RestfulGraphDatabase.java:1022)",
"java.lang.reflect.Method.invoke(Method.java:606)",
"org.neo4j.server.rest.transactional.TransactionalRequestDispatcher.dispatch(TransactionalRequestDispatcher.java:139)",
"java.lang.Thread.run(Thread.java:745)"
]
}
I found that it is trying to fetch the index node from the DB to do something with it. The ID of the node I'm trying to update in this case is 20; note that it is trying to get 21.
That's because of this (found in the Neo4j GitHub repository):
private long extractNodeId( String uri ) throws BadInputException
{
try
{
return Long.parseLong( uri.substring( uri.lastIndexOf( "/" ) + 1 ) );
}
catch ( NumberFormatException | NullPointerException ex )
{
throw new BadInputException( ex );
}
}
I don't understand how this is supposed to work, because in my database I've seen nodes with IDs different from OriginalNode + 1 that point correctly to the OriginalNode's ID, and spatial finds them.
Is it only a problem with updates? If I create the OriginalNode with wkt, it creates both nodes fine, but the error only shows up when I try to add wkt information to an existing node.
Thanks!

Related

Is it possible in Cosmos DB to create a singly linked list of documents?

One problem I commonly solve is that of keeping immutable versions of a document rather than editing the document. When asked for the document, retrieve the most recent version.
One way to do this is with timestamps:
doc 0:
{
id: "e69e0bea-77ea-4d97-bedf-d3cca27ae4b6",
correlationId: "d00be916-10e3-415c-aaf6-9acb7c70cf4f",
created: "11/17/2018 2:20:25 AM",
value: "foo"
}
doc 1:
{
id: "37ef6f99-bc87-45bb-87ae-a1b81070cc91",
correlationId: "d00be916-10e3-415c-aaf6-9acb7c70cf4f",
created: "11/17/2018 2:20:44 AM",
value: "bar"
}
doc 2:
{
id: "93fc913e-5ecc-4c59-a130-0e577ed4f2fb",
correlationId: "d00be916-10e3-415c-aaf6-9acb7c70cf4f",
created: "11/17/2018 2:21:51 AM",
value: "baz"
}
The downside of using timestamps is that you have to order by the timestamp (O(n*log(n))) to get the Nth most recent version.
I'd like to make this O(n) by storing a pointer to the previous version, like
doc 0:
{
id: "e69e0bea-77ea-4d97-bedf-d3cca27ae4b6",
previousId: null,
correlationId: "d00be916-10e3-415c-aaf6-9acb7c70cf4f",
created: "11/17/2018 2:20:25 AM",
value: "foo"
}
doc 1:
{
id: "37ef6f99-bc87-45bb-87ae-a1b81070cc91",
previousId: "e69e0bea-77ea-4d97-bedf-d3cca27ae4b6",
correlationId: "d00be916-10e3-415c-aaf6-9acb7c70cf4f",
created: "11/17/2018 2:20:44 AM",
value: "bar"
}
doc 2:
{
id: "93fc913e-5ecc-4c59-a130-0e577ed4f2fb",
previousId: "37ef6f99-bc87-45bb-87ae-a1b81070cc91",
correlationId: "d00be916-10e3-415c-aaf6-9acb7c70cf4f",
created: "11/17/2018 2:21:51 AM",
value: "baz"
}
so it is a linked list like
NULL <- doc0 <- doc1 <- doc2
The only thing stopping me from doing this is that for creating a new version I would need some locking mechanism, like (in pseudo-code)
lock correlationId
get latest
new.previousId = latest.id
insert new
but I'm not sure if it's possible at the database level.
There's no concept of locking, but in your case, you can take advantage of unique key constraints:
Create a partitioned collection, with correlationId as your logical partition key
Add a unique key constraint, with the key based on previousId
At this point, for a given correlationId, if you try to create a new link in the list and somehow another one was created just before, you'll run into a collision on previousId, and you can then redo your operation using the just-created document's id as previousId.
Note: There is an ETag for each document, which helps with concurrency when updating a document, in case you decide to utilize updates at some point.
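To make this concrete, here is a minimal sketch of creating such a container with the .NET SDK; it assumes the Microsoft.Azure.Cosmos v3 package, and the account, database and container names are placeholders rather than anything from the question:
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class VersionedContainerSetup
{
    // Creates a container partitioned by correlationId with a unique key on previousId.
    public static async Task<Container> CreateAsync(string endpoint, string authKey)
    {
        var client = new CosmosClient(endpoint, authKey);
        Database database = await client.CreateDatabaseIfNotExistsAsync("versioning");

        var properties = new ContainerProperties("documents", "/correlationId")
        {
            UniqueKeyPolicy = new UniqueKeyPolicy
            {
                // Uniqueness is enforced per logical partition (i.e. per correlationId),
                // so two documents in the same chain cannot share a previousId.
                UniqueKeys = { new UniqueKey { Paths = { "/previousId" } } }
            }
        };

        return await database.CreateContainerIfNotExistsAsync(properties);
    }
}
With this in place, a second writer that tries to append to the same tail gets a 409 Conflict and can retry with the newer document's id as previousId.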
Did you consider the Cosmos DB Graph API? A linked list is effectively a very basic form of graph.
What you are doing looks good, but updating the correlation id can get messy. With the Graph API that problem won't exist.
Updating the answer following the first comment:
This is what we can do using the SQL API.
The chain can be modeled as:
NULL <- Doc1 <- Doc2 <- Doc3 <- Head.
The Head has the same correlationId as the other version documents. Also, correlationId needs to be the partition key of the collection, so that all versions of the same document are placed in the same physical partition.
Now, we can use a stored procedure to update the version of the document. Note that stored procedures are transactional within the scope of a partition key (the reason we wanted correlationId to be the partition key).
Below is the pseudocode of the stored procedure.
Add new version:
Read the Head (H) document
Save the _etag of the Head document
Follow H to read the current most recent version (CMRV)
Add a document for the new most recent version (NMRV)
Point H to NMRV and NMRV to CMRV
Update H with some dummy information (say, the number of versions) using the _etag saved before
This entire piece is atomic. If another concurrent thread has successfully updated H, the current stored proc will fail with a "Precondition failed" error (due to the _etag mismatch), and the entire stored proc will be rolled back.
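For illustration, this is roughly how invoking such a stored procedure could look from the .NET SDK; the Microsoft.Azure.Cosmos v3 package and the stored procedure id "addVersion" are assumptions here, not part of the original answer:
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

public static class VersionWriter
{
    // Calls a (hypothetical) "addVersion" stored procedure that implements the steps above,
    // retrying when a concurrent writer updated the Head first.
    public static async Task AddVersionAsync(Container container, string correlationId, object newVersion)
    {
        while (true)
        {
            try
            {
                // The stored procedure runs transactionally within the correlationId partition.
                await container.Scripts.ExecuteStoredProcedureAsync<object>(
                    "addVersion",
                    new PartitionKey(correlationId),
                    new[] { newVersion });
                return;
            }
            catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
            {
                // The _etag check on the Head failed; the script's changes were rolled back, so retry.
                // Depending on how the script surfaces the failure, it may instead arrive as a 400
                // wrapping the precondition error, in which case this filter needs to be widened.
            }
        }
    }
}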

"Cannot return null for non-nullable type: 'Person' within parent 'Messages' (/getMessages/sendBy)" in GraphQL SDL (AWS AppSync)

I am new to GraphQL. I am implementing a React Native app using AWS AppSync. Following is the code I have written in the schema:
type Messages {
id: ID!
createdAt: String!
updateAt: String!
text: String!
sendBy: Person!
@relation(name: "UserMessages")
}
type Person {
id: ID!
createdAt: String!
updateAt: String!
name: String!
messages: [Messages!]!
@relation(name: "UserMessages")
}
When I try to query the sendBy value, it gives me an error:
query getMessages{
getMessages(id : "a0546b5d-1faf-444c-b243-fab5e1f47d2d") {
id
text
sendBy {
name
}
}
}
{
"data": {
"getMessages": null
},
"errors": [
{
"path": [
"getMessages",
"sendBy"
],
"locations": null,
"message": "Cannot return null for non-nullable type: 'Person' within parent 'Messages' (/getMessages/sendBy)"
}
]
}
I don't understand that error, please help me. Thanks in advance!
This might sound silly, but developers still make this kind of mistake, and so did I. In a subscription, the client can retrieve only those fields which are output in the mutation query. For example, if your mutation query looks like this:
mutation newMessage {
addMessage(input:{
field_1: "",
field_2: "",
field_n: "",
}){
field_1,
field_2
}
}
In the above mutation we are outputting only field_1 and field_2, so a subscription client can retrieve only a subset of these fields.
So if in the schema you have defined field_3 as required (!) for a subscription, and you are not outputting field_3 in the above mutation, this will throw the error Cannot return null for non-nullable type: field_3.
Looks like the path [getMessages, sendBy] is resolving to a null value, and your schema definition (sendBy: Person!) says the sendBy field cannot resolve to null. Please check whether a resolver is attached to the field sendBy in type Messages.
If there is a resolver attached, please enable CloudWatch logs for this API (this can be done on the Settings page in the console; select the ALL option). You should be able to check what the resolved request/response mapping was for the path [getMessages, 0, sendBy].
I encountered a similar issue while working on my setup with CloudFormation. In my particular situation I didn't configure the Projection correctly for the Global Secondary Indexes. Since the attributes weren't projected into the index, I was getting an ID in the response but null for all other values. Updating the ProjectionType to 'ALL' resolved my issue. Not to say that is the "correct" setting but for my particular implementation it was needed.
More on Global Secondary Index Projection for CloudFormation can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dynamodb-projectionobject.html
Attributes that are copied (projected) from the source table into the index. These attributes are additions to the primary key attributes and index key attributes, which are automatically projected.
I had a similar issue.
What happened to me was a problem with an update resolver. I was updating a field that was used as a GSI (Global Secondary Index), but I was not updating the GSI itself, so when querying by the GSI the index entry existed but the key for that attribute had changed.
If you are using DynamoDB, you can start debugging there: check the item and see whether it has any reference to the primary key or the indexes.
I had a similar issue.
For me the problem was with the return type in the schema. I was querying a DynamoDB table by its primary key, which returns a list of items, but in my schema I had defined the return type as a single object.
The error was resolved when I changed the return type in the schema to a list of items,
like
type mySchema {
[ID]
}
instead of
type mySchema
{
id : ID!
name : String!
details : String!
}
This error is thrown for multiple reasons, so your cause could be something else; I just posted one of the scenarios.

RavenDB update denormalized reference and stale indexes

I have a RavenDB database with some collections and about 30 indexes.
I'm trying to perform some mass updates on a specific collection (Profiles) via DatabaseCommands.UpdateByIndex and a PatchRequest; my code is something like this:
db.DatabaseCommands.UpdateByIndex(
    "Profiles/ByFinder",
    new Raven.Abstractions.Data.IndexQuery { },
    new[] { new PatchRequest { Type = PatchCommandType.Unset, Name = "CreatedById" } });
Where "Profiles/ByFinder" is an index that works on this specific collection.
The strange thing is that ALL the indexes in the DB go into a stale state when I perform this command, even indexes that don't touch the Profiles collection in any way.
Is that the default behaviour, and if so, is there a way to avoid it?
That is by design: whenever you modify a document, all indexes are stale until they can verify that the document isn't relevant to them.
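If a reader needs up-to-date results right after the bulk patch, it can explicitly wait for the relevant index to catch up. A minimal sketch, assuming the RavenDB 3.x client that DatabaseCommands.UpdateByIndex belongs to, where store is an existing IDocumentStore and Profile is a placeholder entity class for the Profiles collection:
using System;
using System.Linq;
using Raven.Client;

using (IDocumentSession session = store.OpenSession())
{
    // Waits (up to 30 seconds) for Profiles/ByFinder to become non-stale before returning results.
    var profiles = session.Query<Profile>("Profiles/ByFinder")
        .Customize(x => x.WaitForNonStaleResultsAsOfNow(TimeSpan.FromSeconds(30)))
        .ToList();
}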

The "X" property on "Y" could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'

When I run my application and I click a specific button I get the error:
"The "X" property on "Y" could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'."
Cool so I go to my Entity project, go to Y table, find X column, right-click and go to X's properties and find that Nullable is set to False.
I verify in SQL that in Y table, X is set to allow nulls, and it is.
I then go back to my Entity project, set Nullable to True, save and build and I receive:
Error 3031: Problem in mapping fragments starting at line 4049:Non-nullable column "X" in table "Y" is mapped to a nullable entity property.
I've heard that deleting the table from the .edmx file and then re-adding it is a possibility, but I have never done that and don't understand the implications well enough to feel comfortable doing it.
I've heard that it could be in the view, could be in the stored procedure...
I've also heard that this is a bug.
Has anyone come across this and found an across-the-board fix, or some sort of road map for where to look for this error?
Thanks!
"The "X" property on "Y" could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'."
In your EDMX, if you go under your Y table and click on the X column, right-click, click on Properties, scroll down to Nullable and change it from False to True.
If you get a "mapping fragment" error, you'll have to delete the table from the EDMX and re-add it, because the Model Browser stores the table properties and the only way to refresh that (that I know of) is to delete the table from the Model Browser under <database>.Store and then retrieve it using the Update Model from Database... command.
I just changed the property type from Int32 to Int32?:
public Int32 Field { get; set; }
to
public Int32? Field { get; set; }
and the problem was solved.
My problem was that my Model database was out of sync with the actual (dev) database. So the EDMX thought it was smallint but the actual column was int. I updated the model database to int and the EDMX to Int32 and now it works.
For future readers.
I got this error when I had a multiple result stored procedure.
As seen here:
http://msdn.microsoft.com/en-us/data/jj691402.aspx
If you try to access an item in the first result set after calling .NextResult, you may get this error.
From the article:
var reader = cmd.ExecuteReader();
// Read Blogs from the first result set
var blogs = ((IObjectContextAdapter)db)
.ObjectContext
.Translate<Blog>(reader, "Blogs", MergeOption.AppendOnly);
foreach (var item in blogs)
{
Console.WriteLine(item.Name);
}
// Move to second result set and read Posts
reader.NextResult();
var posts = ((IObjectContextAdapter)db)
.ObjectContext
.Translate<Post>(reader, "Posts", MergeOption.AppendOnly);
foreach (var item in posts)
{
Console.WriteLine(item.Title);
}
Now, if before the line
foreach (var item in posts)
you put in this code
Blog foundBlog = blogs.FirstOrDefault();
I think you can simulate the error.
Rule of Thumb:
You still gotta treat this thing like a DataReader (fire-hose).
For my needs, I had to convert to a List<>.
So I changed this:
foreach (var item in blogs)
{
Console.WriteLine(item.Name);
}
to this:
List<Blog> blogsList = blogs.ToList();
foreach (var item in blogsList )
{
Console.WriteLine(item.Name);
}
And I was able to navigate the objects without getting the error.
Here is another way I encountered it.
private void DoSomething(ObjectResult<Blog> blogs, ObjectResult<Post> posts)
{
}
And then after this code (in the original sample)
foreach (var item in posts)
{
Console.WriteLine(item.Title);
}
put in this code:
DoSomething(blogs,posts);
If I called that routine and started accessing items/properties in the blogs and posts, I would encounter the issue. I understand why, I should have caught it the first time.
I verified that the entity was pointing at the correct database.
I then deleted the table from the .edmx file and added it again.
Problem solved.
Check your model and database; both should be defined consistently:
public Int32? X { get; set; } ----> nullable
Correspondingly, in the DB 'X' should be Nullable = True
or
public Int32 X { get; set; } ----> not nullable
Correspondingly, in the DB 'X' should be Nullable = False
In my case, a column I was selecting in a view created in the DB contained null values; I changed that with this select statement:
Before my change
select
..., GroupUuId , ..
After my change
select
..., ISNULL(GroupUuId, 0), ...
Sorry for my bad English
This may happen when the database table allows NULL, there are records with a null value in that column, and you try to read such a record with EF while the mapping class does not allow a null value.
The solution is either to change the database table so that it does not allow null, or to change your class to allow null.
For me, the following steps corrected the error:
Remove the 'X' property from the 'Y' table
Save the EDMX
Build the database from the model
Compile
Add the 'X' property to the 'Y' table again (non-nullable, int16)
Save the EDMX
Build the database from the model
Compile
To fix the error
Error 3031: Problem in mapping fragments starting at line 4049: Non-nullable column "X" in table "Y" is mapped to a nullable entity property.
open your EDMX file with an XML editor and look up your table in
edmx:StorageModels
find the property which gives the error and change (or add)
Nullable="false" to Nullable="true"
Save the EDMX, open it in Visual Studio and build it. Problem solved.
I got the same error in a different context: I was joining tables using LINQ, and for one of the tables in the database a column mapped as non-nullable had a null value inserted. I updated the value to a default and the issue was fixed.

Updating deeply nested documents in ravendb

I have the following document structure and I need to insert values into the nested objects.
{
"Level-1": {
"Level-2": {
"Level-3": {
"aaa": "bbb",
"Level-4": {
}
}
}
}
}
How can I get the keys at any level? There is a function for getting keys:
var workingDoc = session.Load<RavenJObject>("xyz/b");
workingDoc.Keys will give me all keys for this document, but how can I get the keys of the second level when I provide the key of a nested object? For example, now I want all keys for "Level-1". Is there any way? How can I check whether a key refers to a nested object? Please help. Thanks in advance.
Rajdeep, you can't partially load a document. You can certainly have multiple levels of nested objects within one single document, and depending on your data model this is probably a good idea; however, you will always need to load the document as a whole if you want to modify it.
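To illustrate, a minimal sketch of walking the nested levels after loading the whole document; it assumes the same client the question uses (session.Load<RavenJObject>), and the inserted key/value are just examples:
using Raven.Json.Linq;

var workingDoc = session.Load<RavenJObject>("xyz/b");

// Keys of the top level
var topLevelKeys = workingDoc.Keys;

// Keys one level down: the value behind "Level-1" is itself a RavenJObject
var level1 = (RavenJObject)workingDoc["Level-1"];
var level1Keys = level1.Keys;

// A key refers to a nested object when its value is another RavenJObject
bool isNested = workingDoc["Level-1"] is RavenJObject;

// Insert a value deeper down, then save the whole document back
var level2 = (RavenJObject)level1["Level-2"];
level2["newKey"] = new RavenJValue("new value");

// SaveChanges persists the modified document (store it back explicitly if your
// client version doesn't track changes to a loaded RavenJObject).
session.SaveChanges();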