I've discovered a strange behaviour when querying an Edge class using OrientDB (community-2.1-rc5). The database is returning the exact same edge with the exact same #rid and the exact same data, twice. My instinct says that this is a bug...
This is the query:
SELECT FROM E WHERE #class='LIKES' AND (out IN [#12:0,#12:221]) AND in=#36:1913
And this is what OrientDB Studio returns:
http://s29.postimg.org/hwruv0zif/Captura.png
This makes no sense. If I go to the vertex and query its LIKES relationships, only one record is returned... Has anyone faced a problem like this?
This is the database I'm using, if it helps:
https://www.dropbox.com/sh/pkm28cfer1pwpqb/AAAVGeL1eftOGR4o0todTiAha?dl=0
To get help with this bug, you should request to join the OrientDB Google Group; Stack Overflow is not the best place to get help with this kind of bug.
The problem is that you somehow duplicated your edge by mistake. OrientDB lets you do this, for some unknown reason.
Here is the bug discussion on the OrientDB Google Group: https://groups.google.com/forum/#!topic/orient-database/cAR7yUjCZcI
In the discussion, Luca (the creator of OrientDB) says this:
"the problem is that without a transaction the creation of edge could
be dirty. OrientDB tries to fix dirty reference, so maybe that's the
reason why the next time the exception is raised. I've changed the
default behavior of all SQL commands against Graphs to be always
transactional"
Upgrading to the most recent version of OrientDB would be a good idea; the bug may have been fixed.
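Until the duplicates are cleaned up (or the commands are rerun transactionally), one way to spot them is to group the edges by their out/in pair and flag every copy beyond the first. A minimal sketch in plain Python, assuming you have already fetched the rows with something like `SELECT @rid AS rid, out, in FROM LIKES` (the #rid values below are made up for illustration):

```python
from collections import defaultdict

def duplicate_edge_rids(edges):
    """Group edge records by their (out, in) pair and return the #rids of
    every copy beyond the first, i.e. the edges that are safe to delete."""
    seen = defaultdict(list)
    for e in edges:
        seen[(e["out"], e["in"])].append(e["rid"])
    return [rid for rids in seen.values() for rid in rids[1:]]

# Hypothetical rows as they might come back from the SELECT above.
rows = [
    {"rid": "#13:0", "out": "#12:0",   "in": "#36:1913"},
    {"rid": "#13:7", "out": "#12:0",   "in": "#36:1913"},  # duplicate
    {"rid": "#13:1", "out": "#12:221", "in": "#36:2"},
]
print(duplicate_edge_rids(rows))  # ['#13:7']
```

Each returned rid could then be removed with OrientDB's DELETE EDGE command; doing the cleanup inside a transaction avoids recreating the dirty state Luca describes.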
I noticed that I had a query stuck in the IN_HAS_NEXT state, and I'm curious what that status means.
The status is mentioned in the GraphDB SE 7.0 documentation, but I'm not entirely sure what it amounts to.
IN_HAS_NEXT means that the engine is evaluating the solutions from the binding set iterator (hasNext()). In simple terms, this is the "where" part of the update query, which prepares the results before commit. It may seem stuck if there are many returned results. If you are still experiencing problems with this query, you can send an email describing the problem to graphdb-support@ontotext.com.
A report is throwing this error:
insufficient parser memory , during optimizer phase
I am aware of the DBSControl parameter and how it relates to this.
My questions are:
To the best of my knowledge the answer would be no, but I just wanted to check: is there any other ODBC-driver-related setting that can affect this error? We know the server-side DBSControl setting is there already.
Another long shot: if you are not given Console privileges, is there any table in the Data Dictionary where the DBSControl settings are stored (for FYI purposes)? I know there wasn't one as of V6 and V12, but I wondered whether it got any wiser in the newer versions.
So this is not about getting to know the error. Please don't explain what it means; I know what it means. My questions are specific to the ones above.
Has anyone had, or does anyone know of, any issues with iBATIS submitting several duplicate queries?
We have been seeing (intermittently) the same SQL statement being executed up to 5 times. Originally we thought we were dealing with overzealous, click-happy users, but we now freeze the submit buttons to prevent multiple clicks and we still get this.
I seem to remember reading somewhere that this is a bug in iBATIS, but I can't find it again (or maybe I dreamt it; my dreams are often weird).
Thanks
Are you talking about this?
https://issues.apache.org/jira/browse/IBATIS-369
They say the bug is fixed.
A couple of days ago I experienced a problem with the names of the fields in the final query. It was a bug from a version before 2.0.GA.
To not drag this out too long: it was a problem that occurred when the query is too big and you use SetMaxResults at the same time. It got me thinking:
Is there any way to control how NHibernate is going to name your fields in the SQL query?
Because, as I have seen for a while (and in this case more than ever), the relationships between the tables and the naming convention for the rendered fields end up not nearly as pretty as what I exhaustively set up in my criteria.
To directly answer your question, yes, you could implement an IInterceptor to change anything in the generated SQL. See this question.
However, that's very likely not the way to fix your problem...
Wondering if anyone has gotten the infamous "database is locked" error from Trac and how you solved it. It is starting to occur more and more often for us. Will we really have to bite the bullet and migrate to a different DB backend, or is there another way?
See these two Trac bug entries for more info:
http://trac.edgewall.org/ticket/3446
http://trac.edgewall.org/ticket/3503
Edit 1: Thanks for the answer and the recommendation, which seems to confirm our suspicion that migrating to PostgreSQL is the best option. The SQLite-to-PostgreSQL script is here: http://trac-hacks.org/wiki/SqliteToPgScript Here goes nothing...
Edit 2 (solved): The migration went pretty smoothly, and I expect we won't be seeing the locks any more. The speed isn't noticeably better as far as I can tell, but at least the locks are gone. Thanks!
That's a problem with the current SQLite adapter. There are scripts to migrate to PostgreSQL, and I can really recommend that; PostgreSQL is a lot speedier for Trac.
They just fixed this on Sept 10, and the fix will be in 0.11.6.
http://trac.edgewall.org/ticket/3446#comment:39
I don't think this is 100% fixed just yet. We experience this error a couple dozen times a day. In our case, we have 30+ people updating Trac constantly, as we use it for tracking pretty much everything, not just bugs. From ticket #3446:
Quite obviously, this is [...] due to
our database access patterns... which
currently limit our concurrency to at
most one write access each few seconds
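For background, this failure mode is easy to reproduce with SQLite itself: while one connection holds the write lock, a second writer fails with "database is locked" once its busy timeout expires. A minimal sketch using Python's sqlite3 module (the file path and table are made up; Trac's actual schema differs):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts sqlite3 in autocommit mode, so explicit
# BEGIN/COMMIT statements control the transactions directly.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE ticket (id INTEGER PRIMARY KEY, summary TEXT)")
writer.execute("BEGIN IMMEDIATE")   # take and hold the write lock
writer.execute("INSERT INTO ticket VALUES (1, 'first')")

# A second writer with a tiny busy timeout gives up almost immediately.
other = sqlite3.connect(path, timeout=0.1, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")
    locked = False
except sqlite3.OperationalError:    # "database is locked"
    locked = True

writer.execute("COMMIT")            # release the lock
other.execute("BEGIN IMMEDIATE")    # now succeeds
other.execute("COMMIT")
print(locked)  # True
```

Raising the timeout only papers over the contention; a server database like PostgreSQL allows concurrent writers, which is why the migration makes the error go away.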