I'm trying to validate a JSON payload against a somewhat complex 'schema' (defined as in the Karate docs). The error I'm getting is not very explicit:
reason: not equal
How can I check where it actually fails validating?
Really sorry about that, I'm to blame. I know this won't help you right now, but I'm re-writing the core of Karate at the moment, and this is what a match failure looks like in the improved version:
Would it be possible for you to create a representative sample of what you have right now (as an issue on GitHub)? I would like to see whether it is handled better in the improved version, or tweak things right away.
When I run the following query:
match (n) return distinct labels(n);
I am seeing the following error:
DynamicRecord[396379,used=false,(0),type=-1,data=byte[],start=true,next=-1] not in use
Other people have asked how to deal with this situation. I am asking a different set of questions: what is a DynamicRecord in Neo4j? And, what can be done to avoid this type of error?
What is a DynamicRecord?
The source for DynamicRecord is here. This is largely useless.
Anyhow, all I can gather is that:
It is a very low-level construct in the store kernel.
A multitude of tests use it in relation to consistency checking.
It appears to be a record that is created dynamically (meaning at run time, not stored on disk), and it can represent different types of data (property blocks, schema, etc.).
This is also largely useless. I know.
What can be done to avoid this type of error?
This seems to be a very generic error, but most online resources (GitHub issues / SO questions) relate it to DB upgrades. Some point to changes in constants used by DynamicRecord that cause data corruption after upgrades.
Based on that, I guess that the following steps could prevent such error:
Backup your data.
Migrate your data properly when upgrading.
Do not use different versions of Neo4j against the same data.
You've guessed it: this is also rather useless, but I hope it is better than nothing.
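One small troubleshooting aid: the first number in the message (396379 above) is the id of the record that the consistency check flagged, and the rest is a list of its fields. A minimal Python sketch that pulls those fields out of such a message (the parsing logic here is my own, not a Neo4j API):

```python
import re

def parse_dynamic_record_error(message):
    """Extract the record id and key=value fields from a
    'DynamicRecord[...] not in use' style error message."""
    # Greedy match so the ']' inside 'data=byte[]' doesn't end the capture early.
    m = re.search(r"DynamicRecord\[(\d+),(.*)\]", message)
    if m is None:
        return None
    record_id = int(m.group(1))
    fields = {}
    for part in m.group(2).split(","):
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key] = value
    return {"id": record_id, "fields": fields}

msg = ("DynamicRecord[396379,used=false,(0),type=-1,"
       "data=byte[],start=true,next=-1] not in use")
print(parse_dynamic_record_error(msg))
```

At least this tells you which record id to look at if you end up running the consistency checker or filing an issue.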
Super new to this sort of thing, so please bear with me; I'm sure this is a facepalm-worthy question to anyone who knows anything about using APIs. I'm trying to use the CrowdTangle API (I just got access), but their documentation isn't really helpful (at least to me). Even though it lists a bunch of parameters you can use, it doesn't give syntax examples, so I'm not sure how to supply the parameters. For example, I tried to test a simple search for "dog" with https://api.crowdtangle.com/posts/search=dog?token=[my-token] and got this error message:
{"status":400,"message":"Required String parameter 'searchTerm' is not present"}.
Does anyone know what the general syntax would be for this and how you use the parameters? I'm obviously looking to do more complicated searches than "dog", but I think if someone can just break down the general syntax, I can probably manage from there.
Try:
https://api.crowdtangle.com/posts?token=your-api-token
just to get the ball rolling. Looks good? Then try:
https://api.crowdtangle.com/posts/search?token=MYTOKEN&searchTerm=waffles
And you should be good to go. You'll get a 401 if your token is valid but not good for that usage type.
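The pattern in the URLs above is standard query-string syntax: every parameter, including searchTerm, goes after the `?`, as `name=value` pairs joined by `&`. A small Python sketch that builds such a URL (the helper function and the extra `count` parameter are illustrations, not part of the CrowdTangle client libraries):

```python
from urllib.parse import urlencode

def build_search_url(token, search_term, **extra_params):
    """Build a CrowdTangle /posts/search URL: all parameters go in
    the query string after '?', joined by '&'."""
    params = {"token": token, "searchTerm": search_term}
    params.update(extra_params)  # any further documented parameters
    return "https://api.crowdtangle.com/posts/search?" + urlencode(params)

url = build_search_url("MYTOKEN", "dog", count=10)
print(url)
# → https://api.crowdtangle.com/posts/search?token=MYTOKEN&searchTerm=dog&count=10
```

Using urlencode also takes care of escaping, which matters once your search terms contain spaces or quotes.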
Is there any way to change U-SQL's behavior here? I would like it to silently ignore any statistics that are already up to date instead of throwing an exception. The reason I need this: I just want to schedule a stats-update job from ADF. (Even a temporary workaround like SET #opt='cryptic__undocumented_option' would do.)
Answering explicitly to have an answer appear.
This will be fixed in the upcoming refresh. It was a regression that was introduced by another bug fix. We have now put regression tests into place.
Thanks for reporting!
A report is throwing this error:
insufficient parser memory , during optimizer phase
I am aware of the DBSControl parameter and how it relates to this.
My questions are:
To the best of my knowledge the answer would be no, but I just wanted to check: is there any other ODBC-driver-related setting that can affect this error? We already know about the server-side DBSControl setting.
Another hopelessly hopeful hope: if you are not given Console privileges, is there any table in the Data Dictionary where the DBSControl settings are stored (just for FYI purposes)? I know there wasn't as of V6 and V12, but I wondered whether the newer versions got any wiser.
So this is not about understanding the error. Please don't explain what it means; I know what it means. My questions are specifically the ones above.
With the loadData option that Liquibase provides, one can specify seed data in CSV format. Is there a way I can instead provide, say, a JSON or XML file with data that Liquibase would understand?
The use case: we are trying to load some sample data that is hierarchical, e.g. a Category/Subcategory relation, which would require specifying the parent id for every related category. Ideally there would be a way to avoid including the ids in the seed data, e.g. via JSON like this:
{
"MainCat1": ["SubCat11", "SubCat12"],
"MainCat2": ["SubCat21", "SubCat22"]
}
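One pragmatic workaround, independent of Liquibase support, is a pre-processing step that flattens the hierarchical JSON into the flat CSV that loadData already accepts, generating the ids along the way. A sketch (the id/name/parent_id column names and sequential integer ids are my assumptions, not anything Liquibase mandates):

```python
import csv
import io
import json

def flatten_categories(json_text):
    """Turn {"MainCat1": ["SubCat11", ...], ...} into flat
    (id, name, parent_id) rows that loadData can consume as CSV."""
    data = json.loads(json_text)
    rows, next_id = [], 1
    for main_cat in sorted(data):  # sort for deterministic ids
        parent_id = next_id
        rows.append((parent_id, main_cat, ""))  # top level: no parent
        next_id += 1
        for sub_cat in data[main_cat]:
            rows.append((next_id, sub_cat, parent_id))
            next_id += 1
    return rows

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "name", "parent_id"])
    writer.writerows(rows)
    return buf.getvalue()

seed = '{"MainCat1": ["SubCat11", "SubCat12"], "MainCat2": ["SubCat21", "SubCat22"]}'
print(to_csv(flatten_categories(seed)))
```

Because the ids are generated deterministically from the sorted input, re-running the script on the same JSON produces the same CSV, which keeps the changelog stable across runs.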
This is very likely not supported (I couldn't make Google help me), but is there a way to write a plugin or something that does this? A pointer to a guide (if any) would help.
NOTE: This is not about specifying the change log in that format.
This is not currently supported, and supporting it robustly would be pretty difficult. The main difficulty lies in the fact that Liquibase is designed to be database-platform agnostic, combined with the design goal of being able to generate the SQL required for an operation without actually performing the operation live.
Inserting data like you want, without knowing the keys, and just generating SQL that could be run later is going to be very difficult, perhaps even impossible. I would suggest approaching Nathan, the main developer of Liquibase, more directly. The best way to do that might be through the JIRA bug database for Liquibase.
If you want to have a crack at implementing it, you could start by looking at the code for the LoadDataChange class (source on GitHub), which is where the CSV support currently lives.