What is the difference between Word: 0 and #in<Word>:(0) in RavenDB?

As the title says: what is the difference? I think
#in<Word>:(0)
is a filter, whereas
Word: 0
searches for a value of 0. But is there a difference? Is one better than the other?
Am I completely wrong?
I ran the query below against a database and it worked; then I ran it a couple more times and it stopped working. Why would that happen? I ran the same query against a duplicated database and it worked.
#in<Word>:(0)

They will both do effectively the same thing.
Word: 0 means: match documents whose Word field has the value 0.
#in does an IN search, so you can provide multiple values.
If only one value is specified, it is equivalent to the plain field query.
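For illustration, the three queries below use the same Lucene-style syntax as the question; the multi-value form is what #in is actually for:

Word: 0
#in<Word>:(0)
#in<Word>:(0, 1, 2)

The first two match documents whose Word field is 0; the third matches documents whose Word field is 0, 1, or 2.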

Related

How to find solutions randomly (nondeterministically) in SAT4J?

In the code examples from the SAT4J documentation, calling the solver multiple times on the same SAT problem always yields the same solution, even if multiple possible solutions exist - that is, the result is deterministic.
I'm looking for a way to get different solutions on multiple runs, that is, a nondeterministic/random result. For each possible solution, there should be a non-zero probability for the solution to be picked. Ideally, every solution should be picked with the same probability, but that's not a strict requirement.
I'm aware of the possibility to (deterministically) iterate over all solutions and just take a random one, but that's not a feasible solution in my case since there are too many solutions to begin with, and calculating them all takes too long.
Yes, Sat4j is by default deterministic: it will always find the same solution if you run it several times on the same problem from the command line.
The way to add some non-determinism to the heuristics is to use the RandomWalkDecorator, as found for instance in the GreedySolver in org.sat4j.minisat.SolverFactory.
Note however that if you run such a solver several times from the command line:
java -jar org.sat4j.core.jar GreedySolver file.cnf
the run will still be deterministic, since the pseudo-random number generator is seeded with a constant.
Thus you need to ask for several models within your Java code.
As mentioned in your question, you can use a ModelIterator decorator with a bound for that:
ISolver solver = SolverFactory.newGreedySolver();
ModelIterator mi = new ModelIterator(solver, 10); // look for up to 10 models
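A minimal, self-contained sketch of that approach (the .cnf path is a placeholder; DimacsReader and ModelIterator are the standard Sat4j classes for reading a DIMACS file and enumerating models):

import org.sat4j.minisat.SolverFactory;
import org.sat4j.reader.DimacsReader;
import org.sat4j.reader.Reader;
import org.sat4j.specs.IProblem;
import org.sat4j.specs.ISolver;
import org.sat4j.tools.ModelIterator;

public class GreedyModels {
    public static void main(String[] args) throws Exception {
        ISolver solver = SolverFactory.newGreedySolver(); // randomized heuristics
        ModelIterator mi = new ModelIterator(solver, 10); // stop after 10 models
        Reader reader = new DimacsReader(mi);
        IProblem problem = reader.parseInstance("file.cnf"); // placeholder path
        while (problem.isSatisfiable()) { // each call yields the next model
            System.out.println(reader.decode(problem.model()));
        }
    }
}

Within one JVM run the models found this way can differ thanks to the random walk; across separate runs the constant seed still makes the sequence repeatable, which is why the models must be asked for in code rather than from the command line.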

Pentaho "Return value id can't be found in the input row"

I have a Pentaho transformation which reads a text file and checks some conditions (which can produce errors, such as a number that should be positive). From these errors I'm creating an Excel file, and for my job I need the number of lines in this error file, plus a log of which lines had problems.
The problem is that sometimes I get the error "The return value id can't be found in the input row".
This error does not occur every time. The job runs every night, and sometimes it works without any problems for a month, and then one sunny day I just get this error.
I don't think it comes from the file, because if I execute the job again with the same file it works. I can't understand the reason for the failure: the message mentions the value "id", but I don't have such a value/column. Why is it searching for a value that doesn't exist?
Another strange thing is that the step which fails should normally not be executed at all (as far as I know), because no errors were found, so no rows reach that step.
Maybe the problem is connected with the "Prioritize Stream" step? That is where I collect all the errors (which use exactly the same columns). I tried putting a sort before the grouping steps, but it didn't help. Now I'm thinking of trying a "Blocking step".
The problem is that I don't know why this happens or how to fix it. Any suggestions?
Check that all the aggregates in your Group By step have a name.
However, sometimes the error comes from a previous step: the Group By (count...) requests data from the Prioritize Stream step, and if that step has an error, the error gets mistakenly reported as coming from the Group By rather than from the Prioritize Stream.
Also, you mention a step which should not be executed because there is no data: I do not see any Filter which would prevent rows with a missing id from flowing from the Prioritize Stream to the count.
This is a bug. It happens randomly in one of my transformations that often ends up with an empty stream (no rows). It mostly works, but once in a while it gives this error. It seems to fail only when the stream is empty, though.

How can I fetch (via GET) all JIRA issues? Do I go to the Search node?

It looks like /api/2/project easily returns all projects in a JIRA instance in JSON format.
I'd like to do the same for issues, but this does not appear to exist.
Is /api/2/search the standard way to do a mass dump like this? And what is the best way to regularly sync it to a database? Would I do something like search (updated date > [last entry in database]) and then walk through the pagination? Surely I can't be the first person attempting this, though I see no similar guide anywhere online (I checked JIRA's own docs; there is no real mass-issue-export guide).
EDIT: Okay, it looks like search really is the "issue dump", and not the issue node, which, contrary to their documentation, does not default to a collection but is really for creating issues or listing one at a time. I'll probably go the route of updated > [whatever the last date in the DB is].
Unless you have very few issues, you can't fetch all of them at once.
What you can do is execute the search step by step.
For example, let's say you have 1324 JIRA issues. In order to retrieve all of them you have to execute a search similar to this several times:
/rest/api/2/search?maxResults=100&startAt=0
This will retrieve the first 100 JIRA issues, starting from 0.
How do you get the others?
When you execute the search, a field named total is returned. That field is the total number of JIRA issues in your system (1324 in this example).
The next query will be:
/rest/api/2/search?maxResults=100&startAt=100
Repeat this operation, incrementing the value of startAt by 100 each time, until all the issues are returned.
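A minimal sketch of that loop in Java (the base URL is a placeholder for your instance, authentication is omitted, and the crude regex extraction of the total field stands in for a real JSON parser):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JiraIssueDump {
    public static void main(String[] args) throws Exception {
        String base = "https://jira.example.com/rest/api/2/search"; // placeholder URL
        HttpClient client = HttpClient.newHttpClient();
        Pattern totalField = Pattern.compile("\"total\"\\s*:\\s*(\\d+)");
        int pageSize = 100;
        int startAt = 0;
        int total;
        do {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(base + "?maxResults=" + pageSize + "&startAt=" + startAt))
                    .header("Accept", "application/json")
                    .build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            Matcher m = totalField.matcher(body); // crude: use a JSON library in real code
            total = m.find() ? Integer.parseInt(m.group(1)) : 0;
            // The page's issues are in the "issues" array of the response body.
            startAt += pageSize;
        } while (startAt < total);
    }
}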

jsFiddle API to get row count of user's fiddles

So, I had a nice thing going on jsFiddle where I listed all my fiddles on one page:
jsfiddle.net/show
However, they have been changing things slowly this year, and I've already had to make some changes to keep it running. The newest change is rather annoying. I like to see ALL my fiddles at once; it makes it easier to just hit Ctrl+F and find what I might be looking for, but they've made that hard to do now. I used to be able to just set the limit to 99999 and see everything, but now it appears I can't go past how many I actually have (186 at the moment).
I tried a start/limit solution, but when it got to the last 10|50 (I tried start={x}&limit=10 and start={x}&limit=50) it would die, namely because the last pull had to match the exact count. For example, I have 186, and with the by-10s approach it would die at start=180&limit=10.
I've searched the API docs but can't seem to find a row count or anything of that manner. Does anyone know of a good, feasible solution that won't have me overloading their servers with constant single-row checks?
I'm having the same problem as you. Then I checked the docs (Displaying user's fiddles - Result) and found out that if you include the callback=Api parameter, an additional overallResultSetCount field is included in the JSON response. I checked your fiddles, and currently you have a total of 229 public fiddles.
The solution I can think of requires only two requests. The first request's parameters don't matter, as long as you include callback=Api. Then you send the second request, in which your limit is the overallResultSetCount value.
Edit:
It's not in the documentation; however, I think the result set is limited to 200 entries (hence your start/limit range of 0-199). I tried to query beyond the 200 range and got an Error 500. I couldn't find another user whose fiddle count is more than 200 (most of the usernames I tested, like zalun, oskar, and rpflorence, have fewer than 100 fiddles).
Based on this new observation, you can update your script like this (see the sketch after this list):
1. I have tested that if the total fiddle count is less than 200, adding the start=0&limit=199 parameter will return all the fiddles. Hence, you can add that parameter to your initial call.
2. Check whether your total result set is more than 200. If yes, update your parameters to reflect the range of the remaining result set (in this case, start=199&limit=229) and add the new result set to your old one. Otherwise, show/print the result set you got from your first query.
3. Repeat steps 1 and 2 if your total count reaches 400, 600, etc. (any multiple of 200).
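A rough sketch of that chunking logic in Java, assuming a hypothetical fetchPage(start, limit) helper that performs the HTTP request with callback=Api and parses the response (the 200-entry cap and the overallResultSetCount field come from the observations above):

import java.util.ArrayList;
import java.util.List;

public class FiddleList {
    static final int CAP = 200; // observed per-request limit on the result set

    // Hypothetical container: the page's fiddles plus overallResultSetCount.
    record Page(List<String> fiddles, int overallResultSetCount) {}

    // Hypothetical helper: GET the user's fiddle list with
    // callback=Api&start=<start>&limit=<limit> and parse the JSON response.
    static Page fetchPage(int start, int limit) {
        throw new UnsupportedOperationException("HTTP call omitted in this sketch");
    }

    static List<String> fetchAll() {
        List<String> all = new ArrayList<>();
        Page first = fetchPage(0, CAP - 1); // start=0&limit=199
        all.addAll(first.fiddles());
        int total = first.overallResultSetCount();
        // More than one chunk? Request the remaining ranges, 199 entries at a time.
        for (int start = CAP - 1; start < total; start += CAP - 1) {
            all.addAll(fetchPage(start, Math.min(start + CAP - 1, total)).fiddles());
        }
        return all;
    }
}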

mysqldumpslow: What do these fields indicate?

Recently we started optimizing live slow queries. As part of that, we thought we would use mysqldumpslow to prioritize the slow queries. I am new to this tool. I am able to understand some basic info, but I would like to know what exactly the fields below in the output tell us.
OUTPUT: Count: 6 Time=22.64s (135s) Lock=0.00s (0s) Rows=1.0 (6)
What about these fields?
Time: Is it the average time taken across all 6 occurrences?
135s: What are these 135 seconds?
Rows=1.0 (6): again, what does this mean?
I didn't find a better explanation anywhere. Thanks in advance.
Regards,
UDAY
I did some research on that, because I wanted to know it too.
I have a log from a pretty heavily used DB server.
The mysqldumpslow command has several optional parameters (https://dev.mysql.com/doc/refman/5.7/en/mysqldumpslow.html), including a sort order (-s).
Thanks to the many queries I can work with, I can tell that:
the value before the parentheses is the average over all the occurrences of the grouped query ('Count' of them in total), and the value within the parentheses is the total, i.e. the sum over those occurrences. Meaning, in your case:
you have a query that was called 6 times; on average it executes in 22.64 seconds, and the six executions together took about 135 seconds (6 x 22.64s is roughly 135.8s). The same applies to locks (if provided) and rows: on average it returns 1.0 rows, and it returned 6 rows in total across the six calls.
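For example, to prioritize with the -s option mentioned above (the log path is a placeholder; the sort keys at, t, and c are from the linked manual page):

mysqldumpslow -s at -t 5 /var/log/mysql/mysql-slow.log   # top 5 by average query time
mysqldumpslow -s t -t 5 /var/log/mysql/mysql-slow.log    # top 5 by total query time
mysqldumpslow -s c -t 5 /var/log/mysql/mysql-slow.log    # top 5 by number of occurrences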