Can/does SigMap produce canonical output? - yosys

An instance of SigMap is guaranteed to produce the same output for every connected wire in a design. But does this hold true for different instances of SigMap running in different versions of yosys across different platforms?
What if the initial queries are done in the same order? Is there some way to make SigMap return the same SigBit across multiple runs on multiple versions?

SigMap is not guaranteed to produce canonical output with the SigMap(module) constructor; the exact behavior of such a SigMap object depends on the iteration order of module->connections() and the exact structure of the connections array.
You can "canonicalize" a SigMap using the following technique:
SigMap sigmap(module);
// allbits() iterates over all bits known to the SigMap; re-adding a
// bit makes it the representative of its equivalence class, so this
// promotes whichever bit my_canonical_cmp() prefers.
for (auto bit : sigmap.allbits())
    if (my_canonical_cmp(sigmap(bit), bit))
        sigmap.add(bit);
(I have added SigMap::allbits() just now, so you need to update to the latest git head for this to work.)


How to find solutions randomly (nondeterministically) in SAT4J?

In the code examples from the SAT4J documentation, calling the solver multiple times on the same SAT problem always yields the same solution, even if multiple possible solutions exist - that is, the result is deterministic.
I'm looking for a way to get different solutions on multiple runs, that is, a nondeterministic/random result. For each possible solution, there should be a non-zero probability for the solution to be picked. Ideally, every solution should be picked with the same probability, but that's not a strict requirement.
I'm aware of the possibility to (deterministically) iterate over all solutions and just take a random one, but that's not a feasible solution in my case since there are too many solutions to begin with, and calculating them all takes too long.
Yes, Sat4j is by default deterministic: it will always find the same solution if you run it several times on the same problem from the command line.
The way to add some nondeterminism to the heuristics is to use the RandomWalkDecorator, as found for instance in the GreedySolver in org.sat4j.minisat.SolverFactory.
Note however that if you run such a solver several times from the command line:
java -jar org.sat4j.core.jar GreedySolver file.cnf
you will still get deterministic behavior, since the pseudo-random number generator is seeded with a constant.
Thus you need to ask for several models within your Java code.
As mentioned in your question, you can use a ModelIterator decorator with a bound for that:
ISolver solver = SolverFactory.newGreedySolver();
ModelIterator mi = new ModelIterator(solver, 10); // look for 10 models
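A minimal end-to-end sketch of how this could look, assuming a DIMACS input file and Sat4j's standard reader API (the file name and the bound of 10 are illustrative):

import org.sat4j.minisat.SolverFactory;
import org.sat4j.reader.DimacsReader;
import org.sat4j.reader.Reader;
import org.sat4j.specs.IProblem;
import org.sat4j.specs.ISolver;
import org.sat4j.tools.ModelIterator;

public class RandomishModels {
    public static void main(String[] args) throws Exception {
        ISolver solver = SolverFactory.newGreedySolver();
        ModelIterator mi = new ModelIterator(solver, 10); // stop after 10 models
        Reader reader = new DimacsReader(mi);
        IProblem problem = reader.parseInstance("file.cnf");
        // Each successful isSatisfiable() call advances to the next model.
        while (problem.isSatisfiable()) {
            int[] model = problem.model(); // signed literals of one solution
            // use the model here
        }
    }
}

Since the underlying heuristic is randomized, the sequence of models should vary from what the default solver would produce.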

Additional PlanningEntity in CloudBalancing - bounded-space situation

I successfully amended the nice CloudBalancing example to include the fact that I may only have a limited number of computers open at any given time (thanks OptaPlanner team - easy to do). I believe this is referred to as a bounded-space problem. It works dandy.
The processes come in groups, say 20 processes in a given order per group. I would like to amend the example to have OptaPlanner also change the order of these groups (not the processes within one group). I have therefore added a class ProcessGroup in the domain with a member List<Process>, the instances of ProcessGroup being stored in a List<ProcessGroup>. The desired optimisation would shuffle the members of this list, causing the instances of ProcessGroup to be placed at different indices of the List<ProcessGroup>. The index of a ProcessGroup should be ProcessGroup.index.
The documentation states that "if in doubt, the planning entity is the many side of the many-to-one relationship." This would mean that ProcessGroup is the planning entity, with the member index being a planning variable that gets assigned to (hopefully) different integers. After every new assignment of indices, I would have to re-sort the List<ProcessGroup> in ascending order of ProcessGroup.index. This seems very odd and cumbersome. Any better ideas?
Thank you in advance!
Philip.
The current design has a few disadvantages:
It requires 2 (genuine) entity classes (each with 1 planning variable): this probably increases the search space (= longer to solve, more difficult to find a good or even feasible solution) and it increases configuration complexity. Don't use multiple genuine entity classes if you can avoid it reasonably.
The Integer variables of ProcessGroup need to be all different and somehow sequential. That smells like a chained planning variable (see the docs about chained variables and the Vehicle Routing example), in which case the entire problem could be represented as a simple VRP with just 1 variable, but does that really apply here?
Train of thought: there's something off in this model:
ProcessGroup has an Integer variable: what does that Integer represent? Shouldn't that Integer variable be on Process instead? Are you ordering Processes or ProcessGroups? If it should be on Process instead, then both of Process's variables can be replaced by a chained variable (like VRP, see the sketch after this list), which will be far more efficient.
ProcessGroup has a list of Processes, but that is a problem property, which means it doesn't change during planning. I suspect that's correct for your use case, but do assert it.
If none of the reasoning above applies (which would surprise me), then the original model might be valid nonetheless :)
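To make the chained idea concrete, here is a rough sketch of what a chained Process entity could look like, assuming OptaPlanner 6's PlanningVariableGraphType.CHAINED; the ProcessOrComputer interface and the range names are made up for illustration:

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;

// Marker interface for anything a Process can chain onto (made up for this sketch).
interface ProcessOrComputer {}

@PlanningEntity
public class Process implements ProcessOrComputer {

    private ProcessOrComputer previousElement;

    // In a chained model each Process points at the element before it:
    // either another Process or the anchor (the Computer). The order is
    // encoded by the chain itself, so no Integer index variable is needed
    // and no re-sorting has to happen after each move.
    @PlanningVariable(valueRangeProviderRefs = {"computerRange", "processRange"},
            graphType = PlanningVariableGraphType.CHAINED)
    public ProcessOrComputer getPreviousElement() {
        return previousElement;
    }

    public void setPreviousElement(ProcessOrComputer previousElement) {
        this.previousElement = previousElement;
    }
}

Keeping the processes of one group together would then become a matter for the score constraints rather than for a second genuine entity class.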

How do I get list of keys/values using Booksleeve?

I'm trying to get a list of values where the key name starts with, let's say, "monkey".
I really couldn't find a doc on this. :(
How can I do this? What API should I use? Keys, Sets, Strings? What method?
Or is it not available yet - is there a workaround?
Thanks
Redis does not have a "get all keys like {x} along with their values" command, but it does have:
get all keys like {x}
get value of key / keys
Whether your approach is sensible in the first place depends a bit on which server version you are using. If you are on a recent version then the library will use SCAN, which is not terrible. On older server versions it will use KEYS, which is to be avoided at all costs. I am not at a PC so this is pseudo-code only, but:
foreach (var batch in db.GetKeys("monkey*")  // pseudo-code: find the matching keys
                        .Batchify(100))      // process them in batches of 100
{
    list.AddRange(await db.Strings.GetString(batch));
}
Note that this isn't optimized - the batches could be run much more concurrently than the above - but I would need a keyboard and compiler to demonstrate that!
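For reference, here is the same scan-then-fetch pattern sketched with the Jedis client in Java - this illustrates the server-side commands (SCAN plus MGET), not Booksleeve's actual API, and assumes a recent Jedis version; the pattern and batch size are illustrative:

import java.util.ArrayList;
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class PrefixFetch {
    // Fetch the values of all keys matching a pattern, batch by batch,
    // using SCAN (incremental) rather than KEYS (blocking).
    public static List<String> valuesForPrefix(Jedis jedis, String pattern) {
        List<String> values = new ArrayList<>();
        String cursor = ScanParams.SCAN_POINTER_START; // "0"
        ScanParams params = new ScanParams().match(pattern).count(100);
        do {
            ScanResult<String> page = jedis.scan(cursor, params);
            List<String> keys = page.getResult();
            if (!keys.isEmpty()) {
                // MGET fetches a whole batch of values in one round trip
                values.addAll(jedis.mget(keys.toArray(new String[0])));
            }
            cursor = page.getCursor();
        } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
        return values;
    }
}

Usage would be something like valuesForPrefix(jedis, "monkey*"). SCAN terminates when the server hands back the "0" cursor again, which is what the loop condition checks.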

Alphabetical index with millions of rows in redis

For my application, I need an alphabetical index on a set with millions of rows.
When I use a sorted set, and give all members the same score, the result looks perfect.
Performance is also great: with a test set of 2 million rows, the last third does not perform noticeably worse than the first third of the set.
However, I need to query those results. For example, get the first (max) 100 items that start with "goo". I played around with zscan and sort, but it does not give me a working and performant result.
Since redis is very fast when inserting a new member to the sorted set, it must be technically possible to immediately (well, very quickly) go to the right memory location. I suppose redis uses some kind of quicksort mechanism to accomplish this.
But... I don't seem to get that behaviour when I just want to query the data, and not write to it.
We use replicated slaves for read actions, and we prefer the (default) read-only config switch. So creating a dummy key and deleting it afterwards (however inelegant) is not really an option.
I'm stuck a bit, and I'm thinking about writing a ZLEX command in redis-server itself. Which I could use like this:
HELP "ZLEX" -> (ZLEX set score startswith)
-- Query the lexicographical index of a sorted set, supplying a 'startswith' string.
127.0.0.1:12345> ZLEX myset 0 goo LIMIT 0 100
1) goo
2) goof
3) goons
4) goozer
What are your thoughts? Am I missing something in the standard redis commands?
We're using Redis 2.8.4 x64 on Debian.
Kind regards, TW
Edits:
Note: related issue indexing-using-redis-sorted-sets -> at least the name I gave to ZLEX seems to conform with Antirez' (Salvatore's) standards. As of 24-1-2014, I'm working on implementing ZLEX. It seems to be the easiest and most straightforward solution for this use case, and Antirez could merge it into the main branch for everyone's benefit.
I've implemented ZLEX.
Here are the full specs.
You can grab the new functionality from here: github tw-bert
I also posted a pull request to Antirez here.
Kind regards, TW
Have you had a look at this?
It can be useful depending on the length of the field by which you sort: this method requires b*(a^2) keys, where a is the length of the field and b is the amount of rows for this field.
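If the link describes the usual prefix-index trick, the idea is to index every prefix of the sortable field up front, so that a "starts with" query becomes a single key lookup. A rough sketch with the Jedis client (the idx: key naming is made up for illustration):

import java.util.Set;
import redis.clients.jedis.Jedis;

public class PrefixIndex {
    // Index a value under every prefix of itself, so that all values
    // starting with "goo" end up in the set "idx:goo". For a field of
    // length a this writes a entries per value, which is where the
    // key blow-up in the formula above comes from.
    public static void index(Jedis jedis, String value) {
        for (int i = 1; i <= value.length(); i++) {
            jedis.sadd("idx:" + value.substring(0, i), value);
        }
    }

    // A "starts with" query is now a single SMEMBERS on one key.
    public static Set<String> startsWith(Jedis jedis, String prefix) {
        return jedis.smembers("idx:" + prefix);
    }
}

Note that the query side is a plain read, so it would also work against the read-only slaves mentioned in the question.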

groovy sql eachRow and rows method

I am new to grails and groovy.
Can anyone please explain to me the difference between these two groovy sql methods
sql.eachRow
sql.rows
Also, which is more efficient?
I am working on an application that retrieves data from the database (the result set is very large) and writes it to a CSV file or returns it in a JSON format.
I was wondering which of the two methods mentioned above to use to have the process done faster and more efficiently.
Can anyone please explain to me the difference between these two groovy sql methods sql.eachRow sql.rows
It's difficult to tell exactly which 2 methods you're referring to because there are a large number of overloaded versions of each method. However, in all cases, eachRow returns nothing
void eachRow(String sql, Closure closure)
whereas rows returns a list of rows
List rows(String sql)
So if you use eachRow, the closure passed in as the second parameter should handle each row, e.g.
sql.eachRow("select * from PERSON where lastname = 'murphy'") { row ->
    println "$row.firstname"
}
whereas if you use rows the rows are returned, and therefore should be handled by the caller, e.g.
rows("select * from PERSON where lastname = 'murphy'").each {row ->
println "$row.firstname"
}
Also, which is more efficient?
This question is almost unanswerable. Even if I had implemented these methods myself there's no way of knowing which one will perform better for you because I don't know
what hardware you're using
what JVM you're targeting
what version of Groovy you're using
what parameters you'll be passing
whether this method is a bottleneck for your application's performance
or any of the other factors that influence a method's performance that cannot be determined from the source code alone. The only way you can get a useful answer to the question of which method is more efficient for you is by measuring the performance of each.
Despite everything I've said above, I would be amazed if the performance difference between these two was in any way significant, so if I were you, I would choose whichever one you find more convenient. If you find later on that this method is a performance bottleneck, try using the other one instead (but I'll bet you a dollar to a dime it makes no difference).
If we set aside minor syntax differences, there is one difference that seems important. Let's consider
sql.rows("select * from my_table").each { row -> doIt(row) }
vs
sql.eachRow("select * from my_table") { row -> doIt(row) }
The first one opens a connection, retrieves the results, closes the connection, and returns them. Now you can iterate over the results while the connection is released. The drawback is that you now have the entire result list in memory, which in some cases might be a lot.
eachRow, on the other hand, opens a connection and, while keeping it open, executes your closure for each row. If your closure operates on the database and requires another connection, your code will consume two connections from the pool at the same time. The connection used by eachRow is released after it iterates through all the resulting rows. Also, if you don't perform any database operations but the closure takes a while to execute, you will be blocking one database connection until eachRow completes.
I am not 100% sure, but possibly eachRow allows you not to keep all resulting rows in memory but to access them through a cursor - this may depend on the database driver.
If you don't perform any database operations inside your closure, the closure executes fast, and the result list is big enough to impact memory, then I'd go for eachRow (see the JDBC sketch below). If you do perform DB operations inside the closure, or each closure call takes significant time while the result list is manageable, then go for rows.
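To make the memory difference concrete, here is the same distinction sketched in plain JDBC (which groovy.sql.Sql wraps). This is an analogy to illustrate the two access patterns, not Groovy's actual implementation:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class RowsVsEachRow {

    // rows-style: drain the whole ResultSet into a list, release the
    // connection, then let the caller iterate. The entire result set
    // lives in memory at once.
    static List<Map<String, Object>> rows(Connection con, String query) throws SQLException {
        List<Map<String, Object>> all = new ArrayList<>();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(query)) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                Map<String, Object> row = new HashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.put(md.getColumnLabel(i), rs.getObject(i));
                }
                all.add(row);
            }
        }
        return all;
    }

    // eachRow-style: hand each row to a callback while the cursor is
    // still open. Only one row needs to be materialized at a time, but
    // the connection stays busy for the duration of every callback.
    static void eachRow(Connection con, String query, Consumer<ResultSet> body) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(query)) {
            while (rs.next()) {
                body.accept(rs);
            }
        }
    }
}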
They differ in signature only - both support result set paging, so both will be efficient. Use whichever fits your code.