Going through an A4Solution - API

I am currently using the Alloy API in my project, and I need to display A4Solutions. I can do that easily with the visualizer Alloy provides (VizGUI), but it is a bit too limited for my purpose, so I want to generate my own graphs (using any other graph API) from an A4Solution object.
I was able to get the atoms without any problems (that was pretty straightforward), but I can't really see how to retrieve the relations between those atoms.
I looked online for an example of how to parse an A4Solution, but unfortunately found nothing.

Relations (fields) can be retrieved from sigs, and you can then evaluate them to obtain concrete atoms, something like this:
A4Solution sol = ...;
SafeList<Sig> sigs = sol.getAllReachableSigs();
for (Sig sig : sigs) {
    SafeList<Field> fields = sig.getFields();
    for (Field field : fields) {
        A4TupleSet ts = (A4TupleSet) sol.eval(field);
        for (A4Tuple t : ts)
            for (int i = 0; i < t.arity(); i++)
                t.atom(i); // the i-th atom of this tuple
    }
}
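To turn those tuples into edges for a graph library, the idea is simply to treat each binary tuple as a directed edge. Here is a minimal, self-contained sketch in plain Java; the tuples are represented as string pairs (as they would come out of t.atom(0) and t.atom(1)), and the class and method names are hypothetical:

```java
import java.util.*;

public class EdgeBuilder {
    // Build an adjacency list from binary tuples (pairs of atom names).
    // In real code, each pair would come from t.atom(0)/t.atom(1) of an A4Tuple
    // with arity 2; higher-arity relations need a different representation.
    static Map<String, List<String>> toEdges(List<String[]> tuples) {
        Map<String, List<String>> adj = new LinkedHashMap<>();
        for (String[] t : tuples) {
            adj.computeIfAbsent(t[0], k -> new ArrayList<>()).add(t[1]);
        }
        return adj;
    }

    public static void main(String[] args) {
        List<String[]> tuples = Arrays.asList(
            new String[]{"Node$0", "Node$1"},
            new String[]{"Node$0", "Node$2"});
        System.out.println(toEdges(tuples)); // {Node$0=[Node$1, Node$2]}
    }
}
```

The adjacency map can then be fed into whatever graph API you choose, with the atoms you already extracted as nodes.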

How can I know whether a relationship is fetched or not in Neo4j?

For example, I load an entity in the following way:
Movie m = session.load(Movie.class, id, 0); // load properties but not relationships
m.getActors(); // empty since depth was 0
A bit later, in another method:
// Do I need to load it?
if (needsLoad(m)) {
    m = session.load(Movie.class, m.getId(), 1);
}
for (Actor a : m.getActors()) {
    // ...
}
The only solution I've found is to load it every time.
Is there a better approach?
There is no API for accessing the session cache.
As a result it is not possible to get information about how deep the object graph got loaded.
I wonder how you got into the situation of needing this.
The standard approach would be: load data from the database with the "right" depth, manipulate the data and save it back.
All within one transaction.
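If reloading with the "right" depth up front is not an option, one workaround is to track the load depth yourself, since the session cache exposes no API for it. The sketch below is entirely hypothetical (the DepthTracker class and its methods are not part of any OGM API); it just records the deepest load seen per id:

```java
import java.util.*;

// Hypothetical helper: since the OGM session cache exposes no depth
// information, record the depth you loaded each entity with and
// reload only when a caller needs more.
public class DepthTracker {
    private final Map<Object, Integer> loadedDepth = new HashMap<>();

    public void record(Object id, int depth) {
        loadedDepth.merge(id, depth, Math::max); // keep the deepest load seen
    }

    public boolean needsLoad(Object id, int requiredDepth) {
        return loadedDepth.getOrDefault(id, -1) < requiredDepth;
    }

    public static void main(String[] args) {
        DepthTracker tracker = new DepthTracker();
        tracker.record(42L, 0);                        // loaded properties only
        System.out.println(tracker.needsLoad(42L, 1)); // true -> reload with depth 1
        tracker.record(42L, 1);
        System.out.println(tracker.needsLoad(42L, 1)); // false -> cached deep enough
    }
}
```

You would call record() right after each session.load() and consult needsLoad() before touching relationships; it only helps within the scope where you control all loads.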

PouchDB view lookup by key

I have a CouchDB database named t-customers. Using Fauxton I've created the following view: t-customers/_design/t-cust-design/_view/by-custdes. Here is the map function:
function (doc) {
    var custname = doc.CUSTNAME;
    if (typeof custname === 'string' && custname.length > 0) {
        for (var i = 0; i < custname.length - 1; i++)
            for (var j = i + 1; j < custname.length + 1; j++)
                emit(custname.substring(i, j), doc._id);
    }
}
The view will contain all available sub-strings for custdes (e.g. custdes=abc -> a, ab, abc, bc) as key and doc._id as its value.
After the view is created I can query it with the following http requests:
http://127.0.0.1:5984/t-customers/_design/t-cust-design/_view/by-custdes?key="ab"
http://127.0.0.1:5984/t-customers/_design/t-cust-design/_view/by-custdes?key="abc"
It works as fast as lightning, although my view has about 1,500,000 documents indexed.
First of all: I've noticed that PouchDB syncs only the t-customers database and not its views. Why? To make the view available in PouchDB I have to run the following command, which takes up to 20 minutes to complete:
t-customers.query("t-cust-design/by-custdes").then(...).catch(...);
Only then can I see the view in IndexedDB in Chrome.
Second of all: What is the way to look up a doc in PouchDB view t-cust-design/by-custdes without triggering the whole map/reduce process every time I want to find the ab key? As I mentioned I can query the CouchDB _design/t-cust-design/_view/by-custdes view with http request and it works fast, but I'm unable to do the equivalent action using PouchDB API.
I've read tons of documentation but I'm still confused about it...
The answer to your second question first (well, sort of):
To avoid generating all possible combinations of characters (leading to 1,500,000 view results), emit only suffixes and query for keys starting with your query string. You can use the startkey and endkey parameters for that.
function(doc) {
    var custname = doc.CUSTNAME || '';
    for (var i = 0; i < custname.length; i++) {
        // Emit every suffix. You don't need to emit the _id -
        // it's available on each view row automatically.
        emit(custname.substring(i));
    }
}
Query the view with the parameters {startkey:'ab', endkey:'ab\ufff0'}.
See the CouchDB docs for details.
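To see why this shrinks the index so much: a string of length n has n(n+1)/2 substrings but only n suffixes, and a startkey/endkey range query over suffixes still matches every occurrence of the search string. A self-contained sketch of that arithmetic (in Java here, with startsWith standing in for the startkey/endkey range; all names are made up for illustration):

```java
import java.util.*;

public class SuffixIndexSketch {
    // All substrings: what the original map function emits.
    static List<String> substrings(String s) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < s.length(); i++)
            for (int j = i + 1; j <= s.length(); j++)
                out.add(s.substring(i, j));
        return out;
    }

    // All suffixes: what the improved map function emits.
    static List<String> suffixes(String s) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < s.length(); i++) out.add(s.substring(i));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(substrings("abc").size()); // 6 keys emitted
        System.out.println(suffixes("abc").size());   // 3 keys emitted
        // A startkey/endkey range query is equivalent to a prefix match,
        // so "bc" is still found as a prefix of the suffix "bc":
        System.out.println(suffixes("abc").stream()
            .anyMatch(k -> k.startsWith("bc"))); // true
    }
}
```

For long customer names the difference is quadratic versus linear, which is what keeps the view buildable.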
Regarding your first question:
Views are always built per CouchDB and PouchDB instance. One of the reasons is that you can do filtered replications, so each instance might have its own view of the "world" a.k.a. database contents.
I assume from your comments that you use PouchDB to replicate your database into the browser and then call the view locally, so technically you are using two instances of the database, which makes total sense if you're creating an offline-first app. But if you expect the CouchDB instance to always be available, just query the CouchDB server, so the views don't have to be rebuilt in each user's browser.

How to recode many fields in Kettle, all of which require the same recoding

I have many fields all needing the same recoding. Is there a transformation that will allow me to specify all the fields requiring the same mapping, rather than having to create a Value Mapper transformation for each field? I am using Kettle Spoon 4.4.
I am not a huge fan of Java scripting in Kettle, but I don't know of any other standard step that does what you wish.
You can add a Modified Java Script step to your stream and write simple code to map the values of a list of fields (columns). For example, suppose you have fields A and B, both with two possible values: S for small and B for big. To map them, insert the following JavaScript code:
// list of fields you wish to map
var fieldsToMap = ["A", "B"];
var tmpField;
var fieldIndex;
// for each field to map...
for (var i = 0; i < fieldsToMap.length; i++) {
    // get the index of the field, since you only have its name
    fieldIndex = getInputRowMeta().indexOfValue(fieldsToMap[i]);
    // get the field
    tmpField = row.getValue(fieldIndex);
    // don't forget to trim, as Kettle usually pads strings
    switch (trim(tmpField)) {
        case "S":
            tmpField.setValue("Small");
            break;
        case "B":
            tmpField.setValue("Big");
            break;
    }
}
Don't forget to check Compatibility mode? on the Java Script step; otherwise you'll get a JavaScript error, because under the new standards you are no longer allowed to change a field's value (see How to modify values (with compatibility off) at http://wiki.pentaho.com/display/EAI/Modified+Java+Script+Value).
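The core idea of the script — one shared mapping applied to a whitelist of fields — can be sketched outside Kettle as well. Here is a minimal self-contained version in Java (the RecodeSketch class and its names are made up; a row is just a map from field name to value):

```java
import java.util.*;

public class RecodeSketch {
    // One code-to-label mapping applied to every listed field of a row,
    // mirroring what the Modified Java Script step does per Kettle row.
    static final Map<String, String> MAPPING = Map.of("S", "Small", "B", "Big");
    static final List<String> FIELDS_TO_MAP = List.of("A", "B");

    static Map<String, String> recode(Map<String, String> row) {
        Map<String, String> out = new LinkedHashMap<>(row);
        for (String field : FIELDS_TO_MAP) {
            String v = out.get(field);
            // trim first, as Kettle usually pads strings; unknown codes pass through
            if (v != null) out.put(field, MAPPING.getOrDefault(v.trim(), v));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("A", "S ");   // padded, as Kettle often pads strings
        row.put("B", "B");
        row.put("C", "keep"); // not in the list, left untouched
        System.out.println(recode(row)); // {A=Small, B=Big, C=keep}
    }
}
```

Adding a field to the recoding is then a one-line change to the list, which is exactly the maintenance win over one Value Mapper step per field.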

Can you avoid for loops with solrj?

I was curious whether there is a way to avoid using loops with SolrJ.
For example, if I were to use straight SQL with an appropriate Java library, I could return a query result, cast it as a List, and pass it up to my view (in a webapp).
SolrJ (SolrQuery and QueryResponse) seems to have no way of returning succinct lists. This implies I have to create an iterator to go through each returned doc and get the value I want, which isn't ideal.
Is there something I am missing here? Is there a way to avoid these seemingly useless loops?
The SolrJ wiki gives an example that does what you want:
https://wiki.apache.org/solr/Solrj#Reading_Data_from_Solr
Basically:
QueryResponse rsp = server.query( query );
SolrDocumentList docs = rsp.getResults();
List<Item> beans = rsp.getBeans(Item.class);
EDIT:
Based on your comments below, it appears what you want is a non-looping transform of the SOLR response (e.g. a map function in a functional language). Google's guava library provides something like this. My example below assumes that your SOLR response has a "name" field that you want to return a list of:
QueryResponse rsp = server.query(query);
SolrDocumentList docs = rsp.getResults();
List<String> names = Lists.transform(docs, new Function<SolrDocument, String>() {
    @Override
    public String apply(SolrDocument d) {
        return (String) d.get("name");
    }
});
Unfortunately, Java does not support this style of programming very well, so the functional approach ends up being more verbose (and probably less clear) than a simple loop:
QueryResponse rsp = server.query(query);
SolrDocumentList docs = rsp.getResults();
List<String> names = new ArrayList<String>();
for (SolrDocument d : docs) names.add((String) d.get("name"));
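Since Java 8, the same transform can be written as a stream pipeline without Guava. A self-contained sketch (plain maps stand in for SolrDocuments so it runs without the SolrJ jars; the class and method names are made up):

```java
import java.util.*;
import java.util.stream.Collectors;

public class StreamTransform {
    // Java 8 streams version of the Guava Lists.transform example;
    // plain maps stand in for SolrDocuments to keep the sketch self-contained.
    static List<String> names(List<Map<String, Object>> docs) {
        return docs.stream()
                   .map(d -> (String) d.get("name"))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, Object>> docs = Arrays.asList(
            Map.of("name", "alpha"),
            Map.of("name", "beta"));
        System.out.println(names(docs)); // [alpha, beta]
    }
}
```

With real SolrJ types the pipeline is identical: rsp.getResults().stream().map(...).collect(...), since SolrDocumentList is a List of SolrDocument. Note the loop still happens internally; streams only hide it, they do not remove the per-document work.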

Nhibernate Tag Cloud Query

This has been a 2 week battle for me so far with no luck. :(
Let me first state my objective: to be able to search entities which are tagged "foo" and "bar". You wouldn't think that was too hard, right?
I know this can be done easily with HQL, but since this is a dynamically built search query, that is not an option. First, some code:
public class Foo
{
    public virtual int Id { get; set; }
    public virtual IList<Tag> Tags { get; set; }
}

public class Tag
{
    public virtual int Id { get; set; }
    public virtual string Text { get; set; }
}
Mapped as a many-to-many because the Tag class is used on many different types, hence no bidirectional reference.
So I build my detached criteria up using an abstract filter class. Let's assume for simplicity I am just searching for Foos with tags "Apples" (TagId 1) and "Oranges" (TagId 3); this would look something like:
SQL:
SELECT ft.FooId
FROM Foo_Tags ft
WHERE ft.TagId IN (1, 3)
GROUP BY ft.FooId
HAVING COUNT(DISTINCT ft.TagId) = 2; /*Number of items we are looking for*/
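The GROUP BY / HAVING trick above can be sketched in memory to make the logic concrete: keep the ids whose distinct matching tags cover every wanted tag. A self-contained illustration (written in Java purely for demonstration; the class and method names are made up):

```java
import java.util.*;
import java.util.stream.Collectors;

public class TagIntersection {
    // In-memory equivalent of the GROUP BY / HAVING query:
    // group the (fooId, tagId) rows by fooId, count distinct wanted tags,
    // and keep only the ids matched by every wanted tag.
    static Set<Integer> foosWithAllTags(List<int[]> fooTags, Set<Integer> wanted) {
        Map<Integer, Set<Integer>> matched = new HashMap<>();
        for (int[] ft : fooTags)                 // ft = {fooId, tagId}
            if (wanted.contains(ft[1]))
                matched.computeIfAbsent(ft[0], k -> new HashSet<>()).add(ft[1]);
        return matched.entrySet().stream()
                      .filter(e -> e.getValue().size() == wanted.size())
                      .map(Map.Entry::getKey)
                      .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<int[]> fooTags = List.of(
            new int[]{10, 1}, new int[]{10, 3},  // foo 10 has both tags
            new int[]{20, 1},                    // foo 20 has only tag 1
            new int[]{30, 3});                   // foo 30 has only tag 3
        System.out.println(foosWithAllTags(fooTags, Set.of(1, 3))); // [10]
    }
}
```

The HAVING COUNT(DISTINCT ...) = N comparison is exactly the size check in the filter: an id qualifies only if it matched all N requested tags.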
Criteria
var idsIn = new List<int>() { 1, 3 };
var dc = DetachedCriteria.For(typeof(Foo), "f")
    .CreateCriteria("Tags", "t")
    .Add(Restrictions.InG("t.Id", idsIn))
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("f.Id"))
        .Add(Projections.RowCount(), "RowCount")
        .Add(Projections.GroupProperty("f.Id")))
    .ProjectionCriteria.Add(Restrictions.Eq("RowCount", idsIn.Count));
var c = Session.CreateCriteria(typeof(Foo)).Add(Subqueries.PropertyIn("Id", dc));
Basically this is creating a DC that projects a list of Foo Ids which have all the tags specified.
This compiled in NH 2.0.1 but didn't work: it complained it couldn't find the property "RowCount" on class Foo.
After reading this post I was hopeful that this might be fixed in 2.1.0 so I upgraded. To my extreme disappointment I discovered that ProjectionCriteria has been removed from DetachedCriteria and I cannot figure out how to make the dynamic query building work without DetachedCriteria.
So I tried to think how to write the same query without needing the infamous HAVING clause. It can be done with multiple joins on the tag table. Hooray, I thought, that's pretty simple. So I rewrote it to look like this:
var idsIn = new List<int>() { 1, 3 };
var dc = DetachedCriteria.For(typeof(Foo), "f")
    .CreateCriteria("Tags", "t1").Add(Restrictions.Eq("t1.Id", idsIn[0]))
    .CreateCriteria("Tags", "t2").Add(Restrictions.Eq("t2.Id", idsIn[1]));
In a vain attempt to produce the SQL below, which would do the job (I realise it's not quite correct):
SELECT f.Id
FROM Foo f
JOIN Foo_Tags ft1
ON ft1.FooId = f.Id
AND ft1.TagId = 1
JOIN Foo_Tags ft2
ON ft2.FooId = f.Id
AND ft2.TagId = 3
Unfortunately I fell at the first hurdle with this attempt, receiving the exception "Duplicate Association Path". Reading around this seems to be an ancient and still very real bug/limitation.
What am I missing?
I am starting to curse NHibernate's name for making what you would think is such a simple and common query so difficult. Please help, anyone who has done this before: how did you get around NHibernate's limitations?
Forget reputation and a bounty. If someone does me a solid on this I will send you a 6 pack for your trouble.
I managed to get it working like this:
var dc = DetachedCriteria.For<Foo>("f")
    .CreateCriteria("Tags", "t")
    .Add(Restrictions.InG("t.Id", idsIn))
    .SetProjection(Projections.SqlGroupProjection(
        "{alias}.FooId",
        "{alias}.FooId having count(distinct t1_.TagId) = " + idsIn.Count,
        new[] { "Id" },
        new IType[] { NHibernateUtil.Int32 }));
The only problem here is the hard-coded alias in count(distinct t1_.TagId) - but the alias should be generated the same way every time in this DetachedCriteria, so you should be on the safe side hard-coding it.
Ian,
Since I'm not sure which db backend you are using: can you run some sort of trace against the produced SQL query and look at the SQL to figure out what went wrong?
I know I've done this in the past to understand how Linq-2-SQL and Linq-2-Entities have worked, and been able to tweak certain cases to improve the data access, as well as to understand why something wasn't working as initially expected.