Dynamically check membership (swi-prolog, dynamic)

Given a list of natural numbers List, I want to check whether the sums of the elements of each subset are distinct. Initially I used this code:
distinctSubsetSums(List, Sums) :-
    findall(Sum, (subset(Sub, List), sum_list(Sub, Sum)), Sums),
    all_distinct(Sums).
But I think there is a better solution, since my code first collects the sums of all possible subsets and only then checks whether they are distinct. There should be a way to check dynamically whether a sum has already been produced and fail at once, without enumerating all the remaining subsets.
Can anyone help me?

One way is to thread an accumulator of the sums produced so far, so the search fails as soon as a duplicate sum appears:

subset_sums(List, Sums) :-
    sum_subset(List, [], [], Sums).

% Branch on leaving the element I out of, or putting it into, the
% current subset Js, threading the accumulator of sums seen so far.
sum_subset([I|Is], Js, Sums0, Sums) :-
    sum_subset(Is, Js, Sums0, Sums1),
    sum_subset(Is, [I|Js], Sums1, Sums).
% A complete subset Js has been built: fail if its sum was already
% seen, otherwise record it.
sum_subset([], Js, Sums0, Sums) :-
    sum_list(Js, Sum),
    \+ member(Sum, Sums0),
    Sums = [Sum|Sums0].
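With that definition the whole call fails as soon as any duplicate sum is found; for example, queries should behave like this:

?- subset_sums([1,2,4], Sums).
Sums = [7, 3, 5, 1, 6, 2, 4, 0].

?- subset_sums([1,2,3], Sums).
false.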

Related

Simple gSheet Query returns wrong values for Count function based on data in columns not evaluated in the query

The problem is that a count function evaluating only column A gets different results depending on what's in other columns when those columns are included in the range, even though they are not included in the select criteria.
This is a stripped down version of the problem. The additional columns are needed because of a where condition dependent on data in a different column. But the problem manifests even when the where condition is removed from the query.
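The stripped-down comparison presumably boils down to two formulas along these lines (the column letters here are just placeholders for the ones in the sample sheet), which return different counts even though both only count column A:
=QUERY(A:A, "select count(A)")
=QUERY(A:C, "select count(A)")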
A sample spreadsheet including a loom vid of the problem can be found here:
https://docs.google.com/spreadsheets/d/1BJ6qaTcEiXZlzD1opxMHNhFyIBM1CDJPCJ5gr9x_32k/edit?usp=sharing
Any insights or work-arounds will be appreciated.
There have been 18 views and no insights or work-arounds. Nor have I found a work-around, so I'm assuming this is a gSheet bug. How does one go about submitting a bug to Google?

How to use a WHERE statement in Postgres with an array of OR combinations?

I'm not sure how to phrase this question, but the premise is that I have a table whose primary key is defined by two columns: row and col. I also want to query for many individual (row, col) pairs, which is where my problem comes into play.
If I had a simple column named id, I would be able to use a clause such as WHERE id=ANY($1), where $1 is an array of integers. However, with a primary key consisting of two columns, I can't do the same.
WHERE row=ANY($1) AND col=ANY($2) gives me a region of what I want, but not the exact set of tuples that I need. Right now I'm generating a template query string with many conditions, such as:
WHERE row=$1 AND col=$2 OR
row=$3 AND col=$4 OR ...
How can I avoid generating this "query template"? I don't think this is a very elegant solution, but it's the solution I have right now. Any insight would be appreciated!
where (row,col) = any(array[(1,2),(3,4)])
or
where (row,col) in ((1,2),(3,4))
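A complete query using the second form might look like this (the table name cells is only a placeholder; the unnest variant, available in PostgreSQL 9.4+, lets you keep passing the pairs as two parallel array parameters):

select * from cells where (row, col) in ((1, 2), (3, 4));

select * from cells where (row, col) in (select * from unnest($1::int[], $2::int[]));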

Endeca - EQL negation query

I want to write a dimension value query to filter the records based on dimension values. I have a requirement to use the "!=" operator in the EQL query. I know that EQL queries support this operator, and the manual also states that dimension value queries support it, but the manual only gives examples for property value queries. I tried to implement the same for dimension value queries, but the application did not return any records. Can anyone help me with an example of a dimension value query using this operator?
Below is the query I am trying; it does not return any results, as there is some problem with it:
Nrs=collection()/record[GROUP=collection("dimensions")/dval[name="GROUP"]/dval[name!="G001"]]
Any help will be appreciated.
Thanks in advance,
Sav
Put "not" in front of the whole expression. Try:
collection()/record[ not ( Genre = collection("dimensions")/dval[name="Genre"]//id ) ]
Note that there are some minor wrinkles. For more info, check out page 105 in this document. http://docs.oracle.com/cd/E55324_01/Mdex.651/pdf/DevGuide.pdf
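Adapting that pattern to the GROUP dimension from the question would presumably look like the following (an untested sketch that reuses the dval path from the question and the //id step from the example above):
collection()/record[ not ( GROUP = collection("dimensions")/dval[name="GROUP"]/dval[name="G001"]//id ) ]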

Join two files without any common key

I have two input files. The first file has two columns and the second file has three columns; they hold different values, like this:
First file :
Type:
(String)|(Integer)
Value:
City1|Value1
City2|Value2
City3|Value3
Second File:
Type:
(String)|(String)|(Integer)
Value:
String1|Text1|Int1
String2|Text2|Int2
String3|Text3|Int3
I need Output as
Text1|City1|Value1
Text2|City2|Value2
Text3|City3|Value3
I can use any programming skill to get this; if it is not possible in Pig, then I can go with other programs as well. Please suggest which one would be better and how to do it.
Please help me with this. Thanks in advance.
Your example is not clear. If the first relation has M values and the second has N values, do you expect M*N values in the result, or do you expect M=N values?
Assuming the first (M*N values), you can use the CROSS operation.
Assuming the second (M=N values), you can:
a. Use Enumerate on both relations to add a unique sequence number to each tuple.
b. Then join on that sequence number so that the 1st row of one relation joins with the 1st row of the other, then the 2nd row, and so on (see the sketch below).
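A rough sketch of that approach, using Pig's built-in RANK operator instead of an Enumerate UDF (file names, delimiters and field names below are assumptions based on the sample data):

a = LOAD 'first.txt' USING PigStorage('|') AS (city:chararray, val:int);
b = LOAD 'second.txt' USING PigStorage('|') AS (s:chararray, txt:chararray, n:int);
-- RANK adds a sequential number to every tuple (fields rank_a and rank_b)
ranked_a = RANK a;
ranked_b = RANK b;
-- join on the generated numbers so row 1 pairs with row 1, row 2 with row 2, ...
joined = JOIN ranked_a BY rank_a, ranked_b BY rank_b;
result = FOREACH joined GENERATE ranked_b::txt, ranked_a::city, ranked_a::val;
DUMP result;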
Hope this helps.
You can't join without a common key in Pig. Try using the CONCAT function for your use case.

Django distinct on a specific field

class A(models.Model):
    name = models.CharField(max_length=100)

class B(models.Model):
    base = models.ForeignKey(A)
    value = models.IntegerField()

B.objects.values('base__name', 'value').distinct('base__name')
As you can see above, I am trying to get the B objects grouped by their related object's name. However, the distinct function doesn't take a parameter.
I have tried annotation and aggregation, but I couldn't group by base__name.
I have also tried values_list with flat=True, but it only takes one column name, and I need both the base__name and value fields.
How can I do that in Django?
Thanks
First, you need Django 1.4+. If you're running a lesser version, you're out of luck. Then, you must be using PostgreSQL. Passing a parameter to distinct does not work with other databases.
See the documentation for distinct and pay attention to the "Note" lines.
If you don't meet the above conditions, I suppose you could always issue a raw query instead.
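On PostgreSQL with Django 1.4 or later, the query would then look roughly like this (a sketch based on the models above; when distinct() is given field names, the queryset must first be ordered by those same fields):

B.objects.order_by('base__name').values('base__name', 'value').distinct('base__name')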