Coq - Passing parameters to a record

I'm having trouble comparing elements of sets belonging to two distinct
instances of the same record type. Consider the following record.
Record ToyRec := {
  X : Set;
  Labels : Set;
  r : X -> Labels
}.
Say that two objects T1 and T2 of type ToyRec form a good pair if
for every element in T1.(X) there exists an element in T2.(X) with the
same label.
Definition GoodPair (T1 T2 : ToyRec) : Prop :=
  forall x1 : T1.(X), exists x2 : T2.(X), T1.(r) x1 = T2.(r) x2.
The problem is that I get an error saying that T1.(r) x1 is of type T1.(Labels),
while T2.(r) x2 is of type T2.(Labels).
I understand the problem, and I imagine it could be solved if I could somehow
declare the set Labels outside the record, and then pass it as a parameter.
Is there a way to do this in Coq? Or what would be the most elegant way of
defining the records I want and the property GoodPair?

The closest thing I got from your code is the following:
Record ToyRec {Labels : Set} := {
  X : Set;
  r : X -> Labels
}.
Definition GoodPair {Labels : Set} (T1 T2 : @ToyRec Labels) : Prop :=
  forall x1 : X T1, exists x2 : X T2, r T1 x1 = r T2 x2.
By having Labels as a dependency for ToyRec you can be sure that both records are using the same type.
PS: I used {Labels : Set} instead of (Labels : Set) to specify that this argument is implicit and should be inferred whenever possible.
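For completeness, here is a minimal usage sketch of the parameterized version (the instances T1 and T2 below are hypothetical, not from the question); since both are built over the same Labels, here bool, GoodPair type-checks:
(* Both instances share Labels := bool, so GoodPair applies to them. *)
Definition T1 : @ToyRec bool :=
  {| X := nat; r := fun n => Nat.eqb n 0 |}.
Definition T2 : @ToyRec bool :=
  {| X := bool; r := fun b => b |}.
Check (GoodPair T1 T2).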

Related

How to retrieve the list of dynamic nested keys of BigQuery nested records

My ELT tool imports my data into BigQuery and automatically generates/extends the schema for dynamic nested keys (in the schema below, under properties).
How can I get the list of nested keys of a repeated record, so that, for example, I can group by a property when those items have said property non-null?
I have tried
select column_name
from my_schema.INFORMATION_SCHEMA.COLUMNS
where table_name = 'my_table'
But it will only list first-level keys.
From the schema above, I want, as a first step, a SQL query that returns
message
user_id
seeker
liker_id
rateable_id
rateable_type
from_organization
likeable_type
company
existing_attempt
...
My real goal, though, is to group/count my data based on a non-null value of a 2nd-level nested property, properties.filters.[filter_type].
The schema may evolve when our application adds more filters, so this needs to be generated dynamically; I can't just hard-code the list of nested keys.
Note: this is very similar to this question, How to extract all the keys in a JSON object with BigQuery, but in my case my data is already in a schema and it's not a JSON object.
EDIT:
Suppose I have a list of such records with nested properties. How do I write a SQL query that adds a field "enabled_filters" which aggregates, for each item, the list of properties for which said property is not null?
Example input (properties.x are dynamic and not known by the programmer)
search_id | properties.filters.school | properties.filters.type
1         | MIT                       | master
2         | Princetown                | null
3         | null                      | master
Example output
search_id | enabled_filters
1         | ["school", "type"]
2         | ["school"]
3         | ["type"]
Have you looked at COLUMN_FIELD_PATHS? It should give you the paths for all columns.
select field_path from my_schema.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS where table_name = '<table>'
[https://cloud.google.com/bigquery/docs/information-schema-column-field-paths]
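If you only need the keys under properties, a slight refinement of that query (assuming the table from the question is called my_table) would be:
select field_path
from my_schema.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
where table_name = 'my_table'
  and field_path like 'properties.filters.%'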
The field properties is not nested by arrays, only by structures, so a UDF in JavaScript to parse this field should work fast enough.
CREATE TEMP FUNCTION jsonObjectKeys(input STRING, shownull BOOL, fullname BOOL)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
// Recursively collect the keys of the parsed object. Null leaves are either
// skipped or reported as 'key==null' depending on shownull; fullname controls
// whether the full dotted path or only the leaf key is returned.
function test(input, old) {
  var out = [];
  for (let x in input) {
    let te = input[x];
    out = out.concat(
      te == null ? (shownull ? [x + '==null'] : [])
        : typeof te == 'object' ? test(te, old + x + '.')
        : [fullname ? old + x : x]
    );
  }
  return out;
}
return test(JSON.parse(input), "");
""";
with tbl as (
  select struct(1 as alpha, struct(2 as x, 3 as y, [1, 2, 3] as z) as B) as A
  from unnest(generate_array(1, 10*1))
  union all
  select struct(null, struct(null, 1, [999]))
)
select *,
  TO_JSON_STRING(A) as string_output,
  jsonObjectKeys(TO_JSON_STRING(A), true, false) as output1,
  jsonObjectKeys(TO_JSON_STRING(A), false, true) as output2,
  concat('["', array_to_string(jsonObjectKeys(TO_JSON_STRING(A), false, true), '","'), '"]') as output_string,
  jsonObjectKeys(TO_JSON_STRING(A.B), false, true) as output3
from tbl
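To tie this back to the enabled_filters output asked for in the question, something along these lines should work when run in the same script as the temp function above (search_id, properties and my_schema.my_table are assumptions taken from the example):
select
  search_id,
  array(
    -- keep only non-null keys under filters and strip the 'filters.' prefix
    select regexp_extract(k, r'^filters\.(.+)')
    from unnest(jsonObjectKeys(TO_JSON_STRING(properties), false, true)) as k
    where starts_with(k, 'filters.')
  ) as enabled_filters
from my_schema.my_table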

Is it still possible to get 'NullPointerException' on property 'x' when building an Object from a result of a query with 'Where x is not null'

I am using Exposed in one project, and I have a table, let's call it TableX, with two properties:
property1 and x, where x is nullable.
I added TableX.x.isNotNull() to my query so I can ignore null rows.
And I have Object1, also with the same two properties as TableX: property1 and x, where x is not nullable in Object1.
Then when I create Object1 out of the rows from the query, the compiler complains about x because it should not be null, while we are receiving a nullable x from TableX.
So I have added !! when setting x in Object1, given that I am sure that the query will never return any row where x is null, because of the constraint I have added.
But still, I sometimes receive a KotlinNullPointerException. So how is this possible?
I thought of some compatibility issue between MySQL and Exposed, but couldn't find any.
var result = listOf<Object1>()
transaction {
    val query = TableX.select {
        TableX.property1.eq(123) and
            TableX.x.isNotNull()
    }.fetchSize(1000)
    result = query.map {
        Object1(
            property1 = it[TableX.property1],
            x = it[TableX.x]!!
        )
    }
}
I'm facing the same problem in a company I'm working at now, and it looks like you have a race condition in place. Try to explicitly check for null before building the result list; at least you won't get an exception.
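As a sketch of that suggestion (keeping the table and class names from the question), mapping with an explicit null check instead of !! would look roughly like this:
var result = listOf<Object1>()
transaction {
    result = TableX.select {
        TableX.property1.eq(123) and TableX.x.isNotNull()
    }
        .fetchSize(1000)
        .mapNotNull { row ->
            // Skip any row whose x still arrives as null instead of throwing.
            val x = row[TableX.x] ?: return@mapNotNull null
            Object1(property1 = row[TableX.property1], x = x)
        }
}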

jOOQ - create value for field

I have a field of type Field<T>. I want to create a named value for that field, to be able to use it in a query. The name of the value should be the name of the field.
select value as field from ...
Is this the correct way to do it?
public <T> Field<T> namedValue(Field<T> field, T value) {
    return DSL.val(value, field).as(field);
}
Although it works, I was wondering if there is a shorter way to do this. I might be pedantic here :).
update
I am creating the following construction:
UPDATE table SET x = alias.x, y = alias.y
FROM (SELECT constant value for x, table2.y FROM table2 WHERE ...) AS alias.
Let's simplify this to (for the sake of this example, to focus on the constant selection):
SELECT
FROM (SELECT constant value for x) AS alias.
First, I started with:
Select s1 = context.select(DSL.val("TEST"));
Select s2 = context.select(s1.fields()).from(s1);
This resulted in an incorrect query:
select "alias_66794930"."TEST" from (select 'TEST') as "alias_66794930"
(I am not really sure if this is correct behavior from jOOQ.)
So, I added an alias:
Select s1 = context.select(DSL.val("TEST").as(X));
Select s2 = context.select(s1.fields()).from(s1);
This resulted in:
select "alias_76324565"."x" from (select 'TEST' as "x") as "alias_76324565"
This works fine. Then, I ran into problems when the constant value was null:
Select s1 = context.select(DSL.val(null).as(X));
Select s2 = context.select(s1.fields()).from(s1);
This resulted in:
select "alias_85795854"."x" from (select cast(? as varchar) as "x") as "alias_85795854"
1400 [localhost-startStop-1] TRACE org.jooq.impl.DefaultBinding - Binding variable 1 : null (class java.lang.Object)
This makes sense, the field type is not known. So I added the field (with its type) as following:
Select s1 = context.select(DSL.val(null, X).as(X));
Select s2 = context.select(s1.fields()).from(s1);
Binding is now correct:
1678 [localhost-startStop-1] TRACE org.jooq.impl.DefaultBinding - Binding variable 1 : null (class java.lang.String)
All done!
I don't think you can get much shorter than what you already have. I mean, your SQL reads:
value as field
And your Java/jOOQ code reads:
DSL.val(value, field).as(field)
You could of course static import DSL.val or DSL.*:
import static org.jooq.impl.DSL.*;
And then shorten things to:
val(value, field).as(field)
And if you're very sure about value's type, you don't need to coerce it to that of field:
val(value).as(field)
Now, you definitely can't go any shorter, and there's no more need for your namedValue() function...
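As a quick illustration of using the helper from the question (ctx is a DSLContext and BOOK.TITLE is just a placeholder field, not something from the question):
// Binds "Hamlet" as a value and aliases it with the field's own name.
Field<String> title = namedValue(BOOK.TITLE, "Hamlet");
ctx.select(title).fetch();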

Partial SQL insert in haskelldb

I just started a new project and wanted to use HaskellDB in the beginning. I created a database with 2 columns:
create table sensor (
service text,
name text
);
I found out how to do the basic HaskellDB machinery (ohhh.. the documentation) and wanted to do an insert. However, I wanted to do a partial insert (there are supposed to be more columns), something like:
insert into sensor (service) values ('myservice');
Translated into HaskellDB:
transaction db $ insert db SE.sensor (SE.service <<- (Just $ senService sensor))
But... that simply doesn't work. What also does not work is specifying the column names in a different order, which is not exactly convenient either. Is there a way to do a partial insert in HaskellDB?
The errors I get are as follows. When I insert a different column (the 'name') as the first one:
Couldn't match expected type `SEI.Service'
against inferred type `SEI.Name'
Expected type: SEI.Intsensor
Inferred type: Database.HaskellDB.HDBRec.RecCons
SEI.Name (Expr String) er
When using functional dependencies to combine
Database.HaskellDB.Query.InsertRec
(Database.HaskellDB.HDBRec.RecCons f (e a) r)
(Database.HaskellDB.HDBRec.RecCons f (Expr a) er),
etc..
And when I use 'service' as the first (and only) field, I get:
Couldn't match expected type `Database.HaskellDB.HDBRec.RecCons
SEI.Name
(Expr String)
(Database.HaskellDB.HDBRec.RecCons
SEI.Time
(Expr Int)
(Database.HaskellDB.HDBRec.RecCons
SEI.Intval (Expr Int) Database.HaskellDB.HDBRec.RecNil))'
against inferred type `Database.HaskellDB.HDBRec.RecNil'
(I have a couple of other columns in the table)
This looks really like 'by design', unfortunately :(
You're right, that does look intentional. The HaskellDB.Query docs show that insert has a type of:
insert :: (ToPrimExprs r, ShowRecRow r, InsertRec r er) => Database -> Table er -> Record r -> IO ()
In particular, the relation InsertRec r er must hold. That's defined elsewhere by the recursive type program:
InsertRec RecNil RecNil
(InsertExpr e, InsertRec r er) => InsertRec (RecCons f (e a) r) (RecCons f (Expr a) er)
The first line is the base case. The second line is an inductive case. It really does want to walk every element of er, the table. There's no short-circuit, and no support for re-ordering. But in my own tests, I have seen this work, using _default:
insQ db = insert db test_tbl1 (c1 <<- (Just 5) # c2 << _default)
So if you want a partial insert, you can always say:
insC1 db x = insert db test_tbl1 (c1 <<- (Just x) # c2 << _default)
insC2 db x = insert db test_tbl2 (c1 << _default # c2 <<- (Just x))
I realize this isn't everything you're looking for. It looks like InsertRec can be re-written in the style of HList, to permit more generalization. That would be an excellent contribution.

Currying and compiler design

This is a homework question:
Explain the transformations the type
of a routine undergoes in partial
parameterization.
So far I understand currying. But I cannot find any resources on how a function like this is implemented by the compiler in memory. Could I be pointed in the right direction, maybe keywords to search for, links to resources, or possibly an explanation here of how the compiler generates the type and symbol table, among other things related to the question?
Thanks.
Currying is the conversion of an n-argument function into a chain of n unary functions.
For example, if you have a ternary function f :: t1 x t2 x t3 -> t, you can represent this function as
f1 :: t1 -> (t2 -> (t3 -> t))
In other words, f1 is a function which takes an argument of type t1 and returns the function f2.
f2 :: t2 -> (t3 -> t)
f2 is a function which takes an argument of type t2 and returns the function f3.
f3 :: t3 -> t
f3 is a function which takes an argument of type t3 and returns a value of type t.
For example, if f(a, b, c) = a + b*c, then:
f3(C) == c1 + c2*C, where c1 and c2 are the already-fixed first two arguments.
f2(B) == f3, where c1 is still fixed and c2 gets replaced with B.
f1(A) == f2, where c1 gets replaced with A.
In functional languages, functions are first-class citizens, so it's common to have them as return types.
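In a language like Haskell this is directly observable; here is a small sketch of the worked example above:
f :: Int -> Int -> Int -> Int
f a b c = a + b * c          -- the "ternary" function, already curried

f2 :: Int -> Int -> Int
f2 = f 1                     -- first argument fixed to 1

f3 :: Int -> Int
f3 = f2 2                    -- second argument fixed to 2

main :: IO ()
main = print (f3 3)          -- 1 + 2 * 3 == 7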
Currying is like fixing a parameter of the function. What you really need to modify is the prototype of the called function: if you have, for example, retn_type function(param1, param2) and you curry it on the first parameter, you set that parameter to a fixed value and obtain a new function retn_type function(param2) that can be called and passed around in a different way from the original one.
A compiler can actually achieve this in many ways or hacks, but the core of it, in its simplest form, is just to define a new version that is linked to the first one. So when you call the new function with param2, you execute the same code as the first function, with param1 fixed by the currying.
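One common way this "new version linked to the first one" is realized is a closure record pairing the original function pointer with the fixed argument; a rough sketch, with all names purely illustrative:
#include <stdio.h>

/* Original two-parameter routine: retn_type function(param1, param2). */
typedef int (*fn2)(int, int);

/* What "currying on the first parameter" can compile down to:
   a record holding the original code pointer plus the fixed argument. */
typedef struct {
    fn2 code;
    int fixed_param1;
} curried1;

static int add_scaled(int a, int b) { return a + 2 * b; }

/* The new "function(param2)" just re-supplies the stored first argument. */
static int call_curried(const curried1 *c, int param2) {
    return c->code(c->fixed_param1, param2);
}

int main(void) {
    curried1 g = { add_scaled, 10 };      /* g = add_scaled curried with param1 = 10 */
    printf("%d\n", call_curried(&g, 3));  /* same code as add_scaled(10, 3) -> 16 */
    return 0;
}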