How to flatten Fuzzer (Fuzzer (Fuzzer A)) without having andThen in the fuzz test module? - elm

I have a parent type A which contains a child type B.
This is a simplified version of the main data structures I have in my app.
A, B, A_id, and B_id are all separate Elm modules.
If I can make this simplification work, then maybe my actual problem becomes easier to solve.
Basically, my problem is how to create a fuzzer for A,
with the condition that A's id and the A_id inside B's B_id need to be the same value.
type A
    = A
        { id : A_id
        , b : B -- A contains B.
        }

type A_id
    = A_id String

type B
    = B { id : B_id } -- B contains B_id.

type B_id
    = B_id ( A_id, String ) -- B_id contains an A_id: the exact same A_id as its parent A.
a_idFuzzer : Fuzzer A_id
a_idFuzzer =
    Fuzz.string
        |> Fuzz.map A_id
aFuzzer : Fuzzer A
aFuzzer =
    a_idFuzzer
        |> Fuzz.map
            (\a_id ->
                bFuzzer a_id
                    |> Fuzz.map
                        (\b ->
                            -- Here I just need the b,
                            -- but in reality I need more children
                            -- for assembling the A data structure:
                            -- a C and a D with a cFuzzer and a dFuzzer...
                            -- and both C and D depend on having the same A_id value,
                            -- like B does.
                            A
                                { id = a_id
                                , b = b
                                }
                        )
            )
-- I'm passing A_id as an argument since it is generated only once on the parent (A)
-- and is shared with this B child.
bFuzzer : A_id -> Fuzzer B
bFuzzer a_id =
    Fuzz.string
        |> Fuzz.map (\s -> B_id ( a_id, s ))
        |> Fuzz.map (\id -> B { id = id })
So how do I create this Fuzzer A?
For the code above I get a Fuzzer (Fuzzer A) error as opposed to Fuzzer A.
In my actual app I get the more complicated error:
Fuzzer (Fuzzer (Fuzzer (Fuzzer Exchange))) vs Fuzzer Exchange.
I basically need to flatten it with andThen - but no such function exists in the elm-test Fuzz package, for some not so obvious reason.
What I tried:
I've been battling this problem for 3 days. Somebody on Slack suggested that andThen was removed on purpose and that I'm supposed to use a custom fuzzer, so I learned how shrinkers work (I didn't know about them before) and how to use Fuzz.custom, just to test whether they are right.
Fuzz.custom needs a generator and a shrinker.
I can build the generator and generate everything I need, but I can't build the shrinkers, since B, A, C, D, and so on are all opaque data structures in their own modules, so I would need getters for all their properties in order to shrink them.
For the example above, to shrink B I need to extract the b_id, run it through a shrinker, and then put it back into B by creating a new B using B's public API. I don't have a public getter API for all the properties I keep on B, C, D, etc., and it just seems wrong to do it this way (to add getters that the app doesn't need, just for testing purposes).
All this mess because andThen was removed from the Fuzz module... but maybe there is a way, maybe they were right and I'm just not seeing the solution.
So how do I build a fuzzer for the A data type?
Any ideas how to deal with these nested fuzzers? How do I flatten them back to one level?
Or to phrase it differently: how do I build fuzzers that depend on each other, like above? (An example I have in mind would be running an HTTP request that depends on another HTTP request completing before it can start, since it needs the data from the previous request. This pattern is seen throughout functional programming and is usually handled with andThen or bind.)
Any insight is appreciated. Thanks :)

I can build the generator and generate everything i need, but i can't build shrinkers
Then don't bother. Pass Shrink.noShrink to Fuzz.custom. The only disadvantage will come when you have a test that fails, and you will be given several large values of type A rather than (ideally) one small one.
As you work with your complex type, you'll get a better sense for how to shrink its values that cause test failures into "smaller" values that still cause test failures. For that matter, you'll get better at generating "interesting" values that find test failures.
In the next major release of elm-test (timeline not set), there will be considerable improvements to shrinkers, including better docs, the removal of lazy lists in favor of regular Elm lists, and a rename to "Simplifier".
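For intuition, the nesting in the question is the classic map-vs-bind distinction: mapping a function that itself returns a fuzzer produces a fuzzer of fuzzers, and only a bind/andThen-style operation flattens it. A minimal sketch in Python with a toy Fuzzer type (hypothetical names, not the elm-test API):

```python
import random

class Fuzzer:
    """Toy fuzzer: wraps a zero-argument function that produces a random value."""
    def __init__(self, generate):
        self.generate = generate

    def map(self, f):
        # map keeps the structure: Fuzzer a -> (a -> b) -> Fuzzer b.
        # If f itself returns a Fuzzer, we end up with Fuzzer (Fuzzer b).
        return Fuzzer(lambda: f(self.generate()))

    def and_then(self, f):
        # and_then (bind) flattens: it runs the inner fuzzer too,
        # so (a -> Fuzzer b) yields a plain Fuzzer b.
        return Fuzzer(lambda: f(self.generate()).generate())

a_id_fuzzer = Fuzzer(lambda: "id-" + str(random.randint(0, 9)))

def b_fuzzer(a_id):
    # The child fuzzer depends on the parent's a_id, as in the question.
    return Fuzzer(lambda: {"b_id": (a_id, "payload")})

# With map we get a nested fuzzer; with and_then we get the flat one we want.
nested = a_id_fuzzer.map(b_fuzzer)       # a Fuzzer whose values are Fuzzers
flat = a_id_fuzzer.and_then(b_fuzzer)    # a Fuzzer whose values are B records

print(type(nested.generate()).__name__)  # Fuzzer
print(type(flat.generate()).__name__)    # dict
```

This is only a model of why the types nest; it says nothing about shrinking, which is the part that makes bind hard for real fuzzers.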

Related

How do I setup model for table object with arrays of responses in sequelize

I am having a challenge with how to set up a model for a table object with arrays of responses in Sequelize ORM. I use a Postgres DB. I have a table, say foo. Foo has columns:
A
B
C
C1_SN
C1_Name
C1_Address
C1_Phone
D
E
The column C holds a boolean question; if the user selects true, he will need to provide an array of responses for C1, such that we now have:
C1_SN1
C1_Name1
C1_Address1
C1_Phone1
------
C1_SN2
C1_Name2
C1_Address2
C1_Phone2
-----
C1_SN3
C1_Name3
C1_Address3
C1_Phone3
I expect multiple teams to fill in this table. How do I set up the model in Sequelize? I have two options in mind.
Option 1
The first option I can think of is to create an extra 1:1 table between Foo and C1. But going with this option, I don't know how to bulkCreate the array of C1 responses in the C1 table.
Option 2
I think it's also possible to make the C1 column in the Foo table hold a nested array of values, such that if userA submits his data, it will have the nested array of C1. But I don't know how to go about this method either.
You need to create a separate table for C1. If the user selects true, build an array of objects and pass it to bulkCreate, like:
C1_SN (auto-increment)
C1_Name
C1_Address
C1_Phone
const values = [
    { C1_Name: "HELLo", C1_Address: "HELLo", C1_Phone: "987456321" },
    { C1_Name: "HELLo1", C1_Address: "HELLo", C1_Phone: "987456321s" }
];

C1.bulkCreate(values)
    .then(result => {
        console.log(result);
    })
    .catch(error => {
        console.log(error);
    });
For reference, check the official docs:
Sequelize bulkCreate

Many to many query joins in aqueduct

I have an A -> AB <- B many-to-many relationship between two ManagedObjects (A and B), where AB is the junction table.
When querying A from the database, how do I join B values onto the AB join objects?
Query<A> query = Query<A>(context)
    ..join(set: (a) => a.ab);
It gives me a list of A objects which contain AB join objects, but the AB objects don't include full B objects, only b.id (not the other fields from class B).
Cheers
When you call join, a new Query<T> is created and returned from that method, where T is the joined type. So if a.ab is of type AB, Query<A>.join returns a Query<AB> (it is linked to the original query internally).
Since you have a new Query<AB>, you can configure it like any other query, including initiating another join, adding sorting descriptors and where clauses.
There are some stylistic syntax choices to be made. You can condense this query into a one-liner:
final query = Query<A>(context)
    ..join(set: (a) => a.ab).join(object: (ab) => ab.b);
final results = await query.fetch();
This is OK if the query remains as-is, but as you add more criteria to a query, the difference between the dot operator and the cascade operator becomes harder to track. I often pull the join query into its own variable. (Note that you don't call any execution methods on the join query):
final query = Query<A>(context);
final join = query.join(set: (a) => a.ab)
    ..join(object: (ab) => ab.b);
final results = await query.fetch();

Table based declarative reactive programming

Is there a programming language or package that supports table-based reactive declarative programming in memory, very similar to the SQL language and its trigger facility?
For example, I could define PERSON and JOB tables as functions
name: PERSON -> STRING
female: PERSON -> BOOLEAN
mother: PERSON -> PERSON
father: PERSON -> PERSON
title: JOB -> STRING
company: JOB -> STRING
salary: JOB -> INTEGER
employee: JOB -> PERSON
Then I would like to calculate functions like:
childcount: PERSON -> INTEGER
childcount(P) = |{ Q in PERSON : father(Q) = P or mother(Q) = P }|
income: PERSON -> INTEGER
income(P) = SUM { salary(J) : J in JOB and employee(J) = P }
incomeperchild: PERSON -> INTEGER
incomeperchild(P) = income(P) / childcount(P)
parent: PERSON x PERSON -> BOOLEAN
parent(P,Q) = (P = father(Q)) or (P = mother(Q))
error: PERSON -> BOOLEAN
error(P) = (female(P) and (exists Q in PERSON)(father(Q) = P))
or (not female(P) and (exists Q in PERSON)(mother(Q) = P))
or (exists Q in PERSON)(parent(P,Q) and error(Q))
So essentially I would like to have calculated columns in tables that are automatically updated whenever values in the tables change. Similar things could be expressed with SQL triggers, but I would like to have such functionality built into a language and executed in memory. The propagation of changes needs to be optimized. Are there frameworks to do this?
The observer pattern and reactive programming focus on individual objects. But I do not want to maintain pointers and extra structure for each row in my tables, as there could be millions of rows. All the rules are generic (although they can refer to other rows via parent/child relations, etc.), so some form of recursion is required.
One way to approach this is via attribute grammars: http://www.haskell.org/haskellwiki/Attribute_grammar
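As a minimal illustration of the calculated-columns idea (not of attribute grammars themselves), derived columns can be kept current by re-running registered rules whenever a base value changes. A naive Python sketch with hypothetical names, recomputing everything on each update rather than propagating changes incrementally:

```python
class Table:
    """Toy in-memory table: base columns plus derived columns that are
    recomputed from scratch whenever a base value changes."""
    def __init__(self, rows):
        self.rows = rows       # list of dicts (base columns)
        self.derived = {}      # column name -> rule(row, table) -> value

    def define(self, name, rule):
        self.derived[name] = rule
        self._recompute()

    def update(self, index, column, value):
        self.rows[index][column] = value
        self._recompute()      # propagate the change to derived columns

    def _recompute(self):
        for row in self.rows:
            for name, rule in self.derived.items():
                row[name] = rule(row, self)

person = Table([
    {"name": "Ann", "father": None},
    {"name": "Bob", "father": "Ann"},
    {"name": "Cal", "father": "Ann"},
])

# childcount(P) = |{ Q : father(Q) = P }|, as in the question (fathers only here).
person.define("childcount",
              lambda row, t: sum(1 for q in t.rows if q["father"] == row["name"]))

print(person.rows[0]["childcount"])  # 2
person.update(2, "father", "Bob")    # reassign Cal's father
print(person.rows[0]["childcount"])  # 1
```

The hard part the question asks about is exactly what this sketch skips: recomputing only the affected derived values instead of the whole table.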

How to simplify this LINQ to Entities Query to make a less horrible SQL statement from it? (contains Distinct,GroupBy and Count)

I have this SQL expression:
SELECT Musclegroups.Name, COUNT(DISTINCT Workouts.WorkoutID) AS Expr1
FROM Workouts INNER JOIN
Series ON Workouts.WorkoutID = Series.WorkoutID INNER JOIN
Exercises ON Series.ExerciseID = Exercises.ExerciseID INNER JOIN
Musclegroups ON Musclegroups.MusclegroupID = Exercises.MusclegroupID
GROUP BY Musclegroups.Name
Since I'm working on a project which uses EF in a WCF RIA LinqToEntitiesDomainService, I have to query this with LINQ (if this isn't a must, please inform me).
I made this expression:
var WorkoutCountPerMusclegroup = (from s in ObjectContext.Series1
                                  join w in ObjectContext.Workouts on s.WorkoutID equals w.WorkoutID
                                  where w.UserID.Equals(userid) && w.Type.Equals("WeightLifting")
                                  group s by s.Exercise.Musclegroup into g
                                  select new StringKeyIntValuePair
                                  {
                                      TestID = g.Select(n => n.Exercise.MusclegroupID).FirstOrDefault(),
                                      Key = g.Select(n => n.Exercise.Musclegroup.Name).FirstOrDefault(),
                                      Value = g.Select(n => n.WorkoutID).Distinct().Count()
                                  });
The StringKeyIntValuePair is just a custom entity type I made so I can send the info down to the Silverlight client. This is also why I need to set a "TestID" for it: it is an entity and needs one.
And the problem is, that this linq query produces this horrible SQL statement:
http://pastebay.com/144532
I suppose there is a better way to query this information, a better linq expression maybe. Or is it possible to just query with plain SQL somehow?
EDIT:
I realized that the TestID is unnecessary because the other property named "Key" (the one on which Im grouping) becomes the key of the group, so it will be a key also. And after this, my query looks like this:
var WorkoutCountPerMusclegroup = (from s in ObjectContext.Series1
                                  join w in ObjectContext.Workouts on s.WorkoutID equals w.WorkoutID
                                  where w.UserID.Equals(userid) && w.Type.Equals("WeightLifting")
                                  group w.WorkoutID by s.Exercise.Musclegroup.Name into g
                                  select new StringKeyIntValuePair
                                  {
                                      Key = g.Key,
                                      Value = g.Select(n => n).Distinct().Count()
                                  });
This produces the following SQL: http://pastebay.com/144545
This seems far better than the previous SQL statement from the half-baked LINQ query.
But is this good enough? Or is this the boundary of LINQ to Entities' capabilities, and if I want even cleaner SQL, should I make another DomainService which operates with LINQ to SQL or something else?
Or would the best way be to use a stored procedure that returns rowsets? If so, is there a best practice for doing this asynchronously, like a simple WCF RIA DomainService query?
I would like to know best practices as well.
Compiling a LINQ lambda expression can take a lot of time (3–30 s), especially when using group by and then FirstOrDefault (for left outer joins, meaning only taking values from the first row in the group).
The generated SQL's execution might not be that bad, but the compilation using DbContext cannot be precompiled with .NET 4.0.
As example 1, something like:
var q = from a in DbContext.A
        join b ... into bb
        from b in bb.DefaultIfEmpty()
        group new { a, b } by new { ... } into g
        select new
        {
            Name1 = g.Key.Name1,
            Salary = g.Sum(p => p.b.Salary),
            SomeDate = g.FirstOrDefault().b.SomeDate
        };
Each FirstOrDefault we added in one case caused +2 s of compile time, which added up 3 times = 6 s just to compile, not to load data (which takes less than 500 ms). This basically destroys your application's usability: the user will be waiting many times for no reason.
The only way we have found so far to speed up the compilation is to mix query syntax with method (lambda) syntax (which might not be the correct terminology).
Example 2: a refactoring of example 1.
var q = (from a in DbContext.A
         join b ... into bb
         from b in bb.DefaultIfEmpty()
         select new { a, b })
        .GroupBy(p => new { ... })
        .Select(g => new
        {
            Name1 = g.Key.Name1,
            Salary = g.Sum(p => p.b.Salary),
            SomeDate = g.FirstOrDefault().b.SomeDate
        });
The above example compiled a lot faster than example 1 in our case, but still not fast enough, so the only solution for us in response-critical areas is to revert to native SQL (to Entities) or to use views or stored procedures (in our case Oracle PL/SQL).
Once we have time, we are going to test whether precompilation works in .NET 4.5 and/or .NET 5.0 for DbContext.
Hope this helps; maybe others can share their solutions.
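Stripped of the ORM, the query's core is just "count distinct workout IDs per muscle-group name". A language-neutral sketch in Python over plain rows (hypothetical data, not the EF API), for checking what any of the generated SQL variants must compute:

```python
from collections import defaultdict

# Joined rows, as they would come out of Workouts ⋈ Series ⋈ Exercises ⋈ Musclegroups.
rows = [
    {"musclegroup": "Chest", "workout_id": 1},
    {"musclegroup": "Chest", "workout_id": 1},  # same workout twice -> counted once
    {"musclegroup": "Chest", "workout_id": 2},
    {"musclegroup": "Back",  "workout_id": 2},
]

# GROUP BY Musclegroups.Name with COUNT(DISTINCT Workouts.WorkoutID):
# collect workout IDs per group in a set, then take the set sizes.
distinct = defaultdict(set)
for r in rows:
    distinct[r["musclegroup"]].add(r["workout_id"])

workout_count = {name: len(ids) for name, ids in distinct.items()}
print(workout_count)  # {'Chest': 2, 'Back': 1}
```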

Speed Performance In Recursive SQL Requests

I have some categories, and these categories have unlimited subcategories.
In the database table, the fields are ID, UpperID, and Title.
If I load a category and its subcategories into a DataTable with a recursive method in my program (an ASP.NET project),
performance is very bad.
And many users will use this application, so everything gets worse.
Maybe all the categories could be filled into a cache object, so that we needn't go to the database.
But the category count is 15,000 or 20,000,
so I don't think that's a good method.
What can I do for better performance?
Can you give me any suggestions?
Caching or other in-memory persistence is by far better than doing this on a relational system :) ... hey, it's OOP!
Just my 2 cents!
eg.
var categories = /* method for domain-objects */.ToDictionary(category => category.ID);

foreach (var category in categories.Values)
{
    if (!category.ParentCategoryID.HasValue)
    {
        continue;
    }

    Category parentCategory;
    if (categories.TryGetValue(category.ParentCategoryID.Value, out parentCategory))
    {
        parentCategory.AddSubCategory(category);
    }
}
et voilà ... your tree is ready to go!
edit:
Do you know exactly where your performance bottleneck is?
To give you some ideas, e.g.:
loading from the database
building up the structure
querying the structure
Loading from the database:
you should load it once and be sure to have some change tracking/notification to pick up changes (if any are made), or optimize your query!
Building up the structure:
the way I create the tree (the traversal part) is the fastest you can do with a Dictionary<TKey, TValue>.
Querying the structure:
the structure I've used in my example is faster than List<T>. Dictionary<TKey, TValue> uses an index over the keys, so you may use int for the keys (IDs).
edit:
So you use DataTable to fix the
problem. Now you've got 2 problems: me
and DataTable
What do you have right now? Where are you starting from? Can you determine where your mudhole is? Give us code!
Thanks all,
I found my solution with Common Table Expressions (CTE), fifty-fifty.
They allow fast recursive queries.
WITH CatCTE (OID, Name, ParentID)
AS
(
    SELECT OID, Name, ParentID
    FROM Work.dbo.eaCategory
    WHERE OID = 0

    UNION ALL

    SELECT C.OID, C.Name, C.ParentID
    FROM Work.dbo.eaCategory C
    JOIN CatCTE AS CTE ON C.ParentID = CTE.OID
)
SELECT * FROM CatCTE
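What the recursive CTE computes is a breadth-first expansion from the anchor row, and the same expansion can be done in memory over cached rows. A Python sketch with hypothetical rows, mirroring the dictionary-based answer above:

```python
from collections import defaultdict

# Flat category rows: (OID, Name, ParentID), as in the eaCategory table.
rows = [
    (0, "root", None),
    (1, "Books", 0),
    (2, "Music", 0),
    (3, "Sci-Fi", 1),
]

# Index children by parent once (O(n)), so each expansion step is a dictionary
# lookup rather than a table scan -- this is what makes the in-memory version fast.
children = defaultdict(list)
for oid, name, parent in rows:
    children[parent].append((oid, name, parent))

# Expand from the anchor (OID = 0), like the CTE's UNION ALL step.
result = []
frontier = [r for r in rows if r[0] == 0]
while frontier:
    row = frontier.pop(0)
    result.append(row)
    frontier.extend(children[row[0]])

print([r[1] for r in result])  # ['root', 'Books', 'Music', 'Sci-Fi']
```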