Specman/e constraint (for each in) iteration

Can I iterate through only a part of a list in e, in a constraint?
For example, this code will go through the whole layer_l list:
<'
struct layer_s {
    a : int;
    keep soft a == 3;
};

struct layer_gen_s {
    n_layers : int;
    keep soft n_layers == 8;
    layer_l : list of layer_s;
    keep layer_l.size() == read_only(n_layers);
};

extend sys {
    layer_gen : layer_gen_s;
    run() is also {
        messagef(LOW, "n_layers = %0d", layer_gen.n_layers);
        for each in layer_gen.layer_l {
            messagef(LOW, "layer[%2d]: a = %0d", index, it.a);
        };
    };
};
-- this will go through all layer_l
extend layer_gen_s {
    keep for each (layer) using index (i) in layer_l {
        layer.a == 7;
    };
};
'>
But I would like the for each constraint to iterate over only, say, the first 2 items. I tried the code below, but it doesn't work:
-- this produces an error
extend layer_gen_s {
    keep for each (layer) using index (i) in [layer_l.all(index < 2)] {
        layer.a == 7;
    };
};
Also I don't want to use implication, so this is not what I want:
-- not what I want, I want to specify the sub-range directly in the iterated list
extend layer_gen_s {
    keep for each (layer) using index (i) in layer_l {
        (i < 2) => {
            layer.a == 7;
        };
    };
};

The list slicing operator doesn't work either, because the path in a for..each constraint is limited to a simple path (e.g. a list field):
keep for each (layer) using index (i) in layer_l[0..2] {
    // ...
};
This is a Specman limitation.
To force looping over a sub-list, your best bet is to create that sub-list as a separate field:
layer_subl : list of layer_s;
keep layer_subl.size() == 3;
keep for each (layer) using index (i) in layer_subl {
    layer == layer_l[i];
};
Now you can loop on only the first 3 elements within your for..each constraint:
keep for each (layer) in layer_subl {
    layer.a == 7;
};
This avoids using implication inside the constraint. Whether this is worth it is for you to decide. Also note that both lists will contain the same objects (this is good); no extra struct objects get created.
Creating the sub-list like this is boilerplate code that could be handled by the tool itself, which would make the code much more concise and readable. You could contact your vendor and request this feature.

Related

Run a regex on a Supply or other stream-like sequence?

Suppose I have a Supply, Channel, IO::Handle, or similar stream-like source of text, and I want to scan it for substrings matching a regex. I can't be sure that matching substrings do not cross chunk boundaries. The total length is potentially infinite and cannot be slurped into memory.
One way this would be possible is if I could instantiate a regex matching engine and feed it chunks of text while it maintains its state. But I don't see any way to do that -- I only see methods to run the match engine to completion.
Is this possible?
After some more searching, I may have answered my own question. Specifically, it seems Seq.comb is capable of combining chunks and lazily processing them:
my $c = supply {
    whenever Supply.interval(1.0) -> $v {
        my $letter = do if ($v mod 2 == 0) { "a" } else { "b" };
        my $chunk = $letter x ($v + 1);
        say "Pushing {$chunk}";
        emit($chunk);
    }
};

my $c2 = $c.comb(/a+b+/);

react {
    whenever $c2 -> $v {
        say "Got {$v}";
    }
}
See also the concurrency features used to construct this example.
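For comparison, the same cross-boundary problem can be attacked by hand with a rolling buffer. Below is a hypothetical Go sketch (mine, not from the original thread): a greedy match that ends strictly before the end of the buffered text cannot be extended by later input, so it is safe to emit; a match that still touches the end of the buffer is held back and re-scanned once the next chunk arrives.

package main

import (
    "fmt"
    "regexp"
    "strings"
)

// scanStream reports non-overlapping matches of re across chunk boundaries.
// Matches ending strictly before the buffer's end are final and are emitted;
// a match touching the end might still grow, so it is kept for the next round.
// Note the buffer can grow without bound if the input contains no matches.
func scanStream(chunks <-chan string, re *regexp.Regexp, out chan<- string) {
    buf := ""
    for chunk := range chunks {
        buf += chunk
        keepFrom := 0
        for _, loc := range re.FindAllStringIndex(buf, -1) {
            if loc[1] == len(buf) {
                keepFrom = loc[0] // might still grow; hold it back
                break
            }
            out <- buf[loc[0]:loc[1]]
            keepFrom = loc[1]
        }
        buf = buf[keepFrom:]
    }
    // End of stream: whatever matches remain are final.
    for _, loc := range re.FindAllStringIndex(buf, -1) {
        out <- buf[loc[0]:loc[1]]
    }
    close(out)
}

func main() {
    chunks := make(chan string)
    out := make(chan string)
    go func() {
        defer close(chunks)
        // Same shape as the Raku example: "a", "bb", "aaa", "bbbb", ...
        for v := 0; v < 6; v++ {
            letter := "b"
            if v%2 == 0 {
                letter = "a"
            }
            chunks <- strings.Repeat(letter, v+1)
        }
    }()
    go scanStream(chunks, regexp.MustCompile(`a+b+`), out)
    for m := range out {
        fmt.Println("Got", m)
    }
}

Unlike the Raku version, this re-scans the held-back part of the buffer on every chunk; a resumable matching engine, as the question asks for, would avoid that rework.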

CGAL: What is join_facet() really doing with circulators?

I'm trying to use join_facet() iteratively to grow a single facet starting from a given facet_handle. However, I'm running into trouble when using the Halfedge_around_facet_circulator in combination with join_facet(): my while-loop no longer terminates (it terminates fine if I don't use join_facet()), and the circulator seems to point to something else.
I assume that the join operation is somehow changing that Halfedge_around_facet_circulator. But why, and how do I solve this?
Polyhedron P_out; // a valid pure triangle Polyhedron
bool merge_next = true;
while (merge_next == true) {
    // facet_handle points to a facet of P_out
    Polyhedron::Halfedge_around_facet_circulator hit = facet_handle->facet_begin();
    merge_next = false;
    do {
        if (!(hit->is_border_edge())) {
            if (coplanar(hit->facet(), hit->opposite()->facet())) {
                if (CGAL::circulator_size(hit->opposite()->vertex_begin()) >= 3
                    && CGAL::circulator_size(hit->vertex_begin()) >= 3
                    && hit->facet()->id() != hit->opposite()->facet()->id()) {
                    Polyhedron::Halfedge_handle hit2 = hit;
                    P_out.join_facet(hit2);
                    merge_next = true;
                }
            }
        }
    } while (++hit != facet_handle->facet_begin());
}
What this code should do: given the facet_handle, iterate over the corresponding halfedges of the facet and merge the neighboring facet where possible; then take the facet_handle of the newly created facet and do the same again, until no neighboring facets are left to merge.
Edit:
There are areas where the code runs fine, and others where it crashes at hit->is_border_edge() after the first join_facet().

GoLang, REST, PATCH and building an UPDATE query

For a few days I was struggling with how to handle PATCH requests in a Go REST API, until I found an article about using pointers and the omitempty tag, which I applied and which works fine. Fine, that is, until I realized I still have to build the UPDATE SQL query.
My struct looks like this:
type Resource struct {
    Name        *string `json:"name,omitempty" sql:"resource_id"`
    Description *string `json:"description,omitempty" sql:"description"`
}
I am expecting a PATCH /resources/{resource-id} request with a request body like this:
{"description":"Some new description"}
In my handler I will build the Resource object this way (ignoring imports, ignoring error handling):
var resource Resource
resourceID, _ := mux.Vars(r)["resource-id"]
d := json.NewDecoder(r.Body)
d.Decode(&resource)
// at this point our resource object should only contain
// the Description field with the value from JSON in request body
Now, for a normal UPDATE (PUT request) I would do this (simplified):
stmt, _ := db.Prepare(`UPDATE resources SET description = ?, name = ? WHERE resource_id = ?`)
res, _ := stmt.Exec(resource.Description, resource.Name, resourceID)
The problem with PATCH and the omitempty tag is that the object may be missing multiple properties, so I cannot just prepare a statement with hardcoded fields and placeholders... I will have to build it dynamically.
And here comes my question: how can I build such an UPDATE query dynamically? In the best case I'd need some solution that identifies the set properties, gets their SQL field names (probably from the tags), and then builds the UPDATE query. I know I can use reflection to get the object's properties, but I have no idea how to get their sql tag names, and of course I'd like to avoid reflection here if possible... Or I could simply check each property for nil, but in real life the structs are much bigger than the example provided here...
Can somebody help me with this one? Has somebody already had to solve the same or a similar situation?
SOLUTION:
Based on the answers here I was able to come up with this abstract solution. The SQLPatches function builds an SQLPatch struct from the given struct (so it is not specific to any concrete struct):
import (
    "encoding/json"
    "fmt"
    "reflect"
    "strings"
)

const tagname = "sql"

type SQLPatch struct {
    Fields []string
    Args   []interface{}
}

func SQLPatches(resource interface{}) SQLPatch {
    var sqlPatch SQLPatch
    rType := reflect.TypeOf(resource)
    rVal := reflect.ValueOf(resource)
    n := rType.NumField()

    sqlPatch.Fields = make([]string, 0, n)
    sqlPatch.Args = make([]interface{}, 0, n)

    for i := 0; i < n; i++ {
        fType := rType.Field(i)
        fVal := rVal.Field(i)
        tag := fType.Tag.Get(tagname)

        // skip nil properties (not going to be patched), skip unexported fields,
        // skip fields to be skipped for SQL
        if fVal.IsNil() || fType.PkgPath != "" || tag == "-" {
            continue
        }
        // if no tag is set, use the field name
        if tag == "" {
            tag = fType.Name
        }
        // and make the tag lowercase in the end
        tag = strings.ToLower(tag)
        sqlPatch.Fields = append(sqlPatch.Fields, tag+" = ?")

        var val reflect.Value
        if fVal.Kind() == reflect.Ptr {
            val = fVal.Elem()
        } else {
            val = fVal
        }

        switch val.Kind() {
        case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
            sqlPatch.Args = append(sqlPatch.Args, val.Int())
        case reflect.String:
            sqlPatch.Args = append(sqlPatch.Args, val.String())
        case reflect.Bool:
            if val.Bool() {
                sqlPatch.Args = append(sqlPatch.Args, 1)
            } else {
                sqlPatch.Args = append(sqlPatch.Args, 0)
            }
        }
    }
    return sqlPatch
}
Then I can simply call it like this:
type Resource struct {
    Description *string `json:"description,omitempty"`
    Name        *string `json:"name,omitempty"`
}

func main() {
    var r Resource
    json.Unmarshal([]byte(`{"description": "new description"}`), &r)

    sqlPatch := SQLPatches(r)
    data, _ := json.Marshal(sqlPatch)
    fmt.Printf("%s\n", data)
}
You can check it at the Go Playground. The only problem I see here is that I allocate both slices with capacity equal to the number of fields in the passed struct, which may be 10 even though I might only want to patch one property, resulting in allocating more memory than needed... Any idea how to avoid this?
I recently had the same problem with PATCH, and looking around I found this article. It also references RFC 5789, which says:
The difference between the PUT and PATCH requests is reflected in the way the server processes the enclosed entity to modify the resource identified by the Request-URI. In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version. The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH.
e.g:
[
    { "op": "test", "path": "/a/b/c", "value": "foo" },
    { "op": "remove", "path": "/a/b/c" },
    { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
    { "op": "replace", "path": "/a/b/c", "value": 42 },
    { "op": "move", "from": "/a/b/c", "path": "/a/b/d" },
    { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" }
]
This set of instructions should make it easier to build the update query.
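To illustrate, here is a hypothetical Go sketch (mine, not from the article or the RFC) that turns the "replace" operations of such a patch document into an UPDATE statement. The flat-path assumption and the table/column names are made up for the example:

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

// PatchOp is one instruction of a JSON Patch document (RFC 6902).
type PatchOp struct {
    Op    string      `json:"op"`
    Path  string      `json:"path"`
    Value interface{} `json:"value"`
}

func buildUpdate(table, idColumn string, id interface{}, ops []PatchOp) (string, []interface{}) {
    var sets []string
    var args []interface{}
    for _, op := range ops {
        if op.Op != "replace" {
            continue // a full implementation also handles add/remove/move/copy/test
        }
        // Assumes flat paths like "/description". The derived column name is
        // client-controlled, so it MUST be checked against an allow-list
        // before being spliced into SQL -- otherwise this is an injection hole.
        column := strings.TrimPrefix(op.Path, "/")
        sets = append(sets, column+" = ?")
        args = append(args, op.Value)
    }
    args = append(args, id)
    query := "UPDATE " + table + " SET " + strings.Join(sets, ", ") +
        " WHERE " + idColumn + " = ?"
    return query, args
}

func main() {
    var ops []PatchOp
    json.Unmarshal([]byte(`[{"op":"replace","path":"/description","value":"new text"}]`), &ops)
    q, args := buildUpdate("resources", "resource_id", 42, ops)
    fmt.Println(q, args) // UPDATE resources SET description = ? WHERE resource_id = ? [new text 42]
}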
EDIT
This is how you would obtain the sql tags, but you will have to use reflection:
package main

import (
    "fmt"
    "reflect"
)

type Resource struct {
    Name        *string `json:"name,omitempty" sql:"resource_id"`
    Description *string `json:"description,omitempty" sql:"description"`
}

func main() {
    sp := "sort of string"
    r := Resource{Description: &sp}

    rt := reflect.TypeOf(r)  // reflect.Type
    rv := reflect.ValueOf(r) // reflect.Value

    for i := 0; i < rv.NumField(); i++ { // iterate over all the fields
        if !rv.Field(i).IsNil() { // check it is not nil
            // Here you would do what you want with the sql tag.
            // Creating the query would be easy; however, I'm
            // not sure you would execute the statement.
            fmt.Println(rt.Field(i).Tag.Get("sql")) // Output: description
        }
    }
}
I understand you don't want to use reflection, but this may still be a better answer than the previous one, as your comment states.
EDIT 2:
About the allocation: read the Effective Go guidelines on data structures and allocation:
// Here you are allocating a slice of length 0 with a capacity of n
sqlPatch.Fields = make([]string, 0, n)
sqlPatch.Args = make([]interface{}, 0, n)
The signature is make(Type, length, capacity), where the capacity argument is optional.
Consider the following example:
// newly allocated zero-length slice via a composite literal
// length: 0
// capacity: 0
testSlice := []int{}
fmt.Println(len(testSlice), cap(testSlice)) // 0 0
fmt.Println(testSlice)                      // []

// allocated with make (the backing array is zeroed)
// length: 0
// capacity: 10
testSlice = make([]int, 0, 10)
fmt.Println(len(testSlice), cap(testSlice)) // 0 10
fmt.Println(testSlice)                      // []

// allocated with make
// length: 2
// capacity: 4
testSlice = make([]int, 2, 4)
fmt.Println(len(testSlice), cap(testSlice)) // 2 4
fmt.Println(testSlice)                      // [0 0]
In your case, you may want to do the following:
// Replace this
sqlPatch.Fields = make([]string, 0, n)
sqlPatch.Args = make([]interface{}, 0, n)

// with this, or simply omit the capacity in the make calls above
sqlPatch.Fields = []string{}
sqlPatch.Args = []interface{}{}

// The allocation will then grow as needed. The exact capacities are
// implementation-dependent; with the current runtime the growth looks
// roughly like this: length - capacity
testSlice := []int{}             // 0 - 0
testSlice = append(testSlice, 1) // 1 - 1
testSlice = append(testSlice, 1) // 2 - 2
testSlice = append(testSlice, 1) // 3 - 4
testSlice = append(testSlice, 1) // 4 - 4
testSlice = append(testSlice, 1) // 5 - 8
Alright, I think the solution I used back in 2016 was quite over-engineered for an even more over-engineered problem, and was completely unnecessary. The question asked here was very generalized; however, we were building a solution able to build its SQL query on its own, based on the JSON object, the query parameters and/or the headers sent in the request, and all of that as generic as possible.
Nowadays I think the best solution is to avoid PATCH unless truly necessary. And even then you can still use PUT and replace the whole resource with the patched properties coming from the client, i.e. not giving the client the option to send any PATCH request to your server, and letting them deal with partial updates on their own.
However, this is not always recommended, especially for bigger objects, where you can save some CO2 by reducing the amount of redundant transmitted data. Whenever I need to enable PATCH for the client today, I simply define what can be patched; this gives me clarity and a final struct.
Note that I am using an IETF-documented JSON Merge Patch implementation. I consider JSON Patch (also documented by the IETF) redundant: hypothetically, we could replace the whole REST API with a single JSON Patch endpoint and let clients control the resources via the allowed operations. I also think implementing such a JSON Patch on the server side is far more complicated. The only use-case I can think of for it is implementing a REST API over a file system...
So the struct may be defined as in my OP:
type ResourcePatch struct {
    ResourceID  some.UUID `json:"resource_id"`
    Description *string   `json:"description,omitempty"`
    Name        *string   `json:"name,omitempty"`
}
In the handler func, I'd decode the ID from the path into the ResourcePatch instance and unmarshal the JSON from the request body into it, too.
Sending only this
{"description":"Some new description"}
to PATCH /resources/<UUID>
I should end up with this object:
ResourcePatch
* ResourceID {"UUID"}
* Description {"Some new description"}
And now the magic: use simple logic to build the query and the exec parameters. For some it may seem tedious, repetitive or unclean for bigger PATCH objects, but my reply to this would be: if your PATCH object consists of more than 50% of the original resource's properties (or simply too many for your liking), use PUT and expect the clients to send (and replace) the whole resource instead.
It could look like this:
func (s Store) patchMyResource(r models.ResourcePatch) error {
    q := `UPDATE resources SET `
    qParts := make([]string, 0, 2)
    args := make([]interface{}, 0, 2)

    if r.Description != nil {
        qParts = append(qParts, `description = ?`)
        args = append(args, r.Description)
    }
    if r.Name != nil {
        qParts = append(qParts, `name = ?`)
        args = append(args, r.Name)
    }
    // in real code, return early here if qParts is empty (nothing to patch)
    q += strings.Join(qParts, ", ") + ` WHERE resource_id = ?`
    args = append(args, r.ResourceID)

    _, err := s.db.Exec(q, args...)
    return err
}
I think there's nothing simpler and more effective: no reflection, no overkill, and it reads quite well.
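For completeness, the handler described above could look roughly like this. This is a hypothetical sketch: the uuidFromString helper, the gorilla/mux routing and the Store wiring are assumptions, not part of the original answer:

func (s Store) handlePatchResource(w http.ResponseWriter, r *http.Request) {
    var patch models.ResourcePatch
    // hypothetical helper turning the path segment into some.UUID
    patch.ResourceID = uuidFromString(mux.Vars(r)["resource-id"])
    if err := json.NewDecoder(r.Body).Decode(&patch); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    if err := s.patchMyResource(patch); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusNoContent)
}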
Struct tags are only visible through reflection, sorry.
If you don't want to use reflection (or, I think, even if you do), it is Go-like to define a function or method that "marshals" your struct into something that can easily be turned into a comma-separated list of SQL updates, and then use that. Build small things to help solve your problems.
For example given:
type Resource struct {
    Name        *string `json:"name,omitempty" sql:"resource_id"`
    Description *string `json:"description,omitempty" sql:"description"`
}
You might define:
func (r Resource) SQLUpdates() SQLUpdates {
    var s SQLUpdates
    if r.Name != nil {
        s.add("resource_id", *r.Name)
    }
    if r.Description != nil {
        s.add("description", *r.Description)
    }
    return s
}
where the type SQLUpdates looks something like this:
type SQLUpdates struct {
    assignments []string
    values      []interface{}
}

func (s *SQLUpdates) add(key string, value interface{}) {
    if s.assignments == nil {
        s.assignments = make([]string, 0, 1)
    }
    if s.values == nil {
        s.values = make([]interface{}, 0, 1)
    }
    s.assignments = append(s.assignments, fmt.Sprintf("%s = ?", key))
    s.values = append(s.values, value)
}

func (s SQLUpdates) Assignments() string {
    return strings.Join(s.assignments, ", ")
}

func (s SQLUpdates) Values() []interface{} {
    return s.values
}
See it working (sorta) here: https://play.golang.org/p/IQAHgqfBRh
If you have deep structs-within-structs, it should be fairly easy to build on this. And if you change to an SQL engine that allows or encourages positional arguments like $1 instead of ?, it's easy to add that behavior to just the SQLUpdates struct without changing any code that uses it.
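For instance, a PostgreSQL-style variant could be added as one more small method (a sketch of mine, not from the original answer):

// addPositional is like add, but renders PostgreSQL-style placeholders
// ($1, $2, ...) numbered by the argument's position.
func (s *SQLUpdates) addPositional(key string, value interface{}) {
    s.values = append(s.values, value)
    s.assignments = append(s.assignments,
        fmt.Sprintf("%s = $%d", key, len(s.values)))
}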
For the purpose of getting arguments to pass to Exec, you would just expand the output of Values() with the ... operator.
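Concretely, that final step might look like this (a sketch; the db and resourceID variables are assumed from earlier in the thread):

updates := resource.SQLUpdates()
query := "UPDATE resources SET " + updates.Assignments() + " WHERE resource_id = ?"
args := append(updates.Values(), resourceID)
_, err := db.Exec(query, args...)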

Building a linked list in yacc with left recursive Grammar

I want to build a linked list of data in yacc.
My Grammar reads like this:
list: item
| list ',' item
;
I have put the appropriate structures in place in the declarations section. But I am not able to figure out a way to get a linked list out of this data. I have to store the recursively obtained data and then redirect it for other purposes.
Basically I am looking for a solution like this one:
https://stackoverflow.com/a/1429820/5134525
But this solution is for right recursion and doesn't work with left.
It depends heavily on how you implement your linked list, but once you have that, it is straightforward. Something like:
#include <stdlib.h> /* for malloc */

struct list_node {
    struct list_node *next;
    value_t value;
};

struct list {
    struct list_node *head, **tail;
};

struct list *new_list() {
    struct list *rv = malloc(sizeof(struct list));
    rv->head = 0;
    rv->tail = &rv->head;
    return rv;
}

void push_back(struct list *list, value_t value) {
    struct list_node *node = malloc(sizeof(struct list_node));
    node->next = 0;
    node->value = value;
    *list->tail = node;
    list->tail = &node->next;
}
allows you to write your yacc code as:
list: item { push_back($$ = new_list(), $1); }
| list ',' item { push_back($$ = $1, $3); }
;
Of course, you should probably add checks for running out of memory and exit gracefully in that case.
If you use a left-recursive rule, then you need to push each new item onto the end of the list rather than the beginning.
If your linked list implementation doesn't support push_back, then push the successive items at the front and reverse the list when it's finished, as sketched below.
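As a language-neutral illustration of that push-front-then-reverse fallback, here is a small self-contained sketch (in Go for brevity; the node type and value type are assumptions):

package main

import "fmt"

type node struct {
    next  *node
    value int
}

// pushFront is the O(1) insertion a singly linked list gives you naturally.
func pushFront(head *node, value int) *node {
    return &node{next: head, value: value}
}

// reverse restores the original input order after a series of pushFront calls.
func reverse(head *node) *node {
    var prev *node
    for head != nil {
        next := head.next
        head.next = prev
        prev = head
        head = next
    }
    return prev
}

func main() {
    var head *node
    for _, v := range []int{1, 2, 3} {
        head = pushFront(head, v) // builds 3 -> 2 -> 1
    }
    for n := reverse(head); n != nil; n = n.next {
        fmt.Println(n.value) // prints 1, 2, 3
    }
}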
Very simple.
list
    : item
      {
          $$ = new MyList<SomeType>();
          $$->add($1);
      }
    | list ',' item
      {
          $1->add($3);
          $$ = $1;
      }
    ;
assuming you are using C++, which you didn't state, and assuming you have some MyList<T> class with an add(T) method.

Counter as variable in for-in-loops

When normally using a for-in-loop, the counter (in this case number) is a constant in each iteration:
for number in 1...10 {
    // do something
}
This means I cannot change number in the loop:
for number in 1...10 {
    if number == 5 {
        ++number
    }
}
// doesn't compile, since the prefix operator '++' can't be performed on the constant 'number'
Is there a way to declare number as a variable, without declaring it before the loop, or using a normal for-loop (with initialization, condition and increment)?
To understand why i can't be mutable, you need to know what for…in is shorthand for: for i in 0..<10 is expanded by the compiler to the following:
var g = (0..<10).generate()
while let i = g.next() {
    // use i
}
Every time around the loop, i is a freshly declared variable, the value of unwrapping the next result from calling next on the generator.
Now, that while can be written like this:
while var i = g.next() {
    // here you _can_ increment i:
    if i == 5 { ++i }
}
but of course, it wouldn’t help – g.next() is still going to generate a 5 next time around the loop. The increment in the body was pointless.
Presumably for this reason, for…in doesn't support the same var syntax for declaring its loop counter: it would be very confusing if you didn't realize how it worked, unlike with while, where you can see what is going on. (The var functionality is occasionally useful there, similar to how func f(var i) can be.)
If what you want is to skip certain iterations of the loop, your better bet (without resorting to C-style for or while) is to use a generator that skips the relevant values:
// iterate over every other integer
for i in 0.stride(to: 10, by: 2) { print(i) }

// skip a specific number
for i in (0..<10).filter({ $0 != 5 }) { print(i) }

let a = ["one","two","three","four"]
// ok so this one's a bit convoluted...
let everyOther = a.enumerate().filter { $0.0 % 2 == 0 }.map { $0.1 }.lazy
for s in everyOther {
    print(s)
}
The answer is "no", and that's a good thing. Otherwise, a grossly confusing behavior like this would be possible:
for number in 1...10 {
    if number == 5 {
        // This does not work
        number = 5000
    }
    println(number)
}
Imagine the confusion of someone looking at the number 5000 in the output of a loop that is supposedly bound to a range of 1 though 10, inclusive.
Moreover, what would Swift pick as the value after 5000? Should it stop? Should it continue with the next number in the range as if the assignment had not happened? Should it throw an exception on an out-of-range assignment? All three choices have some validity to them, so there is no clear winner.
To avoid situations like that, Swift designers made loop variables in range loops immutable.
Update Swift 5
for var i in 0...10 {
    print(i)
    i += 1 // changes only this iteration's copy; the next value still comes from the range
}
Note that for var gives you a mutable copy of the counter per iteration, but mutating it does not affect the sequence: the loop above still runs eleven times.