F# ASP.NET Core Minimal Web API - Why does the generated endpoint ignore parameter names when not supplied a lambda?

I created a .NET Core web API from the standard F# template and added these lines:
let add x y = x + y
app.MapGet("addUgly", new Func<_,_,_>(add))
app.MapGet("addPretty", new Func<_,_,_>(fun x y -> add x y))
When I access the addPretty endpoint, I can supply parameters with the desired names:
https://localhost:7129/addPretty?x=1&y=2
However, in order to access the addUgly endpoint, I must supply these parameters:
https://localhost:7129/addUgly?delegateArg0=3&delegateArg1=4
Is there a way to have the generated endpoint use the desired parameter names without using a lambda (or another construct that involves unnecessary boilerplate, like creating a controller)?
I checked whether add and (fun x y -> add x y) had differently structured type definitions, but they both have Invoke methods with parameters named x and y, so I don't know why those names get lost in one case but not the other.

Is there a way to have the generated endpoint use the desired parameter names without using a lambda?
I think the answer is No for now. This appears to be an open issue in the F# compiler. See the language suggestion: Use same parameters names when constructing a delegate from a function.
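As a diagnostic, you can watch the names diverge by inspecting the method that backs each delegate via reflection. A minimal sketch; the printed names are what the question's URLs suggest, since minimal APIs bind query parameters against the backing method's parameter names:

open System

let add x y = x + y

// The method backing a delegate carries the parameter names that
// ASP.NET Core minimal APIs bind query parameters against.
let names (d: Delegate) =
    d.Method.GetParameters() |> Array.map (fun p -> p.Name)

let ugly = Func<int, int, int>(add)                   // compiler-generated wrapper
let pretty = Func<int, int, int>(fun x y -> add x y)  // closure keeps x and y

printfn "%A" (names ugly)   // expected: [|"delegateArg0"; "delegateArg1"|]
printfn "%A" (names pretty) // expected: [|"x"; "y"|]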


A query parameter books accepts an array of enum values. Is there a way I can return a 400 Bad Request if a particular combination of values is received?

books?:
  type: array
  items:
    enum: [a, b, c, d]
Let's say I want to return a bad request whenever b and c come together. E.g.:
[a,b,c,d] -> invalid request
[a,b,c] -> invalid request
[b,c] -> invalid request
In short, if a request has both b and c together, can a 400 be returned using RAML?
You can declare different types of valid combinations and then use them as possible input types.
Something like:
types:
  validCombinationA:
    type: array
    items:
      enum:
        - a
        - b
        - d
  validCombinationB:
    type: array
    items:
      enum:
        - a
        - c
        - d
And then:
books?:
  type: validCombinationA | validCombinationB
That way, validation will fail whenever an invalid combination is used.
If the valid combinations are static and new values are unlikely to be added, this approach is not a big deal; otherwise, you will need to create a separate type for each valid combination.
It may also be worth considering other options for your use case (e.g. with OAS this can be done using elements such as oneOf, anyOf, allOf, not).
If the validation is quite simple, I'd prefer to do it this way rather than using the Validation Module or something else inside a flow, given that the latter probably has an impact on performance (run some quick tests to verify).
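For reference, here is a rough OpenAPI (OAS 3) sketch of the oneOf approach mentioned above; the schema names are hypothetical:

components:
  schemas:
    ValidCombinationA:
      type: array
      items:
        type: string
        enum: [a, b, d]
    ValidCombinationB:
      type: array
      items:
        type: string
        enum: [a, c, d]
    Books:
      # a books value must match one of the two allowed shapes
      oneOf:
        - $ref: '#/components/schemas/ValidCombinationA'
        - $ref: '#/components/schemas/ValidCombinationB'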
That's not possible. RAML is not meant to define this kind of data validation; it only defines the types and the structure of requests. You need to implement that kind of rule in the implementation of the API. In this particular case it seems that you are using Mule to implement the API, so you need to perform the validation in the Mule application's flows.

Modify Scilab/Xcos Block in Scilab 6 Gateway Function

I would like to modify an Xcos block from within a gateway function using the new (non-legacy) Scilab API, for example, replace the block's model property with a new model structure. In other words, do the same as the Scilab command(s):
m = scicos_model()
block.model = m
However, I did not manage to achieve this behavior with the functions from the Scilab 6 API: a block created by standard_define() is correctly passed to my gateway function, where this argument is available as a scilabVar of type 128. On the other hand, the Scilab help claims that a block is a "scilab tlist of type "Block" with fields : graphics, model, gui and doc".
Attempts
Assume scilabVar block taken from gateway function argument, string constants of type wchar_t[], scilabVar model holding the result of scicos_model():
Application of the function scilab_setTListField (env, block, "model", model) returns an error status (as do its equivalents for MList and List)
Knowing that property .model is at index 3, a setfield (3, model, block) called through scilab_call ("setfield", ...) also fails.
This is not surprising: when called directly from the Scilab command line, it ends up with
setfield: Wrong type for input argument #3: List expected.
However, a getfield (3, block) works, so that at least read access to the block's data fields is possible.
An external helper function
function block = blockSetModel (block, model)
    block.model = model
endfunction
also called through scilab_call("blockSetModel", ...), actually returns a block with a changed model, but the original block passed to this function remains unchanged. Although ugly, this at least gives a way to construct an individual block structure, which then needs to be returned as a copy.
Summary
So, is there simply a function missing in the API, which returns the TList (or whatever) behind a type 128 pointer variable?
Or is there any other approach to this problem I was unable to discover?
Background
The goal behind this is to move the block definition task from the usual interfacing "gui" function (e.g. a Scilab script MyBlock.sci) into my own C code. For this purpose, the interfacing function is reduced to a wrapper around a C gateway which, for example, uses scilab_call ("standard_define", ...) to create a new block when called with parameter job=="define".
Modification of the contained model and graphics objects through the Scilab API works fine, since these are standard list types. However, getting or setting these objects as attributes .model and .graphics of the original block fails as described above.
Starting from Scilab/Xcos 6.0.0, the data structure behind a block is no longer an MList (or TList), so you cannot upgrade the model to your own MList. All the data behind it are stored using a classical MVC pattern within a C++-coded Block.hxx.
On each attempt you made, a serialization/deserialization happens to reconstruct the block's model field as a Scilab value.
Could you describe what kind of field you want to append or edit in the block structure? Some of the predefined fields might be enough to pass extra information.

Is it possible to push a lookup parameter into multiple block definitions using AutoLISP

I'll give a hypothetical example to demonstrate my problem. Imagine that I have a lookup parameter "Color" on a dynamic block definition for a chair and I've given it the possible values of "Red", "Blue", and "Green". Now I need to push this lookup parameter to tons and tons of other dynamic block definitions for other types of chairs. I don't want to have to go into the UI and the block editor for each definition and add this lookup parameter. Instead I would like to automate this by writing an AutoLISP routine and passing in the different blocks.
Is this possible using AutoLISP? Is it possible using any of the other AutoCAD APIs?
Notes:
I want to edit different block definitions, not references.
I don't want to use a block properties table because I'm already using that for other purposes.
In short: No, this functionality was never exposed to the LISP API.
Whilst you can retrieve and change the values of existing dynamic block parameters (using the getdynamicblockproperties method of a block reference object), you cannot create or modify dynamic block parameters within a block definition, nor will such objects be visible through the Visual LISP API.
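For illustration, here is a minimal Visual LISP sketch of that reference-level access; the helper name is hypothetical, and it assumes an entity name of a dynamic block reference (e.g. from entsel):

;; Set the value of an existing dynamic property on a block REFERENCE
;; (not a definition).
(vl-load-com)
(defun set-dyn-prop (ent propname newval / obj)
  (setq obj (vlax-ename->vla-object ent))
  (foreach prop (vlax-invoke obj 'getdynamicblockproperties)
    (if (= (strcase (vla-get-propertyname prop)) (strcase propname))
      ;; coerce the new value to the variant type the property already uses
      (vla-put-value prop
        (vlax-make-variant newval (vlax-variant-type (vla-get-value prop))))
    )
  )
)
;; Example: (set-dyn-prop (car (entsel)) "Color" "Red")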
Curiously, the parameters are visible when interrogating the DXF data of a block definition through Vanilla AutoLISP, by inspecting the ACAD_ENHANCEDBLOCK dictionary found within the Extension Dictionary of the BLOCK_RECORD entity:
(dictsearch
  (cdr
    (assoc 360
      (entget
        (cdr
          (assoc 330
            (entget
              (tblobjname "block" "YourDynamicBlockName")
            )
          )
        )
      )
    )
  )
  "acad_enhancedblock"
)
However, this area of DXF data is entirely undocumented and could likely produce unexpected and unstable results if modified directly, given that it isn't officially supported by the API.

Confusion about Argument<T> and Variable<T> in .NET 4.0 Workflow Foundation

I am using Windows Workflow Foundation in .NET 4.0. Below is some syntax/semantic confusion I have.
I have two equivalent ways to declare an Assign activity that assigns a value to a workflow variable (varIsFreeShipping).
(1) Using XAML in the designer.
(2) Using code.
But in approach (2), it seems I am creating a new OutArgument<Boolean> and assigning the value to it, not to the original Variable<Boolean> varIsFreeShipping. And OutArgument and Variable are totally different types.
So how could the value assigned to this new Argument finally reach the original Variable?
This pattern seems common in WF 4.0. Could anybody shed some light on this?
Thanks!
As a matter of fact, the second (2) method can be written just as:
Then = new Assign<bool>
{
To = varIsFreeShipping,
Value = true
}
This all works because OutArgument<T> can be initialized through a Variable<T> using an implicit operator.
In your first (1) assign, built in the designer, that's what's happening behind the scenes: the variable is implicitly converted from Variable<bool> to OutArgument<bool>.
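To make the conversion visible, the same assignment can be spelled out with the explicit constructors instead of the implicit operators; a sketch:

Then = new Assign<bool>
{
    // explicit equivalents of the implicit conversions:
    To = new OutArgument<bool>(varIsFreeShipping), // Variable<bool> -> OutArgument<bool>
    Value = new InArgument<bool>(true)             // bool -> InArgument<bool>
}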
WF4 uses a lot of implicit operators, mainly between Activity<T> and Variable<T>, OutArgument<T> and Variable<T>, etc. If you look at it, they all represent a piece of data (already evaluated or not) that is located somewhere. It's exactly the same as in C#, for example:
public int SomeMethod(int a)
{
    var b = a;  // assign the argument to a variable
    return b;   // return that same variable as the result
}
You can assign an argument to a variable, but you can also return that same variable as an out argument. That's what you're doing with that Assign<T> activity (using the variable varIsFreeShipping as the activity's out argument).
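A minimal, self-contained sketch of the whole pattern (assuming the variable is declared on a parent Sequence and run with WorkflowInvoker):

var varIsFreeShipping = new Variable<bool>("varIsFreeShipping");
var workflow = new Sequence
{
    Variables = { varIsFreeShipping },
    Activities =
    {
        new Assign<bool>
        {
            To = varIsFreeShipping, // implicit Variable<bool> -> OutArgument<bool>
            Value = true            // implicit bool -> InArgument<bool>
        }
    }
};
WorkflowInvoker.Invoke(workflow);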
Does this answer your question?

Lambdas with captured variables

Consider the following code:
private void DoThis() {
    int i = 5;
    var repo = new ReportsRepository<RptCriteriaHint>();
    // This does NOT work
    var query1 = repo.Find(x => x.CriteriaTypeID == i).ToList<RptCriteriaHint>();
    // This DOES work
    var query2 = repo.Find(x => x.CriteriaTypeID == 5).ToList<RptCriteriaHint>();
}
So when I hardwire an actual number into the lambda function, it works fine. When I use a captured variable in the expression, it comes back with the following error:
No mapping exists from object type ReportBuilder.Reporter+<>c__DisplayClass0 to a known managed provider native type.
Why? How can I fix it?
Technically, the correct way to fix this is for the framework that accepts the expression tree from your lambda to evaluate the i reference; in other words, it's a limitation of the specific LINQ provider. What it is currently trying to do is interpret i as a member access on some type known to it (the provider) from the database. Because of the way lambda variable capture works, the i local variable is actually a field on a hidden class (the one with the funny name) that the provider doesn't recognize.
So, it's a framework problem.
If you really must get by, you could construct the expression manually, like this:
// Build x => x.CriteriaTypeID == i by hand, baking the current value
// of i into the tree as a constant instead of a captured field:
ParameterExpression x = Expression.Parameter(typeof(RptCriteriaHint), "x");
var query = repo.Find(
    Expression.Lambda<Func<RptCriteriaHint, bool>>(
        Expression.Equal(
            Expression.MakeMemberAccess(
                x,
                typeof(RptCriteriaHint).GetProperty("CriteriaTypeID")),
            Expression.Constant(i)),
        x)).ToList();
... but that's just masochism.
Your comment on this entry prompts me to explain further.
Lambdas are convertible into one of two types: a delegate with the correct signature, or an Expression<TDelegate> of the correct signature. LINQ to external databases (as opposed to any kind of in-memory query) works using the second kind of conversion.
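The two conversions look like this; the same lambda text produces two very different compiled artifacts:

// compiled to IL that runs in memory:
Func<RptCriteriaHint, bool> asDelegate = x => x.CriteriaTypeID == 5;
// compiled to code that builds an expression tree at runtime:
Expression<Func<RptCriteriaHint, bool>> asTree = x => x.CriteriaTypeID == 5;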
The compiler converts lambda expressions into expression trees, roughly speaking, as follows:
1. The syntax tree is parsed by the compiler - this happens for all code.
2. The syntax tree is rewritten after taking into account variable capture. Capturing variables works just as it does for a normal delegate or lambda - display classes get created, and captured locals get moved into them (the same behaviour as variable capture in C# 2.0 anonymous delegates; see the sketch after this list).
3. The new syntax tree is converted into a series of calls to the Expression class so that, at runtime, an object tree is created that faithfully represents the parsed text.
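To make step 2 concrete, here is a rough sketch of what the compiler generates for the capturing lambda above; the class and member names are illustrative (the real class gets a mangled name like <>c__DisplayClass0):

// hypothetical, readable rendering of the compiler-generated closure:
class DisplayClass0
{
    public int i; // the captured local is hoisted into a field

    public bool Predicate(RptCriteriaHint x)
    {
        return x.CriteriaTypeID == this.i; // 'i' is now a field access, not a local
    }
}

In the expression-tree form, the same rewrite shows up as a member access on a ConstantExpression holding the display-class instance - precisely the node the provider fails to translate.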
LINQ to external data sources is supposed to take this expression tree and interpret it for its semantic content, and interpret symbolic expressions inside the tree as either referring to things specific to its context (e.g. columns in the DB), or immediate values to convert. Usually, System.Reflection is used to look for framework-specific attributes to guide this conversion.
However, it looks like SubSonic is not properly treating symbolic references that it cannot find domain-specific correspondences for; rather than evaluating the symbolic references, it's just punting. Thus, it's a SubSonic problem.