Symfony2 - Validation.yml with Class and Extended Class - OOP

I've got an 'addressClass' and a 'shippingAddressClass'. 'ShippingAddress' extends 'Address', and they both validate identically EXCEPT that 'ShippingAddress' is invalid when a PO Box is detected. While a PO Box is a perfectly valid billing address, UPS doesn't ship to them.
Hypothetically, what's the SF2 best practice for validating Bird, which extends Animal?
Should we duplicate the .yml we used to validate Animal, essentially giving us two nearly identical sections (see below)? In this case, the getters differ a little from Animal to Bird, but the properties require pretty much identical validation rules.
Acme\BlogBundle\Entity\Animal:
    properties:
        name:
            - NotBlank: ~
    getters:
        isAnimal:
            - "True"

Acme\BlogBundle\Entity\Bird:
    properties:
        name:
            - NotBlank: ~
    getters:
        isAnimal:
            - "True"
        isBird:
            - "True"

The Validator service is smart and also validates against the constraints defined on the parent classes, so in my Animal/Bird example we would only need:
Acme\BlogBundle\Entity\Animal:
    properties:
        name:
            - NotBlank: ~
    getters:
        isAnimal:
            - "True"

Acme\BlogBundle\Entity\Bird:
    getters:
        isBird:
            - "True"

How to specify XML element names in bpmn-js

If I define a moddle file with bpmn-js like this
{
  name: "thisArgument",
  superClass: [
    "Element"
  ],
  properties: []
},
{
  name: "myData",
  superClass: [
    "Element"
  ],
  properties: [
    {
      name: "argument",
      type: "thisArgument"
    }
  ]
},
Then the resulting XML (when I call saveXML) will have an element called thisArgument, despite the fact that the property's name is "argument". First, is that a bug? If not, how do I control the output so that the XML contains argument rather than thisArgument? I've searched the docs and examples but can't find how to do this.
The only workaround I found was to make it type: "argument" and then define argument with a superClass of thisArgument and no extra properties, essentially making an alias (sketched below). However, that only works if all instances of argument are identical. E.g., if the XML needed to be
<A><argument/></A>
<B><argument/></B>
where the argument in A has a different shape than the argument in B, then there would be a conflict since I can't define argument twice.
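For reference, the alias workaround described above would look roughly like this (a sketch only, reusing the type names from my example):
// alias workaround: a type literally named "argument",
// so the element written to XML carries that name
{
  name: "argument",
  superClass: [
    "thisArgument"
  ],
  properties: []
},
{
  name: "myData",
  superClass: [
    "Element"
  ],
  properties: [
    {
      name: "argument",
      type: "argument"
    }
  ]
},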
I can sort of answer my own question. I found this serialize option and experimented, and it mostly does what I want, but sometimes it adds an unwanted xsi:type="originalType" attribute and sometimes it doesn't. Maybe it depends on isBody but I'm not sure. If anyone knows the details of how it works, please reply.
properties: [
  {
    name: "argument",
    type: "thisArgument",
    xml: {
      serialize: "xsi:type"
    }
  }
]
The closest thing I found to documentation on it is https://forum.bpmn.io/t/bpmn-json-documentation/1304, which describes it as "additional meta-data impacting XML serialization of a type", so I'd appreciate any extra details anyone can supply.
Update:
The docs don't mention this, but it turns out that serialize: "property" is exactly what I need. This does the same as serialize: "xsi:type" but doesn't add the xsi:type attribute.
xml: {
  serialize: "property"
},
I found this by hunting through the code of one of the related packages, moddle-xml.
In write.js, there's code that looks for the xsi:type or property entry:
// allow serialization via type
// rather than element name
var asType = serializeAsType(p),
    asProperty = serializeAsProperty(p);
In the same file, I found some code that appears to explain why the xsi:type didn't always show up, too:
// only serialize xsi:type if necessary
if (descriptor.name === this.propertyDescriptor.type) {
  return attributes;
}
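Putting it together, here is a rough, untested sketch of how the earlier <A>/<B> case might be modelled so that both containers get an <argument> element with different shapes (the thatArgument type name and the A/B containers are made up for illustration):
// hypothetical sketch: two containers, each with its own "argument" shape
{
  name: "thisArgument",
  superClass: [ "Element" ],
  properties: []
},
{
  name: "thatArgument",
  superClass: [ "Element" ],
  properties: []
},
{
  name: "A",
  superClass: [ "Element" ],
  properties: [
    // written under the property name "argument", not the type name
    { name: "argument", type: "thisArgument", xml: { serialize: "property" } }
  ]
},
{
  name: "B",
  superClass: [ "Element" ],
  properties: [
    { name: "argument", type: "thatArgument", xml: { serialize: "property" } }
  ]
},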

NestJS serialization from snake_case to camelCase

I want automatic serialization/deserialization of the JSON request/response body for NestJS controllers; to be precise, I want snake_case request-body JSON keys automatically converted to camelCase by the time they reach my controller handler, and vice versa.
What I found is to use class-transformer's @Expose({ name: 'selling_price' }), as in the example below (I'm using MikroORM):
// recipe.entity.ts
@Entity()
export class Recipe extends BaseEntity {
  @Property()
  name: string;

  @Expose({ name: 'selling_price' })
  @Property()
  sellingPrice: number;
}

// recipe.controller.ts
@Controller('recipes')
export class RecipeController {
  constructor(private readonly service: RecipeService) {}

  @Post()
  async createOne(@Body() data: Recipe): Promise<Recipe> {
    console.log(data);
    return this.service.createOne(data);
  }
}
// example request body
{
  "name": "Recipe 1",
  "selling_price": 50000
}

// log on the RecipeController.createOne handler method
{ name: 'Recipe 1',
  selling_price: 50000 }

// what I wanted on the log
{ name: 'Recipe 1',
  sellingPrice: 50000 }
As can be seen, the @Expose annotation works perfectly, but going further I want the key converted to the attribute's name on the entity, sellingPrice, so I can pass the parsed request body directly to my service and to my repository method this.recipeRepository.create(data). As it stands, the sellingPrice field would be null because the selling_price field exists instead. If I don't use @Expose, the request JSON would need to be written in camelCase, and that's not what I prefer.
I could write DTOs with constructors and assign the fields, but I think that's rather repetitive, and I'd have a lot of fields to convert due to my naming preference: snake_case for JSON and database columns, and camelCase for all of the JS/TS parts.
Is there a way I can do this cleanly? Maybe there's a solution already, perhaps a global interceptor to convert all snake_case keys to camelCase, but I'm not really sure how to implement one either.
Thanks!
You could use the mapResult() method from the ORM, which is responsible for mapping raw DB results (snake_case in your case) to entity property names (camelCase in your case):
const meta = em.getMetadata().get('Recipe');
const data = {
  name: 'Recipe 1',
  selling_price: 50000,
};
const res = em.getDriver().mapResult(data, meta);
console.log(res); // dumps `{ name: 'Recipe 1', sellingPrice: 50000 }`
This method works off the entity metadata, converting keys from fieldName (which defaults to a value derived from the selected naming strategy) to the entity property name.
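For the global interceptor/pipe idea floated in the question, one way to wire the same mapResult() call into NestJS is a body-transforming pipe. This is only a rough sketch under a few assumptions: current MikroORM import paths, a hypothetical pipe name, and the entity name hard-coded.
// recipe-body.pipe.ts — hypothetical name; a sketch, not a drop-in implementation
import { Injectable, PipeTransform } from '@nestjs/common';
import { EntityManager } from '@mikro-orm/core';

@Injectable()
export class RecipeBodyPipe implements PipeTransform {
  constructor(private readonly em: EntityManager) {}

  // map snake_case keys from the raw body to the entity's camelCase properties
  transform(value: any) {
    const meta = this.em.getMetadata().get('Recipe');
    return this.em.getDriver().mapResult(value, meta);
  }
}
It could then be applied per parameter, e.g. createOne(@Body(RecipeBodyPipe) data: Recipe), or registered globally if you generalize the entity lookup.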

How to add subscripts to my custom Class in Perl 6?

I am new to Perl 6. I have the following code in my Atom editor, but I still don't understand how it works. I copied the code as docs.raku.org said, but it didn't seem to work as-is, so I changed it to this:
use v6;

class HTTPHeader { ... }

class HTTPHeader does Associative {
    has %!fields handles <self.AT-KEY self.EXISTS-KEY self.DELETE-KEY self.push
                          list kv keys values>;

    method Str { say self.hash.fmt; }

    multi method EXISTS-KEY ($key) { %!fields{normalize-key $key}:exists }
    multi method DELETE-KEY ($key) { %!fields{normalize-key $key}:delete }
    multi method push (*@_)        { %!fields.push: @_ }

    sub normalize-key ($key) { $key.subst(/\w+/, *.tc, :g) }

    method AT-KEY (::?CLASS:D: $key) is rw {
        my $element := %!fields{normalize-key $key};
        Proxy.new(
            FETCH => method () { $element },
            STORE => method ($value) {
                $element = do given $value».split(/',' \s+/).flat {
                    when 1  { .[0] }    # a single value is stored as a string
                    default { .Array }  # multiple values are stored as an array
                }
            }
        );
    }
}
my $header = HTTPHeader.new;
say $header.WHAT; #-> (HTTPHeader)
"".say;
$header<Accept> = "text/plain";
$header{'Accept-' X~ <Charset Encoding Language>} = <utf-8 gzip en>;
$header.push('Accept-Language' => "fr"); # like .push on a Hash
say $header.hash.fmt;
"".say;
say $header<Accept-Language>.values;
say $header<Accept-Charset>;
The output is:
(HTTPHeader)
Accept text/plain
Accept-Charset utf-8
Accept-Encoding gzip
Accept-Language en fr
(en fr)
utf-8
I know it works, but the example on docs.raku.org is a little different from this: it doesn't have "self" before the AT-KEY method in the handles list. Are there any examples with more detail about this?
Are there any examples with more detail about this?
Stack Overflow is not really the place to request more detail on a published example. That example comes from the community's own Perl 6 documentation; if you have further queries, I would suggest the most appropriate place is the Perl 6 users mailing list or, failing that, perhaps the IRC channel.
Now that you've posted it, though, I'm hesitant to let the question go unaddressed, so here are a couple of things to consider.
Firstly - the example you raised is about implementing associative subscripting on a custom (i.e. user-defined) class - it's not typical territory for a self-described newbie. I think you would be better off looking at and implementing the examples in the Perl 6 Introduction by Naoum Hankache, whose site has been very well received.
Option 1 - Easy implementation via delegation
Secondly, it's critical to understand that the example shows three options for implementing associative subscripting; the first and simplest uses delegation to a private hash attribute. Perl 6 implements associative and positional subscripts (for built-in types) by calling well-defined methods on the object implementing the collection type. By adding the handles trait to the end of the %!fields attribute definition, you're simply passing these method calls on to %!fields, which - being a hash - will know how to handle them.
Option 2 - Flexible keys
To quote the example:
However, HTTP header field names are supposed to be case-insensitive (and preferred in camel-case). We can accommodate this by taking the *-KEY and push methods out of the handles list, and implementing them separately...
Delegating all key-handling methods to the internal hash means you get hash-like interpretation of your keys - meaning they will be case-sensitive, because hash keys are case-sensitive. To avoid that, you take all key-related methods out of the handles clause and implement them yourself. In the example, keys are run through the "normalizer" before being used as indexes into %!fields, making them case-insensitive.
Option 3 - Flexible values
The final part of the example shows how you can control the interpretation of values as they go into the hash-like container. Up to this point, values supplied by assigning to an instance of this custom container had to be either a string or an array of strings. The extra control is achieved by removing the AT-KEY method defined in option 2 and replacing it with a method that supplies a Proxy object. The Proxy object's STORE method will be called when you assign to the container; that method scans the supplied string value(s) for ", " (note: the space is compulsory) and, if found, accepts the string value as a specification of several string values. At least, that's what I think it does.
So, the example has a lot more packed into it than it looks. You ran into trouble - as Brad pointed out in the comments - because you sort of mashed option 1 together with option 3 when you copied the example.

How to expose enum values in a REST API

In a mobile context of API use, an advanced search offers several dynamic filters that must be returned by the server. (We don't want to make too many exchanges with the server just to initialize our filters.)
In a REST API, how do you expose an enum of possible values for a search filter?
Thank you for your suggestions/ideas!
My initial thought would be to treat the search like a normal resource. From an object-oriented perspective, a search can have a collection of fields which can be used to filter by. These fields can be numeric, boolean, string-based, or whatever.
So, if I understand your question correctly, then I would propose doing this:
GET /search_fields
If your API has multiple types of searches that can be performed, then they can be identified by an id or maybe their name, as long as it is unique, like so:
GET /searches/{search_id}/fields
which would return a collection of search fields like so:
[{
  name: 'Field1',
  type: 'boolean'
},
{
  name: 'Field2',
  type: 'number'
},
{
  name: 'Field3',
  type: 'string'
}]
or if your fields are really just simple enums then:
[{
  name: 'Field1',
  id: 1
},
{
  name: 'Field2',
  id: 2
},
{
  name: 'Field3',
  id: 3
}]
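If some of those fields are themselves enums, the same field resource could also carry their allowed values, so clients can build the filter without an extra round trip. For example (the field name and values below are made up):
[{
  name: 'Field4',
  type: 'enum',
  values: ['OPEN', 'CLOSED', 'PENDING']
}]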
That's my suggestion. Remember, there's no one right way to expose an API.

RestKit Dynamic nested mapping

I see that the RestKit documentation is quite nice and has a variety of examples on object modelling. There is also an example of nested mapping, but I find my scenario a little bit different from it. The RestKit documentation provides an example mapping of a nested attribute with the following JSON format.
Sample JSON structure from the RestKit documentation:
{
  "blake": {
    "email": "blake@restkit.org",
    "favorite_animal": "Monkey"
  },
  "sarah": {
    "email": "sarah@restkit.org",
    "favorite_animal": "Cat"
  }
}
Suppose that my JSON is a bit different, like this.
My JSON structure:
{
  "id": 1,
  "author": "RestKit",
  "blake": {
    "email": "blake@restkit.org",
    "favorite_animal": "Monkey"
  },
  "sarah": {
    "email": "sarah@restkit.org",
    "favorite_animal": "Cat"
  }
}
I created two different managed object models with the following attributes and a to-many relation.
Two different entities, Product and Creator, to map the above JSON object:
Product                        Creator
identifier   <------------->>  name
author                         email
                               favouriteAnimal
Now, this is how I map the Product entity:
[mapping mapKeyPath:@"id" toAttribute:@"identifier"];
[mapping mapKeyPath:@"author" toAttribute:@"author"];
But note that here, mapping the nested dictionary attribute does not work for me:
// [mapping mapKeyOfNestedDictionaryToAttribute:@"creators"];
And for the Creator class, I could not figure out the usual way to map the above JSON structure.
If you have control over the web service, I would strongly recommend reorganizing your response data like this:
{
  product: {
    id: 1,
    author: 'RestKit',
    creators: [
      {
        id: 1,
        name: 'Blake',
        email: '...',
        favorite_animal: 'Monkey'
      },
      {
        id: 2,
        name: 'Sarah',
        email: '...',
        favorite_animal: 'Cat'
      }
    ]
  }
}
Following this structure, you'd be able to use RestKit's nested mapping features, and the relationship would be correctly reflected in the deserialized objects received by the object loader delegate. RestKit relies on naming and structure standards to simplify the code required to achieve the task. Your example deviates from key-value coding standards, so RK doesn't provide an easy way to interact with your data format.
If you don't have access or you can't change it, I think you'll need to map known key-value pairs with a mapping and perform the remaining assignments with a custom evaluator. You'd need to assume the unknown keys are actually name values for associated creators and their associated values contain the attribute hash for each. Using that, you'd then reconstruct each object manually.