Is it possible to input one argument out of two in GraphQL? - kotlin

I have a situation where I have to provide exactly one of two input parameters.
Suppose the schema is
input fruit {
  apple: Apple
  banana: Banana
}
Now, if the user inputs apple, they cannot input banana, and vice versa. Is this possible in GraphQL?

Keep your current schema as defined (both fields are optional right now). You can then add the code below to throw an error if both are provided (it is Java code, but it shows the approach).
@DgsQuery
public String hello(@InputArgument Fruit fruit) {
    if (fruit.getApple() != null && fruit.getBanana() != null) {
        throw new DgsInvalidInputArgumentException("Only one fruit is allowed", null);
    }
    return "hello"; // whatever the resolver normally returns
}
You can create your custom error using the GraphQLError object, or throw your own exception and then write a handler for it.
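For example, the check can be pulled into a small reusable guard that also rejects the case where neither field is set (a sketch; OneOfValidator is a made-up helper, not part of DGS or graphql-java):
import java.util.Objects;
import java.util.stream.Stream;

public final class OneOfValidator {

    // Fails unless exactly one of the given values is non-null.
    public static void requireExactlyOne(String message, Object... values) {
        long present = Stream.of(values).filter(Objects::nonNull).count();
        if (present != 1) {
            throw new IllegalArgumentException(message);
        }
    }
}
The resolver would then call OneOfValidator.requireExactlyOne("Only one fruit is allowed", fruit.getApple(), fruit.getBanana()) before doing any work.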
In any case, if you are looking for a schema-only solution, it is not possible: the GraphQL spec does not support this scenario out of the box.

An additional option, depending on your requirements, is to use the @oneOf directive. It is proposed in RFC #825 and is currently at RFC 2 (Draft) status (RFC 3 is Accepted), and should land in the main branch relatively soon.
OneOf Input Objects are a special variant of Input Objects where the type system asserts that exactly one of the fields must be set and non-null, all others being omitted.
With this directive, your schema would look as follows:
input fruit @oneOf {
  apple: Apple
  banana: Banana
}
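With @oneOf in place, the server itself enforces the constraint during validation, so no resolver-side check is needed. For illustration (hypothetical queries; empty objects stand in for real Apple and Banana values):
# Accepted: exactly one field is set.
{ hello(fruit: { apple: {} }) }

# Rejected by @oneOf validation: two fields are set.
{ hello(fruit: { apple: {}, banana: {} }) }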

Spock Extension - Extracting variable names from Data Tables

In order to extract data table values to use in a reporting extension for Spock, I am using the following code:
@Override
public void beforeIteration(IterationInfo iteration) {
    Object[] values = iteration.getDataValues();
}
This returns the references to the objects in the data table. However, I would like to get the name of the variable that references each value.
For example, in the following test:
private static User userAge15 = instantiateUserByAge(15);
private static User userAge18 = instantiateUserByAge(18);
private static User userAge19 = instantiateUserByAge(19);
private static User userAge40 = instantiateUserByAge(40);
def "Should show popup if user is 18 or under"(User user, Boolean shouldShowPopup) {
given: "the user <user>"
when: "the user do whatever"
...something here...
then: "the popup is shown is <showPopup>"
showPopup == shouldShowPopup
where:
user | shouldShowPopup
userAge15 | true
userAge18 | true
userAge19 | false
userAge40 | false
}
Is there a way to receive the string “userAge15”, “userAge18”, “userAge19”, “userAge40” instead of their values?
The motivation for this is that the User object is complex, with lots of information such as name, surname, etc., and its toString() method would make the where clause unreadable in the report I generate.
You can use specificationContext.currentFeature.dataVariables. It returns a list of strings containing the data variable names. This should work both in Spock 1.3 and 2.0.
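In the reporting extension from the question, that translates to something like this (a sketch; the class name is invented, and it relies on IterationInfo.getFeature() plus the getDataVariables() accessor behind the property used above):
import java.util.List;
import org.spockframework.runtime.AbstractRunListener;
import org.spockframework.runtime.model.IterationInfo;

public class DataVariableReportListener extends AbstractRunListener {
    @Override
    public void beforeIteration(IterationInfo iteration) {
        Object[] values = iteration.getDataValues();
        // Data variable names, e.g. ["user", "shouldShowPopup"] for the feature above.
        List<String> names = iteration.getFeature().getDataVariables();
        for (int i = 0; i < values.length; i++) {
            System.out.println(names.get(i) + " = " + values[i]);
        }
    }
}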
Edit: Oh sorry, you do not want the data variable names ["user", "shouldShowPopup"] but the names of the constants ["userAge15", "userAge18", "userAge19", "userAge40"]. Sorry, I cannot help you with that and would not if I could, because that is just a horrible way to program IMO. I would rather make sure the toString() output gets shortened or trimmed in an appropriate manner, if necessary, or (additionally or instead) print the class name and/or object ID.
Last but not least, writing tests is a design tool that uncovers potential problems in your application. You might want to ask yourself why the toString() results are not suited to printing in a report, and refactor those methods. Maybe your toString() methods use line breaks and should be simplified to print a one-line representation of the object. Maybe you want to factor out the multi-line representation into other methods and/or have a set of related methods like toString(), toShortString(), toLongString() (all seen in APIs before), or maybe something specific like toMultiLineString().
Update after OP significantly changed the question:
If the user of your extension feels that the report is not clear enough, she could add a column userType to the data table, containing values like "15 years old".
Or maybe simpler, just add an age column with values like 15, 18, 19, 40 and instantiate users directly via instantiateUserByAge(age) in the user column or in the test's given section instead of creating lots of static variables. The age value would be reported by your extension. In combination with an unrolled feature method name using #age this should be clear enough.
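For example, the feature above could then be written along these lines (a sketch; showsPopup stands in for whatever the real when/then blocks actually exercise):
def "Should show popup for age #age: #shouldShowPopup"() {
    expect:
    showsPopup(instantiateUserByAge(age)) == shouldShowPopup

    where:
    age | shouldShowPopup
    15  | true
    18  | true
    19  | false
    40  | false
}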
Is creating users so super expensive you have to put them into static variables? You want to avoid statics if not really necessary because they tend to bleed over side effects to other tests if those objects are mutable and their internal state changes in one test, e.g. because someone conveniently uses userAge15 in order to test setAge(int). Try to avoid premature optimisations via static variables which often just save microseconds. Even if you do decide to pre-create a set of users and re-use them in all tests, you could put them into a map with the age being the key and conveniently retrieve them from your feature methods, again just using the age in the data table as an input value for querying the map, either directly or via a helper method.
Bottom line: I think you do not have to change your extension in order to cater to users writing bad tests. Those users ought to learn how to write better tests. As a side effect, the reports will also look more comprehensive. 😀

Are extensible records useless in Elm 0.19?

Extensible records were one of Elm's most amazing features, but since v0.16 adding and removing fields is no longer available, and this puts me in an awkward position.
Consider an example. I want to give a name to a random thing t, and extensible records provide me a perfect tool for this:
type alias Named t = { t | name: String }
"Okay," says the compiler. Now I need a constructor, i.e. a function that equips a thing with a specified name:
equip : String -> t -> Named t
equip name thing = { thing | name = name } -- Oops! Type mismatch
Compilation fails, because the { thing | name = ... } syntax assumes thing is a record with a name field, but the type system can't guarantee this. In fact, with Named t I've tried to express the opposite: t should be a record type without its own name field, and the function adds this field to the record. Either way, field addition is necessary to implement equip.
So, it seems impossible to write equip in a polymorphic manner, but that's probably not such a big deal. After all, any time I'm going to give a name to some concrete thing, I can do it by hand. Much worse, the inverse function extract : Named t -> t (which erases the name of a named thing) requires a field-removal mechanism, and thus is not implementable either:
extract : Named t -> t
extract thing = thing -- Error: No implicit upcast
It would be an extremely important function, because I have tons of routines that accept old-fashioned unnamed things, and I need a way to use them with named things. Of course, massive refactoring of those functions is not a viable solution.
At last, after this long introduction, let me state my questions:
1. Does modern Elm provide some substitute for the old, deprecated field addition/removal syntax?
2. If not, is there some built-in function like equip and extract above? For every custom extensible record type, I would like to have a polymorphic analyzer (a function that extracts its base part) and a polymorphic constructor (a function that combines the base part with the additive and produces the record).
Negative answers for both (1) and (2) would force me to implement Named t in a more traditional way:
type Named t = Named String t
In that case, I fail to see the purpose of extensible records. Is there a positive use case, a scenario in which extensible records play a critical role?
Type { t | name : String } means a record that has a name field. It does not extend the t type but, rather, extends the compiler’s knowledge about t itself.
So in fact the type of equip is String -> { t | name : String } -> { t | name : String }.
What is more, as you noticed, Elm no longer supports adding fields to records, so even if the type system allowed what you want, you still could not do it. The { thing | name = name } syntax only supports updating records of type { t | name : String }.
Similarly, there is no support for deleting fields from a record.
If you really need types to which you can add or remove fields, you can use Dict. The other options are either writing the transformers manually, or creating and using a code generator (this was the recommended solution for JSON decoding boilerplate for a while).
And regarding extensible records, Elm does not really support the "extensible" part much any more – the only remaining part is the { t | name : u } -> u projection, so perhaps they should be called just scoped records. The Elm docs themselves acknowledge that the extensibility is not very useful at the moment.
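For illustration, a minimal sketch of the Dict approach (with all values as Strings for simplicity, which is the price of giving up static record types):
import Dict exposing (Dict)

equip : String -> Dict String String -> Dict String String
equip name thing =
    Dict.insert "name" name thing

extract : Dict String String -> Dict String String
extract thing =
    Dict.remove "name" thing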
You could just pair the t type with a name, but it wouldn't make a big difference compared to the approach with a custom type:
type alias Named t = { val: t, name: String }
equip : String -> t -> Named t
equip name thing = { val = thing, name = name }
extract : Named t -> t
extract thing = thing.val
Is there a positive use case, a scenario in which extensible records play critical role?
Yes, they are useful when your application Model grows too large and you face the question of how to scale out your application. Extensible records let you slice up the model in arbitrary ways, without committing to particular slices long term. If you sliced it up by splitting it into several smaller nested records, you would be committed to that particular arrangement - which might tend to lead to nested TEA and the 'out message' pattern; usually a bad design choice.
Instead, use extensible records to describe slices of the model, and group functions that operate over particular slices into their own modules. If you later need to work across different areas of the model, you can create a new extensible record for that.
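For example (field names invented for illustration), a function can be typed against just the slice it needs, no matter how large the full Model is:
type alias WithCart r =
    { r | cart : List String }

addToCart : String -> WithCart r -> WithCart r
addToCart item model =
    { model | cart = item :: model.cart }
Any record with a cart field qualifies, so the same function keeps working however the rest of the Model is rearranged.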
It's described by Richard Feldman in his Scaling Elm Apps talk:
https://www.youtube.com/watch?v=DoA4Txr4GUs&ab_channel=ElmEurope
I agree that extensible records can seem a bit useless in Elm, but it is a very good thing they are there to solve the scaling issue in the best way.

Value object in event sourcing

Is there a place for value objects in an event sourced domain model?
Let's define a value object as an object with immutable state that guards its invariants and has no particular identifier.
An event sourced domain model in this context is a domain that is entirely or partially event sourced, meaning that its current state can be derived from applying all events that have occurred in the past. Events themselves are considered immutable, even over time.
Debate has taken place about the validity of using value objects within events - this question goes slightly further: Do value objects have a place in event sourced domains at all?
The (potential) problem with using value objects is that it becomes rather tricky to alter the domain in such a way that invariants are tightened.
An example of this scenario would be to have a Username value object, with the sole constraint that the name must be anywhere between 2 and 16 characters.
While this has been working well for some time, the business decides to only allow usernames of at least 5 characters.
A migration period begins and users with names of less than 5 characters are asked to update their names.
Let's say the process was successful, correction events are applied, and everyone is happy.
We tighten the constraints on our Username value object to require at least 5 characters.
For a while everyone is happy, but then we discover a problem with the snapshots and replay all events.
We now face an exception from our Username object: by loading the historic data, we're breaking an invariant of our domain.
The rules of a value object apply retroactively - does this make them inherently unsuitable for event sourcing? Would it be worth applying versioning of value objects? Is there a simpler way of avoiding such problems?
I would say that the moment you redefine what Username means without somehow migrating the historical data, you have essentially created two different meanings of Username.
Because there are two different meanings of the word, you have to make that explicit in the code somehow. "Versioning" is one way, although I wouldn't use such a generic solution; there are different modeling options.
You could make it explicit that the history of a "username" is just that, a history. So, for example, create a HistoricUsername, which is the event-sourced object, even a value object if you want. And create a Username, which is at all times a username under the most current rules, which is not persisted at all, but created from a HistoricUsername if it can be.
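A rough sketch of that idea (names and the 5-character rule are illustrative only):
// The event-sourced value: always valid, because it records what actually happened.
public sealed class HistoricUsername
{
    public string Value { get; }
    public HistoricUsername(string value) { Value = value; }
}

// The "current rules" view; never persisted, only built from history when it still qualifies.
public sealed class Username
{
    public string Value { get; }
    private Username(string value) { Value = value; }

    // Returns null when the historic value no longer satisfies today's rules.
    public static Username FromHistoric(HistoricUsername historic)
    {
        return historic.Value.Length >= 5 ? new Username(historic.Value) : null;
    }
}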
Some people sometimes suggest extracting the "rules" from the object and re-applying them later. That way the object itself is valid at all times, and you can ask it to validate itself against rules that might change. I don't really prefer these kinds of solutions, but it's an option, and the Username would still be a value object.
So the problem is not really that value-objects don't fit into event-sourcing, it's just that the modeling has to be more accurate.
Do value objects have a place in event sourced domains at all?
Yes.
Is there a simpler way of avoiding such problems?
"Don't do that."
The problem you are describing is really one about messaging - if we make backwards incompatible changes to our messages, then things break.
(More precisely, you have a "Username" message, and you are trying to re-use that message with a new set of constraints that reject some previously valid uses of the message).
The answer is that you don't introduce backwards-incompatible changes - instead, introduce new names that match the new requirements, and deprecate the old ones.
Which is to say, adding support for new messages, and removing support for the old messages, become two separately managed options.
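Sketched in code, the old message type never changes; a new one is simply added next to it (hypothetical names):
// Old message: streams written before the rule change still replay against this, forever.
public sealed class UsernameChosen
{
    public string Name { get; }   // 2-16 characters at the time it was written
    public UsernameChosen(string name) { Name = name; }
}

// New message carrying the tightened rule; new writers emit only this one.
public sealed class UsernameChosenV2
{
    public string Name { get; }   // 5-16 characters
    public UsernameChosenV2(string name) { Name = name; }
}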
Greg Young's book Versioning in an Event Sourced System dedicates some chapters to this idea. Also, Rich Hickey ends up touching on these important ideas in most of his talks -- I'd suggest starting from Spec-ulation.
The "value object", meaning that the type that the current implementation of the domain model uses to move the information around, is a separate concern from the messages. The data structures we use in memory don't need to be coupled to our serialization formats.
The representation of the information on the wire is distinct from the representation of information in memory, and that in turn is distinct from the abstractions that manipulate the information in memory.
The challenging thing is that, at the beginning of a project, you have the least amount of information about when the different representations are going to diverge.
We've solved this in a slightly different way. By separating the public API of our value objects from the internal (domain only) API, we are able to evolve one without affecting the other.
For example:
public class Username
{
    private readonly string value;

    // Domain-only (internal) constructor.
    // Does not enforce constraints and can only be called within the domain.
    internal Username(string value)
    {
        this.value = value;
    }

    // Public factory method.
    // Enforces business constraints. Used by consumers of the domain (application layer etc.)
    // to create new instances of the value object.
    public static Username Create(string value)
    {
        // Business constraints. These will evolve and grow over time.
        if (value == null)
        {
            // throw exception etc.
        }

        if (value.Length < 2)
        {
            // throw exception etc.
        }

        return new Username(value);
    }
}
Consumers of the domain must use the static Create method to create a new instance of the value object. This factory method contains all of our business constraints and prevents an instance being created in an invalid state.
Inside the domain, classes have access to the internal (constraint-less) constructor. Since this does not enforce any business constraints, an instance of the value object can always be created in this way (regardless of its value). By using this constructor when replaying events, we can ensure that replaying historical data will always succeed.
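Usage then looks like this (sketch):
// Application layer: current business constraints are enforced.
Username fromUser = Username.Create("alice");

// Domain internals, replaying a historical event: constraints are bypassed, so a
// 3-character name recorded under the old rules still loads. (Compiles only inside
// the domain assembly, where the internal constructor is visible.)
Username fromHistory = new Username("bob");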
The benefits of this design are:
A single class is used to represent the domain concept (no need for multiple classes, versioning etc.).
Business rules are free to evolve over time.
Historical data always works. A Username from a year ago is still a user name, even if our rules have changed.
Although this has already been answered, I do find it an interesting situation.
I agree with others that the event data should be record-based and, therefore, nothing more than a data container that may be used to reconstitute the aggregate.
That being said when the rules change so does the domain. A major portion of domain-driven design is to capture as much of the domain (rules/structure) as is required. If this is the case should the changes in the rules not also be kept?
For instance, if we have a Username value object and it starts out with the 2-to-16-character rule, then that is coded as such:
public class Username
{
    public string Value { get; }

    public Username(string value)
    {
        if (value.Length < 2 || value.Length > 16)
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        Value = value;
    }
}
Now we get to 1 March 2018 and the rule changes. We can keep the rule around:
public class Username
{
    public string Value { get; }

    public Username(string value, DateTime registrationDate)
    {
        if (registrationDate < new DateTime(2018, 3, 1) &&
            (value.Length < 2 || value.Length > 16))
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        if (registrationDate >= new DateTime(2018, 3, 1) &&
            (value.Length < 5 || value.Length > 16))
        {
            throw new DomainException("Username must be between 5 and 16 characters");
        }

        Value = value;
    }
}
That is the basic idea. In this way we keep our "old" rules around as well. This may become quite a hassle, but I don't have enough experience to say. Changing our rules retroactively may introduce some pretty tricky situations, so I guess one would need to evaluate this on a case-by-case basis.
Just a thought.

Are Bond field names used in deserialization?

Suppose I serialized a given Bond struct with a single field:
struct NameBond
{
    1: string name;
}
And then I renamed the field in the .bond file (without changing its ordinal):
struct NameBond
{
    1: string displayName;
}
Would I still be able to deserialize it?
What about the name of the struct? (NameBond in the example.)
Would changing that prevent me from deserializing?
This depends on which protocol you are using.
Your change will cause no problems in the CompactBinary serializer.
It may cause trouble with other protocols.
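As an illustration, a Compact Binary round-trip in C# survives the rename because the writer emits field ordinals, not names (a sketch; OldNs and NewNs stand for the gbc-generated types of the two schema versions, compiled under different namespaces for the sake of the example):
using Bond;
using Bond.IO.Unsafe;
using Bond.Protocols;

var src = new OldNs.NameBond { name = "alice" };

// Serialize with the old schema (ordinal 1 is called "name").
var output = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(output);
Serialize.To(writer, src);

// Deserialize with the new schema (ordinal 1 is now called "displayName").
var input = new InputBuffer(output.Data);
var reader = new CompactBinaryReader<InputBuffer>(input);
var dst = Deserialize<NewNs.NameBond>.From(reader);
// dst.displayName == "alice": only the ordinal went on the wire.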
You may want to consult the Bond schema evolution guide, where it says:
Caution should be used when changing or reusing field names as this could break text-based protocols like SimpleJsonProtocol
See also this related SO question.

How to identify a Core Foundation object if Apple recommend to not hardcode the type IDs?

My application is interfaced with the Core Foundation library. Some functions of another library return a Core Foundation object, and I need to identify the kind of object in order to process the data.
Now, looking on the CFType library reference, Apple clearly states the following:
"Because the value for a type ID can change from release to release, your code should not rely on stored or hard-coded type IDs nor should it hard-code any observed properties of a type ID (such as, for example, it being a small integer)."
Based on that, I have to avoid any enum (CFArray = 18, CFBoolean = 21 and so on).
The only thing that should work and be immune to changes across releases is something like:
CFTypeID id = CFGetTypeID(obj);
if (id == CFBooleanGetTypeID()) { /* ... */ }
else if (id == CFStringGetTypeID()) { /* ... */ }
else if (id == CFDataGetTypeID()) { /* ... */ }
/* and so on... */
This is really something horrible. Lots of calls only to identify an object.
Apple also recommends not creating dependencies on the content or format of the information returned from CFCopyTypeIDDescription, so I have to exclude that option as well.
Does anyone know how I can easily identify a returned Core Foundation type, and why Apple always tries to break existing code with new releases?
Unfortunately you do have to compare if you don't want to risk your app breaking with future OS updates:
if (CFGetTypeID(myUnknownCFObject) == CFArrayGetTypeID()) {
    // handle the object as a CFArray
} else if ( /* ... etc. ... */ ) {
} else {
    // we don't know how to deal with this object
}
In your initialization code, you could set up a static structure, maybe a dictionary or std::map, associating CFTypeIDs to function pointers or selectors. That way you'll be using CFBooleanGetTypeID() and friends, but only calling each such function once.
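For example, a sketch of that dispatch-table idea in plain C (handler names invented; error handling omitted):
#include <CoreFoundation/CoreFoundation.h>
#include <stddef.h>

typedef void (*CFHandler)(CFTypeRef value);

static void HandleBoolean(CFTypeRef value) { /* handle CFBoolean */ }
static void HandleString(CFTypeRef value)  { /* handle CFString  */ }
static void HandleData(CFTypeRef value)    { /* handle CFData    */ }

/* Each CFxxxGetTypeID() is called exactly once, at startup. */
static struct { CFTypeID typeID; CFHandler handler; } gHandlers[3];

static void InitHandlers(void) {
    gHandlers[0].typeID = CFBooleanGetTypeID(); gHandlers[0].handler = HandleBoolean;
    gHandlers[1].typeID = CFStringGetTypeID();  gHandlers[1].handler = HandleString;
    gHandlers[2].typeID = CFDataGetTypeID();    gHandlers[2].handler = HandleData;
}

static void Dispatch(CFTypeRef obj) {
    CFTypeID id = CFGetTypeID(obj);
    for (size_t i = 0; i < sizeof gHandlers / sizeof gHandlers[0]; i++) {
        if (gHandlers[i].typeID == id) {
            gHandlers[i].handler(obj);
            return;
        }
    }
    /* unknown type: fall back to a default case */
}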