Karate: how to check whether a particular word is contained in a JSON response, and how to write the contains match

I am getting the below response from an API:
{
"error": {
"serverTime": 1564066755618,
"id": "VALIDATION_EXCEPTION",
"category": "system",
"message": "errors: [property: username; value: ; constraint: EMAIL_INLINE_ERROR_MESSAGE_1; property: username; value: ; constraint: EMAIL_INLINE_ERROR_MESSAGE_1; property: username; value: ; constraint: EMAIL_INLINE_ERROR_MESSAGE_1"
}
}
and I am checking it with the following assertion:
{"error":
{"serverTime":"#notnull",
"id":"VALIDATION_EXCEPTION",
"category":"system",
"message":"#notnull"
}
}
Now I want to write an assertion for the above response: I want to check that the message field contains the word "EMAIL_INLINE_ERROR_MESSAGE_1", and how many times it occurs.

Use Java interop. Then you can figure out the solution on your own. Normally no one needs this kind of validation, so it is not built in.
Please read the documentation: https://github.com/intuit/karate#calling-java
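That said, for a simple occurrence count, Karate's embedded JavaScript may already be enough. A minimal sketch, assuming the response above and that the constraint appears three times:
* def msg = response.error.message
* match msg contains 'EMAIL_INLINE_ERROR_MESSAGE_1'
* def count = msg.split('EMAIL_INLINE_ERROR_MESSAGE_1').length - 1
* match count == 3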

Related

Add Date-Variable to Camunda-BPMN via Rest

I have a Camunda BPMN workflow where the start event requires some variables.
One of the needed variables is of type 'date':
{
"variables": {
"stichtagFrist": {
"value": "2021-09-08T00:00:00",
"type": "date"
}
}
}
Now, when trying to start a new instance with the above JSON, I get the following exception:
Cannot instantiate process definition d1f43d8e-211f-11ec-8fdf-0242ac110002: Cannot convert value '2021-09-08T00:00:00' of type 'date' to java type java.util.Date"
How do I need to POST the JSON so that Camunda can interpret it as a "date", and I can use this variable e.g. in timer events etc.?
Are there any rules for booleans too?
https://github.com/camunda/camunda-bpm-platform/blob/7c5bf37307d3eeac3aee5724b6e4669a9992eaba/engine-rest/engine-rest/src/main/java/org/camunda/bpm/engine/rest/dto/VariableValueDto.java#L109
That code uses a Jackson ObjectMapper to parse the value. It requires this format:
{
"variables": {
"stichtagFrist": {
"value": "2021-09-08T00:00:00.0+0000",
"type": "date"
}
}
}
From the documentation:
In the REST API, the type names start with a capital letter, i.e., String instead of string.
Try "Date" instead of "date", that should do the trick.

Proper error handling when performing multiple mutations in graphql

Given the following GraphQL mutations:
type Mutation {
updateUser(id: ID!, newEmail: String!): User
updatePost(id: ID!, newTitle: String!): Post
}
The Apollo docs state that it's totally possible to perform multiple mutations in one request, say
mutation($userId: ID!, $newEmail: String!, $postId: ID!, $newTitle: String!) {
updateUser(id: $userId, newEmail: $newEmail) {
id
email
}
updatePost(id: $postId, newTitle: $newTitle) {
id
title
}
}
1. Does anyone actually do this? And if you don't do this explicitly, will batching cause this kind of mutation merging?
2. If you run multiple things within one mutation, how would you handle errors properly?
I've seen a bunch of people recommending to throw errors on the server, so that the server responds with something like this:
{
errors: [
{
statusCode: 422,
error: 'Unprocessable Entity',
path: [
'updateUser'
],
message: {
message: 'Validation failed',
fields: {
newEmail: 'The new email is not a valid email address.'
}
},
},
{
statusCode: 422,
error: 'Unprocessable Entity',
path: [
'updatePost'
],
message: {
message: 'Validation failed',
fields: {
newTitle: 'The given title is too short.'
}
},
}
],
data: {
updateUser: null,
updatePost: null,
}
}
But how do I know which error belongs to which mutation? We can't assume that the first error in the errors array belongs to the first mutation, because if updateUser succeeds, the array would simply contain one entry. Would I then have to iterate over all errors and check if the path matches my mutation name? :D
Another approach is to include the error in a dedicated response type, say UpdateUserResponse and UpdatePostResponse. This approach enables me to correctly address errors.
type UpdateUserResponse {
error: Error
user: User
}
type UpdatePostResponse {
error: Error
post: Post
}
But I have a feeling that this will bloat my schema quite a lot.
In short, yes, if you include multiple top-level mutation fields, utilize the path property on the errors to determine which mutation failed. Just be aware that if an error occurs deeper in your graph (on some child field instead of the root-level field), the path will reflect that field. That is, an execution error that occurs while resolving the title field would result in a path of updatePost.title.
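For example, a client could pick out the error produced by a specific root field by matching on the first path segment (a minimal sketch of the client-side lookup):
// find the error (if any) that belongs to the updateUser mutation
const userError = (response.errors || []).find(function (err) {
  return err.path && err.path[0] === 'updateUser';
});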
Returning errors as part of the data is an equally valid option. There are other benefits to this approach too:
Errors sent like this can include additional meta data (a "code" property, information about specific input fields that may have generated the error, etc.). While this same information can be sent through the errors array, making it part of your schema means that clients will be aware of the structure of these error objects. This is particularly important for clients written in typed languages where client code is often generated from the schema.
Returning client errors this way lets you draw a clear distinction between user errors that should be made visible to the user (wrong credentials, user already exists, etc.) and something actually going wrong with either the client or server code (in which case, at best, we show some generic messaging).
Creating a "payload" object like this lets you append additional fields in the future without breaking your schema.
A third alternative is to utilize unions in a similar fashion:
type Mutation {
updateUser(id: ID!, newEmail: String!): UpdateUserPayload!
}
union UpdateUserPayload = User | Error
This enables clients to use fragments and the __typename field to distinguish between successful and failed mutations:
mutation($userId: ID!, $newEmail: String!) {
updateUser(id: $userId, newEmail: $newEmail) {
__typename
... on User {
id
email
}
... on Error {
message
code
}
}
}
You can even create specific types for each kind of error, allowing you to omit any sort of "code" field:
union UpdateUserPayload = User | EmailExistsError | EmailInvalidError
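On the server side, the executor then needs to know which member of the union a given result is; with Apollo/graphql-tools this is done via a __resolveType function. A sketch (distinguishing members by shape and by a code field are assumptions; a real implementation might tag results explicitly):
const resolvers = {
  UpdateUserPayload: {
    __resolveType(obj) {
      // a User result carries an email; the error types are told apart by a hypothetical code field
      if (obj.email !== undefined) return 'User';
      return obj.code === 'EMAIL_EXISTS' ? 'EmailExistsError' : 'EmailInvalidError';
    },
  },
};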
There's no right or wrong answer here. While there are advantages to each approach, which one you take ultimately comes down to preference.

How do I use JSONSchema to accept any object string value, regardless of its key?

I have a system that is receiving JSON messages which contain metadata from a static analysis of a file. The names of these fields are dynamically generated from the scan and can be any valid string, but the value is always a valid string.
e.g.
{
"filename": "hello.txt",
...
"meta": {
"some file property": "any string",
"some other file property": "another string",
...
}
}
I have no way of knowing what the keys in meta will be before receiving the message, nor do I know how many keys there will be. Is there a way of capturing in a JSONSchema that it doesn't matter what keys are present, so long as their values are always strings?
I think you're looking for additionalProperties
Validation with "additionalProperties" applies only to the child values of instance names that do not match any names in "properties", and do not match any regular expression in "patternProperties".
The value of additionalProperties can be a JSON Schema, like so
...
"additionalProperties" : {
"type": "string"
}
...
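Applied to the meta object from the question, a complete schema could look like this (a sketch; the filename property is carried over from your example):
{
  "type": "object",
  "properties": {
    "filename": { "type": "string" },
    "meta": {
      "type": "object",
      "additionalProperties": { "type": "string" }
    }
  }
}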
Feel free to let me know if I've missed anything in my explanation, or ask any further questions.

Design pattern - update join table through REST API

I'm struggling with a REST API design concept. I have these classes:
user:
- first_name
- last_name
metadata_fields:
- field_name
user_metadata:
- user_id
- field_id
- value
- unique index on [user_id, field_id]
Ok, so users have many metadata and the type of metadata is defined in metadata_fields. Typical HABTM with extra data in the join table.
If I were to update user_metadata through a Rails form, the data would look like this:
user_metadata: {
id: 1,
user_id: 2,
field_id: 3,
value: 'foo'
}
If I posted to the user#update controller, the data would look like this:
user: {
user_metadata: {
id: 1,
field_id: 3,
value: 'foo'
}
}
The trouble with this approach is that we're ignoring the uniqueness of the user_id/field_id relationship. If I change the field_id in either update, I'm not just changing data, I'm changing the meaning of that data. This tends to work fine in Rails because it's somewhat of a walled garden, but it breaks down when you open up an API endpoint.
If I allow this:
PATCH /api/user_metadata
Then I'm opening myself up to someone modifying the user_id or field_id or both. Similarly with this:
PATCH /api/user/:user_id/metadata
Now user_id is set but field_id can still change. So really the only way to solve this is to limit the update to a single field:
PATCH /api/user/:user_id/metadata/:field_id
Or a bulk update:
PATCH /api/user/:user_id/metadata
But with that call, we have to modify the data structure so that the uniqueness of the user_id/field_id relationship is intact:
user_metadata: {
field_id1: 'value1',
field_id2: 'value2',
...
}
I'd love to hear thoughts here. I've scoured Google and found absolutely nothing. Any recommendations?
As metadata belongs to a certain user, /api/user/{userId}/metadata/{metadataId} is probably the cleanest URI for a single metadata resource of a user. The URI of your resource is already the unique key you are looking for: there can't be two resources with the same URI! Furthermore, the URI already contains the user and field IDs.
A request like GET /api/user/1 HTTP/1.1 could return a HAL-like representation like the one below:
{
"user" : {
"id": "1",
"firstName": "Max",
"lastName": "Sample",
...
"_links": {
"self" : {
"href": "/api/user/1"
}
},
"_embedded": {
"metadata" : {
"fields" : [{
"id": "1",
"type": "string",
"value": "foo",
"_links": {
"self": {
"href": "/api/user/1/metadata/1"
}
}
}, {
"id": "2",
"type": "string",
"value": "bar",
"_links": {
"self": {
"href": "/api/user/1/metadata/2"
}
}
}],
"_links": {
"self": {
"href": "/api/user/1/metadata"
}
}
}
}
}
}
Of course you could send a PUT or a PATCH request to modify an existing metadata field. Though, the URI of the resource will still be the same (unless you move or delete a resource within a PATCH request).
You also have the option to ignore certain fields on incoming PUT requests, which prevents modification of fields like id or _links. I assume the same holds for PATCH requests, though I would have to re-read the spec to confirm.
Therefore, I'd suggest ignoring any id or _links fields contained in requests and updating only the remaining fields. Alternatively, you can return a 403 Forbidden or 409 Conflict response if someone tries to update an ID field.
UPDATE
If you want to update multiple fields within a single request, you have two options:
Using PUT to replace the current set of fields with the new version
Using PATCH to send the server the necessary steps to transform the current field set into the new one
Example PUT:
PUT /api/user/1/metadata HTTP/1.1
{
"metadata": {
"fields": [{
"type": "string",
"value": "newFoo"
}, {
"type": "string",
"value": "newBar"
}]
}
}
This request would first delete every stored metadata field of the user and afterwards create a new resource for each field contained in the request. While this still guarantees unique URIs, there are a couple of drawbacks to this approach:
all the data which should be available after the update, even fields that do not change, needs to be transmitted
clients holding a URI to a certain resource may end up pointing at a wrong representation. E.g., client 1 retrieved /user/1/metadata/2 right before another client updated all the metadata; if IDs are assigned via auto-increment and the update introduced a new second item, the former entry 2 has moved to position 3. Client 1 now holds a reference to /user/1/metadata/2 while the data it meant lives at /user/1/metadata/3. To prevent this, unique UUIDs could be used instead of auto-increment IDs. If client 1 later tries to retrieve or update the former resource 2, it can be notified that the resource is no longer available, or even redirected to its new location.
Example PATCH:
A PATCH request contains the necessary steps to transform the state of a resource to the new state. The request itself can affect multiple resources at the same time and even create or delete other resources as needed.
The following example is in json-patch+json format:
PATCH /api/user/1/metadata HTTP/1.1
[
{
"op": "add",
"path": "/0/value",
"value": "newFoo"
},
{
"op": "add",
"path": "/2",
"value": { "type": "string", "value": "totally new entry" }
},
{
"op": "remove",
"path": "/1"
}
]
The path is defined as a JSON Pointer for the invoked resource.
The add operation of the JSON-Patch type is defined as:
If the target location specifies an array index, a new value is inserted into the array at the specified index.
If the target location specifies an object member that does not already exist, a new member is added to the object.
If the target location specifies an object member that does exist, that member's value is replaced.
For the removal case however, the spec states:
If removing an element from an array, any elements above the specified index are shifted one position to the left.
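To illustrate, here is a hedged walk-through assuming the user starts out with two stored fields; the patch above then transforms the array step by step:
Before the patch:
[ { "type": "string", "value": "foo" }, { "type": "string", "value": "bar" } ]
After "add" to /0/value (the member exists, so its value is replaced):
[ { "type": "string", "value": "newFoo" }, { "type": "string", "value": "bar" } ]
After "add" to /2 (a new value is inserted at index 2):
[ { "type": "string", "value": "newFoo" }, { "type": "string", "value": "bar" }, { "type": "string", "value": "totally new entry" } ]
After "remove" of /1 (elements above index 1 shift one position left):
[ { "type": "string", "value": "newFoo" }, { "type": "string", "value": "totally new entry" } ]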
Therefore the newly added entry would end up at position 2 in the array. As long as auto-increment values are not used for the ID, this should not be a big problem though.
Besides add and remove, the spec also contains definitions for replace, move, copy and test.
The PATCH should be transactional - either all operations succeed or none. The spec states:
If a normative requirement is violated by a JSON Patch document, or if an operation is not successful, evaluation of the JSON Patch document SHOULD terminate and application of the entire patch document SHALL NOT be deemed successful.
I interpret these lines as follows: if a patch tries to update a field which it is not supposed to update, you should return an error for the whole PATCH request and not alter any resources.
Drawbacks of the PATCH approach are clearly the transactional requirement as well as the JSON Pointer notation, which might not be that popular (at least I haven't used it often and had to look it up again). As with PUT, PATCH allows adding new resources in between existing ones and shifting the following ones to the right, which may lead to issues if you rely on auto-increment values.
Therefore, I strongly recommend using randomly generated UUIDs as identifiers rather than auto-increment values.

IBM Worklight - How to construct a JSON object in SQL adapter

To construct a JSON object in a SQL adapter I have tried the following:
{
'PatientID':4,
'FName':'test',
'LName':'test',
'AGE':1,
'DOB':1988-09-01,
'GENDER':'m',
'BG':'A+'
}
However I get an error:
{
"errors": [
"Runtime: Method createSQLStatement was called inside a JavaScript function."
],
"info": [
],
"isSuccessful": false,
"warnings": [
]
}
First, in the "Invoke Procedure Data" window for your adapter, don't wrap the object in quotes. If you do, it will think that the entire thing is a string.
If you remove the beginning and ending quotes then you almost have it correct. The window will take valid JSON objects, but only if all non-integers are strings. Since 1988-09-01 is not a valid integer, it must be wrapped in quotes. You should be able to copy/paste this object into the wizard:
{
'PatientID':4,
'FName':'test',
'LName':'test',
'AGE':1,
'DOB':"1988-09-01",
'GENDER':'m',
'BG':'A+'
}
The createSQLStatement API should not be used inside of your functions. Use it outside of functions, just like the tutorial shows (slide 10): http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v600/04_03_SQL_adapter_-_Communicating_with_SQL_database.pdf
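For reference, a minimal adapter sketch (the table and column names are made up for illustration):
// declared at the top level of the adapter JS file, outside any function
var insertPatientStatement = WL.Server.createSQLStatement(
  "INSERT INTO patients (PatientID, FName, LName, AGE, DOB, GENDER, BG) VALUES (?, ?, ?, ?, ?, ?, ?)");

function insertPatient(patient) {
  // the prepared statement is only referenced here, never created inside the function
  return WL.Server.invokeSQLStatement({
    preparedStatement : insertPatientStatement,
    parameters : [patient.PatientID, patient.FName, patient.LName, patient.AGE, patient.DOB, patient.GENDER, patient.BG]
  });
}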