I have made a demo application for Pact contract testing. Below is the link I referred to; I have changed a few things from it, such as the pattern matcher and body type.
https://www.javacodegeeks.com/2017/03/consumer-driven-testing-pact-spring-boot.html
I am able to publish a pact from the consumer and verify it from the provider side.
I have been asked to verify the pact from the consumer end as well.
E.g. the consumer posts the following JSON to the provider to create a new user:
{
    "address": {
        "city": "string",
        "houseNumber": 0,
        "postalCode": "string",
        "street": "string"
    },
    "name": "string",
    "registrationId": 0,
    "surname": "string"
}
But now the consumer changes its model classes (as it is also a provider for some other service, it might be asked to change the contract). The following is the new request JSON that will be generated:
{
    "address": {
        "city": "string",
        "houseNumber": 0,
        "postalCode": "string",
        "street": "string"
    },
    "firstname": "string",
    "registrationId": 0,
    "surname": "string"
}
Since the request object has changed, verifying the consumer against the pact should fail.
Problem: when I run mvn verify from the consumer, it always passes. I want it to fail.
P.S. Let me know if this is not the correct way of doing it.
The consumer test is analogous to a unit test. It will always pass if your code does what you expect it to in the test; it isn't dependent on prior state (such as a previously generated contract).
The part of the process where you would check for a breaking change is in CI, with the can-i-deploy tool (https://docs.pact.io/pact_broker/can_i_deploy).
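For example, a minimal can-i-deploy check in a CI step could look like this (the pacticipant name, version, and broker URL are placeholders for your own setup):

pact-broker can-i-deploy \
  --pacticipant rest-consumer \
  --version 1.0.2 \
  --to prod \
  --broker-base-url https://your-broker.example.com

The command exits non-zero (and so fails the build) when there is no successful verification result for that consumer version against what is currently deployed.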
Imagine there are two separate apps: a producer and a consumer.
The code of the producer:
import os
from confluent_kafka import avro
from confluent_kafka.avro import AvroProducer

# Load the Avro schema that sits next to this script
avsc_dir = os.path.dirname(os.path.realpath(__file__))
value_schema = avro.load(os.path.join(avsc_dir, "basic_schema.avsc"))

config = {'bootstrap.servers': 'localhost:9092', 'schema.registry.url': 'http://0.0.0.0:8081'}
producer = AvroProducer(config=config, default_value_schema=value_schema)
producer.produce(topic='testavro', value={'first_name': 'Andrey', 'last_name': 'Volkonsky'})
producer.flush()  # deliver the buffered message before the script exits
The basic_schema.avsc file is located within the producer app. Its content:
{
    "name": "basic",
    "type": "record",
    "doc": "basic schema for tests",
    "namespace": "python.test.basic",
    "fields": [
        {
            "name": "first_name",
            "doc": "first name",
            "type": "string"
        },
        {
            "name": "last_name",
            "doc": "last name",
            "type": "string"
        }
    ]
}
For now it does not matter what's inside the consumer.
We run the producer once and everything is OK. Then I want to add an age field:
basic_schema.avsc:
{
    "name": "basic",
    "type": "record",
    "doc": "basic schema for tests",
    "namespace": "python.test.basic",
    "fields": [
        {
            "name": "first_name",
            "doc": "first name",
            "type": "string"
        },
        {
            "name": "last_name",
            "doc": "last name",
            "type": "string"
        },
        {
            "name": "age",
            "doc": "age",
            "type": "int"
        }
    ]
}
Here I got an error:
confluent_kafka.avro.error.ClientError: Incompatible Avro schema:409
They say here (https://docs.confluent.io/platform/current/schema-registry/avro.html#summary) that for compatibility type == BACKWARD, consumers should be updated first.
I cannot understand this technically. I mean, do I have to copy the basic_schema.avsc file to the consumer and run it?
If you registered the schema with BACKWARD compatibility (the default), Confluent Schema Registry simply won't allow you to make an incompatible change such as adding a mandatory field.
You can add an optional field, or use FORWARD compatibility.
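For example, a backward-compatible way to add the new field is to make it optional with a default (a sketch; the union with null is one common way to do it):

{
    "name": "age",
    "doc": "age",
    "type": ["null", "int"],
    "default": null
}

With the default in place, old records that lack age can still be read with the new schema, which is exactly what the BACKWARD check requires.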
The rule about which side should be upgraded first holds regardless of which changes the compatibility setting actually allows you to make.
edit - additional info
Don't use FORWARD compatibility simply because you might need to add mandatory fields. Use the compatibility setting that makes sense for your case based on who can update first; e.g. it may be impossible to make all producers upgrade at the same time.
So if you are using BACKWARD compatibility AND need to add a mandatory field, you probably need a new version of the service, e.g. topic.v1 and topic.v2, where v2 of the service uses the schema with the new mandatory field and you can deprecate the v1 service. A sketch of this follows below.
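Roughly, reusing the producer code above (the testavro.v2 topic and basic_schema_v2.avsc names are hypothetical): the registry keys its compatibility check by subject, which defaults to <topic>-value, so registering the new schema under a new topic avoids the 409:

# Hypothetical sketch: the incompatible schema is registered under a new
# subject (testavro.v2-value), so the BACKWARD check against testavro-value never runs.
value_schema_v2 = avro.load(os.path.join(avsc_dir, "basic_schema_v2.avsc"))
producer_v2 = AvroProducer(config=config, default_value_schema=value_schema_v2)
producer_v2.produce(topic='testavro.v2',
                    value={'first_name': 'Andrey', 'last_name': 'Volkonsky', 'age': 30})
producer_v2.flush()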
Is there a way to automate user creation on GridGain Web Console's docker container deployment?
Our test stand deployment is fully automated, and we'd like to deploy the Web Agent automatically as well; copying the token and starting the Agent's container manually every time is not very convenient in our case.
There are several options:
Create a Web Console user with HTTP REST API, grab their token and pass it to the Agent.
Generate your own token (a UUID), pass it to Agent, create a Web Console user with API calls and set their token.
Please keep in mind that the Web Console HTTP API is considered private. It has been stable for a while, especially the user-related parts, so I wouldn't expect any changes soon. Use it at your own discretion.
Before sending any requests, make sure you use a cookie jar. Send a GET to "/api/v1/user" to initialize a session. The host is the same as the Web Console's, but you can also send requests to the backend directly; CORS might be an issue.
In general, you can open the browser's network inspector, perform the actions manually, note which requests are made, and replay the same requests with a tool of your choice, like curl. Some communication is handled over a WebSocket connection, but not user management.
Endpoints you are interested in:
POST "/api/v1/user". Creates a user. Example payload:
{
    "email": "user@example",
    "password": "1",
    "firstName": "User",
    "lastName": "Name",
    "phone": "+790000000",
    "country": "Russia",
    "company": "GridGain",
    "industry": "Software"
}
POST "/api/v1/profile/save". Edits user. Example payload:
{
    "firstName": "User",
    "lastName": "Name",
    "email": "test@example",
    "phone": null,
    "country": "Russia",
    "company": "GridGain",
    "industry": "Other",
    "permitEmailContact": false,
    "permitPhoneContact": false,
    "token": "fcf99d68-5a4c-4a43-8abc-1f93e19af26a"
}
GET "/api/v1/user". Gets a user. Example payload:
{
    "email": "test@example",
    "firstName": "User",
    "lastName": "name",
    "phone": null,
    "company": "GridGain",
    "country": "Russia",
    "admin": false,
    "becomeUsed": false,
    "industry": "Other",
    "permitEmailContact": false,
    "permitPhoneContact": false,
    "token": "fcf99d68-5a4c-4a43-8abc-1f93e19af26a",
    "lastEvent": 0
}
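Putting it together, here is a minimal sketch with Python requests (the host is a placeholder and the payload values come from the examples above; requests.Session acts as the cookie jar):

import requests

BASE = "https://web-console.example.com"  # placeholder Web Console host

session = requests.Session()  # keeps the session cookie between calls

# Initialize a session first
session.get(f"{BASE}/api/v1/user")

# Create the Web Console user
session.post(f"{BASE}/api/v1/user", json={
    "email": "user@example",
    "password": "1",
    "firstName": "User",
    "lastName": "Name",
    "phone": "+790000000",
    "country": "Russia",
    "company": "GridGain",
    "industry": "Software",
})

# Option 2 from above: set your own pre-generated token via profile/save
session.post(f"{BASE}/api/v1/profile/save", json={
    "firstName": "User",
    "lastName": "Name",
    "email": "user@example",
    "phone": None,
    "country": "Russia",
    "company": "GridGain",
    "industry": "Software",
    "permitEmailContact": False,
    "permitPhoneContact": False,
    "token": "fcf99d68-5a4c-4a43-8abc-1f93e19af26a",  # your own UUID
})

# Read the user back; the token to pass to the Agent is in the response
print(session.get(f"{BASE}/api/v1/user").json()["token"])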
I'm running Mirth 3.6.1 with CKAN 2.8, and being a newbie to this I've run into an issue: is there a way to access resources in private datasets in CKAN through API requests? I can't seem to do it.
I have an organization with a public dataset, and I can go through Mirth via the API router to the correct Mirth channel and get the data from CKAN, like normal, with an API request. But if I make the dataset private, it all falls apart, even though I use the correct API key. The key doesn't seem to make a difference: I get "success": true regardless of whether I include it or not (or whether it's even the correct key).
The API key included in the request is that of the sysadmin.
When I directly access the CKAN resource through a CKAN endpoint and the dataset is public, I get this response:
{
    "help": "https://URL/api/3/action/help_show?name=resource_search", (URL instead of real url)
    "success": true,
    "result": {
        "count": 1,
        "results": [
            {
                "mimetype": null,
                "cache_url": null,
                "state": "active",
                "hash": "REDACTED__", (sensitive data)
                "description": "",
                "format": "",
                "url": "https://URL/datastore/dump/0696c0a1-b249-4fd5-ba80-caf7046a650b", (URL instead of real url)
                "datastore_active": true,
                "created": "2019-03-19T00:30:04.313593",
                "cache_last_updated": null,
                "package_id": "11211598-34f8-4d67-ab34-b7fd590ae08d",
                "mimetype_inner": null,
                "last_modified": null,
                "position": 1,
                "revision_id": "17b85d36-4ec1-4645-b9b1-dcfe310a54e6",
                "size": null,
                "url_type": "datastore",
                "id": "0696c0a1-b249-4fd5-ba80-caf7046a650b",
                "resource_type": null,
                "name": "REDACTED" (sensitive data)
            }
        ]
    }
}
When the dataset is private, I get this response regardless of whether I include the API key or not (or whether it's even the real key):
{
    "help": "https://URL/api/3/action/help_show?name=resource_search",
    "success": true,
    "result": {
        "count": 0,
        "results": []
    }
}
So, how can I do a resource_search for a resource in a private dataset?
Thanks in advance.
Yes, you can do that by passing include_private: True in the dataset search.
Please see the link below:
https://docs.ckan.org/en/2.8/api/index.html#ckan.logic.action.get.package_search
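For example, a minimal sketch with Python requests (the host and dataset name are placeholders; include_private is a package_search parameter, so you search for the private dataset via package_search rather than resource_search, and the API key must belong to a user allowed to see the dataset):

import requests

CKAN = "https://URL"          # placeholder, as in the question
API_KEY = "your-sysadmin-api-key"

resp = requests.get(
    f"{CKAN}/api/3/action/package_search",
    params={"q": "name:my-private-dataset", "include_private": True},
    headers={"Authorization": API_KEY},  # CKAN reads the API key from this header
)
result = resp.json()["result"]
print(result["count"], [r["id"] for pkg in result["results"] for r in pkg["resources"]])

Each matching dataset in the package_search results embeds its resources, so you can read their IDs and URLs from there.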
I'm trying to create an event using the API console and keep getting errors. Any ideas why?
I've been using different versions of the example value:
{
    "name": "string",
    "description": "string",
    "status": "string",
    "event_id": "string",
    "start_epoch": 0,
    "end_epoch": 0,
    "industry": "string",
    "archived": true,
    "deleted": true,
    "legacy_id": 0,
    "is_public": true
}
I get the following back. Any thoughts?
{
    "code": "BadRequestError",
    "message": "[\"Has time can't be blank\",\"true is not included in the list\"]"
}
You will need to fetch the user/team information first.
Once you have your OAuth token from above and have set it in the Authorization header, make a call to https://developer-portal.socialtables.com/api-console#!/Authentication/get_4_0_oauth_token
This will give you back the user and team objects needed to make subsequent calls to create events.
Once you have the team_id, you can create events.
You can POST to /4.0/events.
Swagger doc: https://developer-portal.socialtables.com/api-console#!/Events/post_4_0_events
Example POST payload:
{
    "name": "NAME",
    "description": "DESCRIPTION",
    "status": "new",
    "start_epoch": TIME_IN_MS,
    "end_epoch": TIME_IN_MS,
    "industry": "INDUSTRY_TYPE",
    "has_time": 1
}

(has_time: 0 = all-day event, 1 = from/to a specific time in the day)
- This will return the event ID under data.event.id in the response from the above POST
- You can then link the user to:
https://home.socialtables.com/events/EVENT_ID
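A rough sketch of that POST with Python requests (the base URL, auth scheme, and field values are placeholders; the payload mirrors the example above, including the has_time field whose absence triggers the "Has time can't be blank" error):

import requests

BASE = "https://api.socialtables.com"  # placeholder; use the host shown in the API console
TOKEN = "YOUR_OAUTH_TOKEN"

resp = requests.post(
    f"{BASE}/4.0/events",
    headers={"Authorization": f"Bearer {TOKEN}"},  # scheme may differ; check the console
    json={
        "name": "NAME",
        "description": "DESCRIPTION",
        "status": "new",
        "start_epoch": 1735732800000,  # time in ms
        "end_epoch": 1735740000000,    # time in ms
        "industry": "INDUSTRY_TYPE",
        "has_time": 1,                 # 0 = all-day event, 1 = specific time of day
    },
)
print(resp.json()["data"]["event"]["id"])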
In order to receive Azure IotHub Device Twin change notifications, it appears that it's necessary to create a custom endpoint and create a route to send notifications to that endpoint. This seems straightforward enough on the Azure Portal, but as one might expect we want to automate it.
I haven't been able to find any documentation for the az CLI or even the REST API, though I might have missed something. I didn't find anything promising in the SDKs either.
How do I automate adding a custom endpoint and then setting up the route for device twin notifications?
You can check the IotHubs ARM template to see if it helps.
Route:
"routing": {
"endpoints": {
"serviceBusQueues": [
{
"connectionString": "string",
"name": "string",
"subscriptionId": "string",
"resourceGroup": "string"
}
]
},
"routes": [
{
"name": "string",
"source": "string",
"condition": "string",
"endpointNames": [
"string"
],
"isEnabled": boolean
}
],
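For device twin change notifications specifically, a filled-in route could look like this sketch (the endpoint and route names are placeholders; "TwinChangeEvents" is the source that carries twin change notifications):

"routes": [
    {
        "name": "TwinChanges",
        "source": "TwinChangeEvents",
        "condition": "true",
        "endpointNames": [
            "my-servicebus-queue"
        ],
        "isEnabled": true
    }
]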
Consumer group:
{
"apiVersion": "2016-02-03",
"type": "Microsoft.Devices/IotHubs/eventhubEndpoints/ConsumerGroups",
"name": "[concat(parameters('hubName'), '/events/cg1')]",
"dependsOn": [
"[concat('Microsoft.Devices/Iothubs/', parameters('hubName'))]"
]
},
For more detailed information you can reference:
Microsoft.Devices/IotHubs template reference
Create an IoT hub using Azure Resource Manager template (PowerShell)