How to Consume a Kafka Topic from the ZeroCode Framework

I want to consume a Kafka topic via the ZeroCode framework. I can consume from my localhost Kafka server in a ZeroCode scenario, and I can also consume the topic from my actual remote Kafka server using kafka-consumer.bat on the command line, but I cannot consume the same topic from ZeroCode.
Do I need to add any special configuration?
{
    "name": "Consume Message From doob-ship-topic",
    "url": "kafka-topic:my-sample-topic",
    "operation": "consume",
    "request": {
        "consumerLocalConfigs": {
            "recordType": "JSON",
            "commitSync": false,
            "showRecordsConsumed": true,
            "maxNoOfRetryPollsOrTimeouts": 3
        }
    },
    "assertions": {
        "size": 1,
        "records": [
            {
                "value": {
                    "key": "99930000000000260001"
                }
            }
        ]
    }
}
I tried different configurations, but I couldn't get it to work.

I solved it:
"consumerLocalConfigs": {
"recordType": "RAW",
"commitSync": true,
"showRecordsConsumed": true,
"maxNoOfRetryPollsOrTimeouts": 2,
"pollingTime": 1595 // I added this.
}
Adding pollingTime solved my problem.
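For anyone hitting the remote-broker case, it is also worth checking where ZeroCode picks up the broker address: in the usual ZeroCode Kafka setup the scenario file does not contain the broker address at all, it comes from the host properties file referenced by @TargetEnv on the test class. A minimal sketch of that file, assuming the layout used in the ZeroCode examples (host and port are illustrative):

# kafka_servers/kafka_test_server.properties
# Point this at the remote broker instead of localhost
kafka.bootstrap.servers=remote-kafka.example.com:9092
kafka.producer.properties=kafka_servers/kafka_producer.properties
kafka.consumer.properties=kafka_servers/kafka_consumer.properties

With the broker reachable, a longer pollingTime then simply gives the remote cluster enough time to return records within each poll.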

Related

Can you set an Application's Client Secret using a kickstart file? (FusionAuth)

I am using a kickstart.json file to set up FusionAuth in developer environments. Everything is automated, except that I still need to manually go and get the client secret from the FusionAuth instance.
Is there any way I can predefine the client secret in the kickstart file so I can pre-configure it in my app?
You should absolutely be able to set the client secret from kickstart.json. Any API call should work from within Kickstart.
https://fusionauth.io/docs/v1/tech/apis/applications#create-an-application indicates you can POST an application including the client secret.
So a kickstart file like this should work:
{
    "variables": {
        "defaultTenantId": "30663132-6464-6665-3032-326466613934"
    },
    "apiKeys": [
        {
            "key": "mykey",
            "description": "API key"
        }
    ],
    "requests": [
        {
            "method": "POST",
            "url": "/api/application/85a03867-dccf-4882-adde-1a79aeec50df",
            "body": {
                "application": {
                    "name": "Pied Piper",
                    "roles": [
                        {
                            "name": "dev"
                        },
                        {
                            "name": "ceo"
                        },
                        {
                            "name": "intern"
                        }
                    ],
                    "oauthConfiguration": {
                        "clientSecret": "shhh-your-desired-secret"
                    }
                }
            }
        }
    ]
}
I haven't tested that, but I don't see any reason why it would not work. (Note that 1.37, the most recent version, has an issue with Kickstart, as documented at https://github.com/FusionAuth/fusionauth-issues/issues/1816, but that should be fixed soon.)
If this doesn't work for you, please share the error message and a scrubbed kickstart file.
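As a follow-up, Kickstart also supports variable substitution with the #{name} syntax, so the secret can be factored out of the request body. A minimal sketch under that assumption (the variable name and value are illustrative; apiKeys omitted for brevity):

{
    "variables": {
        "piedPiperClientSecret": "shhh-your-desired-secret"
    },
    "requests": [
        {
            "method": "POST",
            "url": "/api/application/85a03867-dccf-4882-adde-1a79aeec50df",
            "body": {
                "application": {
                    "name": "Pied Piper",
                    "oauthConfiguration": {
                        "clientSecret": "#{piedPiperClientSecret}"
                    }
                }
            }
        }
    ]
}

This keeps the environment-specific values in one place at the top of the kickstart file.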

Chaostoolkit istio extension hangs when playing experiment

I'm trying to use the Chaos Toolkit Istio extension, and my problem is as follows:
I have an experiment.json file which contains a single probe to retrieve a virtual service. The file looks similar to the following:
{
    "version": "1.0.0",
    "title": "test",
    "description": "N/A",
    "tags": [],
    "secrets": {
        "istio": {
            "KUBERNETES_CONTEXT": {
                "type": "env",
                "key": "KUBERNETES_CONTEXT"
            }
        }
    },
    "method": [
        {
            "type": "probe",
            "name": "get_virtual_service",
            "provider": {
                "type": "python",
                "module": "chaosistio.fault.probes",
                "func": "get_virtual_service",
                "arguments": {
                    "virtual_service_name": "test",
                    "ns": "test-ns"
                }
            }
        }
    ]
}
I have set KUBERNETES_CONTEXT and the http/https proxies as env vars. My authorisation uses $HOME/.kube/config.
When playing the experiment, it validates the file fine and tries to perform the probe, but it becomes stuck and just hangs until it times out.
The error I see in the logs is an HTTPSConnectionPool error (failed to establish a new connection, operation timed out).
Am I missing any settings? All help appreciated.
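For reference, a sketch of how the environment described above would be wired together before running the experiment (the proxy address and API host are illustrative, not from the original setup):

# Context name must match a context in $HOME/.kube/config
export KUBERNETES_CONTEXT="$(kubectl config current-context)"
# Illustrative proxy settings; if the Kubernetes API endpoint should not be
# reached through the proxy, it has to be listed in NO_PROXY as well
export HTTPS_PROXY="http://proxy.example.com:3128"
export NO_PROXY="127.0.0.1,localhost,your-k8s-api-host"
chaos run experiment.json

Since the HTTPSConnectionPool timeout points at the connection attempt itself, the proxy/NO_PROXY combination is a natural first thing to rule out.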

Connect Node-RED to a RabbitMQ Priority Queue?

I have a RabbitMQ queue that is declared with the following options:
{
    "queue": "events/online",
    "durable": true,
    "args": {
        "x-max-priority": 10
    }
}
I am trying to connect to the queue from Node-RED, using the node-red-contrib-amqp plugin, with the following topology set under the connection source:
{
    "queues": [
        {
            "name": "events/online",
            "durable": true,
            "options": {
                "x-max-priority": 10
            }
        }
    ]
}
I am getting the following error:
"AMQP input node disconnect error: Operation failed: QueueDeclare; 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - inequivalent arg 'x-max-priority' for queue 'myqueue' in vhost 'vhost': received none but current is the value '10' of type 'signedint'""
It turns out the answer is as follows.
Make sure the checkbox "use AMQP topology definition (JSON) defined below" is selected, and declare the priority like this:
{
    "queues": [
        {
            "name": "events/online",
            "durable": true,
            "options": {
                "maxPriority": 10
            }
        }
    ]
}

Declaring service UI in DCOS results in broken links

I need a mini-HDFS service that can run on a single agent, so I started building one in a Docker container. I then deployed it to DCOS. The NameNode UI comes up, but unstyled. It turns out that the references inside the UI were not prefixed.
My service is at http://m1.dcos/service/small-hdfs/dfshealth.html
The browser generates requests such as http://m1.dcos/static/bootstrap-3.0.2/css/bootstrap.min.css
Instead of http://m1.dcos/service/small-hdfs/static/bootstrap-3.0.2/css/bootstrap.min.css
This is my marathon.json - very basic for now - I'll expose the volumes after I get it basically working:
How do I fix this? If I can pass the prefix into the container, I may be able to configure a Hadoop property with the prefix, but I'm not sure whether that is possible. I also did not see any documented way of passing this prefix.
{
    "id": "small-hdfs",
    "cmd": "/root/docker_entrypoint.sh",
    "cpus": 1.5,
    "mem": 4096.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "slowenthal/small-hdfs",
            "network": "BRIDGE",
            "portMappings": [
                { "containerPort": 9000, "hostPort": 0, "protocol": "tcp" },
                { "containerPort": 50070, "hostPort": 0, "protocol": "tcp" }
            ]
        }
    },
    "labels": {
        "DCOS_SERVICE_NAME": "small-hdfs",
        "DCOS_SERVICE_PORT_INDEX": "1",
        "DCOS_SERVICE_SCHEME": "http"
    }
}
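On the "pass the prefix into the container" idea: Marathon app definitions do support an env block, so the prefix can at least be injected and read by the container's entrypoint; whether the Hadoop UI can be made to honor it is the open question. A sketch of just the relevant addition (the variable name is illustrative):

{
    "id": "small-hdfs",
    "cmd": "/root/docker_entrypoint.sh",
    "env": {
        "SERVICE_PREFIX": "/service/small-hdfs"
    }
}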

Cannot create Virtual IP using the SoftLayer API

When using the API to place an order for a VIP, it fails with a non-intuitive error message. Please see the following REST API call JSON and the ensuing error:
JSON:
{
    "loadBalancer": {
        "name": "lbName_TEST",
        "id": 123,
        "type": "HTTP",
        "sourcePort": 80,
        "virtualIpAddress": "123.123.123.123"
    }
}
REST API URL used:
https://user.name:longid4235234532@api.softlayer.com/rest/v3/SoftLayer_Network_Application_Delivery_Controller/15293/createLiveLoadBalancer.json
{
    "error": "Invalid port supplied.",
    "code": "SoftLayer_Exception_Public"
}
The question that arises is this: we are trying to script the ordering of a VPX NetScaler and the addition of all related configuration to the created VPX. Are we doing something out of order?
Regarding creating the load balancer: the JSON is wrong; try this JSON instead:
{
    "parameters": [
        {
            "name": "lbName_TEST",
            "type": "HTTP",
            "sourcePort": 80,
            "virtualIpAddress": "123.123.123.123",
            "loadBalancingMethod": "pi"
        }
    ]
}
To order a device or service, you need to use the SoftLayer_Product_Order::placeOrder method. Here is an example of ordering a NetScaler:
URL: https://api.softlayer.com/rest/v3/SoftLayer_Product_Order/placeOrder
Method: POST
Payload:
{
    "parameters": [
        {
            "packageId": 192,
            "location": 265592,
            "prices": [
                {
                    "id": 22315,
                    "complexType": "SoftLayer_Product_Item_Price"
                },
                {
                    "id": 17238,
                    "complexType": "SoftLayer_Product_Item_Price"
                }
            ],
            "complexType": "SoftLayer_Container_Product_Order_Network_Application_Delivery_Controller"
        }
    ]
}
Price 22315 is for a "Citrix NetScaler VPX 10.1 10Mbps Standard" and price 17238 is for "2 Static Public IP Addresses".
To get all the prices, use the SoftLayer_Product_Package::getItems method: http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getItems
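Following the same REST URL scheme as the calls above, the items (with their prices) for the NetScaler package could be listed with a GET along these lines (a sketch; the credentials are placeholders):

GET https://user.name:apiKey@api.softlayer.com/rest/v3/SoftLayer_Product_Package/192/getItems.json

It may also be worth calling SoftLayer_Product_Order::verifyOrder with the same payload first, which validates the order without actually purchasing anything.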