Pact CDC: Cleaning up older consumer and provider versions from the Broker periodically

I have set up a Pact Broker locally and run a few iterations of the consumer and provider.
Over a few days, the list of verified pacts has grown to 100.
I am looking for options to clear contracts and verification results that are older than 15 days.
Is there a way to bulk-clean contracts? Can this be set up like a cron job in the Broker?
Can we set a retention period for contracts and verification results?

Yes, you can now do exactly that - see https://github.com/pact-foundation/pact_broker/issues/360
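For example, if you run the Broker via the official Docker image, the clean task can be scheduled with environment variables. A rough sketch (variable names and selector syntax are from the pact_broker Docker image's clean feature; double-check them against the docs for your version):

```
# Enable the periodic clean task (runs inside the Broker container)
PACT_BROKER_DATABASE_CLEAN_ENABLED=true
# Standard cron syntax, e.g. 2am daily
PACT_BROKER_DATABASE_CLEAN_CRON_SCHEDULE=0 2 * * *
# Keep the latest version plus anything newer than 15 days; everything
# else (and its verification results) becomes eligible for deletion
PACT_BROKER_DATABASE_CLEAN_KEEP_VERSION_SELECTORS=[{"latest": true}, {"max_age": 15}]
```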

Related

In Corda, how do I configure the Artemis delivery retry rate, backoff, and timeout?

I'm on a Corda V2 project (but will be migrating to V3 shortly). Per the docs:
Artemis is hidden behind a thin interface...
But I'm exploring some business cases around the queue. Specifically, are any of the following exposed for configuration? (I couldn't find anything specific in the node config, but I'm about to look at the source.) Or would I need to run my own broker and specify it with messagingServerAddress?
delivery retry rate
backoff rate
timeout, i.e. when Artemis gives up on delivering the message
Sorry, this might be a separate question, but can the internal queue be queried to see if a node has proposed tx's still waiting to be sent to a different node?
As of Corda 3, these settings are not configurable.
It is recommended not to try to interfere with these settings, as many Corda components do not have timeouts configured.
If your use case absolutely requires configuring these settings, please update the original question to explain why :)

Client queue persistence

AMQP brokers have persistence settings that allow guaranteed delivery, but that only works if the message actually reaches the broker. If there is a network failure and a subsequent client crash/reboot, messages could be lost. Is there some way in RabbitMQ, ActiveMQ, or some other messaging framework for the client (producer) to persist messages to disk, so that in the event the client crashes or is rebooted, any unsent messages will not be lost?
I have seen people run a broker locally in order to get around this issue. That seems like an unnecessary amount of work, especially if you don't have much control over the deployment of your client.
In reality you've answered your own question pretty well. Many people looking for client-side persistence turn to embedded brokers because it's actually a very good solution. Having a local broker that can store and forward gives you a lot more flexibility than a built-in persistence layer in each client: all local clients can share one broker instance, which allows you to move storage as needed in cases where you find that your stored local messages are building up due to unforeseen remote downtime.
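To make the embedded-broker idea concrete, here's a minimal Java sketch using ActiveMQ's BrokerService; the broker name, data directory, ports, and remote address are all invented for the example:

```java
import org.apache.activemq.broker.BrokerService;

public class LocalStoreAndForward {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("local-client-broker");  // example name
        broker.setPersistent(true);                   // spool messages to disk
        broker.setDataDirectory("data/local-broker"); // example path
        // Accept connections from producers on this machine only.
        broker.addConnector("tcp://localhost:61617");
        // Store and forward: relay queued messages to the remote broker
        // whenever it is reachable; "static:" keeps retrying the connection.
        broker.addNetworkConnector("static:(tcp://remote-broker.example:61616)");
        broker.start();
        // Producers now send to tcp://localhost:61617; messages survive a
        // client crash or reboot because they are persisted locally first.
        broker.waitUntilStopped();
    }
}
```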
There are of course some client implementations that do offer local storage, but finding one depends on your chosen broker/protocol, and of course on your willingness to shell out the money for support or licensing if that client happens not to be an open-source implementation. The MQTT Paho client does, I think, have a local storage option, as do some others.

How to use NServiceBus with MSMQ

I am experimenting with the new version of NServiceBus. I found the following step-by-step sample on the Particular site.
https://docs.particular.net/samples/step-by-step/
Can anyone tell me how to configure MSMQ for the transport? Here is my scenario:
The client creates a message.
The client message should be stored in MSMQ.
A server application running on the same machine subscribes to the message.
The server handler gets the message from MSMQ and processes it further, i.e. stores it in a DB or sends it to another web service.
Retry processing the message if it did not work the first time.
After 3 retries, send the message to the error queue.
How do I configure this sample to use MSMQ for my scenario?
Product name: NServiceBus.Core
Version: 6.3.4
Did you know that we have released a LearningTransport and LearningPersistence just for purposes like these? Have a look at it here.
Having said that, transport swapping should be rather seamless, so even if you have set up a small PoC using this transport/persistence, you can change it to MSMQ or other production-ready transports/persistence when you go live.
Again, as stated in the documentation page and as the name suggests, this is not for use in production.
I would recommend you walk through this:
https://docs.particular.net/tutorials/intro-to-nservicebus/
It will answer your questions, and future ones you may have.

ActiveMQ STOMP: detecting and clearing dead nondurable subscribers

I have the following situation that is affecting our ActiveMQ 5.8 broker.
Several Perl scripts on a Windows workstation connected to ActiveMQ using STOMP and subscribed (nondurably) to various topics. Then the power failed on the workstation.
Using the web console, I can see that ActiveMQ still thinks these subscribers are connected, based on the number of consumers shown and on the high temp-message-store usage. I had turned producer flow control off and set memory limits, so what I believe I am seeing is that ActiveMQ is spooling all messages to disk because it thinks the long-dead subscribers are still connected and might eventually read them. It's been 30 days, and ActiveMQ still doesn't realize that these subscribers are no longer connected.
Is there a way to configure ActiveMQ so that "undead" subscriber connections like these are eventually cleared automatically?
While the previous answer is basically correct, ActiveMQ does provide a way for STOMP transports on the broker to heart-beat connections, even if the client connects with STOMP v1.0. I blogged about this some time ago when ActiveMQ v5.6 was released; see the section on STOMP 1.0 default heartbeat configuration. Another option is to enable TCP keepAlive on the transport and tune your OS to use a shorter default check interval; the default is usually around two hours.
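For illustration, the broker-side heart-beat is set on the STOMP transport connector. A minimal embedded-broker sketch in Java (the option name is the transport.defaultHeartBeat setting discussed in that blog post; the 10-second read interval is just an example):

```java
import org.apache.activemq.broker.BrokerService;

public class HeartBeatingStompBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Enforce a broker-side default heart-beat ("read,write" in ms) so
        // that even STOMP 1.0 clients, which never negotiate heart-beats,
        // are dropped when they stop responding.
        broker.addConnector("stomp://0.0.0.0:61613?transport.defaultHeartBeat=10000,0");
        broker.start();
        broker.waitUntilStopped();
    }
}
```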
Though STOMP 1.1+ supports heart-beating, ActiveMQ currently doesn't support inactive-consumer detection for STOMP (usually achieved with wireFormat.maxInactivityDuration).
Be careful:
These values are currently not supported but are planned for a later release
ActiveMQ does support it for OpenWire, though; i.e., after the configured duration the consumer is considered DEAD!
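For comparison, here's roughly what that looks like for an OpenWire client; the URL and the 30-second duration are just examples:

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OpenWireInactivityExample {
    public static void main(String[] args) throws Exception {
        // With OpenWire, the inactivity monitor closes connections that send
        // no data (or keep-alives) within maxInactivityDuration milliseconds.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?wireFormat.maxInactivityDuration=30000");
        Connection connection = factory.createConnection();
        connection.start();
        // ... use the connection; a dead peer is detected within ~30s ...
        connection.close();
    }
}
```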

What is the best alternative way of monitoring Apache ActiveMQ other than using the JMX API?

I have tried and tested the JMX API, and it is pretty simple to use and provides a vast number of statistics required for monitoring ActiveMQ.
But the problem is, I don't want to monitor my ActiveMQ remotely, and I also don't want to use another API. To be more precise, I want to use the JMS API itself to get statistics related to various destinations and the broker itself.
Advisory messages seem to be an alternative, but they provide only a limited set of administrative messages to monitor.
Any input is highly appreciated...
There is no built-in support for this. But you can implement a JMS topic that publishes the monitoring data every few seconds. Make the messages non-persistent so that they don't pile up when there are no listeners or when listeners lose their connection.
Now you can write a client that connects to this topic, and it will receive updates.
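A rough sketch of that idea in plain JMS against ActiveMQ (the topic name, payload, and interval are invented for the example):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StatsPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Hypothetical monitoring topic; pick any name you like.
        MessageProducer producer =
                session.createProducer(session.createTopic("monitoring.stats"));
        // Non-persistent so unconsumed updates are simply dropped rather
        // than piling up in the message store.
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.setTimeToLive(10_000); // let stale samples expire
        while (true) {
            // Sample payload; in practice, serialize whatever stats you gather.
            TextMessage msg = session.createTextMessage("queueDepth=42");
            producer.send(msg);
            Thread.sleep(5_000); // publish every few seconds
        }
    }
}
```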
AMQ-2379 resulted in a broker plugin for grabbing statistics from destinations by sending a simple JMS message. Check out the docs that show how to use it here:
http://activemq.apache.org/statisticsplugin.html
The statistics plugin is available in the 5.3 release.
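Per those docs, you enable the plugin in activemq.xml and then query it by sending an empty message with a replyTo set; roughly like this (the destination name TEST.FOO is just an example):

```java
import java.util.Enumeration;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StatisticsQuery {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Ask the plugin for stats on a destination by sending an empty
        // message to ActiveMQ.Statistics.Destination.<name>; the reply
        // arrives as a MapMessage of statistics.
        Queue query = session.createQueue("ActiveMQ.Statistics.Destination.TEST.FOO");
        TemporaryQueue replyTo = session.createTemporaryQueue();
        MessageProducer producer = session.createProducer(query);
        Message request = session.createMessage();
        request.setJMSReplyTo(replyTo);
        producer.send(request);
        MapMessage reply = (MapMessage)
                session.createConsumer(replyTo).receive(5_000);
        for (Enumeration<?> e = reply.getMapNames(); e.hasMoreElements(); ) {
            String name = (String) e.nextElement();
            System.out.println(name + " = " + reply.getObject(name));
        }
        connection.close();
    }
}
```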
You can check out http://issues.apache.org/activemq/browse/AMQ-2379; it will be available in the upcoming 5.3.0 release.
There's a blog post queued up to go on http://issues.apache.org/activemq/browse/AMQ-2379 - will post it in a couple of days or so.