Puppet provider changes (rabbitmq module)

I'm working on some changes to:
https://github.com/puppetlabs/puppetlabs-rabbitmq/
I haven't worked much with RSpec or with Puppet types / providers, so it's been slow going. I haven't had much input via their ticket system or on GitHub, so I just wanted to get some advice on the design changes.
Basically, the module doesn't currently support multiple bindings with the same source / destination / vhost combo, but different routing keys.
I had a (mostly) working, backwards-compatible fix that made routing_key accept either a string or an array (https://tickets.puppetlabs.com/browse/MODULES-3679).
However, this doesn't work too well, because the existing provider expects each resource to be unique and seems to assume a predictable mapping between the title of the resource and the properties of the binding.
I had thought about making the title and name attributes different from each other, or even completely abstracting the name / title from the attributes that must be unique:
rabbitmq_binding { 'exchange1#queue1#host3-1':
  name             => 'exchange1#queue1#host3',
  destination_type => 'queue',
  routing_key      => 'routingkey1',
  ensure           => present,
}
rabbitmq_binding { 'exchange1#queue1#host3-2':
  name             => 'exchange1#queue1#host3',
  destination_type => 'queue',
  routing_key      => 'routingkey2',
  ensure           => present,
}
The actual rabbitmqctl output that self.instances collects for each vhost, in an example where two routing keys exist for the same source / destination combo (no vhost shown in this example), looks like:
foo.bar.exchange exchange axs.bar.baz.queue queue axs.bar.baz.key []
foo.bar.exchange exchange axs.bar.baz.queue queue axs.bar.baz.published.key []
I'd rather not try to encode the routing key in the title / name attribute of the resource. Is it possible to have routing_key also tied to the resource's identity (presumably by updating the exists? method as well as self.instances)? What would be the best things to check in unit tests for this (I already have an acceptance test), and how can I modify lib/puppet/provider/rabbitmq_binding/rabbitmqadmin.rb to support this?
Or do I just need to encode the routing key in the name as well? That seems really ugly and would also make backwards compatibility more difficult:
rabbitmq_binding { 'exchange1#queue1#host3#routingkey':
or
rabbitmq_binding { 'exchange1#queue1#/#routingkey':
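For concreteness, here is the kind of change I have in mind for self.instances in lib/puppet/provider/rabbitmq_binding/rabbitmqadmin.rb (rather than encoding the key in the user-facing name). This is only a sketch: the @ separator, the vhosts helper, and the exact column handling are placeholders, not the module's current API.

def self.instances
  resources = []
  vhosts.each do |vhost|
    # each line matches the tab-separated rabbitmqctl output shown above:
    # source, source_kind, destination, destination_kind, routing_key, args
    rabbitmqctl('list_bindings', '-q', '-p', vhost).split(/\n/).each do |line|
      source, _source_kind, destination, destination_kind, routing_key, _args = line.split(/\t/)
      next if source.nil? || source.empty? # skip the default-exchange binding
      resources << new(
        :ensure           => :present,
        # fold the routing key into the generated name so two bindings
        # that differ only in routing key become two distinct resources
        :name             => "#{source}@#{destination}@#{vhost}@#{routing_key}",
        :destination_type => destination_kind,
        :routing_key      => routing_key
      )
    end
  end
  resources
end

exists? would then have to compare against this composite identity as well, so prefetch / flush keep working per routing key.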

Related

Why am I getting duplicate key errors while using a ConsistentHashingPool?

According to the docs, one can specify the key to use for the consistent hashing pool by passing a simple lambda to the pool, like this:
var props = Context.DI().Props<ProcessUserSessionActor>()
    .WithRouter(new ConsistentHashingPool(10000)
        .WithHashMapping(b => ((ProcessUserSession)b).UserId.ToString())); // <==== this line
However, when the first message passes through my system, I get an error like this:
Couldn't route message with consistent hash key [882f862b-a502-4289-b1f1-fca9a9e1f3c8] due to [An item with the same key has already been added. Key: [2135871908, akka://TradeProcessingSystem/user/$a/$a/$a/$Drc]]
To get a better picture, here is all the relevant code:
var props = Context.DI().Props<ProcessUserSessionActor>()
    .WithRouter(new ConsistentHashingPool(10000)
        .WithHashMapping(b => ((ProcessUserSession)b).UserId.ToString()));

Receive<ProcessUserSession>(a =>
{
    var userSessionProcessor = Context.ActorOf(props);
    userSessionProcessor.Tell(a); // <=== breaks on this line
});
I've even tried deriving my command ProcessUserSession from IConsistentHashable, but I still get the same error. I'm guessing there is something going on under the covers that I am unaware of. My DI setup has all actors registered as Transient, so it almost seems as though the actor is somehow being created on Context.ActorOf and somehow again on userSessionProcessor.Tell, because the router should have no context for how to create the actor until the message actually hits the handler.
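For reference, my IConsistentHashable attempt looks roughly like this (simplified; the UserId property mirrors the hash-mapping lambda above):

using System;
using Akka.Routing;

public class ProcessUserSession : IConsistentHashable
{
    public Guid UserId { get; set; }

    // the router consults this key instead of the WithHashMapping lambda
    public object ConsistentHashKey => UserId.ToString();
}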
Has anyone had any experience creating ConsistentHashable actors using DI and this routing strategy who could help point out my mistake?

Apache Camel - Build both from and to endpoints dynamically

I have a Camel route which processes a message from a process queue and sends it to an upload queue.
from("activemq:queue:process" ).routeId("activemq_processqueue")
.process(exchange -> {
SomeImpl impl = new SomeImpl();
impl.process(exchange);
})
.to(ExchangePattern.InOnly, "activemq:queue:upload");
In impl.process I populate an Id and a destination server path. Now I need to define a new route which consumes messages from the upload queue, reads from a local folder (based on the Id generated in the previous route), and uploads the contents to a destination folder on an FTP server (also populated in the previous route).
So how do I design a new route where both the from and to endpoints are dynamic? It would look something like this:
from("activemq:queue:upload" )
.from("file:basePath/"+{idFromExchangeObject})
.to("ftp:"+{serverIpFromExchangeObject}+"/"+{pathFromExchangeObject});
I think there is a better alternative for your case, taking for granted that you are using a Camel version newer than 2.16 (alternatives exist for previous versions but they are more complicated and less elegant, e.g. consumerTemplate & recipientList).
You can replace the first "dynamic from" with pollEnrich, which enriches the message using a polling consumer and a simple expression to build the dynamic file endpoint. For the second part, as already mentioned, a dynamic uri in .toD will do the job. So your route would look like this:
from("activemq:queue:upload" )
.pollEnrich().simple("file:basePath/${header.idFromExchangeObject})
.aggregationStrategy(new ExampleAggregationStrategy()) // * see explanation
.timeout(2000) // the timeout is optional but recommended
.toD("ftp:${header.serverIpFromExchangeObject}/${header.pathFromExchangeObject}")
See the "Using dynamic uris" section of the content enricher documentation: http://camel.apache.org/content-enricher.html
You will need an aggregation strategy, to combine the original exchange with the resource exchange in order to make sure that the headers serverIpFromExchangeObject, pathFromExchangeObject will be included in the aggregated exchange after the enrichment. If you don't include the custom strategy then Camel will by default use the body obtained from the resource. Have a look at the ExampleAggregationStrategy example in content-enricher.html to see how this works.
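As a rough sketch (not the exact example from the docs), such a strategy could look like this: keep the polled file as the new body while preserving the original message's headers.

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ExampleAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            return original; // pollEnrich timed out without data
        }
        // take the polled file as the new body; the original headers
        // (serverIpFromExchangeObject, pathFromExchangeObject) survive
        original.getIn().setBody(resource.getIn().getBody());
        return original;
    }
}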
For the .toD() have a look at http://camel.apache.org/how-to-use-a-dynamic-uri-in-to.html
Adding a dynamic to endpoint in Camel (as noted in the comment) can be done with .toD(), which is described on the Camel site (see the dynamic-URI link above).
I don't know of any fromD() equivalent. However, you could add a dynamic route by calling the addRoutes method on the CamelContext; this is also described on the Camel site.
Expanding slightly on the example from the Camel site here is something that should get you heading in the right direction.
public void process(Exchange exchange) throws Exception {
    String idFromExchangeObject = ...
    String serverIpFromExchangeObject = ...
    String pathFromExchangeObject = ...
    exchange.getContext().addRoutes(new RouteBuilder() {
        public void configure() {
            from("file:basePath/" + idFromExchangeObject)
                .to("ftp:" + serverIpFromExchangeObject + "/" + pathFromExchangeObject);
        }
    });
}
There may be other options in Camel as well, since this framework has an amazing number of EIPs and capabilities.

How to use .withoutSizeLimit in Akka-http (client) HttpRequest?

I'm using Akka 2.4.7 to read a web resource that is essentially a stream of JSON objects, delimited with newlines. The stream is practically unlimited in size.
When around 8MB has been consumed, I get an exception:
[error] (run-main-0) EntityStreamSizeException: actual entity size (None) exceeded content length limit (8388608 bytes)! You can configure this by setting `akka.http.[server|client].parsing.max-content-length` or calling `HttpEntity.withSizeLimit` before materializing the dataBytes stream.
The "actual entity size (None)" seems a bit funny, but my real question is, how to use the HttpEntity.withSizeLimit (or in my case, rather .withoutSizeLimit that should be there, as well).
My request code is like this:
val chunks_src: Source[ByteString, _] = Source.single(req)
  .via(connection)
  .flatMapConcat(_.entity.dataBytes)
I tried adding a .map((x: HttpResponse) => x.withoutSizeLimit), but it does not compile. What's the role of HttpEntity when doing client-side programming, anyway?
I can change the global config, but that's kind of missing the point. I'd like to flag "no limits" only for a particular request.
As a further question, I understand the need for a max-content-length on the server side, but why does it affect the client?
References:
Akka 2.4.7: Limiting message entity length
Akka 2.4.7: HttpEntity
I'm far from an expert on this topic, but it would seem you need to apply .withoutSizeLimit() to the entity, like:
Source.single(req)
  .via(connection)
  .flatMapConcat(_.entity.withoutSizeLimit().dataBytes)
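The reason the .map attempt doesn't compile appears to be that withoutSizeLimit lives on the entity, not on HttpResponse. So an equivalent formulation (a sketch, under the same assumption that withoutSizeLimit() is available in your Akka version) would be:

val chunks_src: Source[ByteString, _] = Source.single(req)
  .via(connection)
  .map(resp => resp.entity.withoutSizeLimit()) // lift the size limit per response
  .flatMapConcat(_.dataBytes)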

Weird characters in RabbitMQ queue names created by ServiceStack

I'm trying to add some custom logic to messages in ServiceStack and RabbitMQ.
It seems that the queues created by ServiceStack have some invisible characters prepended to the queue name, which makes it hard to reference them by name. For example (link from the RabbitMQ admin tool):
http://localhost:15672/#/queues/%2F/%E2%80%8E%E2%80%8Emq%3ATestRequest.inq
Note the %E2%80%8E%E2%80%8E prepended to the queue name. Although the queue looks like mq:TestRequest.inq it seems to have a different name. I also checked on another machine and the behaviour is consistent. I also suspect routing keys are affected in the same manner.
However, if I manually create a queue like this (and as far as I can see, ServiceStack does it in a similar way):
RabbitMqServer mqServer = new RabbitMqServer(connectionString: hostName, username: userName, password: password);
RabbitMqMessageFactory factory = (RabbitMqMessageFactory)mqServer.MessageFactory;
using (var mqClient = new RabbitMqProducer(factory))
{
    var channel = mqClient.Channel;
    string qName = new QueueNames(typeof(TestRequest)).In;
    channel.QueueDeclare(qName, true, false, false, null);
}
The created queue has a "normal" name without extra characters.
http://localhost:15672/#/queues/%2F/mq%3ATestRequest.inq
Also, it seems that the exchanges are created with names as expected.
My questions:
How to force ServiceStack to create queues without appending these characters?
OR
How to construct queue names containing these characters?
EDIT:
It seems that the inserted character is the left-to-right mark (‎ or \u200e). Prepending these characters to the queue name / routing key seems to get the job done; however, this looks rather hacky, so I'd like to avoid doing it.
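For illustration, the hacky workaround I mean, building on the declare snippet above (the two \u200e marks match the two %E2%80%8E sequences in the admin URL):

string qName = new QueueNames(typeof(TestRequest)).In;
string prefixedName = "\u200e\u200e" + qName; // mimic the queue name ServiceStack actually created
channel.QueueDeclare(prefixedName, true, false, false, null);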
This might be inside the internals of RabbitMQ and may depend on whether you are using AMQP or STOMP. Here is an excerpt from the full page:
If /, % or non-ascii bytes are in the queuename, exchange_name or routing_key, they are each replaced with the sequence %dd, where dd is the hexadecimal code for the byte.
RabbitMQ - Stomp - Destinations - AMQP Semantics

How to use validation_messages and display_exceptions in Apigility?

From the Apigility documentation (Error Reporting):
The API Problem specification allows you to compose any other additional fields that you feel would help further clarify the problem and why it occurred. Apigility uses this fact to provide more information in several ways:
Validation error messages are reported via a validation_messages key.
When the display_exceptions view configuration setting is enabled, stack traces are included via trace and exception_stack properties.
I don't understand this part of the documentation. What is the purpose of the validation_messages and display_exceptions settings, and how do I use them?
The display_exceptions setting is from ZF2's view manager (see docs here). Turning this on will cause Apigility to include a stack trace with any error response.
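For example, in a ZF2 config file (a minimal sketch; which file you put it in, e.g. config/autoload/local.php, depends on your setup):

'view_manager' => array(
    'display_exceptions' => true,
),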
In Apigility itself the validation_messages key population is handled automatically. You configure an input filter which validates the incoming data payload and if the input filter fails the error messages it returns are automatically injected into the API response under the validation_messages key. This functionality is provided by the module zf-content-validation. You can "do it yourself" by returning an ApiProblemResponse from your resource like so:
return new ApiProblemResponse(
    new ApiProblem(422, 'Failed Validation', null, null, array(
        'validation_messages' => [ /* array of messages */ ]
    ))
);
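For completeness, here is a sketch of the configuration side that makes the automatic population work; the controller and input filter names below are made up for illustration (Apigility's Admin UI normally generates this in module.config.php):

'zf-content-validation' => array(
    // map the REST controller service to its input filter
    'MyApi\\V1\\Rest\\User\\Controller' => array(
        'input_filter' => 'MyApi\\V1\\Rest\\User\\Validator',
    ),
),
'input_filter_specs' => array(
    'MyApi\\V1\\Rest\\User\\Validator' => array(
        array(
            'name'       => 'email',
            'required'   => true,
            'validators' => array(
                array('name' => 'EmailAddress'),
            ),
        ),
    ),
),

If the incoming payload fails this filter, the error messages land under the validation_messages key of the API Problem response automatically.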